
After installing the Nginx Ingress Controller, follow the steps below from the official monitoring guide:

https://kubernetes.github.io/ingress-nginx/user-guide/monitoring/

 

1. Configure the Prometheus metrics port on the Ingress controller

helm upgrade ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--set controller.metrics.enabled=true \
--set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
--set-string controller.podAnnotations."prometheus\.io/port"="10254"
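
To confirm the metrics endpoint is live, you can port-forward to the controller and curl it; a minimal sketch, assuming the Deployment is named ingress-nginx-controller:

$ kubectl -n ingress-nginx port-forward deploy/ingress-nginx-controller 10254:10254 &
$ curl -s http://localhost:10254/metrics | head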

 

2. Deploy Prometheus

kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/

 

3. Deploy Grafana

kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/
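
Both kustomize bundles should now be running; a quick check, assuming they deploy into the ingress-nginx namespace as in the guide:

$ kubectl -n ingress-nginx get pods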

 

4. Access Grafana

$ kubectl -n ingress-nginx get svc

NAME                                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
grafana                                  ClusterIP      10.109.165.0     <none>        3000/TCP                     11h
nginx-ingress-nginx-controller           LoadBalancer   10.107.254.83    <pending>     80:30000/TCP,443:30001/TCP   4d13h
nginx-ingress-nginx-controller-metrics   ClusterIP      10.106.200.239   <none>        10254/TCP                    11h
prometheus-server                        ClusterIP      10.101.140.30    <none>        9090/TCP                     11h
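
The grafana service in this listing is still ClusterIP; one way to expose it on a NodePort for the next step is a patch like this (a sketch):

$ kubectl -n ingress-nginx patch svc grafana -p '{"spec": {"type": "NodePort"}}'
$ kubectl -n ingress-nginx get svc grafana   # note the assigned NodePort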

e.g., to reach Grafana through its NodePort (31086 in this example), browse to <k8s node IP>:31086 and log in with
ID: admin
PW: admin

=> You are prompted to change the admin password, then logged in.

 

* Optionally, write an Ingress resource and access Grafana through it instead, as sketched below.
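
A minimal sketch of such an Ingress (the hostname grafana.example.com is hypothetical):

$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
EOF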

 

5. Import the dashboard JSON
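
The official NGINX dashboard JSON lives in the ingress-nginx repository (path current at the time of writing); download it and import it via Dashboards > Import in the Grafana UI:

$ curl -LO https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json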


1. Install kube-state-metrics

$ git clone https://github.com/kubernetes/kube-state-metrics.git
$ cd kube-state-metrics
$ kubectl apply -f examples/standard
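
The standard example manifests install into the kube-system namespace; verify the deployment is up:

$ kubectl -n kube-system get deploy kube-state-metrics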

2. Create the namespace

$ kubectl create ns monitoring

3. Create the RBAC objects (ClusterRole, ServiceAccount, ClusterRoleBinding)

$ vi prometheus-cluster-role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
---

$ kubectl apply -f prometheus-cluster-role.yaml
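
A quick way to confirm the binding works; this should print yes:

$ kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus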

4. Configure the PV and PVC

$ vi prometheus-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-pv
  labels:
    type: local
    app: prometheus
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
# retain the data even after the claim is released
  persistentVolumeReclaimPolicy: Retain
# set storageClassName only if you use a storageClass
  storageClassName: manual
  hostPath:
    path: /opt/prometheus
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
# pin to a specific host
          - kube-worker-1
---

$ vi prometheus-pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
  namespace: monitoring
  labels:
    type: local
    app: prometheus
spec:
# set only if you use a storageClass (must match the PV's storageClassName)
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      app: prometheus
      type: local
---

$ kubectl apply -f prometheus-pv.yaml -f prometheus-pvc.yaml
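
The claim should bind to the PV created above:

$ kubectl -n monitoring get pvc prometheus-pvc   # STATUS should be Bound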

5. Write the Prometheus configuration files

$ vim prometheus.rules
---
groups:
- name: container memory alert
  rules:
  - alert: container memory usage rate is very high( > 55%)
    expr: sum(container_memory_working_set_bytes{pod!="", name=""}) / sum (kube_node_status_allocatable_memory_bytes) * 100 > 55
    for: 1m
    labels:
      severity: fatal
    annotations:
      summary: High Memory Usage on {{ $labels.instance }}
      identifier: "{{ $labels.instance }}"
      description: "{{ $labels.job }} Memory Usage: {{ $value }}"
- name: container CPU alert
  rules:
  - alert: container CPU usage rate is very high( > 10%)
    expr: sum (rate (container_cpu_usage_seconds_total{pod!=""}[1m])) / sum (machine_cpu_cores) * 100 > 10
    for: 1m
    labels:
      severity: fatal
    annotations:
      summary: High Cpu Usage
---
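
The rule file can be linted before loading it; a sketch using promtool from the prom/prometheus image, assuming Docker is available on the workstation:

$ docker run --rm --entrypoint promtool -v $(pwd):/cfg prom/prometheus check rules /cfg/prometheus.rules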


$ vim prometheus.yml
# for any additional metrics you need, add entries under scrape_configs: following the format below
---
global:
  scrape_interval: 10s
  evaluation_interval: 10s
rule_files:
  - /etc/prometheus/prometheus.rules
alerting:
  alertmanagers:
  - scheme: http
    static_configs:
    - targets:
      - "alertmanager.monitoring.svc:9093"
 
scrape_configs:
  - job_name: 'kubernetes-apiservers'
 
    kubernetes_sd_configs:
    - role: endpoints
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: default;kubernetes;https
 
  - job_name: 'kubernetes-nodes'
 
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    kubernetes_sd_configs:
    - role: node
 
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics
 
 
  - job_name: 'kubernetes-pods'
 
    kubernetes_sd_configs:
    - role: pod
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
 
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['kube-state-metrics.kube-system.svc:8080']
 
  - job_name: 'kubernetes-cadvisor'
 
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    kubernetes_sd_configs:
    - role: node
 
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
 
  - job_name: 'kubernetes-service-endpoints'
 
    kubernetes_sd_configs:
    - role: endpoints
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
---

# Combine the two files into a single ConfigMap named prometheus-config

$ kubectl create configmap prometheus-config -n monitoring --from-file=./
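
Both files should now appear as keys in the ConfigMap:

$ kubectl -n monitoring describe configmap prometheus-config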

6. Write and deploy the Deployment, plus troubleshooting

$ vi prometheus-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      nodeSelector:
        kubernetes.io/hostname: kube-worker-1
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-config

        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: prometheus-pvc
---

$ kubectl apply -f prometheus-deployment.yaml

# A permission error occurs on the PV's hostPath, so run the following on the host where
# the PV was created (kube-worker-1 in this example):
$ chmod 757 /opt/prometheus
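
If the pod still does not come up, its logs usually point at the offending path:

$ kubectl -n monitoring logs deploy/prometheus-deployment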

7. Deploy the Service and the node-exporter

$ vim prometheus-service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
 
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30003
---

$ kubectl apply -f prometheus-service.yml
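
Prometheus exposes a health endpoint, so the NodePort can be checked directly:

$ curl -s http://<k8s node IP>:30003/-/healthy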

$ vim prometheus-node-exporter.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring   # must match the DaemonSet's namespace, or the Service selects nothing
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
---

$ kubectl apply -f prometheus-node-exporter.yaml
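
One node-exporter pod should be scheduled per node:

$ kubectl -n monitoring get ds node-exporter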

8. Verify

Browse to <k8s worker node IP>:30003 to reach the Prometheus web UI.



Source: https://severalnines.com/database-blog/how-monitor-mysql-containers-prometheus-deployment-standalone-and-swarm-part-one

 

1. Set up the Docker swarm

# on host1
$ sudo docker swarm init --advertise-addr [host1 IP]

# example of joining hosts 2 and 3 with the token generated by init
$ sudo docker swarm join --token SWMTKN-1-4v0nzrgo3ke6c9eqtwhkpatp2o0rjqpktluxxe1bh9yezm4bqa-1apfoiskt0fvx4400c4f3qcq4 172.30.1.38:2377
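
All three nodes should now show up from the manager:

$ sudo docker node ls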

 

2. Create the swarm network and bring up the Galera cluster bootstrap node

$ vi my.cnf
---
[mysqld]
 
default_storage_engine          = InnoDB
binlog_format                   = ROW
 
innodb_flush_log_at_trx_commit  = 0
innodb_flush_method             = O_DIRECT
innodb_file_per_table           = 1
innodb_autoinc_lock_mode        = 2
innodb_lock_schedule_algorithm  = FCFS # MariaDB >10.1.19 and >10.2.3 only
 
wsrep_on                        = ON
wsrep_provider                  = /usr/lib/galera/libgalera_smm.so
wsrep_sst_method                = mariabackup
---

$ sudo docker network create --driver overlay db_swarm

$ cat ~/my.cnf | sudo docker config create my-cnf -

$ docker service create \
--name galera0 \
--replicas 1 \
--hostname galera0 \
--network db_swarm \
--publish 3306 \
--publish 4444 \
--publish 4567 \
--publish 4568 \
--config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
--env MYSQL_ROOT_PASSWORD=mypassword \
--mount type=volume,src=galera0-datadir,dst=/var/lib/mysql \
mariadb:10.2 \
--wsrep_cluster_address=gcomm:// \
--wsrep_sst_auth="root:mypassword" \
--wsrep_node_address=galera0
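
Once galera0's task is running, the cluster should report a size of 1; a sketch, run on the host where the task landed:

$ sudo docker exec -it $(sudo docker ps -qf name=galera0) \
  mysql -uroot -pmypassword -e "SHOW STATUS LIKE 'wsrep_cluster_size';"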

 

3. Deploy the remaining Galera cluster members (galera1~3)

$ sudo docker service create \
--name galera1 \
--replicas 1 \
--hostname galera1 \
--network db_swarm \
--publish 3306 \
--publish 4444 \
--publish 4567 \
--publish 4568 \
--config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
--env MYSQL_ROOT_PASSWORD=mypassword \
--mount type=volume,src=galera1-datadir,dst=/var/lib/mysql \
mariadb:10.2 \
--wsrep_cluster_address=gcomm://galera0,galera1,galera2,galera3 \
--wsrep_sst_auth="root:mypassword" \
--wsrep_node_address=galera1

$ sudo docker service create \
--name galera2 \
--replicas 1 \
--hostname galera2 \
--network db_swarm \
--publish 3306 \
--publish 4444 \
--publish 4567 \
--publish 4568 \
--config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
--env MYSQL_ROOT_PASSWORD=mypassword \
--mount type=volume,src=galera2-datadir,dst=/var/lib/mysql \
mariadb:10.2 \
--wsrep_cluster_address=gcomm://galera0,galera1,galera2,galera3 \
--wsrep_sst_auth="root:mypassword" \
--wsrep_node_address=galera2

$ sudo docker service create \
--name galera3 \
--replicas 1 \
--hostname galera3 \
--network db_swarm \
--publish 3306 \
--publish 4444 \
--publish 4567 \
--publish 4568 \
--config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
--env MYSQL_ROOT_PASSWORD=mypassword \
--mount type=volume,src=galera3-datadir,dst=/var/lib/mysql \
mariadb:10.2 \
--wsrep_cluster_address=gcomm://galera0,galera1,galera2,galera3 \
--wsrep_sst_auth="root:mypassword" \
--wsrep_node_address=galera3

 

4. Verify the deployment

$ sudo docker service ls
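
With all four members joined, wsrep_cluster_size should report 4; a sketch, run on the host carrying the galera1 task:

$ sudo docker exec -it $(sudo docker ps -qf name=galera1) \
  mysql -uroot -pmypassword -e "SHOW STATUS LIKE 'wsrep_cluster_size';"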

 

5. Remove the bootstrap node after the cluster is formed

$ sudo docker service rm galera0

 

6. Create the account for mysqld_exporter in the DB

# on host1
$ docker exec -it [galera1 container name] mysql -uroot -pmypassword

mysql> CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporterpassword' WITH MAX_USER_CONNECTIONS 3;
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';

# type exit to leave the mysql client (Ctrl-p Ctrl-q detaches only docker attach sessions, not exec)
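
A quick login with the new account confirms the grants took effect:

$ docker exec -it [galera1 container name] mysql -uexporter -pexporterpassword -e "SELECT 1;"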

 

7. Deploy mysqld_exporter 1~3

$ sudo docker service create \
--name galera1-exporter \
--network db_swarm \
--replicas 1 \
-p 9104 \
-e DATA_SOURCE_NAME="exporter:exporterpassword@(galera1:3306)/" \
prom/mysqld-exporter:latest \
--collect.info_schema.processlist \
--collect.info_schema.innodb_metrics \
--collect.info_schema.tablestats \
--collect.info_schema.tables \
--collect.info_schema.userstats \
--collect.engine_innodb_status

$ sudo docker service create \
--name galera2-exporter \
--network db_swarm \
--replicas 1 \
-p 9104 \
-e DATA_SOURCE_NAME="exporter:exporterpassword@(galera2:3306)/" \
prom/mysqld-exporter:latest \
--collect.info_schema.processlist \
--collect.info_schema.innodb_metrics \
--collect.info_schema.tablestats \
--collect.info_schema.tables \
--collect.info_schema.userstats \
--collect.engine_innodb_status

$ sudo docker service create \
--name galera3-exporter \
--network db_swarm \
--replicas 1 \
-p 9104 \
-e DATA_SOURCE_NAME="exporter:exporterpassword@(galera3:3306)/" \
prom/mysqld-exporter:latest \
--collect.info_schema.processlist \
--collect.info_schema.innodb_metrics \
--collect.info_schema.tablestats \
--collect.info_schema.tables \
--collect.info_schema.userstats \
--collect.engine_innodb_status
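
Each exporter can be spot-checked through the port the swarm publishes for it (30016 for galera1-exporter in the docker service ls output shown in step 9); mysql_up should be 1:

$ curl -s http://localhost:30016/metrics | grep mysql_up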

8. Write the Prometheus target configuration (prometheus.yml) and deploy the container

$ vim ~/prometheus.yml
---
global:
  scrape_interval:     5s
  scrape_timeout:      3s
  evaluation_interval: 5s
 
# Our alerting rule files
rule_files:
  - "alert.rules"
 
# Scrape endpoints
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
 
  - job_name: 'galera'
    static_configs:
      - targets: ['galera1-exporter:9104','galera2-exporter:9104', 'galera3-exporter:9104']
---

$ cat ~/prometheus.yml | sudo docker config create prometheus-yml -

$ sudo docker service create \
--name prometheus-server \
--publish 9090:9090 \
--network db_swarm \
--replicas 1 \
--config src=prometheus-yml,target=/etc/prometheus/prometheus.yml \
--mount type=volume,src=prometheus-data,dst=/prometheus \
prom/prometheus
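
Besides the web UI, the targets API gives a quick health readout of every scrape job:

$ curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'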

9. Verify

$ sudo docker service ls

ID             NAME                MODE         REPLICAS   IMAGE                         PORTS
0cryq8h0wlmg   galera1             replicated   1/1        mariadb:10.2                  *:30004->3306/tcp, *:30005->4444/tcp, *:30006-30007->4567-4568/tcp
wwwn1ir7ebtz   galera1-exporter    replicated   1/1        prom/mysqld-exporter:latest   *:30016->9104/tcp
u3nha71h41qx   galera2             replicated   1/1        mariadb:10.2                  *:30008->3306/tcp, *:30009->4444/tcp, *:30010-30011->4567-4568/tcp
uqnl5r9m79j4   galera2-exporter    replicated   1/1        prom/mysqld-exporter:latest   *:30017->9104/tcp
ireqrvbhdfyl   galera3             replicated   1/1        mariadb:10.2                  *:30012->3306/tcp, *:30013->4444/tcp, *:30014-30015->4567-4568/tcp
37c2r23s482s   galera3-exporter    replicated   1/1        prom/mysqld-exporter:latest   *:30018->9104/tcp
ziftvb8dutdk   prometheus-server   replicated   1/1        prom/prometheus:latest        *:9090->9090/tcp

# Browsing to any swarm host's IP:9090 opens the Prometheus web UI, where PromQL queries can be tested.

(Screenshots: connection attempt via host2, connection attempt via host1, and the cluster membership check query.)

 


1. Test environment

   4-core CPU, 32 GB RAM, Ubuntu 18.04

 

2. Topology

A single Docker host running:
- mysql57, mysql80
- mysql57-exporter, mysql80-exporter
- Prometheus

- Ports used

  1) auto-assigned by docker-proxy (mysql57, mysql80, mysql57-exporter, mysql80-exporter)

  2) Prometheus: 9090

     * MySQL's port 3306 is not mapped to a fixed host port

 

3. Procedure

0) Configure Docker

 

$ vi /etc/docker/daemon.json
---
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "1"
  },
  "storage-driver": "overlay2",
  "metrics-addr": "[hostIP]:9323",
  "experimental": true
}
---

$ sudo systemctl daemon-reload

$ sudo systemctl restart docker
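
The Docker engine metrics should now be exposed on the address configured above:

$ curl -s http://[hostIP]:9323/metrics | head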

 

1) Deploy mysql57

 

$ sudo docker network create db_network

$ docker run -d --name mysql57 --publish 3306 --network db_network --restart unless-stopped \
--env MYSQL_ROOT_PASSWORD=mypassword --volume mysql57-datadir:/var/lib/mysql \
mysql:5.7

 

2) Deploy mysql80

 

# note: --default-authentication-plugin is a mysqld argument, so it must come after the image name
$ docker run -d --name mysql80 --publish 3306 --network db_network --restart unless-stopped \
--env MYSQL_ROOT_PASSWORD=mypassword --volume mysql80-datadir:/var/lib/mysql \
mysql:8 \
--default-authentication-plugin=mysql_native_password
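
docker-proxy assigns the host-side ports automatically; look them up with:

$ docker port mysql57
$ docker port mysql80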

 

3) Create the exporter account in MySQL

 

$ docker exec -it mysql80 mysql -uroot -pmypassword

mysql> CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporterpassword' WITH MAX_USER_CONNECTIONS 3;
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';

 

$ docker exec -it mysql57 mysql -uroot -pmypassword

mysql> CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporterpassword' WITH MAX_USER_CONNECTIONS 3;
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';
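
As before, a quick login check with the exporter account:

$ docker exec -it mysql57 mysql -uexporter -pexporterpassword -e "SELECT 1;"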

 

4) Deploy mysqld_exporter

 
# note: the --collect.* flags are exporter arguments, so they must come after the image name
$ docker run -d --name mysql80-exporter --publish 9104 --network db_network --restart always \
--env DATA_SOURCE_NAME="exporter:exporterpassword@(mysql80:3306)/" \
prom/mysqld-exporter:latest \
--collect.info_schema.processlist --collect.info_schema.innodb_metrics --collect.info_schema.tablestats \
--collect.info_schema.tables --collect.info_schema.userstats --collect.engine_innodb_status

$ docker run -d --name mysql57-exporter --publish 9104 --network db_network --restart always \
-e DATA_SOURCE_NAME="exporter:exporterpassword@(mysql57:3306)/" \
prom/mysqld-exporter:latest \
--collect.info_schema.processlist --collect.info_schema.innodb_metrics --collect.info_schema.tablestats \
--collect.info_schema.tables --collect.info_schema.userstats --collect.engine_innodb_status
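
Each exporter can be spot-checked through its auto-assigned host port; a sketch:

$ port=$(docker port mysql80-exporter 9104/tcp | head -n1 | cut -d: -f2)
$ curl -s http://localhost:${port}/metrics | grep mysql_up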

 

5) Deploy Prometheus

 

$ vim ~/prometheus.yml

---

global:
  scrape_interval:     5s
  scrape_timeout:      3s
  evaluation_interval: 5s

# Our alerting rule files
rule_files:
  - "alert.rules"

# Scrape endpoints
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'mysql'
    static_configs:
      - targets: ['mysql57-exporter:9104','mysql80-exporter:9104']

  - job_name: 'docker'
    static_configs:
      - targets: ['hostIP:9323']

---

$ docker run -d --name prometheus-server --publish 9090:9090 --network db_network --restart unless-stopped \
--mount type=volume,src=prometheus-data,target=/prometheus \
--mount type=bind,src=$(pwd)/prometheus.yml,target=/etc/prometheus/prometheus.yml \
prom/prometheus
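
The targets API lists every scrape job and its health, in addition to the web UI check in the next step:

$ curl -s 'http://localhost:9090/api/v1/targets?state=active' | grep -o '"health":"[^"]*"'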

 

6) Verify

Browse to hostIP:9090 and confirm the exporters are connected under Status > Targets.

In the Graph tab, the round button to the left of Execute lets you browse the defined queries.

(Screenshots: a query result shown as a table, and the same result rendered as a graph.)

Reference:

https://severalnines.com/database-blog/how-monitor-mysql-containers-prometheus-deployment-standalone-and-swarm-part-one
