
- Restoring the image sharpening option

https://blog.naver.com/opirus1223/222640336055

 

- Optimization settings

 

https://www.inven.co.kr/board/lostark/4821/81765

 

Lost Ark Inven: "Definitive Lost Ark graphics settings (NVIDIA)" - Tips & Know-how board (www.inven.co.kr)

* No need to enable Vulkan.


1. Install kube-state-metrics

$ git clone https://github.com/kubernetes/kube-state-metrics.git
$ cd kube-state-metrics
$ kubectl apply -f examples/standard
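
A quick sanity check that kube-state-metrics came up (the standard manifests deploy into kube-system):

$ kubectl get deployment kube-state-metrics -n kube-system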

2. Create the namespace

$ kubectl create ns monitoring

3. Create the RBAC objects (ClusterRole, ServiceAccount, ClusterRoleBinding)

$ vi prometheus-cluster-role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
---

$ kubectl apply -f prometheus-cluster-role.yaml
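
Whether the binding took effect can be checked by impersonating the service account (a quick check; the account name comes from the manifest above):

$ kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus --all-namespaces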

4. Configure the PV and PVC

$ vi prometheus-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-pv
  namespace: monitoring
  labels:
    type: local
    app: prometheus
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
# keep the data even after the claim is released
  persistentVolumeReclaimPolicy: Retain
# set only if a storageClass is used
  storageClassName: manual
  hostPath:
    path: /opt/prometheus
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
# pin the volume to a specific host
          - kube-worker-1
---

$ vi prometheus-pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
  namespace: monitoring
  labels:
    type: local
    app: prometheus
spec:
# set only if a storageClass is used (must match the PV's storageClassName)
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      app: prometheus
      type: local
---

$ kubectl apply -f prometheus-pv.yaml
$ kubectl apply -f prometheus-pvc.yaml
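
Both objects should report Bound once the claim matches the volume:

$ kubectl get pv prometheus-pv
$ kubectl get pvc prometheus-pvc -n monitoring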

5. Write the Prometheus configuration

$ vim prometheus.rules
---
groups:
- name: container memory alert
  rules:
  - alert: ContainerMemoryUsageHigh  # memory usage rate > 55%
    expr: sum(container_memory_working_set_bytes{pod!="", name=""}) / sum (kube_node_status_allocatable_memory_bytes) * 100 > 55
    for: 1m
    labels:
      severity: fatal
    annotations:
      summary: High Memory Usage on {{ $labels.instance }}
      identifier: "{{ $labels.instance }}"
      description: "{{ $labels.job }} Memory Usage: {{ $value }}"
- name: container CPU alert
  rules:
  - alert: ContainerCpuUsageHigh  # CPU usage rate > 10%
    expr: sum (rate (container_cpu_usage_seconds_total{pod!=""}[1m])) / sum (machine_cpu_cores) * 100 > 10
    for: 1m
    labels:
      severity: fatal
    annotations:
      summary: High Cpu Usage
---
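
Optionally, validate the rule file before loading it (promtool ships with every Prometheus release):

$ promtool check rules prometheus.rules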


$ vim prometheus.yml
# add scrape targets for any additional metrics under scrape_configs: in the same format
---
global:
  scrape_interval: 10s
  evaluation_interval: 10s
rule_files:
  - /etc/prometheus/prometheus.rules
alerting:
  alertmanagers:
  - scheme: http
    static_configs:
    - targets:
      - "alertmanager.monitoring.svc:9093"
 
scrape_configs:
  - job_name: 'kubernetes-apiservers'
 
    kubernetes_sd_configs:
    - role: endpoints
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: default;kubernetes;https
 
  - job_name: 'kubernetes-nodes'
 
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    kubernetes_sd_configs:
    - role: node
 
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics
 
 
  - job_name: 'kubernetes-pods'
 
    kubernetes_sd_configs:
    - role: pod
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
 
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['kube-state-metrics.kube-system.svc:8080']
 
  - job_name: 'kubernetes-cadvisor'
 
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    kubernetes_sd_configs:
    - role: node
 
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
 
  - job_name: 'kubernetes-service-endpoints'
 
    kubernetes_sd_configs:
    - role: endpoints
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
---

# Combine the two files into a ConfigMap named prometheus-config
$ kubectl create configmap prometheus-config -n monitoring --from-file=./
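
A quick check that both files landed in the ConfigMap:

$ kubectl describe configmap prometheus-config -n monitoring | grep -E 'prometheus\.(yml|rules)'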

6. Write and deploy the Deployment, plus troubleshooting

$ vi prometheus-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      nodeSelector:
        kubernetes.io/hostname: kube-worker-1
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-config

        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: prometheus-pvc
---

$ kubectl apply -f prometheus-deployment.yaml

# A permission error occurs on the PV hostPath (the Prometheus container runs as a non-root user), so run the following on the host where the PV was created (kube-worker-1 in this example)
$ chmod 757 /opt/prometheus
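
Then wait for the rollout and confirm the pod is running (labels as defined in the Deployment above):

$ kubectl rollout status deployment prometheus-deployment -n monitoring
$ kubectl get pods -n monitoring -l app=prometheus-server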

7. Deploy the Service and the node-exporter

$ vim prometheus-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port:   '9090'
 
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30003
---

$ kubectl apply -f prometheus-service.yaml

$ vim prometheus-node-exporter.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring  # must match the DaemonSet namespace so the selector finds the pods
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
---

$ kubectl apply -f prometheus-node-exporter.yaml
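
Each node should now serve metrics on the node-exporter NodePort (31672 per the Service above); a quick check from any host:

$ curl -s http://<worker node IP>:31672/metrics | head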

8. Verify

Browse to <K8s worker node IP>:30003 to reach the Prometheus web UI.
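
The HTTP API answers on the same port, which is handy for scripted checks (substitute a real worker node IP):

$ curl -s 'http://<worker node IP>:30003/api/v1/query?query=up'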

  1. Prerequisites
    - helm v3 installed on the Kubernetes master
    - the ceph-csi-rbd helm chart
       $ helm repo add ceph-csi https://ceph.github.io/csi-charts
       $ helm pull ceph-csi/ceph-csi-rbd
       $ tar xvaf ceph-csi-rbd-3.5.1.tgz

  2. Create the Kubernetes namespace
    $ kubectl create namespace ceph-csi-rbd

  3. Write ceph-csi-rbd-values.yaml for the helm chart deployment
    $ cat <<EOF > ceph-csi-rbd-values.yaml
    csiConfig:
      # fsid of the Ceph cluster
      - clusterID: "af39f080-af03-11ec-9050-fa163e37df68"
        # Ceph mon host IPs, port 6789
        monitors:
          - "172.30.3.170:6789"
          - "172.30.1.200:6789"
          - "172.30.2.96:6789"
          - "172.30.0.193:6789"
    provisioner:
      name: provisioner
      replicaCount: 2
    EOF

  4. Create the kubePool OSD pool in Ceph (run on ceph-1) and initialize it as an RBD (RADOS Block Device) pool
    $ sudo ceph osd pool create kubePool 64 64
    $ sudo rbd pool init kubePool

  5. Retrieve the client.kubeAdmin key and base64-encode the user ID
    $ sudo ceph auth get-or-create-key client.kubeAdmin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow * pool=kubePool' | tr -d '\n' | base64;

    Example output:
    $ sudo ceph auth get-or-create-key client.kubeAdmin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow * pool=kubePool' | tr -d '\n' | base64;
    QVFBaXZVSmlrTSt1TkJBQStuOE0reUoyd095azcxK3BQZytqa0E9PQ==

    $ echo "kubeAdmin" | tr -d '\n' | base64;
    a3ViZUFkbWlu
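
    Decoding the values round-trips them, which is a quick way to catch stray newlines before they land in the Secret:

    $ echo "a3ViZUFkbWlu" | base64 -d
    kubeAdmin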

  6. Create a Secret from the values obtained in step 5

    $ cat > ceph-admin-secret.yaml << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-admin
      namespace: default
    type: kubernetes.io/rbd
    data:
      userID: a3ViZUFkbWlu
    # the client.kubeAdmin key retrieved in step 5
      userKey: QVFBaXZVSmlrTSt1TkJBQStuOE0reUoyd095azcxK3BQZytqa0E9PQ==
    EOF
  7. Create the StorageClass yaml
    $ cat > ceph-rbd-sc.yaml <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd-sc
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: rbd.csi.ceph.com
    parameters:
       clusterID: af39f080-af03-11ec-9050-fa163e37df68
       pool: kubePool
       imageFeatures: layering
       csi.storage.k8s.io/provisioner-secret-name: ceph-admin
       csi.storage.k8s.io/provisioner-secret-namespace: default
       csi.storage.k8s.io/controller-expand-secret-name: ceph-admin
       csi.storage.k8s.io/controller-expand-secret-namespace: default
       csi.storage.k8s.io/node-stage-secret-name: ceph-admin
       csi.storage.k8s.io/node-stage-secret-namespace: default
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    mountOptions:
       - discard
    EOF

  8. Deploy the helm chart, then apply ceph-admin-secret.yaml and ceph-rbd-sc.yaml
    $ helm install --namespace ceph-csi-rbd ceph-csi-rbd --values ceph-csi-rbd-values.yaml ceph-csi-rbd
    $ kubectl rollout status deployment ceph-csi-rbd-provisioner -n ceph-csi-rbd
    $ kubectl apply -f ceph-admin-secret.yaml
    $ kubectl apply -f ceph-rbd-sc.yaml

  9. Verify
    $ kubectl get sc
    $ kubectl get po -A
    $ helm status ceph-csi-rbd -n ceph-csi-rbd

  10. Deploy a test Pod and check the PV
    $ cat <<EOF > pv-pod.yaml
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-rbd-sc-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: ceph-rbd-sc
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-rbd-pod-pvc-sc
    spec:
      containers:
      - name:  ceph-rbd-pod-pvc-sc
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /mnt/ceph_rbd
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: ceph-rbd-sc-pvc
    EOF

    $ kubectl apply -f pv-pod.yaml

    # verify
    $ kubectl get pv

    Example output:
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
    pvc-291cc4a8-c2ff-4601-908b-0eab90b2ebe6   2Gi        RWO            Delete           Bound    default/ceph-rbd-sc-pvc   ceph-rbd-sc             1s
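
    A quick write test inside the pod confirms the RBD image is mounted read-write:

    $ kubectl exec ceph-rbd-pod-pvc-sc -- sh -c 'echo ok > /mnt/ceph_rbd/test && cat /mnt/ceph_rbd/test'
    ok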


Source: https://severalnines.com/database-blog/how-monitor-mysql-containers-prometheus-deployment-standalone-and-swarm-part-one

 

1. Build the Docker swarm

# on host1
$ sudo docker swarm init --advertise-addr [host1 IP]

# example: joining hosts 2 and 3 with the token generated above
$ sudo docker swarm join --token SWMTKN-1-4v0nzrgo3ke6c9eqtwhkpatp2o0rjqpktluxxe1bh9yezm4bqa-1apfoiskt0fvx4400c4f3qcq4 172.30.1.38:2377
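
On host1, confirm the swarm is formed (three Ready nodes are expected):

$ sudo docker node ls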

 

2. Create the Docker swarm network and bring up the Galera cluster bootstrap node

$ vi my.cnf
---
[mysqld]
 
default_storage_engine          = InnoDB
binlog_format                   = ROW
 
innodb_flush_log_at_trx_commit  = 0
innodb_flush_method             = O_DIRECT
innodb_file_per_table           = 1
innodb_autoinc_lock_mode        = 2
innodb_lock_schedule_algorithm  = FCFS # MariaDB >10.1.19 and >10.2.3 only
 
wsrep_on                        = ON
wsrep_provider                  = /usr/lib/galera/libgalera_smm.so
wsrep_sst_method                = mariabackup
---

$ sudo docker network create --driver overlay db_swarm

$ cat ~/my.cnf | sudo docker config create my-cnf -

$ sudo docker service create \
--name galera0 \
--replicas 1 \
--hostname galera0 \
--network db_swarm \
--publish 3306 \
--publish 4444 \
--publish 4567 \
--publish 4568 \
--config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
--env MYSQL_ROOT_PASSWORD=mypassword \
--mount type=volume,src=galera0-datadir,dst=/var/lib/mysql \
mariadb:10.2 \
--wsrep_cluster_address=gcomm:// \
--wsrep_sst_auth="root:mypassword" \
--wsrep_node_address=galera0
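
Before deploying the other nodes, it is worth waiting until galera0 reports Synced in its logs (the grep is just a convenience):

$ sudo docker service logs galera0 2>&1 | grep -i synced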

 

3. Deploy the remaining Galera cluster containers (galera1-3)

$ sudo docker service create \
--name galera1 \
--replicas 1 \
--hostname galera1 \
--network db_swarm \
--publish 3306 \
--publish 4444 \
--publish 4567 \
--publish 4568 \
--config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
--env MYSQL_ROOT_PASSWORD=mypassword \
--mount type=volume,src=galera1-datadir,dst=/var/lib/mysql \
mariadb:10.2 \
--wsrep_cluster_address=gcomm://galera0,galera1,galera2,galera3 \
--wsrep_sst_auth="root:mypassword" \
--wsrep_node_address=galera1

$ sudo docker service create \
--name galera2 \
--replicas 1 \
--hostname galera2 \
--network db_swarm \
--publish 3306 \
--publish 4444 \
--publish 4567 \
--publish 4568 \
--config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
--env MYSQL_ROOT_PASSWORD=mypassword \
--mount type=volume,src=galera2-datadir,dst=/var/lib/mysql \
mariadb:10.2 \
--wsrep_cluster_address=gcomm://galera0,galera1,galera2,galera3 \
--wsrep_sst_auth="root:mypassword" \
--wsrep_node_address=galera2

$ sudo docker service create \
--name galera3 \
--replicas 1 \
--hostname galera3 \
--network db_swarm \
--publish 3306 \
--publish 4444 \
--publish 4567 \
--publish 4568 \
--config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
--env MYSQL_ROOT_PASSWORD=mypassword \
--mount type=volume,src=galera3-datadir,dst=/var/lib/mysql \
mariadb:10.2 \
--wsrep_cluster_address=gcomm://galera0,galera1,galera2,galera3 \
--wsrep_sst_auth="root:mypassword" \
--wsrep_node_address=galera3

 

4. Verify the deployment

$ sudo docker service ls

 

5. Remove the bootstrap node once the cluster is up

$ sudo docker service rm galera0

 

6. Create the mysqld_exporter account in the DB

# on host1
$ docker exec -it [galera1 container name] mysql -uroot -pmypassword

mysql> CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporterpassword' WITH MAX_USER_CONNECTIONS 3;
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';
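
While still at the mysql prompt, the cluster membership can be confirmed; with galera1-3 up (and galera0 removed in step 5) the value should be 3:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';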

# detach by holding Ctrl and pressing p, then q

 

7. Deploy the mysqld_exporters (galera1-3)

$ sudo docker service create \
--name galera1-exporter \
--network db_swarm \
--replicas 1 \
-p 9104 \
-e DATA_SOURCE_NAME="exporter:exporterpassword@(galera1:3306)/" \
prom/mysqld-exporter:latest \
--collect.info_schema.processlist \
--collect.info_schema.innodb_metrics \
--collect.info_schema.tablestats \
--collect.info_schema.tables \
--collect.info_schema.userstats \
--collect.engine_innodb_status

$ sudo docker service create \
--name galera2-exporter \
--network db_swarm \
--replicas 1 \
-p 9104 \
-e DATA_SOURCE_NAME="exporter:exporterpassword@(galera2:3306)/" \
prom/mysqld-exporter:latest \
--collect.info_schema.processlist \
--collect.info_schema.innodb_metrics \
--collect.info_schema.tablestats \
--collect.info_schema.tables \
--collect.info_schema.userstats \
--collect.engine_innodb_status

$ sudo docker service create \
--name galera3-exporter \
--network db_swarm \
--replicas 1 \
-p 9104 \
-e DATA_SOURCE_NAME="exporter:exporterpassword@(galera3:3306)/" \
prom/mysqld-exporter:latest \
--collect.info_schema.processlist \
--collect.info_schema.innodb_metrics \
--collect.info_schema.tablestats \
--collect.info_schema.tables \
--collect.info_schema.userstats \
--collect.engine_innodb_status
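
Each exporter publishes port 9104 on an auto-assigned NodePort (30016-30018 in the step 9 output below); a quick scrape test from any swarm host:

$ curl -s http://<swarm host IP>:30016/metrics | grep mysql_up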

8. Write the Prometheus scrape configuration (prometheus.yml) and deploy the container

$ vim ~/prometheus.yml
---
global:
  scrape_interval:     5s
  scrape_timeout:      3s
  evaluation_interval: 5s
 
# Our alerting rule files
rule_files:
  - "alert.rules"
 
# Scrape endpoints
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
 
  - job_name: 'galera'
    static_configs:
      - targets: ['galera1-exporter:9104','galera2-exporter:9104', 'galera3-exporter:9104']
---

$ cat ~/prometheus.yml | sudo docker config create prometheus-yml -

$ sudo docker service create \
--name prometheus-server \
--publish 9090:9090 \
--network db_swarm \
--replicas 1 \
--config src=prometheus-yml,target=/etc/prometheus/prometheus.yml \
--mount type=volume,src=prometheus-data,dst=/prometheus \
prom/prometheus

9. Verify

$ sudo docker service ls

ID             NAME                MODE         REPLICAS   IMAGE                         PORTS
0cryq8h0wlmg   galera1             replicated   1/1        mariadb:10.2                  *:30004->3306/tcp, *:30005->4444/tcp, *:30006-30007->4567-4568/tcp
wwwn1ir7ebtz   galera1-exporter    replicated   1/1        prom/mysqld-exporter:latest   *:30016->9104/tcp
u3nha71h41qx   galera2             replicated   1/1        mariadb:10.2                  *:30008->3306/tcp, *:30009->4444/tcp, *:30010-30011->4567-4568/tcp
uqnl5r9m79j4   galera2-exporter    replicated   1/1        prom/mysqld-exporter:latest   *:30017->9104/tcp
ireqrvbhdfyl   galera3             replicated   1/1        mariadb:10.2                  *:30012->3306/tcp, *:30013->4444/tcp, *:30014-30015->4567-4568/tcp
37c2r23s482s   galera3-exporter    replicated   1/1        prom/mysqld-exporter:latest   *:30018->9104/tcp
ziftvb8dutdk   prometheus-server   replicated   1/1        prom/prometheus:latest        *:9090->9090/tcp

# Browse to any swarm host's IP on port 9090 to reach the Prometheus web UI and test PromQL queries.
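
For example, the Galera cluster size can be queried through the HTTP API (mysqld_exporter exposes it as mysql_global_status_wsrep_cluster_size):

$ curl -s 'http://<swarm host IP>:9090/api/v1/query?query=mysql_global_status_wsrep_cluster_size'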

(Screenshots: the Prometheus UI reached via host2 and via host1, and a query confirming the cluster membership.)

 


All commands below are run as root.

 

  1. Add hosts
    # distribute the ceph ssh key (from the bootstrap node to each node)
    $ ssh-copy-id -f -i /etc/ceph/ceph.pub root@"host ip"

    # add the host to the cluster
    $ ceph orch host add "hostname" "host ip" "label (optional)"

    # verify the host was added to the cluster
    $ ceph orch host ls

    Example output:
    $ ceph orch host ls
    HOST    ADDR          LABELS  STATUS
    ceph-1  172.30.0.193  _admin
    ceph-2  172.30.3.170  OSD
    ceph-3  172.30.1.200  OSD
    ceph-4  172.30.2.96   OSD


  2. Add OSDs
    $ ceph orch daemon add osd "hostname":"device path"
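
    Alternatively, cephadm can create OSDs on every eligible device at once (handy but indiscriminate; it claims all clean disks):

    $ ceph orch apply osd --all-available-devices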

    # verify
    $ ceph -s
    $ ceph orch device ls

    Example output:
    $ ceph -s
      cluster:
        id:     af39f080-af03-11ec-9050-fa163e37df68
        health: HEALTH_OK
      services:
        mon: 4 daemons, quorum ceph-1,ceph-2,ceph-3,ceph-4 (age 2d)
        mgr: ceph-1.ppytcz(active, since 25h), standbys: ceph-2.dedeoe
        mds: 1/1 daemons up, 3 standby
        osd: 4 osds: 4 up (since 2d), 4 in (since 2d)
    
    
    $ ceph orch device ls
    HOST    PATH      TYPE  DEVICE ID              SIZE  AVAILABLE  REJECT REASONS
    ceph-1  /dev/vdb  hdd   0e8c4f4b-ca72-48c3-8  1073G             Insufficient space (<10 extents) on vgs, LVM detected, locked
    ceph-2  /dev/vdb  hdd   382bb362-d64e-4041-9  1073G             Insufficient space (<10 extents) on vgs, LVM detected, locked
    ceph-3  /dev/vdb  hdd   3e5cec61-0c30-4d61-a  1073G             Insufficient space (<10 extents) on vgs, LVM detected, locked
    ceph-4  /dev/vdb  hdd   c63d1d1f-6a74-4c3a-9  1073G             Insufficient space (<10 extents) on vgs, LVM detected, locked


  3. Check the results
    $ ceph orch status
    $ ceph orch ps

    Example output:

    $ ceph orch status
    Backend: cephadm
    Available: Yes
    Paused: No
    
    $ ceph orch ps
    NAME                          HOST    PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
    alertmanager.ceph-1           ceph-1  *:9093,9094  running (2d)      9m ago   2d    12.7M        -  0.20.0   0881eb8f169f  169d759e6ebb
    crash.ceph-1                  ceph-1               running (2d)      9m ago   2d    7436k        -  16.2.7   c92aec2cd894  cf8d4667fc0e
    crash.ceph-2                  ceph-2               running (2d)      9m ago   2d    7304k        -  16.2.7   c92aec2cd894  220f004b583c
    crash.ceph-3                  ceph-3               running (2d)     49s ago   2d    10.9M        -  16.2.7   c92aec2cd894  efa886f81ef9
    crash.ceph-4                  ceph-4               running (2d)     49s ago   2d    7256k        -  16.2.7   c92aec2cd894  276eaf7238a4
    grafana.ceph-1                ceph-1  *:3000       running (2d)      9m ago   2d    35.8M        -  6.7.4    557c83e11646  2684c2c21a43
    mgr.ceph-1.ppytcz             ceph-1  *:9283       running (1h)      9m ago   2d     506M        -  16.2.7   c92aec2cd894  654bc9d468db
    mgr.ceph-2.dedeoe             ceph-2  *:8443,9283  running (2d)      9m ago   2d     380M        -  16.2.7   c92aec2cd894  730a9e27d05f
    mon.ceph-1                    ceph-1               running (2d)      9m ago   2d     881M    2048M  16.2.7   c92aec2cd894  c2f75db158da
    mon.ceph-2                    ceph-2               running (2d)      9m ago   2d     888M    2048M  16.2.7   c92aec2cd894  05f31cf6a2d3
    mon.ceph-3                    ceph-3               running (2d)     49s ago   2d     883M    2048M  16.2.7   c92aec2cd894  d31c6d4115c4
    mon.ceph-4                    ceph-4               running (2d)     49s ago   2d     891M    2048M  16.2.7   c92aec2cd894  8bade1f43df6
    node-exporter.ceph-1          ceph-1  *:9100       running (2d)      9m ago   2d    11.8M        -  0.18.1   e5a616e4b9cf  3debf7ae68eb
    node-exporter.ceph-2          ceph-2  *:9100       running (2d)      9m ago   2d    11.8M        -  0.18.1   e5a616e4b9cf  7fe3fbc71085
    node-exporter.ceph-3          ceph-3  *:9100       running (2d)     49s ago   2d    12.0M        -  0.18.1   e5a616e4b9cf  37e0338834bb
    node-exporter.ceph-4          ceph-4  *:9100       running (2d)     49s ago   2d    11.0M        -  0.18.1   e5a616e4b9cf  4ba70a679bf2
    osd.0                         ceph-2               running (2d)      9m ago   2d     212M    4096M  16.2.7   c92aec2cd894  20bf30027ca5
    osd.1                         ceph-3               running (2d)     49s ago   2d     226M    4096M  16.2.7   c92aec2cd894  36607cbb6458
    osd.2                         ceph-4               running (2d)     49s ago   2d     222M    4096M  16.2.7   c92aec2cd894  c90cf1973629
    osd.3                         ceph-1               running (2d)      9m ago   2d     216M    4096M  16.2.7   c92aec2cd894  0fc6bbac67eb
    prometheus.ceph-1             ceph-1  *:9095       running (2d)      9m ago   2d    36.5M        -  2.18.1   de242295e225  71d62fcef51e

  4. Access the Dashboard
    Browse to the Ceph mgr node's IP, log in with the credentials printed during cluster bootstrap (step 8 of the Ceph cluster setup post), and change the password; the cluster can then be managed from the Dashboard.
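
    If the bootstrap password was lost, it can be reset from the admin node; a sketch using the standard dashboard command (admin is the default account, pass.txt is a hypothetical file holding the new password):

    $ echo "newpassword" > pass.txt
    $ ceph dashboard ac-user-set-password admin -i pass.txt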




    Reference: https://yjwang.tistory.com/119