
- Restoring the image sharpening option

https://blog.naver.com/opirus1223/222640336055

- Optimization settings: "Definitive Lost Ark graphics settings (NVIDIA)" on the Lost Ark Inven tips board

https://www.inven.co.kr/board/lostark/4821/81765

* No need to enable Vulkan.


1. Install kube-state-metrics

$ git clone https://github.com/kubernetes/kube-state-metrics.git
$ cd kube-state-metrics
$ kubectl apply -f examples/standard
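
To confirm the install before moving on, a quick check (the examples/standard manifests deploy into kube-system by default):

$ kubectl get deployment kube-state-metrics -n kube-system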

2. Create the namespace

$ kubectl create ns monitoring

3. Create the RBAC objects (ClusterRole, ServiceAccount, ClusterRoleBinding)

$ vi prometheus-cluster-role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
---

$ kubectl apply -f prometheus-cluster-role.yaml
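
A quick way to confirm all three objects exist (names as defined above):

$ kubectl get serviceaccount prometheus -n monitoring
$ kubectl get clusterrole prometheus
$ kubectl get clusterrolebinding prometheus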

4. Set up the PV and PVC

$ vi prometheus-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-pv
  labels:
    type: local
    app: prometheus
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
# keep the data even after the claim is released
  persistentVolumeReclaimPolicy: Retain
# set this only if a storageClass is used
  storageClassName: manual
  hostPath:
    path: /opt/prometheus
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
# pin the volume to a specific host
          - kube-worker-1
---

$ vi prometheus-pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
  namespace: monitoring
  labels:
    type: local
    app: prometheus
spec:
# set this only if a storageClass is used (must match the PV's value)
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      app: prometheus
      type: local
---

$ kubectl apply -f prometheus-pv.yaml
$ kubectl apply -f prometheus-pvc.yaml
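
If the claim matched the volume, both should show Bound (the label selector and storageClassName must agree, as noted in the comments above):

$ kubectl get pv prometheus-pv
$ kubectl get pvc prometheus-pvc -n monitoring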

5. Write the Prometheus configuration

$ vim prometheus.rules
---
groups:
- name: container memory alert
  rules:
  - alert: ContainerMemoryUsageHigh   # alert names must be valid metric names (no spaces); fires when usage > 55%
    expr: sum(container_memory_working_set_bytes{pod!="", name=""}) / sum (kube_node_status_allocatable_memory_bytes) * 100 > 55
    for: 1m
    labels:
      severity: fatal
    annotations:
      summary: High Memory Usage on {{ $labels.instance }}
      identifier: "{{ $labels.instance }}"
      description: "{{ $labels.job }} Memory Usage: {{ $value }}"
- name: container CPU alert
  rules:
  - alert: ContainerCpuUsageHigh   # fires when cluster CPU usage > 10%
    expr: sum (rate (container_cpu_usage_seconds_total{pod!=""}[1m])) / sum (machine_cpu_cores) * 100 > 10
    for: 1m
    labels:
      severity: fatal
    annotations:
      summary: High Cpu Usage
---
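
If promtool is available (it ships in the Prometheus release tarball), the rule file can be validated before it is baked into the ConfigMap:

$ promtool check rules prometheus.rules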


$ vim prometheus.yml
# add any extra scrape targets under scrape_configs: following the same format
---
global:
  scrape_interval: 10s
  evaluation_interval: 10s
rule_files:
  - /etc/prometheus/prometheus.rules
alerting:
  alertmanagers:
  - scheme: http
    static_configs:
    - targets:
      - "alertmanager.monitoring.svc:9093"
 
scrape_configs:
  - job_name: 'kubernetes-apiservers'
 
    kubernetes_sd_configs:
    - role: endpoints
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: default;kubernetes;https
 
  - job_name: 'kubernetes-nodes'
 
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    kubernetes_sd_configs:
    - role: node
 
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics
 
 
  - job_name: 'kubernetes-pods'
 
    kubernetes_sd_configs:
    - role: pod
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
 
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['kube-state-metrics.kube-system.svc:8080']
 
  - job_name: 'kubernetes-cadvisor'
 
    scheme: https
 
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    kubernetes_sd_configs:
    - role: node
 
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
 
  - job_name: 'kubernetes-service-endpoints'
 
    kubernetes_sd_configs:
    - role: endpoints
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
---

# combine the two files into a ConfigMap named prometheus-config
# (pass the files explicitly so the other manifests in this directory are not included)
$ kubectl create configmap prometheus-config -n monitoring --from-file=prometheus.yml --from-file=prometheus.rules
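
A quick sanity check that both keys landed in the ConfigMap:

$ kubectl describe configmap prometheus-config -n monitoring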

6. Write and deploy the Deployment, plus troubleshooting

$ vi prometheus-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      nodeSelector:
        kubernetes.io/hostname: kube-worker-1
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-config

        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: prometheus-pvc
---

$ kubectl apply -f prometheus-deployment.yaml

# A permission error occurs on the PV path, so run the following on the host
# where the PV was created (kube-worker-1 in this example)
$ chmod 757 /opt/prometheus
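
Once the pod is scheduled, these two commands cover most of the troubleshooting mentioned above (a CrashLoopBackOff at startup usually points at the config or the volume permissions):

$ kubectl get pods -n monitoring -l app=prometheus-server
$ kubectl logs -n monitoring deploy/prometheus-deployment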

7. Deploy the Service and node-exporter

$ vim prometheus-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port:   '9090'
 
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30003
---

$ kubectl apply -f prometheus-service.yaml

$ vim prometheus-node-exporter.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring   # must match the DaemonSet's namespace, or the Service selects no pods
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
---

$ kubectl apply -f prometheus-node-exporter.yaml
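
If the NodePort is not reachable from outside, a port-forward is a handy fallback for testing (forwards local 9090 to the service port 8080 defined above):

$ kubectl port-forward -n monitoring svc/prometheus-service 9090:8080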

8. Verify

Open <K8s worker node IP>:30003 in a browser to reach the Prometheus web UI.
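
The same endpoint also answers the HTTP API, so a scripted check is possible; a minimal example, with <worker-node-ip> as a placeholder for your node:

$ curl -s 'http://<worker-node-ip>:30003/api/v1/query?query=up'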

  1. Prerequisites
    - Helm v3 installed on the Kubernetes master
    - The ceph-csi-rbd Helm chart
       $ helm repo add ceph-csi https://ceph.github.io/csi-charts
       $ helm pull ceph-csi/ceph-csi-rbd
       $ tar xvaf ceph-csi-rbd-3.5.1.tgz
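
    To see which keys the chart accepts before writing the values file, the chart defaults can be dumped with standard Helm:

    $ helm show values ceph-csi/ceph-csi-rbd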

  2. Create the Kubernetes namespace
    $ kubectl create namespace ceph-csi-rbd

  3. Write the ceph-csi-rbd-values.yaml needed to deploy the Helm chart
    $ cat <<EOF > ceph-csi-rbd-values.yaml
    csiConfig:
    # the Ceph cluster's fsid
      - clusterID: "af39f080-af03-11ec-9050-fa163e37df68"
    # the Ceph mon hosts as ip:6789; they must be nested inside the same csiConfig entry as the clusterID
        monitors:
          - "172.30.3.170:6789"
          - "172.30.1.200:6789"
          - "172.30.2.96:6789"
          - "172.30.0.193:6789"
    provisioner:
      name: provisioner
      replicaCount: 2
    EOF
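
    Both values above come from the Ceph side; on a mon host they can be read with:

    $ sudo ceph fsid
    $ sudo ceph mon dump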

  4. Create the kubePool OSD pool on Ceph (run on Ceph-1) and initialize it as an RBD (RADOS Block Device) pool
    $ sudo ceph osd pool create kubePool 64 64
    $ sudo rbd pool init kubePool

  5. Retrieve the client.kubeAdmin key needed below, and base64-encode the user ID
    $ sudo ceph auth get-or-create-key client.kubeAdmin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow * pool=kubePool' | tr -d '\n' | base64;

    Example output:
    QVFBaXZVSmlrTSt1TkJBQStuOE0reUoyd095azcxK3BQZytqa0E9PQ==

    $ echo "kubeAdmin" | tr -d '\n' | base64;
    a3ViZUFkbWlu
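
    A quick round-trip check that the encoding decodes back as expected:

    $ echo "a3ViZUFkbWlu" | base64 -d
    kubeAdmin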

  6. Create a Secret from the values in step 5

    $ cat > ceph-admin-secret.yaml << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-admin
      namespace: default
    type: kubernetes.io/rbd
    data:
      userID: a3ViZUFkbWlu
    # the client.kubeAdmin key retrieved in step 5
      userKey: QVFBaXZVSmlrTSt1TkJBQStuOE0reUoyd095azcxK3BQZytqa0E9PQ==
    EOF
  7. Create the StorageClass YAML file
    $ cat > ceph-rbd-sc.yaml <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd-sc
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: rbd.csi.ceph.com
    parameters:
       clusterID: af39f080-af03-11ec-9050-fa163e37df68
       pool: kubePool
       imageFeatures: layering
       csi.storage.k8s.io/provisioner-secret-name: ceph-admin
       csi.storage.k8s.io/provisioner-secret-namespace: default
       csi.storage.k8s.io/controller-expand-secret-name: ceph-admin
       csi.storage.k8s.io/controller-expand-secret-namespace: default
       csi.storage.k8s.io/node-stage-secret-name: ceph-admin
       csi.storage.k8s.io/node-stage-secret-namespace: default
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    mountOptions:
       - discard
    EOF

  8. Deploy the Helm chart, then apply ceph-admin-secret.yaml and ceph-rbd-sc.yaml
    $ helm install --namespace ceph-csi-rbd ceph-csi-rbd --values ceph-csi-rbd-values.yaml ceph-csi-rbd
    $ kubectl rollout status deployment ceph-csi-rbd-provisioner -n ceph-csi-rbd
    $ kubectl apply -f ceph-admin-secret.yaml
    $ kubectl apply -f ceph-rbd-sc.yaml

  9. Verify
    $ kubectl get sc
    $ kubectl get po -A
    $ helm status ceph-csi-rbd -n ceph-csi-rbd

  10. Deploy a test Pod and check the PV
    $ cat <<EOF > pv-pod.yaml
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-rbd-sc-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: ceph-rbd-sc
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-rbd-pod-pvc-sc
    spec:
      containers:
      - name:  ceph-rbd-pod-pvc-sc
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /mnt/ceph_rbd
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: ceph-rbd-sc-pvc
    EOF

    $ kubectl apply -f pv-pod.yaml

    # verify
    $ kubectl get pv

    Example output:
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
    pvc-291cc4a8-c2ff-4601-908b-0eab90b2ebe6   2Gi        RWO            Delete           Bound    default/ceph-rbd-sc-pvc   ceph-rbd-sc             1s
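
    Once the pod is Running, writing through the mount confirms the RBD volume is actually usable (a minimal check using the pod name above):

    $ kubectl exec ceph-rbd-pod-pvc-sc -- sh -c 'echo ok > /mnt/ceph_rbd/test && cat /mnt/ceph_rbd/test'
    ok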