- Prerequisites
- helm v3 installed on the Kubernetes master
- The ceph-csi-rbd Helm chart
$ helm repo add ceph-csi https://ceph.github.io/csi-charts
$ helm pull ceph-csi/ceph-csi-rbd
$ tar xvaf ceph-csi-rbd-3.5.1.tgz
- Create a Kubernetes namespace
$ kubectl create namespace ceph-csi-rbd
- Write the ceph-csi-rbd-values.yaml needed to deploy the Helm chart
$ cat <<EOF > ceph-csi-rbd-values.yaml
csiConfig:
  # fsid of the Ceph cluster
  - clusterID: "af39f080-af03-11ec-9050-fa163e37df68"
    monitors:
      # Ceph mon hosts, ip:6789
      - "172.30.3.170:6789"
      - "172.30.1.200:6789"
      - "172.30.2.96:6789"
      - "172.30.0.193:6789"
provisioner:
  name: provisioner
  replicaCount: 2
EOF
- Create the OSD pool kubePool on Ceph (run on Ceph-1) and initialize it as an RBD (Rados Block Device) pool
$ sudo ceph osd pool create kubePool 64 64
$ sudo rbd pool init kubePool
- Look up the client.kubeAdmin key needed for the configuration and convert the user ID to base64
$ sudo ceph auth get-or-create-key client.kubeAdmin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow * pool=kubePool' | tr -d '\n' | base64
Example output:
$ sudo ceph auth get-or-create-key client.kubeAdmin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow * pool=kubePool' | tr -d '\n' | base64
QVFBaXZVSmlrTSt1TkJBQStuOE0reUoyd095azcxK3BQZytqa0E9PQ==
$ echo "kubeAdmin" | tr -d '\n' | base64
a3ViZUFkbWlu
- Create a Secret from the values obtained above
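Before pasting the values into the Secret, the base64 strings can be sanity-checked locally by encoding and decoding them again:

```shell
# Encode the Ceph user ID the same way as above (tr strips echo's trailing newline)
echo "kubeAdmin" | tr -d '\n' | base64
# → a3ViZUFkbWlu

# Decode it back to confirm the round trip
echo "a3ViZUFkbWlu" | base64 -d
# → kubeAdmin
```

The same round trip works for the userKey value retrieved from `ceph auth get-or-create-key`.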
$ cat > ceph-admin-secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin
  namespace: default
type: kubernetes.io/rbd
data:
  userID: a3ViZUFkbWlu
  # the client.kubeAdmin key retrieved above
  userKey: QVFBaXZVSmlrTSt1TkJBQStuOE0reUoyd095azcxK3BQZytqa0E9PQ==
EOF
- Create the StorageClass yaml file
$ cat > ceph-rbd-sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: af39f080-af03-11ec-9050-fa163e37df68
  pool: kubePool
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: ceph-admin
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: ceph-admin
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: ceph-admin
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
- Deploy the Helm chart, then apply ceph-admin-secret.yaml and ceph-rbd-sc.yaml
$ helm install --namespace ceph-csi-rbd ceph-csi-rbd --values ceph-csi-rbd-values.yaml ceph-csi-rbd
$ kubectl rollout status deployment ceph-csi-rbd-provisioner -n ceph-csi-rbd
$ kubectl apply -f ceph-admin-secret.yaml
$ kubectl apply -f ceph-rbd-sc.yaml
- Verify
$ kubectl get sc
$ kubectl get po -A
$ helm status ceph-csi-rbd -n ceph-csi-rbd
- Deploy a test Pod and check the PV
$ cat <<EOF > pv-pod.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rbd-sc-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: ceph-rbd-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-pod-pvc-sc
spec:
  containers:
    - name: ceph-rbd-pod-pvc-sc
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /mnt/ceph_rbd
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: ceph-rbd-sc-pvc
EOF
$ kubectl apply -f pv-pod.yaml
# Check
$ kubectl get pv
Example output:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-291cc4a8-c2ff-4601-908b-0eab90b2ebe6 2Gi RWO Delete Bound default/ceph-rbd-sc-pvc ceph-rbd-sc 1s
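Beyond `kubectl get pv`, one way to confirm the RBD volume is actually writable is to write and read a file under the mount path inside the test pod (pod name, mount path, and pool name as in the manifests above; the `rbd ls` check runs on a Ceph node):

```shell
# Write a file into the RBD-backed mount inside the test pod
kubectl exec ceph-rbd-pod-pvc-sc -- sh -c 'echo "rbd write test" > /mnt/ceph_rbd/test.txt'

# Read it back; it should print "rbd write test"
kubectl exec ceph-rbd-pod-pvc-sc -- cat /mnt/ceph_rbd/test.txt

# On the Ceph side, the dynamically provisioned image appears in kubePool
sudo rbd ls kubePool
```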