Building Ceph Pacific with CephAdm (Ubuntu 18.04) - 2
Ripiad
2022. 4. 7. 01:39
All commands are run as root.
- Adding hosts
# Distribute the Ceph SSH key (from the node where the cluster was bootstrapped to each of the other nodes)
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@"host IP"
# Add the host to the cluster
$ ceph orch host add "hostname" "host IP" "label (optional)"
# Check the result of adding the hosts
$ ceph orch host ls
Example output:
$ ceph orch host ls
HOST    ADDR          LABELS  STATUS
ceph-1  172.30.0.193  _admin
ceph-2  172.30.3.170  OSD
ceph-3  172.30.1.200  OSD
ceph-4  172.30.2.96   OSD
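As a concrete sketch based on the host list above, the key distribution and host registration would look roughly like this (run ssh-copy-id once per node; the OSD label is optional, and ceph-1 already carries the _admin label from bootstrap):
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@172.30.3.170
$ ceph orch host add ceph-2 172.30.3.170 OSD
$ ceph orch host add ceph-3 172.30.1.200 OSD
$ ceph orch host add ceph-4 172.30.2.96 OSD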
- Adding OSDs
$ ceph orch daemon add osd "hostname":"device path"
# Verify
$ ceph -s
$ ceph orch device ls
Example output:
$ ceph -s
  cluster:
    id:     af39f080-af03-11ec-9050-fa163e37df68
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum ceph-1,ceph-2,ceph-3,ceph-4 (age 2d)
    mgr: ceph-1.ppytcz(active, since 25h), standbys: ceph-2.dedeoe
    mds: 1/1 daemons up, 3 standby
    osd: 4 osds: 4 up (since 2d), 4 in (since 2d)

$ ceph orch device ls
HOST    PATH      TYPE  DEVICE ID             SIZE   AVAILABLE  REJECT REASONS
ceph-1  /dev/vdb  hdd   0e8c4f4b-ca72-48c3-8  1073G             Insufficient space (<10 extents) on vgs, LVM detected, locked
ceph-2  /dev/vdb  hdd   382bb362-d64e-4041-9  1073G             Insufficient space (<10 extents) on vgs, LVM detected, locked
ceph-3  /dev/vdb  hdd   3e5cec61-0c30-4d61-a  1073G             Insufficient space (<10 extents) on vgs, LVM detected, locked
ceph-4  /dev/vdb  hdd   c63d1d1f-6a74-4c3a-9  1073G             Insufficient space (<10 extents) on vgs, LVM detected, locked
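With the /dev/vdb devices shown above, the four OSDs would be created with one command per host, as a rough sketch (the device ls output above was captured after the OSDs were created, which is presumably why the devices are already reported as locked with LVM detected rather than available):
$ ceph orch daemon add osd ceph-1:/dev/vdb
$ ceph orch daemon add osd ceph-2:/dev/vdb
$ ceph orch daemon add osd ceph-3:/dev/vdb
$ ceph orch daemon add osd ceph-4:/dev/vdb
Alternatively, cephadm can consume every eligible device automatically with ceph orch apply osd --all-available-devices.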
- Checking the results
$ ceph orch status
$ ceph orch ps
Example output:
$ ceph orch status
Backend: cephadm
Available: Yes
Paused: No

$ ceph orch ps
NAME                  HOST    PORTS        STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
alertmanager.ceph-1   ceph-1  *:9093,9094  running (2d)  9m ago     2d   12.7M    -        0.20.0   0881eb8f169f  169d759e6ebb
crash.ceph-1          ceph-1               running (2d)  9m ago     2d   7436k    -        16.2.7   c92aec2cd894  cf8d4667fc0e
crash.ceph-2          ceph-2               running (2d)  9m ago     2d   7304k    -        16.2.7   c92aec2cd894  220f004b583c
crash.ceph-3          ceph-3               running (2d)  49s ago    2d   10.9M    -        16.2.7   c92aec2cd894  efa886f81ef9
crash.ceph-4          ceph-4               running (2d)  49s ago    2d   7256k    -        16.2.7   c92aec2cd894  276eaf7238a4
grafana.ceph-1        ceph-1  *:3000       running (2d)  9m ago     2d   35.8M    -        6.7.4    557c83e11646  2684c2c21a43
mgr.ceph-1.ppytcz     ceph-1  *:9283       running (1h)  9m ago     2d   506M     -        16.2.7   c92aec2cd894  654bc9d468db
mgr.ceph-2.dedeoe     ceph-2  *:8443,9283  running (2d)  9m ago     2d   380M     -        16.2.7   c92aec2cd894  730a9e27d05f
mon.ceph-1            ceph-1               running (2d)  9m ago     2d   881M     2048M    16.2.7   c92aec2cd894  c2f75db158da
mon.ceph-2            ceph-2               running (2d)  9m ago     2d   888M     2048M    16.2.7   c92aec2cd894  05f31cf6a2d3
mon.ceph-3            ceph-3               running (2d)  49s ago    2d   883M     2048M    16.2.7   c92aec2cd894  d31c6d4115c4
mon.ceph-4            ceph-4               running (2d)  49s ago    2d   891M     2048M    16.2.7   c92aec2cd894  8bade1f43df6
node-exporter.ceph-1  ceph-1  *:9100       running (2d)  9m ago     2d   11.8M    -        0.18.1   e5a616e4b9cf  3debf7ae68eb
node-exporter.ceph-2  ceph-2  *:9100       running (2d)  9m ago     2d   11.8M    -        0.18.1   e5a616e4b9cf  7fe3fbc71085
node-exporter.ceph-3  ceph-3  *:9100       running (2d)  49s ago    2d   12.0M    -        0.18.1   e5a616e4b9cf  37e0338834bb
node-exporter.ceph-4  ceph-4  *:9100       running (2d)  49s ago    2d   11.0M    -        0.18.1   e5a616e4b9cf  4ba70a679bf2
osd.0                 ceph-2               running (2d)  9m ago     2d   212M     4096M    16.2.7   c92aec2cd894  20bf30027ca5
osd.1                 ceph-3               running (2d)  49s ago    2d   226M     4096M    16.2.7   c92aec2cd894  36607cbb6458
osd.2                 ceph-4               running (2d)  49s ago    2d   222M     4096M    16.2.7   c92aec2cd894  c90cf1973629
osd.3                 ceph-1               running (2d)  9m ago     2d   216M     4096M    16.2.7   c92aec2cd894  0fc6bbac67eb
prometheus.ceph-1     ceph-1  *:9095       running (2d)  9m ago     2d   36.5M    -        2.18.1   de242295e225  71d62fcef51e
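Beyond ceph orch ps, a per-service summary and detailed health information can also be checked with the standard commands below:
# Service-level summary (mon, mgr, osd, ...)
$ ceph orch ls
# Detailed explanation when the status is not HEALTH_OK
$ ceph health detail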
- Accessing the Dashboard
Access the Ceph mgr node's IP, log in with the credentials from step 8 ("Ceph cluster configuration") of the previous post, and change the password; the cluster can then be managed through the Dashboard.
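In this deployment the Dashboard is served by the mgr on port 8443 (see *:8443 on the mgr daemon in the ceph orch ps output above). If the exact URL or the initial admin password has been lost, the following standard commands can help; the password and file name below are only placeholder examples:
# Show the active mgr service endpoints, including the Dashboard URL
$ ceph mgr services
# Reset the admin user's password (the new password is read from a file)
$ echo "NewPassword123!" > /root/dashboard_pass.txt
$ ceph dashboard ac-user-set-password admin -i /root/dashboard_pass.txt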
Reference: https://yjwang.tistory.com/119