Install Rook Ceph¶
About this task
Rook Ceph is an orchestrator that provides a containerized solution for Ceph storage, with a specialized Kubernetes Operator to automate the management of the cluster. It is an alternative to the bare metal Ceph storage backend. See https://rook.io/docs/rook/latest-release/Getting-Started/intro/ for more details.
Prerequisites
Complete the following steps before configuring the deployment model and services.
Ensure that there is no ceph-store (bare metal Ceph) storage backend configured on the system:
~(keystone_admin)$ system storage-backend-list
Create a storage backend for Rook Ceph, choosing your deployment model (controller, dedicated, or open) and the desired services (block or ecblock, filesystem, object):
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed
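If the default services are not appropriate, they can be selected explicitly with the --services option. For example (an illustrative combination; choose the services your deployment requires):
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed --services block,filesystem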
Create a host-fs ceph for each host that will run a Rook Ceph monitor (preferably an odd number of hosts):
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
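For example, on a Standard system where controller-0, controller-1, and compute-0 will run monitors (the 20 GB size is illustrative):
~(keystone_admin)$ system host-fs-add controller-0 ceph=20
~(keystone_admin)$ system host-fs-add controller-1 ceph=20
~(keystone_admin)$ system host-fs-add compute-0 ceph=20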
On AIO-DX platforms, it is recommended to add a floating monitor. To add a floating monitor, the inactive controller must be locked.
~(keystone_admin)$ system host-lock controller-1   (with controller-0 as the active controller)
~(keystone_admin)$ system controllerfs-add ceph-float=<size>
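For example, a complete floating monitor sequence might look as follows (20 GB is an illustrative size; the inactive controller is unlocked again after the filesystem is added):
~(keystone_admin)$ system host-lock controller-1
~(keystone_admin)$ system controllerfs-add ceph-float=20
~(keystone_admin)$ system host-unlock controller-1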
Configure OSDs.
Check the UUIDs of the disks on the host that will provide the OSDs:
~(keystone_admin)$ system host-disk-list <hostname>
Note
The OSD placement should follow the chosen deployment model placement rules.
Add the desired disks to the system as OSDs (preferably an even number of OSDs):
~(keystone_admin)$ system host-stor-add <hostname> osd <disk_uuid>
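For example, using one of the disk UUIDs reported by system host-disk-list:
~(keystone_admin)$ system host-stor-add controller-0 osd 9bb0cb55-7eba-426e-a1d3-aba002c7eebc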
For more details on deployment models and services see Deployment Models and Services for Rook Ceph.
Procedure
After configuring the environment based on the selected deployment model, Rook Ceph will be installed automatically.
A few minutes after the application is applied, check the health of the cluster using any Ceph command, for example ceph status:
~(keystone_admin)$ ceph -s
Example (Standard with 3 monitors and 12 OSDs):
~(keystone_admin)$ ceph -s
cluster:
id: 5c8eb4ff-ba21-40f4-91ed-68effc47a08b
health: HEALTH_OK
services:
mon: 3 daemons, quorum a,b,c (age 2d)
mgr: c(active, since 5d), standbys: a, b
mds: 1/1 daemons up, 1 hot standby
osd: 12 osds: 12 up (since 5d), 12 in (since 5d)
data:
volumes: 1/1 healthy
pools: 4 pools, 81 pgs
objects: 133 objects, 353 MiB
usage: 3.8 GiB used, 5.7 TiB / 5.7 TiB avail
pgs: 81 active+clean
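Any standard Ceph command can be used against the cluster; for example, the following give more detail on health, OSD topology, and capacity:
~(keystone_admin)$ ceph health detail
~(keystone_admin)$ ceph osd tree
~(keystone_admin)$ ceph df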
Check that the cluster contains all the required elements. For the cluster to be considered healthy, all pods should be in Running or Completed state. Use the following command to check the Rook Ceph pods on the cluster.
~(keystone_admin)$ kubectl get pod -n rook-ceph
Example (AIO-SX with 1 monitor and 2 OSDs):
~(keystone_admin)$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-67bd9fcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-76847477bf-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6984b58b79-fzhk6 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-5vmg9 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-7ff8bc8bc7-7tqhz 2/2 Running 0 11m
rook-ceph-osd-prepare-controller-0-s4bzz 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
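To list only pods that are not in Running or Succeeded (Completed) state, a field selector can be used; an empty result indicates that all pods are healthy:
~(keystone_admin)$ kubectl get pod -n rook-ceph --field-selector=status.phase!=Running,status.phase!=Succeeded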
Additional Enhancements¶
Add new OSDs on a running cluster¶
To add new OSDs to the cluster, add the new OSD to the platform and reapply the application.
~(keystone_admin)$ system host-stor-add <host> <disk_uuid>
~(keystone_admin)$ system application-apply rook-ceph
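After the reapply completes, the new OSD should be reported as up and in, which can be verified with standard Ceph commands:
~(keystone_admin)$ ceph osd tree
~(keystone_admin)$ ceph -s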
Add a new monitor on a running cluster¶
To add a new monitor to the cluster, add the host-fs ceph to the desired host and reapply the application.
~(keystone_admin)$ system host-fs-add <host> ceph=<size>
~(keystone_admin)$ system application-apply rook-ceph
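After the reapply, confirm that the new monitor has joined the quorum:
~(keystone_admin)$ ceph mon stat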
Enable the Ceph Dashboard¶
To enable the Ceph dashboard, a Helm override must be provided to the application. The dashboard password must be encoded in base64.
Procedure
Create the override file.
$ openssl base64 -e <<< "my_dashboard_passwd"
bXlfZGFzaGJvYXJkX3Bhc3N3ZAo=
$ cat << EOF >> dashboard-override.yaml
cephClusterSpec:
  dashboard:
    enabled: true
    password: "bXlfZGFzaGJvYXJkX3Bhc3N3ZAo="
EOF
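Note that the here-string encodes a trailing newline along with the password. To double-check the encoded value, decode it again:
$ openssl base64 -d <<< "bXlfZGFzaGJvYXJkX3Bhc3N3ZAo="
my_dashboard_passwd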
Update the Helm chart with the created user-override.
~(keystone_admin)$ system helm-override-update --values dashboard-override.yaml rook-ceph rook-ceph-cluster rook-ceph
+----------------+-------------------+
| Property       | Value             |
+----------------+-------------------+
| name           | rook-ceph-cluster |
| namespace      | rook-ceph         |
| user_overrides | cephClusterSpec:  |
|                |   dashboard:      |
|                |     enabled: true |
|                |                   |
+----------------+-------------------+
Apply/reapply the Rook Ceph application.
~(keystone_admin)$ system application-apply rook-ceph
You can access the dashboard using the following address: https://<floating_ip>:30443
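To confirm that the dashboard is exposed, list the services in the rook-ceph namespace (in Rook deployments the dashboard service is typically named rook-ceph-mgr-dashboard):
~(keystone_admin)$ kubectl get svc -n rook-ceph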
Check Rook Ceph pods¶
You can check the pods of the storage cluster using the following command:
kubectl get pod -n rook-ceph
Installation on AIO-SX deployments¶
For example, you can manually install the controller deployment model with one monitor and the block and cephfs services on an AIO-SX deployment.
In this configuration, you can add monitors and OSDs on the AIO-SX node.
On a system with no bare metal Ceph storage backend, add a ceph-rook storage backend using block (RBD) and cephfs (the default option):
$ system storage-backend-add ceph-rook --deployment controller --confirmed
Add the host-fs ceph on the controller. In this example, the host-fs ceph is configured with 20 GB.
$ system host-fs-add controller-0 ceph=20
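The new filesystem can be verified with:
$ system host-fs-list controller-0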
To add OSDs, get the UUID of each disk and run the host-stor-add command.
$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBfb16ffca-28261189 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb    | 2064       | HDD         | 9.765    | 9.765         | Undetermined | VB92c5f4e7-c1884d99 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc    | 2080       | HDD         | 9.765    | 9.765         | Undetermined | VB4390bf35-c0758bd4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
Add all the desired disks as OSDs.
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| ihost_uuid       | 57a7a41e-7805-406d-b204-2736adc8391d             |
| idisk_uuid       | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:19:41.335302+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+

# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 283359b5-d06f-4e73-a58f-e15f7ea41abd
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 1                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part2 |
| journal_node     | /dev/sdc2                                        |
| uuid             | 13baee21-daad-4266-bfdd-b549837d8b88             |
| ihost_uuid       | 51d26b14-412d-4bf8-b2b0-2fba69026459             |
| idisk_uuid       | 283359b5-d06f-4e73-a58f-e15f7ea41abd             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:18:28.107688+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
Check the progress of the application. With a valid configuration of host-fs and OSDs, the application will be applied automatically.
$ system application-show rook-ceph
# or
$ system application-list
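To follow the apply progress continuously, the application list can be polled, for example with watch:
$ watch -n 5 system application-list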
After the application is applied, the pod list of the rook-ceph namespace is as follows:
$ kubectl get pod -n rook-ceph
NAME                                                    READY   STATUS      RESTARTS   AGE
ceph-mgr-provision-2g9pz                                0/1     Completed   0          11m
csi-cephfsplugin-4j7l6                                  2/2     Running     0          11m
csi-cephfsplugin-provisioner-6726cfcc8d-jckzq           5/5     Running     0          11m
csi-rbdplugin-dzdb8                                     2/2     Running     0          11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw              5/5     Running     0          11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m   1/1     Running     0          11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4        1/1     Running     0          10m
rook-ceph-mds-kube-cephfs-a-49c4747797-2snzp            2/2     Running     0          11m
rook-ceph-mds-kube-cephfs-b-6fc4b58b08-fzhk6            2/2     Running     0          11m
rook-ceph-mgr-a-5b86cb5c74-bhp59                        2/2     Running     0          11m
rook-ceph-mon-a-6976b847f4-c4g6s                        2/2     Running     0          11m
rook-ceph-operator-c66b98d94-87t8s                      1/1     Running     0          12m
rook-ceph-osd-0-f56c65f6-kccfn                          2/2     Running     0          11m
rook-ceph-osd-1-rfgr4984-t653f                          2/2     Running     0          11m
rook-ceph-osd-prepare-controller-0-8ge4z                0/1     Completed   0          11m
rook-ceph-provision-zp4d5                               0/1     Completed   0          5m23s
rook-ceph-tools-785644c966-6zxzs                        1/1     Running     0          11m
stx-ceph-manager-64d8db7fc4-tgll8                       1/1     Running     0          11m
stx-ceph-osd-audit-28553058-ms92w                       0/1     Completed   0          2m5s
Installation on AIO-DX deployments¶
For example, you can manually install the controller deployment model with three monitors and the block and cephfs services on an AIO-DX deployment.
In this configuration, you can add monitors and OSDs on the AIO-DX nodes.
On a system with no bare metal Ceph storage backend, add a ceph-rook storage backend using block (RBD) and cephfs (the default option):
$ system storage-backend-add ceph-rook --deployment controller --confirmed
Add the controllerfs ceph-float. In this example, it is configured with 20 GB.
$ system controllerfs-add ceph-float=20
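The new controller filesystem can be verified with:
$ system controllerfs-list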
Add the host-fs ceph on each controller. In this example, each host-fs ceph is configured with 20 GB.
$ system host-fs-add controller-0 ceph=20
$ system host-fs-add controller-1 ceph=20
To add OSDs, get the UUID of each disk and run the host-stor-add command.
$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBfb16ffca-28261189 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb    | 2064       | HDD         | 9.765    | 9.765         | Undetermined | VB92c5f4e7-c1884d99 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc    | 2080       | HDD         | 9.765    | 9.765         | Undetermined | VB4390bf35-c0758bd4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+

$ system host-disk-list controller-1
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB86b2b09b-32be8509 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb    | 2064       | HDD         | 9.765    | 9.765         | Undetermined | VBf454c46a-62d4613b | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc    | 2080       | HDD         | 9.765    | 9.765         | Undetermined | VB5d1b89fd-3003aa5e | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
Add all the desired disks as OSDs.
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| ihost_uuid       | 57a7a41e-7805-406d-b204-2736adc8391d             |
| idisk_uuid       | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:19:41.335302+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+

# system host-stor-add controller-1 #UUID
$ system host-stor-add controller-1 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 1                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 13baee21-daad-4266-bfdd-b549837d8b88             |
| ihost_uuid       | 51d26b14-412d-4bf8-b2b0-2fba69026459             |
| idisk_uuid       | 1e36945e-e0fb-4a72-9f96-290f9bf57523             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:18:28.107688+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
Check the progress of the application. With a valid configuration of monitors and OSDs, the application will be applied automatically.
$ system application-show rook-ceph
# or
$ system application-list
After the application is applied, the pod list of the rook-ceph namespace should be as follows:
$ kubectl get pod -n rook-ceph
NAME                                                     READY   STATUS      RESTARTS      AGE
csi-cephfsplugin-64z6c                                   2/2     Running     0             34m
csi-cephfsplugin-dhsqp                                   2/2     Running     2 (17m ago)   34m
csi-cephfsplugin-gch9g                                   2/2     Running     0             34m
csi-cephfsplugin-pkzg2                                   2/2     Running     0             34m
csi-cephfsplugin-provisioner-5467c6c4f-r2lp6             5/5     Running     0             22m
csi-rbdplugin-2vmzf                                      2/2     Running     2 (17m ago)   34m
csi-rbdplugin-6j69b                                      2/2     Running     0             34m
csi-rbdplugin-6j8jj                                      2/2     Running     0             34m
csi-rbdplugin-hwbl7                                      2/2     Running     0             34m
csi-rbdplugin-provisioner-fd84899c-wwbrz                 5/5     Running     0             22m
mon-float-post-install-sw8qb                             0/1     Completed   0             6m5s
mon-float-pre-install-nfj5b                              0/1     Completed   0             6m40s
rook-ceph-crashcollector-controller-0-6f47c4c9f5-hbbnt   1/1     Running     0             33m
rook-ceph-crashcollector-controller-1-76585f8db8-cb4jl   1/1     Running     0             11m
rook-ceph-exporter-controller-0-c979d9977-kt7tx          1/1     Running     0             33m
rook-ceph-exporter-controller-1-86bc859c4-q4mxd          1/1     Running     0             11m
rook-ceph-mds-kube-cephfs-a-55978b78b9-dcbtf             2/2     Running     0             22m
rook-ceph-mds-kube-cephfs-b-7b8bf4549f-thr7g             2/2     Running     2 (12m ago)   33m
rook-ceph-mgr-a-649cf9c487-vfs65                         3/3     Running     0             17m
rook-ceph-mgr-b-d54c5d7cb-qwtnm                          3/3     Running     0             33m
rook-ceph-mon-a-5cc7d56767-64dbd                         2/2     Running     0             6m30s
rook-ceph-mon-b-6cf5b79f7f-skrtd                         2/2     Running     0             6m31s
rook-ceph-mon-float-85c4cbb7f9-k7xwj                     2/2     Running     0             6m27s
rook-ceph-operator-69b5674578-lmmdl                      1/1     Running     0             22m
rook-ceph-osd-0-847f6f7dd9-6xlln                         2/2     Running     0             16m
rook-ceph-osd-1-7cc87df4c4-jlpk9                         2/2     Running     0             33m
rook-ceph-osd-prepare-controller-0-4rcd6                 0/1     Completed   0             22m
rook-ceph-tools-84659bcd67-r8qbp                         1/1     Running     0             22m
stx-ceph-manager-689997b4f4-hk6gh                        1/1     Running     0             22m
Installation on Standard deployments¶
For example, you can install the dedicated deployment model with five monitors and the ecblock and cephfs services on a Standard deployment.
In this configuration, monitors are added on five hosts, and, to fit the dedicated model, OSDs are added on workers only. In this example, compute-1 and compute-2 hold the cluster OSDs.
On a system with no bare metal Ceph storage backend, add a ceph-rook storage backend using ecblock and cephfs. To fit the dedicated model, the OSDs must be placed on dedicated workers only:
$ system storage-backend-add ceph-rook --deployment dedicated --confirmed --services ecblock,filesystem
Add the host-fs ceph on the nodes that will host the mon, mgr, and mds daemons. In this case, five hosts have the host-fs ceph configured.
$ system host-fs-add controller-0 ceph=20
$ system host-fs-add controller-1 ceph=20
$ system host-fs-add compute-0 ceph=20
$ system host-fs-add compute-1 ceph=20
$ system host-fs-add compute-2 ceph=20
To add OSDs, get the UUID of each disk and run the host-stor-add command.
$ system host-disk-list compute-1
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBfb16ffca-28261189 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb    | 2064       | HDD         | 9.765    | 9.765         | Undetermined | VB92c5f4e7-c1884d99 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc    | 2080       | HDD         | 9.765    | 9.765         | Undetermined | VB4390bf35-c0758bd4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+

$ system host-disk-list compute-2
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB86b2b09b-32be8509 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb    | 2064       | HDD         | 9.765    | 9.765         | Undetermined | VBf454c46a-62d4613b | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc    | 2080       | HDD         | 9.765    | 9.765         | Undetermined | VB5d1b89fd-3003aa5e | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
Add all the desired disks as OSDs. In this example, only one OSD is added on each of compute-1 and compute-2.
# system host-stor-add compute-1 #UUID
$ system host-stor-add compute-1 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| ihost_uuid       | 57a7a41e-7805-406d-b204-2736adc8391d             |
| idisk_uuid       | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:19:41.335302+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+

# system host-stor-add compute-2 #UUID
$ system host-stor-add compute-2 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 1                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 13baee21-daad-4266-bfdd-b549837d8b88             |
| ihost_uuid       | 51d26b14-412d-4bf8-b2b0-2fba69026459             |
| idisk_uuid       | 1e36945e-e0fb-4a72-9f96-290f9bf57523             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:18:28.107688+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
Check the progress of the application. With a valid configuration of host-fs and OSDs, the application will be applied automatically.
$ system application-show rook-ceph
# or
$ system application-list
After the application is applied, the pod list of the rook-ceph namespace should be as follows:
$ kubectl get pod -n rook-ceph
NAME                                                    READY   STATUS      RESTARTS   AGE
ceph-mgr-provision-2g9pz                                0/1     Completed   0          11m
csi-cephfsplugin-4j7l6                                  2/2     Running     0          11m
csi-cephfsplugin-provisioner-6726cfcc8d-jckzq           5/5     Running     0          11m
csi-rbdplugin-dzdb8                                     2/2     Running     0          11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw              5/5     Running     0          11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m   1/1     Running     0          11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4        1/1     Running     0          10m
rook-ceph-mds-kube-cephfs-a-49c4747797-2snzp            2/2     Running     0          11m
rook-ceph-mds-kube-cephfs-b-6fc4b58b08-fzhk6            2/2     Running     0          11m
rook-ceph-mds-kube-cephfs-c-12f4b58b1e-fzhk6            2/2     Running     0          11m
rook-ceph-mds-kube-cephfs-d-a6s4d6a8w4-4d64g            2/2     Running     0          11m
rook-ceph-mgr-a-5b86cb5c74-bhp59                        2/2     Running     0          11m
rook-ceph-mgr-b-wd12af64t4-dw62i                        2/2     Running     0          11m
rook-ceph-mgr-c-s684gs86g4-62srg                        2/2     Running     0          11m
rook-ceph-mgr-d-68r4864f64-8a4a6                        2/2     Running     0          11m
rook-ceph-mgr-e-as5d4we6f4-6aef4                        2/2     Running     0          11m
rook-ceph-mon-a-6976b847f4-c4g6s                        2/2     Running     0          11m
rook-ceph-mon-b-464fc6e8a3-fd864                        2/2     Running     0          11m
rook-ceph-mon-c-468fc68e4c-6w8sa                        2/2     Running     0          11m
rook-ceph-mon-d-8fc5686c4d-5v1w6                        2/2     Running     0          11m
rook-ceph-mon-e-21f3c12e3a-6s7qq                        2/2     Running     0          11m
rook-ceph-operator-c66b98d94-87t8s                      1/1     Running     0          12m
rook-ceph-osd-0-f56c65f6-kccfn                          2/2     Running     0          11m
rook-ceph-osd-1-7ff8bc8bc7-7tqhz                        2/2     Running     0          11m
rook-ceph-osd-prepare-compute-1-8ge4z                   0/1     Completed   0          11m
rook-ceph-osd-prepare-compute-2-s32sz                   0/1     Completed   0          11m
rook-ceph-provision-zp4d5                               0/1     Completed   0          5m23s
rook-ceph-tools-785644c966-6zxzs                        1/1     Running     0          11m
stx-ceph-manager-64d8db7fc4-tgll8                       1/1     Running     0          11m
stx-ceph-osd-audit-28553058-ms92w                       0/1     Completed   0          2m5s