Rook Ceph Tiers and Chassis

About this task

This section describes the Tiers and Chassis feature in Rook Container-based Ceph, including usage instructions.

Tiers and Chassis

Tiers and Chassis introduce a new way to distribute and organize OSDs in Rook Ceph.

Each tier contains one or more chassis. Chassis are created automatically inside each tier, depending on the replication factor and the deployment model.

Each chassis should fulfill the replication rules by itself. For example, if replication is 2 on a controller deployment model (where the failure domain is host), fulfilling the replication rules requires one OSD on each controller deployed inside each tier.
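
As an illustration of that example, the CRUSH hierarchy of such a tier would look roughly like the sketch below, with one chassis per controller and one OSD on each (names follow the pattern of the real ceph osd tree output shown later in this section):

    root storage-tier
        chassis group-0
            host controller-0
                osd.0
        chassis group-1
            host controller-1
                osd.1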

For more information on deployment models and failure domains, see Deployment Models and Services and Failure domain for deployment models.

The default tier is the “storage” tier. If multiple tiers are declared, you must specify the target tier when placing a new OSD.
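
For example, when only the default tier exists, an OSD can be added without naming a tier and lands in the “storage” tier; once a custom tier exists, pass the target tier explicitly (the full procedure follows below):

    # Default tier only: the OSD is placed in the "storage" tier
    $ system host-stor-add <host> <disk uuid>

    # Multiple tiers: name the target tier explicitly
    $ system host-stor-add <host> <disk uuid> --tier-uuid <tier-uuid>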

To add a custom tier

  1. List the clusters to check the cluster name and UUID; the name is used in the tier commands that follow.

    $ system cluster-list
    +--------------------------------------+--------------+-----------+-------------------+------------------+
    | uuid                                 | cluster_uuid | type      | name              | deployment_model |
    +--------------------------------------+--------------+-----------+-------------------+------------------+
    | 4a593906-1509-4617-93aa-1718a0c2a2aa | None         | ceph-rook | ceph_rook_cluster | None             |
    +--------------------------------------+--------------+-----------+-------------------+------------------+
    
  2. Add a new tier.

    $ system storage-tier-add ceph_rook_cluster <tier name>
    

    For example:

    $ system storage-tier-add ceph_rook_cluster test
    +--------------+--------------------------------------+
    | Property     | Value                                |
    +--------------+--------------------------------------+
    | uuid         | cb496c8d-3091-425f-8c2f-88bf9e993b90 |
    | name         | test                                 |
    | type         | ceph                                 |
    | status       | defined                              |
    | backend_uuid | 09eef38b-7948-42ee-a25f-afd9c92afd6a |
    | cluster_uuid | 4a593906-1509-4617-93aa-1718a0c2a2aa |
    | OSDs         | []                                   |
    | created_at   | 2026-01-08T00:44:36.741742+00:00     |
    | updated_at   | None                                 |
    +--------------+--------------------------------------+
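
    To confirm, list the tiers; a new tier starts in the defined status and moves to in-use once an OSD is assigned to it:

    $ system storage-tier-list ceph_rook_cluster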
    

To add an OSD to a specific tier

Once a custom tier has been added, the OSD configuration must indicate the target tier.

  1. List the tiers to check the UUID of the target tier.

    $ system storage-tier-list ceph_rook_cluster
    +--------------------------------------+---------+---------+--------------------------------------+
    | uuid                                 | name    | status  | backend_using                        |
    +--------------------------------------+---------+---------+--------------------------------------+
    | a1994874-3a01-4fb0-a9de-aac07d289ded | storage | in-use  | 09eef38b-7948-42ee-a25f-afd9c92afd6a |
    | cb496c8d-3091-425f-8c2f-88bf9e993b90 | test    | defined | 09eef38b-7948-42ee-a25f-afd9c92afd6a |
    +--------------------------------------+---------+---------+--------------------------------------+
    
  2. List disks to check the UUID. Only a disk with free space (a nonzero available_gib) can back a new OSD; in the example below, /dev/sdc is the available disk.

    $ system host-disk-list <host>
    +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
    | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
    +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
    | 7f2b3c01-3a66-45c9-975f-847d838803c6 | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB8731127c-4cf53050 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
    | b6f6c3e3-7e28-4a7f-80d5-383f760a4a5f | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VB28b002a7-6a49cb67 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
    | c1708b18-c850-41e1-a34c-7b95ceaa08c9 | /dev/sdc    | 2080       | HDD         | 9.765    | 9.765         | Undetermined | VB538f1b7c-52e90c94 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
    +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
    
  3. Add a new OSD.

    $ system host-stor-add <host> <disk uuid> --tier-uuid <tier-uuid>

    For example:

    $ system host-stor-add controller-0 c1708b18-c850-41e1-a34c-7b95ceaa08c9 --tier-uuid cb496c8d-3091-425f-8c2f-88bf9e993b90
    +------------------+--------------------------------------------------+
    | Property         | Value                                            |
    +------------------+--------------------------------------------------+
    | osdid            | None                                             |
    | function         | osd                                              |
    | state            | configuring-with-app                             |
    | journal_location | 013a5f71-660b-408d-b1b8-12ce1040d15a             |
    | journal_size_gib | 1024                                             |
    | journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part2 |
    | journal_node     | /dev/sdc2                                        |
    | uuid             | 013a5f71-660b-408d-b1b8-12ce1040d15a             |
    | ihost_uuid       | b9d807d7-d7c5-439e-9096-1f5ab1cae3a2             |
    | idisk_uuid       | c1708b18-c850-41e1-a34c-7b95ceaa08c9             |
    | tier_uuid        | cb496c8d-3091-425f-8c2f-88bf9e993b90             |
    | tier_name        | test                                             |
    | created_at       | 2026-01-08T00:52:21.088510+00:00                 |
    | updated_at       | None                                             |
    +------------------+--------------------------------------------------+
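
    To verify the placement, list the host's storage devices; the new OSD should report the custom tier as its tier name:

    $ system host-stor-list controller-0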
    

To remove an OSD from a tier

Follow the procedure in Rook Ceph OSD Management.

To remove a tier

Before a tier can be removed, all OSDs associated with it must be removed from the Rook Ceph cluster.

  1. Check the tier tree.

    $ ceph osd tree
    ID   CLASS  WEIGHT   TYPE NAME                       STATUS  REWEIGHT  PRI-AFF
    -13         0.00899  root test-tier
    -12         0.00899      chassis group-0-test
    -11         0.00899          host controller-0-test
      4    hdd  0.00899              osd.4                   up   1.00000  1.00000
     -2         0.02698  root storage-tier
     -4         0.00899      chassis group-0
     -7         0.00899          host controller-0
      0    hdd  0.00899              osd.0                   up   1.00000  1.00000
      2    hdd  0.00899              osd.2                   up   1.00000  1.00000
     -9         0.01799      chassis group-1
     -3         0.01799          host controller-1
      1    hdd  0.00899              osd.1                   up   1.00000  1.00000
      3    hdd  0.00899              osd.3                   up   1.00000  1.00000
    
  2. Remove all OSDs from the tier as described in Rook Ceph OSD Management.

  3. Remove the tier.

    $ system storage-tier-delete ceph_rook_cluster <tier name>
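
    For example, once the “test” tier created earlier holds no OSDs:

    $ system storage-tier-delete ceph_rook_cluster test

    After the deletion, the tier should no longer appear in the system storage-tier-list or ceph osd tree output.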