Deployment Models and Services for Rook Ceph¶
The deployment model is the topology strategy that defines the storage backend capabilities of the deployment. It dictates how the storage solution is laid out by defining rules for the placement of the storage cluster elements.
Available Deployment Models¶
Deployment Model Rules¶
Each deployment model works with different deployment strategies and rules to fit different needs. Choose one of the following models according to the demands of your cluster:
- Controller Model (default)
The OSDs must be added only on hosts with the controller personality.
The replication factor can be configured up to size 3.
- Dedicated Model
The OSDs must be added only on hosts with the worker personality.
The replication factor can be configured up to size 3.
- Open Model
The OSD placement does not have any limitation.
The replication factor does not have any limitation.
Important
The Open deployment model offers greater flexibility in configuration. However, users must thoroughly understand the implications of their settings, as they are solely responsible for ensuring proper configuration.
Change the Deployment Model¶
The deployment model can be changed as long as the system complies with the rules described above.
To change to another deployment model, execute the following command:
~(keystone_admin)$ system storage-backend-modify ceph-rook-store -d <desired_deployment_model>
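For example, to switch an existing cluster to the open deployment model (assuming the current OSD and monitor placement already complies with the rules of the target model):
~(keystone_admin)$ system storage-backend-modify ceph-rook-store -d open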
Replication Factor¶
The replication factor is the number of copies of each piece of data spread across the cluster to provide redundancy.
You can change the replication of an existing Rook Ceph storage backend with the following command:
~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=<size>
Possible replication factors per deployment model and platform:
- Simplex Controller Model:
Default: 1, Max: 3
- Simplex Open Model:
Default: 1, Max: Any
- Duplex Controller Model:
Default: 2, Max: 2
- Duplex Open Model:
Default: 2, Max: Any
- Duplex+ or Standard Controller Model:
Default: 2, Max: 3
- Duplex+ or Standard Dedicated Model:
Default: 2, Max: 3
- Duplex+ or Standard Open Model:
Default: 2, Max: Any
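For example, to raise the replication factor to its maximum of 3 on a Standard deployment using the controller model:
~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=3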
Minimum Replication Factor¶
The minimum replication factor is the least number of copies that each piece of data has spread across the cluster to provide redundancy.
You can assign any number smaller than the replication factor to this parameter. The default value is (replication - 1).
Note
When the replication factor changes, the minimum replication will be readjusted automatically.
You can change the minimum replication of an existing Rook Ceph storage backend with the command:
~(keystone_admin)$ system storage-backend-modify ceph-rook-store min_replication=<size>
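For example, with a replication factor of 3, the minimum replication could be set to 2 (any value smaller than the replication factor is accepted, as noted above):
~(keystone_admin)$ system storage-backend-modify ceph-rook-store min_replication=2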
Monitor, Host-fs and controllerfs¶
Ceph monitors are the central nervous system of the Ceph cluster, ensuring that all components are aware of each other and that data is stored and accessed reliably. To properly set up the environment for Rook Ceph monitors, some filesystems are needed: a host-fs for each fixed monitor and a controllerfs for the floating monitor.
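To review the filesystems already configured, the standard system CLI listing commands can be used (shown here as an illustrative check; replace <hostname> with the target host):
~(keystone_admin)$ system host-fs-list <hostname>
~(keystone_admin)$ system controllerfs-list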
Note
All changes to host-fs and controllerfs require a reapply of the application to properly propagate the modifications to the Rook Ceph cluster.
Functions¶
The functions parameter contains the Ceph cluster functions of a given host. A host-fs can have the monitor and osd functions; a controllerfs can only have the monitor function.
To modify the functions of a host-fs, the complete list of desired functions must be provided.
Examples:
host-fs:
# Only monitor
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=monitor
# Only osd
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd
# No function
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=
controllerfs:
# Only monitor
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=monitor
# No function
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=
Monitor Count¶
Monitors (mons) are allocated on all the hosts that have a host-fs ceph with the monitor capability on it.
Possible Monitor Count on Deployment Models for Platforms¶
- Simplex:
Min: 1, Max: 1
- Duplex:
Min: 1, Recommended: 3 (using floating monitor), Max: 3 (using floating monitor)
- Duplex+ or Standard:
Min: 1, Recommended: 3, Max: 5
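To check how many monitors are currently allocated, you can query Ceph directly (an illustrative check, assuming the ceph client is available from a controller shell):
~(keystone_admin)$ ceph mon stat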
Fixed Monitors¶
A fixed monitor is the regular monitor associated with a given host. Each fixed monitor requires a host-fs ceph properly set up and configured on the host.
Add a monitor
To add a monitor, the host-fs ceph must be created or have the ‘monitor’ function added to its capabilities.
When the host has no OSD registered on the platform, add a host-fs ceph on every node intended to house a monitor. Creating a host-fs this way automatically sets the monitor function. To create a host-fs ceph, run the command:
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
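For instance, on a hypothetical host named worker-0 (the hostname and the size value are illustrative; choose a size that fits your capacity planning):
~(keystone_admin)$ system host-fs-add worker-0 ceph=20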
When there are OSDs registered on a host, add the ‘monitor’ function to the existing host-fs.
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd,monitor
After adding the ‘monitor’ function, reapply the application.
~(keystone_admin)$ system application-apply rook-ceph
Remove a monitor
To remove a monitor, the ‘monitor’ function must be removed from the capabilities list of the host-fs ceph.
When the host has no OSD registered on the platform, remove the ‘monitor’ function from the host-fs ceph.
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=
When there are OSDs registered on the same host, only the ‘monitor’ function should be removed from the host-fs ceph capabilities list.
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd
After the removal of the ‘monitor’ function, reapply the application.
~(keystone_admin)$ system application-apply rook-ceph
Floating Monitor (only in Duplex)¶
A floating monitor is supported and recommended on AIO-DX platforms. The monitor roams and is always allocated on the active controller, providing redundancy and improving stability.
Add the floating monitor
Note
Lock the standby controller before adding the controllerfs ceph-float to the platform.
Lock the standby controller.
# Considering controller-0 as the active controller
~(keystone_admin)$ system host-lock controller-1
Add the controllerfs with the standby controller locked.
~(keystone_admin)$ system controllerfs-add ceph-float=<size>
Unlock the standby controller.
# Considering controller-0 as the active controller
~(keystone_admin)$ system host-unlock controller-1
Reapply the Rook Ceph application, with the standby controller unlocked and available.
~(keystone_admin)$ system application-apply rook-ceph
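Once the reapply completes, an illustrative way to confirm that the monitor pods, including the floating one, are running is to list them by the standard Rook monitor label:
~(keystone_admin)$ kubectl get pods -n rook-ceph -l app=rook-ceph-mon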
Remove the floating monitor
To remove the floating monitor, the ‘monitor’ function must be removed from the capabilities list of the controllerfs ceph-float.
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=
After the removal of the ‘monitor’ function, reapply the Rook Ceph application.
~(keystone_admin)$ system application-apply rook-ceph
Migration between AIO-Duplex and AIO-Duplex+
- Migrating to AIO-Duplex+
To migrate from AIO-Duplex to AIO-Duplex+, the floating monitor must be removed before the migration, and a new fixed monitor should be added on a worker after the migration is done.
- Migrating to AIO-Duplex
To migrate from AIO-Duplex+ to AIO-Duplex, the fixed monitor should be removed from the cluster before the migration, and a floating monitor should be added after the migration is done.
Services¶
Services are the storage types (or classes) that provide storage to each pod through a mount or storage space allocation.
Available Services¶
There are four possible services compatible with Rook Ceph. You can combine them, following the rules below:
- block (default)
Cannot be deployed together with ecblock. Enables the block service in Rook, using Ceph RBD.
- ecblock
Cannot be deployed together with block. Enables the ecblock service in Rook, using Ceph RBD.
- filesystem (default)
Enables the Ceph filesystem, using CephFS.
- object
Enables the Ceph object store (RGW).
Important
A Service cannot be removed or replaced. Services can only be added.
Add New Services¶
To add a new service to the storage-backend, first choose a service compatible with the rules described above.
Get the list of the current services of the storage-backend.
~(keystone_admin)$ system storage-backend-show ceph-rook-store
Add the desired service to the list.
~(keystone_admin)$ system storage-backend-modify ceph-rook-store --services=<previous_list>,<new_service>
Reapply the Rook Ceph application.
~(keystone_admin)$ system application-apply rook-ceph
For example, in a storage-backend with the service list block,filesystem, only object can be added as a service:
~(keystone_admin)$ system storage-backend-modify ceph-rook-store --services=block,filesystem,object
Services Parameterization for the Open Model¶
In the ‘open’ deployment model, no specific configurations are enforced.
You are responsible for customizing settings based on your specific needs. To update configurations, a Helm override is required.
When applying a helm-override update, list-type values are completely replaced, not incrementally updated.
For example, modifying cephFileSystems (or cephBlockPools, cephECBlockPools, cephObjectStores) via Helm override will overwrite the entire entry.
This is an example of how to change a parameter, using failureDomain, for CephFS and RBD:
# Get the current CRUSH rule information
ceph osd pool get kube-cephfs-data crush_rule
# Get the current default values
helm get values -n rook-ceph rook-ceph-cluster -o yaml | sed -n '/^cephFileSystems:/,/^[[:alnum:]_-]*:/p;' | sed '$d' > cephfs_overrides.yaml
# Update the failure domain
sed -i 's/failureDomain: osd/failureDomain: host/g' cephfs_overrides.yaml
# Get the current user override values ("combined overrides" is what will be deployed):
system helm-override-show rook-ceph rook-ceph-cluster rook-ceph
# Set the new overrides
system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --reuse-values --values cephfs_overrides.yaml
# Get the updated user override values
system helm-override-show rook-ceph rook-ceph-cluster rook-ceph
# Apply the application
system application-apply rook-ceph
# Confirm the current crush rule information:
ceph osd pool get kube-cephfs-data crush_rule
# Retrieve the current values and extract the cephBlockPools section:
helm get values -n rook-ceph rook-ceph-cluster -o yaml | sed -n '/^cephBlockPools:/,/^[[:alnum:]_-]*:/p;' | sed '$d' > rbd_overrides.yaml
# Modify the failureDomain parameter from osd to host in the rbd_overrides.yaml file:
sed -i 's/failureDomain: osd/failureDomain: host/g' rbd_overrides.yaml
# Set the update configuration:
system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --reuse-values --values rbd_overrides.yaml
# Apply the application
system application-apply rook-ceph
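For reference, the file extracted by the commands above contains only the section being overridden. A minimal sketch of what cephfs_overrides.yaml might look like after the edit is shown below; the filesystem name, pool layout, and replication sizes are illustrative and will match whatever helm get values returned on your system:
# Illustrative sketch only; actual content comes from 'helm get values'
cephFileSystems:
- name: kube-cephfs
  spec:
    metadataPool:
      replicated:
        size: 2
    dataPools:
    - failureDomain: host   # changed from osd by the sed edit above
      replicated:
        size: 2
    metadataServer:
      activeCount: 1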
Disable Helm Chart¶
Important
Do not disable any of the Rook Ceph Helm charts using system helm-chart-attribute-modify as this may result in a broken installation.
Ceph Health Status Filter¶
Some Ceph health statuses can be filtered to avoid generating alarms. The detection of a particular Health error or warning can be disabled.
Important
Disabling the detection of any health error or warning can prevent the system from generating alarms, detecting issues, and generating logs. Use this feature at your discretion. It is recommended to use it temporarily during an analysis or procedure and then revert to the default empty values.
There are two filters: health_filters_for_ignore, which filters at any time (always active), and health_filters_for_upgrade, which applies the filter only during an upgrade of Rook Ceph.
To apply the always-on filter (health_filters_for_ignore), use the following procedure.
Check the names of any Ceph health issues that you might want to filter out.
~(keystone_admin)$ ceph health detail
Consult the list of the Ceph health issues currently ignored.
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed -n '/ceph-manager-config.yaml:/,/^[^ ]/p' | sed -n 's/^[ ]*health_filters_for_ignore:[ ]*//p'
Edit the ConfigMap, adding the names of all the Ceph health issues, comma separated and delimited by [], to the health_filters_for_ignore list.
# Examples of useful health statuses to ignore: MON_DOWN, OSD_DOWN, BLUESTORE_SLOW_OP_ALERT
~(keystone_admin)$ health_filters='[<ceph_health_status_1>,<ceph_health_status_2>]'
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed "s/^\(\s*health_filters_for_ignore:\s*\).*/\1$health_filters/" | kubectl apply -f -
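For example, to ignore the MON_DOWN and OSD_DOWN statuses mentioned in the comment above:
~(keystone_admin)$ health_filters='[MON_DOWN,OSD_DOWN]'
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed "s/^\(\s*health_filters_for_ignore:\s*\).*/\1$health_filters/" | kubectl apply -f -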
Restart the stx-ceph-manager pod.
~(keystone_admin)$ kubectl rollout restart -n rook-ceph deployment stx-ceph-manager
To use the upgrade-only filter (health_filters_for_upgrade), follow the procedure above, replacing the consult and edit commands with the following versions:
Check the names of any Ceph health issues that you might want to filter out.
~(keystone_admin)$ ceph health detail
Consult the list of the Ceph health issues currently ignored.
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed -n '/ceph-manager-config.yaml:/,/^[^ ]/p' | sed -n 's/^[ ]*health_filters_for_upgrade:[ ]*//p'
Edit the ConfigMap, adding the names of all the Ceph health issues, comma separated and delimited by [], to the health_filters_for_upgrade list.
~(keystone_admin)$ health_filters='[<ceph_health_status_1>,<ceph_health_status_2>]'
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed "s/^\(\s*health_filters_for_upgrade:\s*\).*/\1$health_filters/" | kubectl apply -f -
Restart the stx-ceph-manager pod.
~(keystone_admin)$ kubectl rollout restart -n rook-ceph deployment stx-ceph-manager