NetApp External Storage Prerequisites¶
About this task
Ensure the following prerequisites are met before deploying NetApp external storage for OpenStack:
StarlingX is installed and operational (see Prerequisites).
NetApp Trident has been installed and configured with proper network connectivity to the NetApp cluster and all required NetApp backends, including the recommended NetApp NFS backend for Nova Ephemeral volumes and Cinder backups (see Trident Prerequisites).
Required NetApp FlexVol volumes are created for each service and follow the proper naming pattern for NetApp iSCSI and FC (see NetApp Prerequisites).
The core OpenStack services, including Cinder, Nova, and Glance, are configured (see OpenStack Service Configuration).
The NetApp TLS certificate is available at /var/opt/openstack/certs/netapp.pem (see NetApp TLS Certificate Setup).
Connectivity to the NetApp Data and Management LIFs has been validated from all nodes (see Validating your NFS configuration).
StarlingX Requirements¶
Allocate 200 GB each to the kubelet and var filesystems to support
large Glance images. Verify filesystem allocation using the following command:
$ system host-fs-list controller-0
+--------------------------------------+---------+------+----------------+--------+-------------------+
| UUID | FS Name | Size | Logical Volume | State | Capabilities |
| | | in | | | |
| | | GiB | | | |
+--------------------------------------+---------+------+----------------+--------+-------------------+
| <UUID> | kubelet | 200 | kubelet-lv | In-Use | {'functions': []} |
| <UUID> | var | 200 | var-lv | In-Use | {'functions': []} |
+--------------------------------------+---------+------+----------------+--------+-------------------+
Trident Prerequisites¶
Configure volume expansion and sanType for the Trident netapp_k8s_storageclasses and/or netapp_backends. Refer to the following netapp-config-overrides.yml example, which covers all supported NetApp backend types:

trident_namespace: trident
ansible_become_pass: <sysadmin_password>
netapp_k8s_storageclasses:
  - metadata:
      name: netapp-nfs
    provisioner: netapp.io/trident
    allowVolumeExpansion: true
    parameters:
      backendType: "ontap-nas"
    mountOptions: ["rw", "hard", "intr", "bg", "vers=4", "proto=tcp", "timeo=600", "rsize=65536", "wsize=65536"]
  - metadata:
      name: netapp-iscsi
    provisioner: csi.trident.netapp.io
    allowVolumeExpansion: true
    parameters:
      backendType: "ontap-san"
      sanType: iscsi
  - metadata:
      name: netapp-fc
    provisioner: csi.trident.netapp.io
    allowVolumeExpansion: true
    parameters:
      backendType: "ontap-san"
      sanType: fcp
netapp_k8s_snapshotstorageclasses:
  - metadata:
      name: csi-snapclass
    driver: csi.trident.netapp.io
    deletionPolicy: Delete
netapp_backends:
  - metadata:
      name: "netapp-nfs-backend"
    spec:
      version: 1
      storageDriverName: "ontap-nas"
      backendName: "netapp-nfs-backend"
      managementLIF: "<NFS MANAGEMENT LIF IP>"
      dataLIF: "<NFS DATA LIF IP>"
      svm: "<NFS SVM NAME>"
      credentials:
        name: backend-tbc-secret
  - metadata:
      name: "netapp-iscsi-backend"
    spec:
      version: 1
      storageDriverName: "ontap-san"
      sanType: iscsi
      backendName: "netapp-iscsi-backend"
      managementLIF: "<ISCSI MANAGEMENT LIF IP>"
      dataLIF: "<ISCSI DATA LIF IP>"
      svm: "<ISCSI SVM NAME>"
      credentials:
        name: backend-tbc-secret
  - metadata:
      name: "netapp-fc-backend"
    spec:
      version: 1
      storageDriverName: "ontap-san"
      sanType: fcp
      backendName: "netapp-fc-backend"
      managementLIF: "<FC MANAGEMENT LIF IP>"
      dataLIF: "<FC DATA LIF IP>"
      svm: "<FC SVM NAME>"
      credentials:
        name: backend-tbc-secret
tbc_secret:
  - metadata:
      name: backend-tbc-secret
    type: Opaque
    data:
      username: "<USERNAME BASE64 ENCODED>"
      password: "<PASSWORD BASE64 ENCODED>"
Note
If any of netapp-iscsi, netapp-fc, and/or netapp-nfs are not required, remove
the corresponding entries from the netapp_k8s_storageclasses and
netapp_backends lists.
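The username and password in the tbc_secret data fields must be base64-encoded. As a quick sketch (the "admin"/"s3cret" credentials below are placeholders, not real values), the encoded strings can be produced as follows:

```shell
# Base64-encode placeholder credentials for the tbc_secret data fields.
# printf (not echo) avoids encoding a trailing newline into the value.
username_b64=$(printf '%s' "admin" | base64)
password_b64=$(printf '%s' "s3cret" | base64)
echo "username: \"$username_b64\""   # username: "YWRtaW4="
echo "password: \"$password_b64\""   # password: "czNjcmV0"
```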
Run the Ansible playbook to install the NetApp overrides:
$ sudo ansible-playbook \
    /usr/share/ansible/stx-ansible/playbooks/install_netapp_backend.yml \
    -e @netapp-config-overrides.yml
Verify available storage backends:
$ kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
netapp-nas-backend   csi.trident.netapp.io   Delete          Immediate           true                   3d21h
netapp-san           csi.trident.netapp.io   Delete          Immediate           true                   3d21h
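As a hedged sketch, the presence check above can be scripted against a captured `kubectl get sc -o name` listing; the storage class names below are hypothetical examples:

```shell
# Captured output of `kubectl get sc -o name` (hypothetical names).
sc_list="storageclass.storage.k8s.io/netapp-nas-backend
storageclass.storage.k8s.io/netapp-san"

# Report whether each required storage class is present in the listing.
for sc in netapp-nas-backend netapp-san; do
  if printf '%s\n' "$sc_list" | grep -q "/$sc\$"; then
    echo "$sc: present"
  else
    echo "$sc: MISSING"
  fi
done
```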
NetApp Prerequisites¶
Install and configure NetApp Trident as the storage backend. For more information, see Configure an External NetApp Deployment as the Storage Backend.
ONTAP Version¶
The minimum supported NetApp ONTAP version is 9.8. This version is required to ensure compatibility with:
NetApp Trident CSI (full CSI feature support)
FlexClone operations used by Cinder snapshots
ALUA multipath used by iSCSI and FC backends
For detailed compatibility between Trident versions and ONTAP releases, refer to the NetApp Trident Support Matrix.
Based on the deployment requirements, provision one or more FlexVol volumes on the NetApp system for the following OpenStack storage services:
Cinder volumes
Cinder volume backups
Nova ephemeral storage (ephemeral volumes)
Note
Glance does not require you to provision any FlexVols in advance for image
storage. When you configure Glance to use Cinder as the image store
(recommended), it stores images in Cinder volume FlexVols. When you
configure Glance to use PVC, Trident automatically provisions the
corresponding PersistentVolume FlexVol.
When you configure Cinder volumes with netapp-iscsi or netapp-fc, ensure
that the FlexVol names match the regex pattern defined by Cinder in
netapp_pool_name_search_pattern for each backend. By default, the
application uses the following regex pattern:
conf:
backends:
<netapp-iscsi|netapp-fc>:
netapp_pool_name_search_pattern: .*_openstack_volumes$
This regex pattern discovers Cinder volume FlexVols with names ending in
_openstack_volumes. To use a different naming convention for specific
deployment requirements, update netapp_pool_name_search_pattern through the
Cinder user overrides.
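Before creating FlexVols on the array, candidate names can be checked against the default pattern. The FlexVol names below are hypothetical examples:

```shell
# Default netapp_pool_name_search_pattern used by the application.
pattern='.*_openstack_volumes$'

# Check which candidate FlexVol names Cinder would discover.
for vol in cinder_openstack_volumes nova_ephemeral openstack_volumes_old; do
  if printf '%s\n' "$vol" | grep -Eq "$pattern"; then
    echo "$vol: discovered by Cinder"
  else
    echo "$vol: ignored"
  fi
done
```

Only names ending in `_openstack_volumes` are reported as discovered; a name such as `openstack_volumes_old` fails the end-of-string anchor.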
When HTTPS is enabled for NetApp Storage Virtual Machines (recommended), place
the NetApp CA certificate file (netapp.pem) on the active controller node
at: /var/opt/openstack/certs. This certificate is used during the StarlingX OpenStack
application-apply operation.
NetApp NFS Export Policy Configuration¶
NetApp controls NFS mount access based on the client source IP address. The Data LIF defines only the network path and not access control.
When a compute node mounts an NFS export, the flow is as follows:
The compute node sends NFS traffic to the Data LIF.
The compute node selects a source IP address based on routing.
NetApp validates the source IP against the export policy rules.
If a matching export rule allows read/write and superuser access, the mount succeeds; otherwise, the mount request fails.
Subnet Configuration¶
The Trident overrides YAML used to install the storage backends
(netapp-config-overrides.yml) defines the NFS Data LIF:
# IPv6 example
dataLIF: "[<DATA_LIF_IPv6>]"
# IPv4 example
dataLIF: "<DATA_LIF_IPv4>"
Network Requirements¶
All NFS traffic flows to the Data LIF address. The compute node source address must route to the Data LIF, and that source address must be allowed in the export policy.
Example environment:
Compute node source IP: <COMPUTE_NODE_IP>
Data LIF: <DATA_LIF_IP>
Required Subnets:
- Compute node subnet (e.g., 192.168.1.0/24 or fd00:aa:bb:cc::/64)
- Data LIF subnet (e.g., 192.168.2.0/24 or fd00:aa:bb:dd::/64)
Export Policy Rules:
RO: Any
RW: Any
Superuser: sys
Anon: 65534
Required Client Match Rules:
- <COMPUTE_SUBNET>
- <DATA_LIF_SUBNET>
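The clientmatch evaluation above amounts to a subnet membership test on the NFS source address. A minimal sketch of that test for IPv4 (the addresses and subnets are hypothetical):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_subnet <ip> <network>/<prefix> - succeed if <ip> falls in the subnet,
# mirroring how the export policy matches the client source address.
in_subnet() {
  local net prefix mask
  net=${2%/*}; prefix=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_subnet 192.168.1.25 192.168.1.0/24 && echo "192.168.1.25: allowed" || echo "192.168.1.25: denied"
in_subnet 10.0.0.5 192.168.1.0/24 && echo "10.0.0.5: allowed" || echo "10.0.0.5: denied"
```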
NetApp Storage Configuration for OpenStack¶
OpenStack infrastructure services, such as MariaDB and RabbitMQ, use
Kubernetes persistent storage. Each service is deployed as an independent Helm
chart and selects its StorageClass based on a configured storage class priority
list. By default, these charts use the platform’s general storage class.
To use a NetApp-backed storage class, set the
storage_conf.volume_storage_class_priority field in the Helm overrides for
the MariaDB and/or RabbitMQ charts to reference one of the supported NetApp
backends: netapp-nfs, netapp-iscsi or netapp-fc.
The following examples show how to configure the storage class priority for different NetApp protocols:
NFS backend
storage_conf:
volume_storage_class_priority:
- netapp-nfs
iSCSI backend
storage_conf:
volume_storage_class_priority:
- netapp-iscsi
Fibre Channel backend
storage_conf:
volume_storage_class_priority:
- netapp-fc
Note
Apply these overrides separately to each chart (MariaDB and RabbitMQ) that should use NetApp storage.
Note
The value you set for volume_storage_class_priority must match one of the supported NetApp backend names: netapp-nfs, netapp-iscsi, or netapp-fc. However, you can use any name for the corresponding storageClass. The application automatically discovers and uses the storageClass associated with the available NetApp backend that has the highest priority.
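The priority resolution described above can be sketched as follows (the backend names and availability listing are assumed examples): the first entry of the priority list that is also available wins.

```shell
# Configured priority list and the backends actually available (assumed).
priority="netapp-nfs netapp-iscsi netapp-fc"
available="netapp-iscsi netapp-fc"

# Select the first priority entry that is present among available backends.
selected=""
for backend in $priority; do
  case " $available " in
    *" $backend "*) selected=$backend; break ;;
  esac
done
echo "selected backend: $selected"   # selected backend: netapp-iscsi
```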
Cinder Configuration¶
NetApp TLS Certificate Configuration¶
Cinder communicates with NetApp using HTTPS by default. Store the NetApp CA
certificate on the controller where you run the application deployment. During
system application-apply, StarlingX OpenStack automatically stores it as a Kubernetes
secret and distributes the certificate to all controllers.
Procedure
Create the certificate directory. The default certificate (which can be overridden) is retrieved from /var/opt/openstack/certs/netapp.pem and used to create the secret during the StarlingX OpenStack apply phase.

$ mkdir -p /var/opt/openstack/certs
Both the host-side and container-side paths are customizable using Cinder user overrides:
storage_conf:
  netapp_tls:
    host_cert: "/var/opt/openstack/certs/netapp.pem"   # Host filesystem path (default)
    container_cert: "/usr/lib/ssl/cert.pem"            # Container mount path (default)
Note
By default, the Cinder NetApp backends and drivers are configured with
netapp_use_legacy_client: true and set container_cert to /usr/lib/ssl/cert.pem, which the legacy NetApp ZAPI client driver requires. When netapp_use_legacy_client is set to false to use the newer NetApp REST API, you must update container_cert to /etc/cinder/certs/ca.crt through user overrides, as required by the REST client driver.

Download the CA certificate from your NetApp appliance.

$ openssl s_client -connect <NETAPP_MGMT_IP_OR_FQDN>:443 -showcerts </dev/null \
    | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' \
    > /var/opt/openstack/certs/netapp.pem
Replace <NETAPP_MGMT_IP_OR_FQDN> with the NetApp management LIF address or FQDN configured for the backend. This is the same address configured as netapp_server_hostname for the backend. Multiple certificates (certificate bundles) are supported in a single PEM file.

Apply or reapply the StarlingX OpenStack application.

$ source /etc/platform/openrc
$ system application-apply StX-openstack
During the apply, StarlingX OpenStack reads the certificate from the host, stores it as a Kubernetes secret (netapp-ca-cert in the openstack namespace), and mounts it inside the cinder-volume and cinder-backup pods at /usr/lib/ssl/cert.pem.

Verify the certificate is mounted inside a running Cinder pod.

$ kubectl exec -it -n openstack \
    $(kubectl get pods -n openstack -o name | grep cinder-volume | head -n 1) \
    -- cat /usr/lib/ssl/cert.pem
Verify that the Kubernetes secret was created.
$ kubectl get secret netapp-ca-cert -n openstack \
    -o jsonpath='{.data.ca\.crt}' | base64 --decode
To rotate the certificate, replace the file at /var/opt/openstack/certs/netapp.pem (or your custom path) with the new certificate and reapply:
$ system application-apply StX-openstack
The secret is automatically updated during the apply.
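Before applying, it can be worth confirming that the PEM file parses and checking its subject and expiry. The sketch below generates a throwaway self-signed certificate as a stand-in; in practice, point `-in` at /var/opt/openstack/certs/netapp.pem:

```shell
# Generate a throwaway self-signed certificate (stand-in for the real
# NetApp CA; the CN is a placeholder).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=netapp.example.com" \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/netapp.pem" 2>/dev/null

# Confirm the PEM parses and print its subject and expiry date.
openssl x509 -in "$tmpdir/netapp.pem" -noout -subject -enddate
```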
Enabling and Disabling Cinder Backends¶
Cinder automatically discovers NetApp backends from Trident. To control which backends are enabled:
Procedure
Define the enabled storage backends.
Specify which NetApp backends Cinder enables by defining storage_conf.storage_backends.

storage_conf:
  storage_backends:
    - name: netapp-nfs
      enabled: true
    - name: netapp-iscsi
      enabled: true
    - name: netapp-fc
      enabled: false
Note
If you do not define storage_conf.storage_backends, StarlingX OpenStack enables all backends that Cinder automatically discovers from Trident.

Control backend selection priority.
In addition to storage_conf.storage_backends, Cinder uses volume_storage_class_priority and backup_storage_class_priority to determine which storage backends it enables for volumes and backups.

Use the volume_storage_class_priority list to define the default volume type. Cinder selects the first matching backend in the list as the default. Cinder also makes the remaining backends in volume_storage_class_priority available for volumes, but you must explicitly select a backend by passing its name during volume creation (for example, openstack volume create --type netapp-nfs --size <size-in-GB> <volume-name>).

Use the backup_storage_class_priority list to independently select the backend used for volume backups.
By default, StarlingX OpenStack includes all supported backends in both lists:
storage_conf:
  volume_storage_class_priority:
    - ceph
    - netapp-nfs
    - netapp-iscsi
    - netapp-fc
  backup_storage_class_priority:
    - ceph
    - netapp-nfs
    - netapp-iscsi
    - netapp-fc
Note
When you modify the configuration through user overrides, Cinder enables only the backends that you specify in storage_conf.storage_backends and that you include in volume_storage_class_priority or backup_storage_class_priority.
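The intersection rule in the note above can be sketched as follows (the backend names are hypothetical): a backend is enabled for volumes only when it is marked enabled in storage_backends and also appears in the volume priority list.

```shell
# Backends with enabled: true in storage_conf.storage_backends (assumed).
storage_backends_enabled="netapp-nfs netapp-iscsi"
# Contents of volume_storage_class_priority (assumed).
volume_priority="ceph netapp-nfs netapp-fc"

# Enable only backends present in BOTH lists.
enabled_for_volumes=""
for b in $volume_priority; do
  case " $storage_backends_enabled " in
    *" $b "*)
      enabled_for_volumes="$enabled_for_volumes $b"
      echo "volume backend enabled: $b"
      ;;
  esac
done
```

Here netapp-iscsi is dropped (enabled but not prioritized for volumes) and netapp-fc is dropped (prioritized but not enabled), leaving only netapp-nfs.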
Glance Configuration¶
Glance supports two NetApp-backed image storage models.
Cinder-backed Image Storage (Recommended)¶
Configure Glance to store images in Cinder volumes.
storage_conf:
volume_storage_class_priority:
- cinder
When using iSCSI or FC backends, the Glance API pod automatically runs with host networking and privileged access to reach the host iSCSI services.
PVC-backed Image Storage¶
Configure Glance to store images directly on a PVC provisioned by NetApp.
storage_conf:
volume_storage_class_priority:
- netapp-nfs # or netapp-iscsi, netapp-fc
volume:
size: 100Gi
Resizing the Glance PVC¶
Resize the Glance PVC by updating the override file and reapplying the application.
Procedure
Update the Glance Helm override file.
storage_conf:
  volume_storage_class_priority:
    - netapp-nfs
  volume:
    size: 200Gi
Apply the updated override and redeploy.
$ system helm-override-update --reuse-values --values glance.yaml StX-openstack glance openstack
$ system application-apply StX-openstack
The StorageClass must support volume expansion, and a CSI driver must dynamically provision the PVC.
Note
Although using a Glance PVC is supported, it is not recommended when storing Glance images on NetApp backends. For full support and optimal behavior for Glance images stored on NetApp backends, use Cinder-backed Image Storage instead of a Glance PVC.
Nova Configuration¶
Nova supports two backends for ephemeral storage:
Inline NFS shares
Note
Inline NFS configuration is currently supported only in IPv4 environments.
PVC-backed ephemeral storage
Inline NFS Ephemeral Storage (Recommended for NetApp NFS IPv4 Environments)¶
Configure Nova to mount an NFS share directly.
storage_conf:
volume_storage_class_priority:
- nfs
nfs_shares:
server: <NFS Shares IP>
path: <NFS Shares junction path, e.g. /openstack_instances>
Storage Class PVC Ephemeral Storage¶
Configure Nova to use a PVC for ephemeral storage.
storage_conf:
volume_storage_class_priority:
- pvc
pvc:
volume:
size: 100Gi
storage_class_priority:
- netapp-nfs
Note
Storage Class PVC Ephemeral Storage is recommended only for NetApp NFS IPv6 environments, where Inline NFS Ephemeral Storage is not supported.
Storage Validation¶
These procedures validate host-level connectivity to NetApp storage and must be completed before deploying StarlingX OpenStack.
Validating NFS Configuration¶
Use the following procedure to validate NFS connectivity and export policy configuration from a node.
Clean up any stale mounts.
$ sudo umount /mnt/netapp-test 2>/dev/null
$ sudo umount /var/rootdirs/mnt/netapp-test 2>/dev/null
Create a mount directory.
$ sudo mkdir -p /mnt/netapp-test
Verify connectivity to the Data LIF.
# IPv4
$ ping <DATA_LIF_IPv4>
# IPv6
$ ping6 <DATA_LIF_IPv6>
Mount the export using NFSv4.
# IPv4
$ sudo mount -v -t nfs -o nfsvers=4,proto=tcp \
    <DATA_LIF_IPv4>:/<SVM_VOLUME_PATH> /mnt/netapp-test
# IPv6
$ sudo mount -v -t nfs -o nfsvers=4,proto=tcp6 \
    [<DATA_LIF_IPv6>]:/<SVM_VOLUME_PATH> /mnt/netapp-test
Note
On StarlingX systems, the mount appears under /var/rootdirs/mnt/netapp-test.

Verify that the mount is present.
$ mount | grep netapp-test
Test read and write operations.
$ sudo touch /mnt/netapp-test/testfile
$ ls -l /mnt/netapp-test
$ sudo rm /mnt/netapp-test/testfile
Unmount the filesystem.
$ sudo umount /mnt/netapp-test
Validating iSCSI and FC Configuration¶
Use the following steps to validate SAN connectivity prior to deployment.
Discover iSCSI targets.
# IPv4
$ sudo iscsiadm -m discovery -t sendtargets -p <DATA_LIF_IPv4>
# IPv6
$ sudo iscsiadm -m discovery -t sendtargets -p [<DATA_LIF_IPv6>]
Expected output:
[<DATA_LIF>]:3260,<TPGROUP> iqn.1992-08.com.netapp:sn.<SERIAL>:vs.<VSID>
Log in to the discovered iSCSI targets.
$ sudo iscsiadm -m node --login
Verify active iSCSI sessions.
$ sudo iscsiadm -m session
List block devices.
$ lsblk

Verify Fibre Channel HBA ports are online.
$ grep -H . /sys/class/fc_host/host*/{port_name,port_state,fabric_name}
Verify Fibre Channel targets are visible.
$ grep -H . /sys/class/fc_remote_ports/rport-*/{port_name,port_state,roles}
The output should show one or more remote ports corresponding to the NetApp target FC ports in Online state.

List SCSI devices mapped from FC targets.
$ lsblk --scsi | grep NETAPP
If LUNs are already mapped, they should appear as SCSI block devices. If no LUNs are mapped yet, no output is expected at this stage.