Introduction

StarlingX is a fully integrated edge cloud software stack that provides everything needed to deploy an edge cloud on one, two, or up to 100 servers.

Key features of StarlingX include:

  • Provided as a single, easy-to-install package that includes an operating system, storage and networking components, and all the cloud infrastructure needed to run edge workloads.

  • Optimized software that meets edge application requirements.

  • Designed with pre-defined configurations to meet a variety of edge cloud deployment needs.

  • Tested and released as a complete stack, ensuring compatibility among open source components.

  • Includes fault management and service management capabilities that provide high availability for user applications.

  • Optimized by the community for security, ultra-low latency, extremely high service uptime, and streamlined operation.

Download the StarlingX ISO image from the StarlingX mirror.

Learn more about StarlingX:

Projects

StarlingX contains multiple sub-projects that include additional edge cloud support services and clients. API documentation and release notes for each project are found on the specific project page:

Supporting projects and repositories:

For additional information about project teams, refer to the StarlingX wiki.

New features in this version

The sections below provide a detailed list of new features and links to the associated user guides (if applicable).

Debian OS

StarlingX Release 8.0 and onwards supports only a Debian-based solution; full StarlingX functionality is supported. The StarlingX R8.0 release runs Debian Bullseye (11.3) with the 5.10 kernel from the Yocto Project.

Debian is a well-established Linux distribution supported by a large and mature open-source community and used by hundreds of commercial organizations, including Google. Debian-based StarlingX has full functional equivalence to the earlier CentOS-based versions of StarlingX. From StarlingX Release 8.0 onward, Debian is the only supported OS in StarlingX.

StarlingX leverages its existing relationship with the Yocto Project for development, bug fixes, and other activities in the Yocto Project kernel to drive StarlingX quality and feature content.

Major features of Debian-based StarlingX 8.0 include:

  • Debian Bullseye (11.3)

    Debian is a well-established Linux Distribution supported by a large and mature open-source community.

  • OSTree (https://ostree.readthedocs.io/en/stable/manual/introduction/)

    OSTree provides robust and efficient versioning, packaging, and upgrading of Linux-based systems.

  • An updated installer that seamlessly adapts to Debian and OSTree.

  • Updated software patching and upgrades for Debian and OSTree.

Operational Impacts of Debian

The operational impacts of Debian-based StarlingX are:

  • Functional equivalence with CentOS-based StarlingX

  • Use of the StarlingX CLIs and APIs will remain the same:

    • StarlingX on Debian will provide the same CLIs and APIs as StarlingX on CentOS.

    • StarlingX on Debian will run on a 5.10 based kernel.

    • StarlingX on Debian will support the same set of Kubernetes APIs used in StarlingX on CentOS.

    • The procedure to install hosts will be unchanged by the migration from CentOS to Debian. Only the grub menu has been modified.

    • The CLIs used for software updates (patching) will be unchanged by the migration from CentOS to Debian.

  • User applications running in containers on CentOS should run on Debian without modification. Re-validation of containers on Debian is encouraged to identify any exceptions.

  • A small subset of operating system-specific commands will differ. Some of these changes result from the switch in distributions, while others are generic changes that have accumulated since the release of the CentOS version previously used. For example:

    • The Debian installation requires new pxeboot grub menus. See PXE Boot Controller-0.

    • Some prompt strings will be slightly different (for example: ssh login, passwd command, and others).

    • Many third-party software packages are at newer versions in Debian, which may lead to minor changes in syntax, output, configuration files, and logs.

    • The URL that exposes the Keystone service no longer has the version appended.

    • On Debian, interfaces and static routes must be managed using the system CLI/API (host-route-*, host-if-*, and host-addr-* commands); see the first example following this list.

      • Do not edit configuration files in /etc/network/; they are regenerated from the sysinv database after a system reboot, and any changes made directly there will be lost.

      • The static routes configuration file is /etc/network/routes

      • Interface configuration files are located in /etc/network/interfaces.d/

    • Debian stores network information in /etc/network instead of /etc/sysconfig/network-scripts location used in CentOS. However, the StarlingX system commands are unchanged.

    • Patching on Debian is done using ostree commits rather than individual RPMs.

      You can see which packages ostree has updated using dpkg -l instead of the rpm -qa command used on CentOS.

    • The patching CLI commands and Horizon interactions are the same as for CentOS.

      • The supported patching CLI commands for Release 8.0 are listed below; a patching example follows the list:

        • sw-patch upload

        • sw-patch upload-dir

        • sw-patch apply

        • sw-patch remove

        • sw-patch delete
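
As noted above, a minimal sketch of managing routes and interface addresses through the system CLI rather than editing files under /etc/network/ (host, interface, and address values are illustrative):

  # Add a static route and an interface address via sysinv, then verify.
  system host-route-add controller-0 oam0 10.20.30.0 24 192.168.204.1
  system host-addr-add controller-0 oam0 10.20.30.2 24
  system host-route-list controller-0

Because these settings live in the sysinv database, they survive the regeneration of the /etc/network/ configuration files at reboot.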
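
And a short sketch of the Debian patching workflow (patch file name and ID are illustrative):

  # Upload and apply a patch; its contents are delivered as an ostree commit.
  sudo sw-patch upload /home/sysadmin/patches/PATCH_0001.patch
  sudo sw-patch apply PATCH_0001
  sw-patch query
  # On Debian, inspect installed package versions with dpkg (not rpm -qa).
  dpkg -l | less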

Change in Login for Systems installed from Prestaged ISO

Systems installed from a prestaged ISO have a sysadmin account with the default initial password sysadmin (default login/password combination: sysadmin/sysadmin). The initial password must be changed immediately after logging in to the host for the first time. Follow the steps below:

  1. login: sysadmin

  2. password: sysadmin

  3. Current password: sysadmin

  4. New Password:

  5. Re-enter New Password:

CVSS v3 Adoption

StarlingX now uses CVSS v3 instead of CVSS v2 as the fix criterion to evaluate which CVEs need to be fixed.

On a monthly basis, StarlingX is scanned for CVEs, and the generated reports are reviewed by the security team.

See: CVE Maintenance for details.

Single Physical Core for Platform Function in All-In-One Deployments

Platform core usage is optimized to operate on a single physical core (two logical cores with Hyper-Threading enabled) for AIO deployments.

Note

The use of a single physical core for platform functions is only suitable for 4th Generation Intel® Xeon® Scalable processors or later and should not be configured for earlier Intel® Xeon® CPU families. For All-In-One systems with older-generation processors, two or more physical cores must be configured.
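
For hosts that meet this processor requirement, a minimal sketch of assigning a single physical core to platform functions with the existing CPU CLI (host name illustrative; the host must be locked first):

  system host-lock controller-0
  # Assign one physical core (two logical cores with Hyper-Threading) to the platform.
  system host-cpu-modify -f platform -p0 1 controller-0
  system host-unlock controller-0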

See:

AIO memory reserved for the platform has increased

The amount of memory reserved for the platform on an AIO controller has increased to 11 GB in StarlingX Release 8.0 for hosts with two NUMA nodes.

Resizing platform-backup partition during install

During installation: if a platform-backup partition exists, it is no longer wiped during normal installation operations. The platform-backup partition can be resized during the install, although it can only be increased in size, not reduced.

Caution

Attempting to install with a partition size smaller than the existing partition will result in installation failures.

During installation and provisioning of a subcloud: for subcloud install operations, the persistent-size value in the subcloud install-values.yaml file controls platform-backup partition sizing. Because the platform-backup partition is non-destructive, this value can only be increased from previous installs; in this case, the partition is extended and the filesystem is resized.

Caution

Any persistent-size value smaller than the existing partition will cause installation failures, with the partition remaining in place.

Recommended: for new installations where a complete reinstall is being performed, it may be preferable to wipe the disks before the fresh install.
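
A sketch of how the partition size is expressed in the subcloud install values (the key spelling and units shown are assumptions; confirm them against your release's install-values reference, and remember the value may only grow relative to the existing partition):

  # Fragment appended to a subcloud install-values.yaml (value illustrative).
  cat >> install-values.yaml <<'EOF'
  persistent_size: 40000
  EOF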

Optimized Backup and Restore

Note

Backups taken on earlier StarlingX releases are not compatible with the optimized restore functionality introduced in StarlingX Release 8.0.

Backup from one release to another release is not supported, except for an AIO-SX upgrade.

Optimized Backup

The extra variable backup_registry_filesystem can now be used to back up user images in the registry backup (mainly for the backup-for-reinstall usage scenario).
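
A minimal sketch of invoking the backup playbook with this variable (playbook path as commonly installed on StarlingX systems; passwords are placeholders):

  ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml \
    -e "ansible_become_pass=<sysadmin-password> admin_password=<admin-password>" \
    -e "backup_registry_filesystem=true"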

Optimized Restore

The new optimized restore method supports restore with a registry backup only. It obtains the required platform images from prestaged images; if no prestaged images are available, it falls back to pulling them from the registry.

See: AIO-SX - Restore on new hardware for details.

Enhancements for Generic Error Tolerance in Redfish Virtual Media

Redfish virtual media operations have been observed to fail with transient errors. While the conditions for those failures are not always known (network issues, BMC timeouts, etc.), it has been observed that retrying the subcloud install operation succeeds.

To alleviate these transient conditions, the robustness of the Redfish virtual media operations is improved by introducing automatic retries.

Centralized Subcloud Backup and Restore

The StarlingX Backup and Restore feature allows for essential system data (and optionally some additional information, such as container registry images, and OpenStack application data) to be backed up, so that it can be used to restore the platform to a previously working state.

You can back up system data, or restore previously backed-up data, by running a set of Ansible playbooks, either locally on the system or from a remote location. The backups are saved as a set of compressed files, which can then be used to restore the system to the same state it was in when backed up.

The subcloud’s system backup data can be stored either locally on the subcloud or on the System Controller. The subcloud’s container image backup (from registry.local) can only be stored locally on the subcloud, to avoid overloading the central storage and the network with large data transfers.
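
A sketch of the central and local variants using the dcmanager CLI (subcloud name illustrative; confirm the flags against the linked guides):

  # Store the subcloud's system backup centrally on the System Controller.
  dcmanager subcloud-backup create --subcloud subcloud1
  # Keep the backup, including registry images, local to the subcloud.
  dcmanager subcloud-backup create --subcloud subcloud1 --local-only --registry-images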

See:

Improved Subcloud Deployment / Upgrading Error Reporting

StarlingX Release 8.0 provides enhanced support for subcloud deployment and upgrade error reporting.

Key error messages from subcloud deployment or upgrade failures can now be accessed via REST APIs, the CLI, or the GUI (Horizon).

Full logs for subcloud deployments and upgrades are still accessible by using SSH to the System Controller; however, this should no longer be required in most error scenarios.

See: Distributed Cloud Guide for details.

Kubernetes Pod Coredump Handler

A new Kubernetes aware core dump handler has been added in StarlingX Release 8.0.

Individual pods can control core dump handling by specifying Kubernetes pod annotations that instruct the core dump handler for specific applications.
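
A sketch of the annotation mechanism (the annotation key and values below are hypothetical illustrations; see the linked guide for the keys the handler actually supports):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-app
    annotations:
      # Hypothetical key: per-pod core dump file naming/location.
      starlingx.io/core_pattern: "/var/lib/coredump/core.%e.%p"
  spec:
    containers:
    - name: demo
      image: busybox
      command: ["sleep", "3600"]
  EOF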

See: Kubernetes Pod Core Dump Handler for details.

Enhancements for Subcloud Rehoming without controller reboot

StarlingX Release 8.0 supports rehoming a subcloud to a new system controller without requiring a lock and unlock of the subcloud controller(s).

When the System Controller needs to be reinstalled, or when the subclouds from multiple System Controllers are being consolidated into a single System Controller, you can rehome an already deployed subcloud to a different System Controller.

See: Rehome a Subcloud for details.

KubeVirt

The KubeVirt system application in StarlingX includes KubeVirt, the Containerized Data Importer (CDI), and the virtctl client tool.

KubeVirt is an open source project that allows VMs to be run and managed as pods inside a Kubernetes cluster. This is a particularly important innovation as traditional VM workloads can be moved into Kubernetes alongside already containerized workloads, thereby taking advantage of Kubernetes as an orchestration engine.

The CDI is an open source project that provides facilities for enabling PVCs to be used as disks for KubeVirt VMs by way of DataVolumes.

The virtctl client tool is an open source tool distributed with KubeVirt and required for advanced features such as serial and graphical console access. It also provides convenience commands for starting and stopping VMs, live migrating VMs, canceling live migrations, and uploading VM disk images.
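
A minimal sketch of defining and starting a VM (manifest trimmed; the VM name and demo disk image are illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: demo-vm
  spec:
    running: false
    template:
      spec:
        domain:
          devices:
            disks:
            - name: rootdisk
              disk:
                bus: virtio
          resources:
            requests:
              memory: 1Gi
        volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
  EOF
  virtctl start demo-vm      # boot the VM
  virtctl console demo-vm    # attach to its serial console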

Note

Limited testing of KubeVirt on StarlingX has been performed, along with some simple examples of creating a Linux VM and a Windows VM. In future releases, the high-performance capabilities of KubeVirt will be validated on StarlingX.

See:

Support for Intel Wireless FEC Accelerators using SR-IOV FEC operator

The SR-IOV FEC Operator for Intel Wireless FEC Accelerators supports the following vRAN FEC accelerators:

  • Intel® vRAN Dedicated Accelerator ACC100.

  • Intel® FPGA Programmable Acceleration Card N3000.

  • Intel® vRAN Dedicated Accelerator ACC200.

You can enable and configure detailed FEC parameters for an ACC100/ACC200 eASIC card so that it can be used as a hardware accelerator by hosted vRAN containerized workloads on StarlingX.

See:

Multiple Driver Version Support

StarlingX supports multiple driver versions for the ice, i40e, and iavf drivers.

See: Intel Multi-driver Version for details.

4th Generation Intel® Xeon® Scalable Processor Kernel Feature Support (5G ISA)

Introduction of the 5G ISA (Instruction Set Architecture) facilitates acceleration of vRAN workloads, improving performance and capacity for RAN solutions compiled specifically for 4th Generation Intel® Xeon® Scalable processors with the 5G instruction set (AVX512-FP16) enabled.
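
To confirm whether a host can run such builds, you can check for the corresponding CPU flag:

  # Count the logical CPUs advertising AVX512-FP16 support.
  grep -c avx512fp16 /proc/cpuinfo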

vRAN Intel Tools Container

StarlingX Release 8.0 supports open source vRAN tools that are delivered in the docker.io/starlingx/stx-debian-tools-dev:stx.8.0-v1.0.3 container.
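
For example, the container named above can be pulled and entered directly (run options illustrative):

  docker pull docker.io/starlingx/stx-debian-tools-dev:stx.8.0-v1.0.3
  docker run -it --rm docker.io/starlingx/stx-debian-tools-dev:stx.8.0-v1.0.3 bash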

See: vRAN Tools for details.

Quartzville iqvlinux driver support

This open source Quartzville driver is included in StarlingX to support users building a container with the Quartzville tools from Intel, using docker.io/starlingx/stx-debian-tools-dev:stx.8.0-v1.0.3 as a base container, as described in vRAN Tools.

See: vRAN Tools for details.

Pod Security Admission Controller

The PSA controller is the PSP replacement supported with Kubernetes v1.24 in StarlingX Release 8.0. It replaces the deprecated PSP; PSP will be removed in StarlingX Release 9.0 with Kubernetes v1.25.

The PSA controller acts on the creation and modification of pods and determines whether a pod should be admitted based on the requested security context and the policies defined by the Pod Security Standards. It provides a more usable Kubernetes-native solution for enforcing the Pod Security Standards.

Note

StarlingX users should migrate their security policy configurations from PSP to PSA in StarlingX Release 8.0.
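
A minimal sketch of enforcing a Pod Security Standard on a namespace with PSA labels (namespace name illustrative):

  # Reject pods that violate the "restricted" standard, and warn on violations.
  kubectl label namespace demo-ns pod-security.kubernetes.io/enforce=restricted
  kubectl label namespace demo-ns pod-security.kubernetes.io/warn=restricted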

See:

SSH integration with remote Windows Active Directory

By default, SSH to StarlingX hosts supports authentication using the ‘sysadmin’ local Linux account and StarlingX local LDAP Linux user accounts. SSH can also be optionally configured to support authentication with one or more remote LDAP identity providers, such as Windows Active Directory (WAD). Internally, SSH uses the SSSD service to provide NSS and PAM interfaces and a backend able to connect remotely to multiple different LDAP domains.

SSSD provides a secure solution by using data encryption for LDAP user authentication. SSSD supports authentication only over an encrypted channel.

See: SSH User Authentication using Windows Active Directory (WAD) for details.

Keystone Account Roles

Support for the reader role has been added for the StarlingX commands: system, fm, swmanager, and dcmanager.

Roles:

  • The admin role in the admin project can execute any action in the system.

  • The reader role in the admin project has access only to read-only commands (i.e., list, query, show, and summary type commands).

  • The member role is currently equivalent to the reader role; this may change in the future.
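
A sketch of granting the read-only role with the OpenStack CLI (user name illustrative):

  # Give an existing Keystone user read-only access in the admin project.
  openstack role add --user auditor-user --project admin reader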

See: Keystone Account Roles for details.

O-RAN O2 Compliance

In the context of hosting a RAN Application on StarlingX, the O-RAN O2 Application provides and exposes the IMS and DMS service APIs of the O2 interface between the O-Cloud (StarlingX) and the Service Management and Orchestration (SMO), in the O-RAN Architecture.

The O2 interfaces enable the management of the O-Cloud (StarlingX) infrastructure and the deployment life-cycle management of O-RAN cloudified NFs that run on O-Cloud (StarlingX). See O-RAN O2 General Aspects and Principles 2.0, and INF O2 documentation.

The O-RAN O2 application is integrated into StarlingX as a system application. The O-RAN O2 application package is saved in StarlingX during system installation, but it is not applied by default.

Note

StarlingX Release 8.0 O2 IMS and O2 DMS with Kubernetes profiles are compliant with the October 2022 version of the O-RAN standards.

See: O-RAN O2 Application for details.

O-RAN Spec Compliant Timing API Notification

StarlingX provides ptp-notification to support applications that rely on PTP for time synchronization and require the ability to determine if the system time is out of sync. ptp-notification provides the ability for user applications to query the sync state of hosts as well as subscribe to push notifications for changes in the sync status.

PTP-notification consists of two main components:

  • The ptp-notification system application, which can be installed on nodes using PTP clock synchronization. It monitors the various time services and provides the v1 and v2 REST APIs for clients to query and subscribe to.

  • The ptp-notification sidecar. This is a container image which can be configured as a sidecar and deployed alongside user applications that wish to use the ptp-notification API. User applications only need to be aware of the sidecar, making queries and subscriptions via its API. The sidecar handles locating the appropriate ptp-notification endpoints, executing the query and returning the results to the user application.
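
For example, an application container can query its sidecar for the current sync state (the port and resource address below are illustrative; see the linked overview for the exact v2 resource addresses):

  # Query the sidecar's v2 API for the PTP state of this node.
  curl -s http://127.0.0.1:8080/ocloudNotifications/v2/<resource-address>/CurrentState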

See: PTP Notifications Overview for details.

New features in StarlingX 7.0

The list below describes the new features and links to the associated user guides (if applicable).

Debian-based Solution

StarlingX Release 7.0 inherits the 5.10 kernel from the Yocto Project introduced in StarlingX Release 6.0; that is, the Debian 5.10 kernel is replaced with the Yocto Project 5.10.x kernel (linux-yocto).

StarlingX release 7.0 is a Technology Preview Release of Debian StarlingX for evaluation purposes.

The StarlingX Release 7.0 release runs Debian Bullseye (11.3). It is limited in scope to the AIO-SX configuration; Duplex and standard configurations are not available. It is also limited in scope to Kubernetes apps and does not yet support running OpenStack on Debian.

Istio Service Mesh Application

The Istio Service Mesh application is integrated into StarlingX as a system application.

Istio provides traffic management, observability, and security as a Kubernetes service mesh. For more information, see https://istio.io/.

StarlingX includes the istio-operator container to manage the life cycle of the Istio components.

See: Istio Service Mesh Application

Pod Security Admission Controller

The Beta release of Pod Security Admission (PSA) controller is available in StarlingX release 7.0 as a Technology Preview feature. It will replace Pod Security Policies in a future release.

PSA controller acts on creation and modification of the pod and determines if it should be admitted based on the requested security context and the policies defined. It provides a more usable k8s-native solution to enforce Pod Security Standards.

See:

Platform Application Components Revision

The following applications have been updated to a new version in StarlingX Release 7.0.

  • cert-manager, 1.7.1

  • metric-server, 1.0.18

  • nginx-ingress-controller, 1.1.1

  • oidc-dex, 2.31.1

cert-manager

The upgrade of cert-manager from 0.15.0 to 1.7.1 deprecated support for cert-manager API versions cert-manager.io/v1alpha2 and cert-manager.io/v1alpha3. When creating cert-manager CRDs (certificates, issuers, etc.) with StarlingX Release 7.0, use API version cert-manager.io/v1.

Cert-manager resources that are already deployed on the system will be automatically converted to API version cert-manager.io/v1. Anything created using automation or previous StarlingX releases should be converted with the cert-manager kubectl plugin, using the instructions documented in https://cert-manager.io/docs/installation/upgrading/upgrading-0.16-1.0/#converting-resources, before being deployed to the new release.
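
A sketch of the conversion step (file names illustrative):

  # Rewrite a manifest from a deprecated API version to cert-manager.io/v1.
  kubectl cert-manager convert --output-version cert-manager.io/v1 \
    -f issuer-v1alpha2.yaml > issuer-v1.yaml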

metric-server

In StarlingX Release 7.0, the Metrics Server will NOT be automatically updated. To update the Metrics Server, see Install Metrics Server.

oidc-dex

StarlingX Release 7.0 supports Helm overrides for the oidc-auth-apps application. The recommended and legacy example Helm overrides of oidc-auth-apps are supported for upgrades, as described in the StarlingX documentation User Authentication Using Windows Active Directory.

See: Set up OIDC Auth Applications.

Bond CNI plugin

The Bond CNI plugin v1.0.1 is now supported in StarlingX Release 7.0.

The Bond CNI plugin provides a method for aggregating multiple network interfaces into a single logical “bonded” interface.

To add a bonded interface to a container, a network attachment definition of type bond must be created and added as a network annotation in the pod specification. The bonded interfaces can either be taken from the host or container based on the value of the linksInContainer parameter in the network attachment definition. It provides transparent link aggregation for containerized applications via K8s configuration for improved redundancy and link capacity.
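
A minimal sketch of a network attachment definition of type bond (interface names and IPAM values illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: bond-net
  spec:
    config: '{
      "cniVersion": "0.3.1",
      "type": "bond",
      "mode": "active-backup",
      "failOverMac": 1,
      "linksInContainer": false,
      "miimon": "100",
      "links": [ {"name": "enp24s0f0"}, {"name": "enp24s0f1"} ],
      "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
    }'
  EOF

A pod then requests the bonded interface through the annotation k8s.v1.cni.cncf.io/networks: bond-net in its specification.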

See:

Bond Plugin

PTP GNSS and Time SyncE Support for 5G Solutions

Intel’s E810 Westport Channel and Logan Beach NICs support a built-in GNSS module and the ability to distribute a clock via Synchronous Ethernet (SyncE). This feature allows a PPS signal to be taken in via the GNSS module and redistributed to additional NICs on the same host or on different hosts. This behavior is configured in StarlingX using the clock instance type in the PTP configuration.

These parameters are used to enable the UFL/SMA ports, recovered clock SyncE, and so on. Refer to the user guide for the Westport Channel or Logan Beach NIC for additional details on how to operate these cards.

See: SyncE and Introduction

PTP Clock TAI Support

A special ptp4l instance-level parameter is provided to allow a PTP node to set the currentUtcOffsetValid flag in its announce messages and to correctly set CLOCK_TAI on the system.

PTP Multiple NIC Boundary Clock Configuration

StarlingX 7.0 provides support for PTP multiple NIC Boundary Clock configuration. Multiple instances of ptp4l, phc2sys, and ts2phc can now be configured on each host to support a variety of configurations, including Telecom Boundary Clock (T-BC), Telecom Grand Primary clock (T-GM), and Ordinary Clock (OC).
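
A sketch of the instance-based PTP CLI (instance names, parameters, and interface values are illustrative; see the linked guides for complete T-BC and T-GM examples):

  # Create a ptp4l instance, attach an interface, and apply the configuration.
  system ptp-instance-add bc1 ptp4l
  system ptp-instance-parameter-add bc1 domainNumber=24
  system ptp-interface-add bc1if1 bc1
  system host-if-ptp-assign controller-0 enp24s0f0 bc1if1
  system ptp-instance-apply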

See:

PTP Server Configuration

Enhanced Parallel Operations for Distributed Cloud

The following operations can now be performed on a larger number of subclouds in parallel. The supported maximum parallel number ranges from 100 to 500 depending on the type of operation.

  • Subcloud Install

  • Subcloud Deployment (bootstrap and deploy)

  • Subcloud Manage and Sync

  • Subcloud Application Deployment/Update

  • Patch Orchestration

  • Upgrade Orchestration

  • Firmware Update Orchestration

  • Kubernetes Upgrade Orchestration

  • Kubernetes Root CA Orchestration

  • Upgrade Prestaging

--force option

The --force option has been added to the dcmanager upgrade-strategy create command. This option upgrades both online and offline subclouds for a single subcloud or a group of subclouds.
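
For example (subcloud name illustrative):

  # Create and apply an upgrade strategy that includes offline subclouds.
  dcmanager upgrade-strategy create --force subcloud1
  dcmanager upgrade-strategy apply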

See Distributed Upgrade Orchestration Process Using the CLI

Subcloud Local Installation Enhancements

Error preventive mechanisms have been implemented for subcloud local installation.

  • Pre-check to avoid overwriting installed systems

  • Unified ISO image for multiple systems and disk configurations

  • Prestage execution optimization

  • Effective handling of resized docker and docker-distribution filesystems over subcloud upgrade

See Subcloud Deployment with Local Installation.

Distributed Cloud Horizon Orchestration Updates

You can use the Horizon Web interface to upgrade Kubernetes across the Distributed Cloud system by applying the Kubernetes upgrade strategy for Distributed Cloud Orchestration.

See: Apply a Kubernetes Upgrade Strategy using Horizon

You can use Horizon to update the device/firmware image across the Distributed Cloud system by applying the firmware update strategy for Distributed Cloud Update Orchestration.

See: Apply the Firmware Update Strategy using Horizon

You can upgrade the platform software across the Distributed Cloud system by applying the upgrade strategy for Distributed Cloud Upgrade Orchestration.

See: Apply the Upgrade Strategy using Horizon

You can use the Horizon Web interface as an alternative to the CLI for managing device / firmware image update strategies (Firmware update).

See: Create a Firmware Update Orchestration Strategy using Horizon

You can use the Horizon Web interface as an alternative to the CLI for managing Kubernetes upgrade strategies.

See: Create a Kubernetes Upgrade Orchestration using Horizon

For more information, See: Distributed Cloud Guide

Security Audit Logging for Platform Commands

StarlingX logs all StarlingX REST API operator commands, except commands that use only GET requests. StarlingX also logs all SNMP commands, including GET requests.

See:

Security Audit Logging for K8s API

Kubernetes API logging can be fully configured and enabled at bootstrap time. Post-bootstrap, Kubernetes API logging can only be enabled or disabled. Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster.

See: Kubernetes Operator Command Logging

Playbook for managing local LDAP Admin User

The purpose of this playbook is to simplify and automate the management of composite local LDAP accounts across multiple DC systems or standalone systems. A composite local LDAP account is defined as a local LDAP account that also has a unique Keystone account with admin role credentials and access to a Kubernetes service account with cluster-admin role credentials.

See: Manage Composite Local LDAP Accounts at Scale

Kubernetes Custom Configuration

Kubernetes configuration can be customized during deployment by specifying bootstrap overrides in the localhost.yml file during the Ansible bootstrap process. You can also override the extraVolumes section of the apiserver configuration to add new configuration files needed by the server; an illustrative sketch follows.
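
A sketch of such a bootstrap override in localhost.yml (the override key and kube-apiserver flag shown are illustrative of the mechanism; confirm the supported names in the linked guide):

  cat >> /home/sysadmin/localhost.yml <<'EOF'
  apiserver_extra_args:
    event-ttl: "24h"
  EOF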

See: Kubernetes Custom Configuration

Configuring Host CPU MHz Parameters

Some hosts support setting a maximum frequency for their CPU cores (application cores and platform cores). You may need to configure a maximum scaled frequency to avoid variability due to power and thermal issues when the host is configured for maximum performance. On these hosts, configurable parameters control the maximum frequency of the CPU cores; see the sketch after the list below.

Support for the power-saving modes available on Intel processors facilitates a balance between latency and power consumption:

  • StarlingX permits control of CPU p-states and c-states via the BIOS.

  • A new starlingx-realtime tuned profile is introduced, specifically configured for the low-latency profile, to align with Intel recommendations for maximum performance while enabling support for higher c-states.
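
A sketch of capping the configured maximum CPU frequency (the max_cpu_mhz_configured parameter name is taken from the linked configuration guide; confirm it for your release, and note the host must be locked first):

  system host-lock worker-0
  system host-update worker-0 max_cpu_mhz_configured=2200
  system host-unlock worker-0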

See: Host CPU MHz Parameters Configuration

vRAN Intel Tool Enablement

Open source vRAN tools are delivered in the container image docker.io/starlingx/stx-centos-tools-dev:stx.7.0-v1.0.1.

See: vRAN Tools

Coredump Configuration Support

You can change the default core dump configuration used to create core files. Core files are images of a process's working memory, used to debug crashes or abnormal exits.

See: Change the Default Coredump Configuration

FluxCD replaces Airship Armada

StarlingX application management provides a wrapper around FluxCD and Kubernetes Helm (see https://github.com/helm/helm) for managing containerized applications. FluxCD is a tool for managing multiple Helm charts with dependencies by centralizing all configurations in a single FluxCD YAML definition and providing life-cycle hooks for all Helm releases.
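
Applications continue to be managed with the same application CLI; for example (tarball path and application name illustrative):

  # Upload and apply a FluxCD-managed application package.
  system application-upload /home/sysadmin/<application>.tgz
  system application-apply <application>
  system application-list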

See: StarlingX Application Package Manager. See also the FluxCD Limitation note applicable to StarlingX Release 7.0.

Kubernetes Upgrade

Kubernetes has been upgraded to version 1.23.1, which is the default version for StarlingX Release 7.0.

NetApp Trident Version Upgrade

StarlingX R7.0 contains the installer for Trident 22.01.

If you are using NetApp Trident in StarlingX R7.0 and have upgraded from the previous StarlingX version, ensure that your NetApp backend version is compatible with Trident 22.01.

Note

You need to upgrade the NetApp Trident driver to 22.01 before upgrading Kubernetes to 1.22.

See: Upgrade the NetApp Trident Software