Introduction

StarlingX is a fully integrated edge cloud software stack that provides everything needed to deploy an edge cloud on one, two, or up to 100 servers.

Key features of StarlingX include:

  • Provided as a single, easy to install package that includes an operating system, storage and networking components, and all the cloud infrastructure needed to run edge workloads.

  • Optimized software that meets edge application requirements.

  • Designed with pre-defined configurations to meet a variety of edge cloud deployment needs.

  • Tested and released as a complete stack, ensuring compatibility among open source components.

  • Included fault management and service management capabilities, which provide high availability for user applications.

  • Optimized by the community for security, ultra-low latency, extremely high service uptime, and streamlined operation.

Download the StarlingX ISO image from the StarlingX mirror.

Learn more about StarlingX:

Projects

StarlingX contains multiple sub-projects that include additional edge cloud support services and clients. API documentation and release notes for each project are found on the specific project page:

Supporting projects and repositories:

For additional information about project teams, refer to the StarlingX wiki.

New features in this version

The sections below describe the new features in this release and link to the associated user guides (where applicable).

Debian-based Solution

StarlingX release 7.0 inherits the 5.10 kernel version from the Yocto Project that was introduced in StarlingX release 6.0; that is, the Debian 5.10 kernel is replaced with the Yocto Project 5.10.x kernel (linux-yocto).

StarlingX release 7.0 is a Technology Preview Release of Debian StarlingX for evaluation purposes.

StarlingX release 7.0 runs Debian Bullseye (11.3). It is limited in scope to the AIO-SX configuration; Duplex and Standard configurations are not available. It is also limited in scope to Kubernetes applications and does not yet support running OpenStack on Debian.

See:

Istio Service Mesh Application

The Istio Service Mesh application is integrated into StarlingX as a system application.

Istio provides traffic management, observability, and security as a Kubernetes service mesh. For more information, see https://istio.io/.

StarlingX includes the istio-operator container to manage the life cycle of the Istio components.

See: Istio Service Mesh Application

Pod Security Admission Controller

The Beta release of the Pod Security Admission (PSA) controller is available in StarlingX release 7.0 as a Technology Preview feature. It will replace Pod Security Policies in a future release.

The PSA controller acts on pod creation and modification and determines whether the pod should be admitted based on the requested security context and the policies defined. It provides a more usable Kubernetes-native solution for enforcing Pod Security Standards.

See:
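As general Kubernetes background (not StarlingX-specific syntax), Pod Security Standards are typically enforced by labelling a namespace; the namespace name and standard levels below are illustrative.

    # Enforce the "restricted" Pod Security Standard on an illustrative namespace,
    # and warn on pods that only meet "baseline".
    kubectl label --overwrite namespace demo-apps \
        pod-security.kubernetes.io/enforce=restricted \
        pod-security.kubernetes.io/warn=baseline

With these labels in place, the PSA controller rejects new or modified pods in the namespace that do not satisfy the enforced standard.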

Platform Application Components Revision

The following applications have been updated to a new version in StarlingX Release 7.0.

  • cert-manager, 1.7.1

  • metric-server, 1.0.18

  • nginx-ingress-controller, 1.1.1

  • oidc-dex, 2.31.1

cert-manager

The upgrade of cert-manager from 0.15.0 to 1.7.1 deprecates support for the cert-manager API versions cert-manager.io/v1alpha2 and cert-manager.io/v1alpha3. When creating cert-manager CRDs (certificates, issuers, and so on) with StarlingX Release 7.0, use API version cert-manager.io/v1.

Cert-manager resources that are already deployed on the system will be automatically converted to API version cert-manager.io/v1. Resources created using automation or previous StarlingX releases should be converted with the cert-manager kubectl plugin, using the instructions documented in https://cert-manager.io/docs/installation/upgrading/upgrading-0.16-1.0/#converting-resources, before being deployed to the new release.
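For example, a manifest written against the older API versions can be converted with the cert-manager kubectl plugin before being re-applied; the file names below are illustrative.

    # Convert a manifest from cert-manager.io/v1alpha2 or v1alpha3 to cert-manager.io/v1.
    kubectl cert-manager convert -f certificate-old.yaml > certificate-v1.yaml

    # Re-apply the converted resource on the Release 7.0 system.
    kubectl apply -f certificate-v1.yaml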

metric-server

In StarlingX Release 7.0, the Metrics Server is NOT automatically updated. To update the Metrics Server, see Install Metrics Server.

oidc-dex

StarlingX Release 7.0 supports Helm overrides for the oidc-auth-apps application. The recommended and legacy example Helm overrides of oidc-auth-apps are supported for upgrades, as described in the StarlingX documentation User Authentication Using Windows Active Directory.

See: Set up OIDC Auth Applications.
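A minimal sketch of applying a user-supplied Helm override to the dex chart of oidc-auth-apps, assuming the standard StarlingX helm-override workflow; the overrides file name and its contents are illustrative.

    # Apply user-supplied Helm overrides to the dex chart in the kube-system namespace.
    system helm-override-update oidc-auth-apps dex kube-system \
        --values /home/sysadmin/dex-overrides.yaml

    # Re-apply the application so the overrides take effect.
    system application-apply oidc-auth-apps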

Bond CNI plugin

The Bond CNI plugin v1.0.1 is now supported in StarlingX Release 7.0.

The Bond CNI plugin provides a method for aggregating multiple network interfaces into a single logical “bonded” interface.

To add a bonded interface to a container, a network attachment definition of type bond must be created and added as a network annotation in the pod specification. The bonded interfaces can be taken either from the host or from the container, depending on the value of the linksInContainer parameter in the network attachment definition. The plugin provides transparent link aggregation for containerized applications via Kubernetes configuration, for improved redundancy and link capacity.

See: Bond Plugin
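A minimal sketch of a network attachment definition of type bond, assuming the member interfaces net1 and net2 are provided to the pod by other attachments (linksInContainer set to true); the names, bonding mode, and IPAM settings are illustrative.

    # Create an illustrative bond network attachment definition.
    cat <<EOF | kubectl apply -f -
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: bond-net1
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "bond-net1",
        "type": "bond",
        "mode": "active-backup",
        "failOverMac": 1,
        "linksInContainer": true,
        "miimon": "100",
        "links": [ {"name": "net1"}, {"name": "net2"} ],
        "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
      }'
    EOF

The pod then references the attachment through its k8s.v1.cni.cncf.io/networks annotation, listing the member attachments followed by bond-net1.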

PTP GNSS and Time SyncE Support for 5G Solutions

Intel’s E810 Westport Channel and Logan Beach NICs support a built-in GNSS module and the ability to distribute a clock via Synchronous Ethernet (SyncE). This feature allows a PPS signal to be taken in via the GNSS module and redistributed to additional NICs on the same host or on different hosts. This behavior is configured in StarlingX using the clock instance type in the PTP configuration.

These parameters are used to enable the UFL/SMA ports, the SyncE recovered clock, and so on. Refer to the user guide for the Westport Channel or Logan Beach NIC for additional details on how to operate these cards.

See: SyncE and Introduction

PTP Clock TAI Support

A special ptp4l instance level parameter is provided to allow a PTP node to set the currentUtcOffsetValid flag in its announce messages and to correctly set the CLOCK_TAI on the system.

PTP Multiple NIC Boundary Clock Configuration

StarlingX 7.0 provides support for PTP multiple NIC Boundary Clock configuration. Multiple instances of ptp4l, phc2sys, and ts2phc can now be configured on each host to support a variety of configurations, including Telecom Boundary clock (T-BC), Telecom Grand Primary clock (T-GM), and Ordinary clock (OC).

See: PTP Server Configuration
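A hypothetical sketch of configuring one additional ptp4l instance with the StarlingX PTP instance CLI; the host, instance, interface, and parameter values are illustrative and should be verified against the PTP Server Configuration guide.

    # Create a ptp4l instance and assign it to a host (names are illustrative).
    system ptp-instance-add ptp-bc1 ptp4l
    system host-ptp-instance-assign controller-0 ptp-bc1

    # Add an instance-level parameter and associate a NIC interface with the instance.
    system ptp-instance-parameter-add ptp-bc1 domainNumber=24
    system ptp-interface-add ptpif1 ptp-bc1
    system host-if-ptp-assign controller-0 enp81s0f0 ptpif1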

Enhanced Parallel Operations for Distributed Cloud

The following operations can now be performed on a larger number of subclouds in parallel. The supported maximum number of parallel subclouds ranges from 100 to 500, depending on the type of operation.

  • Subcloud Install

  • Subcloud Deployment (bootstrap and deploy)

  • Subcloud Manage and Sync

  • Subcloud Application Deployment/Update

  • Patch Orchestration

  • Upgrade Orchestration

  • Firmware Update Orchestration

  • Kubernetes Upgrade Orchestration

  • Kubernetes Root CA Orchestration

  • Upgrade Prestaging

--force option

The --force option has been added to the dcmanager upgrade-strategy create command. This option upgrades both online and offline subclouds for a single subcloud or a group of subclouds.

See: Distributed Upgrade Orchestration Process Using the CLI.
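For example, to create and apply an upgrade strategy that includes a subcloud regardless of its availability state (the subcloud name is illustrative):

    # Create and apply an upgrade strategy that includes offline subclouds.
    dcmanager upgrade-strategy create --force subcloud1
    dcmanager upgrade-strategy apply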

Subcloud Local Installation Enhancements

Error preventive mechanisms have been implemented for subcloud local installation.

  • Pre-check to avoid overwriting installed systems

  • Unified ISO image for multiple systems and disk configurations

  • Prestage execution optimization

  • Effective handling of resized docker and docker-distribution filesystems over subcloud upgrade

See: Subcloud Deployment with Local Installation.

Distributed Cloud Horizon Orchestration Updates

You can use the Horizon Web interface to upgrade Kubernetes across the Distributed Cloud system by applying the Kubernetes upgrade strategy for Distributed Cloud Orchestration.

See: Apply a Kubernetes Upgrade Strategy using Horizon

You can use Horizon to update the device/firmware image across the Distributed Cloud system by applying the firmware update strategy for Distributed Cloud Update Orchestration.

See: Apply the Firmware Update Strategy using Horizon

You can upgrade the platform software across the Distributed Cloud system by applying the upgrade strategy for Distributed Cloud Upgrade Orchestration.

See: Apply the Upgrade Strategy using Horizon

You can use the Horizon Web interface as an alternative to the CLI for managing device / firmware image update strategies (Firmware update).

See: Create a Firmware Update Orchestration Strategy using Horizon

You can use the Horizon Web interface as an alternative to the CLI for managing Kubernetes upgrade strategies.

See: Create a Kubernetes Upgrade Orchestration using Horizon

For more information, see: Distributed Cloud Guide.

Security Audit Logging for Platform Commands

StarlingX logs all StarlingX REST API operator commands, except commands that use only GET requests. StarlingX also logs all SNMP commands, including GET requests.

See:

Security Audit Logging for K8s API

Kubernetes API logging can be fully configured and enabled at bootstrap time in StarlingX. Post-bootstrap, Kubernetes API logging can only be enabled or disabled. Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster.

See: Kubernetes Operator Command Logging
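As general Kubernetes background, API auditing is driven by an audit policy file in the upstream audit.k8s.io/v1 format; the file path below is illustrative, and the bootstrap overrides that reference it are described in the linked guide.

    # Write a minimal audit policy that records request metadata for all requests.
    cat <<EOF > /home/sysadmin/kube-apiserver-audit-policy.yaml
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: Metadata
    EOF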

Playbook for managing local LDAP Admin User

The purpose of this playbook is to simplify and automate the management of composite Local LDAP accounts across multiple DC systems or standalone systems. A composite Local LDAP account is a Local LDAP account that also has a unique Keystone account with admin role credentials and access to a Kubernetes ServiceAccount with cluster-admin role credentials.

See: Manage Composite Local LDAP Accounts at Scale

Kubernetes Custom Configuration

Kubernetes configuration can be customized during deployment by specifying bootstrap overrides in the localhost.yml file during the Ansible bootstrap process. Additionally, you can override the extraVolumes section of the apiserver to add new configuration files that the server may need.

See: Kubernetes Custom Configuration
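A hypothetical sketch of apiserver bootstrap overrides in localhost.yml; the override key names and values shown here are assumptions for illustration only, so confirm the exact schema in the Kubernetes Custom Configuration guide.

    # Append illustrative apiserver overrides to the bootstrap overrides file.
    # Key names (apiserver_extra_args, apiserver_extra_volumes) are assumed for
    # illustration and must be verified against the linked guide.
    cat <<EOF >> /home/sysadmin/localhost.yml
    apiserver_extra_args:
      event-ttl: "24h"
    apiserver_extra_volumes:
      - name: custom-config
        hostPath: /etc/kubernetes/custom-config.yaml
        mountPath: /etc/kubernetes/custom-config.yaml
        readOnly: true
        pathType: File
    EOF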

Configuring Host CPU MHz Parameters

Some hosts support setting a maximum frequency for their CPU cores (application cores and platform cores). On these hosts, the parameters control the maximum frequency of the CPU cores; you may need to configure a maximum scaled frequency to avoid variability due to power and thermal issues when the host is configured for maximum performance.

Support for the power-saving modes available on Intel processors is enabled to facilitate a balance between latency and power consumption.

  • StarlingX permits control of the CPU “p-states” and “c-states” via the BIOS.

  • A new starlingx-realtime tuned profile is introduced, specifically configured for the low-latency case to align with Intel recommendations for maximum performance while enabling support for higher c-states.

See: Host CPU MHz Parameters Configuration

vRAN Intel Tool Enablement

The following open-source vRAN tools are delivered in the container image docker.io/starlingx/stx-centos-tools-dev:stx.7.0-v1.0.1:

See: vRAN Tools

Coredump Configuration Support

You can change the default core dump configuration used to create core files. These are images of the system’s working memory used to debug crashes or abnormal exits.

See: Change the Default Coredump Configuration
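As general background, core files on systemd-based systems are governed by systemd-coredump settings such as the following; whether StarlingX applies them through this drop-in file or through its own CLI is covered in the linked guide, so treat the path and values as illustrative.

    # Illustrative systemd-coredump limits (see coredump.conf(5)).
    cat <<EOF > /etc/systemd/coredump.conf.d/50-stx-coredump.conf
    [Coredump]
    ProcessSizeMax=2G
    ExternalSizeMax=2G
    MaxUse=1G
    EOF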

FluxCD replaces Airship Armada

StarlingX application management provides a wrapper around FluxCD and Kubernetes Helm (see https://github.com/helm/helm) for managing containerized applications. FluxCD is a tool for managing multiple Helm charts with dependencies by centralizing all configurations in a single FluxCD YAML definition and providing life-cycle hooks for all Helm releases.

See: StarlingX Application Package Manager and the FluxCD Limitation note applicable to StarlingX Release 7.0.
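The FluxCD-based application framework is driven through the same system application commands as before; the application name and tarball path below are illustrative.

    # Upload, apply, and check a containerized application package.
    system application-upload /home/sysadmin/my-app-1.0-0.tgz
    system application-apply my-app
    system application-list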

Kubernetes Upgrade

Kubernetes has been upgraded to version 1.23.1, which is the default version for StarlingX Release 7.0.

NetApp Trident Version Upgrade

StarlingX r7.0 contains the installer for Trident 22.01.

If you are using NetApp Trident in StarlingX r7.0 and have upgraded from the previous StarlingX version, ensure that your NetApp backend version is compatible with Trident 22.01.

Note

You need to upgrade the NetApp Trident driver to 22.01 before upgrading Kubernetes to 1.22.

See: Upgrade the NetApp Trident Software