Ethernet Interface Configuration

You can review and modify the configuration for physical or virtual Ethernet interfaces using the OpenStack Horizon Web interface or the CLI.

Physical Ethernet Interfaces

The physical Ethernet interfaces on StarlingX OpenStack nodes are configured to use the following networks (a CLI sketch of reviewing and changing these assignments follows the list):

  • the internal management network

  • the internal cluster host network (by default sharing the same L2 interface as the internal management network)

  • the external OAM network

  • one or more data networks
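Interface-to-network assignments can be reviewed and changed from the StarlingX system CLI. The following is a minimal sketch; the host name controller-0 and the interface name mgmt0 are placeholders for your environment:

    # Review the interfaces on a host and their current assignments
    system host-if-list controller-0

    # Interface configuration changes require the host to be locked
    system host-lock controller-0

    # Assign the cluster-host network to the management interface, so that
    # both networks share the same L2 interface (the default arrangement)
    system interface-network-assign controller-0 mgmt0 cluster-host

    system host-unlock controller-0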

A single interface can optionally be configured to support more than one network using VLAN tagging (see Shared (VLAN or Multi-Netted) Ethernet Interfaces).
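As a sketch of VLAN sharing, a tagged interface can be layered on an existing Ethernet interface from the CLI. The host name, interface names, and VLAN ID below are placeholders, and the exact host-if-add arguments vary by release, so confirm them with system help host-if-add:

    # Add a VLAN interface with VLAN ID 22 on top of the management
    # interface, so the same physical port carries a second network
    system host-if-add controller-0 -V 22 -c platform cluster0 vlan mgmt0
    system interface-network-assign controller-0 cluster0 cluster-host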

Virtual Ethernet Interfaces

The virtual Ethernet interfaces for guest VMs running on StarlingX OpenStack are defined when an instance is launched. They connect the VM to project networks, which are virtual networks defined over data networks; data networks are in turn abstractions associated with the physical interfaces assigned to physical networks on the compute nodes.
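For example, a virtual Ethernet interface is created implicitly when an instance is launched against a project network. A sketch using the standard OpenStack CLI, where the network, image, and flavor names are placeholders:

    # Create a project network and subnet (the underlying data network
    # mapping is configured separately by the administrator)
    openstack network create net0
    openstack subnet create --network net0 --subnet-range 192.168.10.0/24 subnet0

    # Launch an instance with a virtual Ethernet interface on net0
    openstack server create --image cirros --flavor m1.small --network net0 vm0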

The following virtual network interfaces are available (an example of selecting a vNIC type follows the list):

  • AVP (Accelerated Virtual Port)

  • ne2k_pci (NE2000 Emulation)

  • pcnet (AMD PCnet/PCI Emulation)

  • rtl8139 (Realtek 8139 Emulation)

  • virtio (VirtIO Network)

  • pci-passthrough (PCI Passthrough Device)

  • pci-sriov (SR-IOV device)
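The model used for emulated and paravirtualized interfaces is typically selected through image metadata, while PCI passthrough and SR-IOV interfaces are requested by creating a port with the appropriate vNIC type. A sketch using standard OpenStack CLI commands; the image, network, and port names are placeholders, and AVP selection may use release-specific mechanisms not shown here:

    # Select the vNIC model used when this image is booted
    # (e.g. virtio, ne2k_pci, pcnet, rtl8139)
    openstack image set --property hw_vif_model=virtio cirros

    # Request an SR-IOV VF with vnic-type "direct", then attach it at launch
    openstack port create --network net0 --vnic-type direct sriov-port0
    openstack server create --image cirros --flavor m1.small \
        --nic port-id=sriov-port0 vm1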

Unmodified guests can use standard Linux networking and virtio drivers, providing a mechanism to bring existing applications into the production environment immediately.

StarlingX OpenStack incorporates DPDK-Accelerated Neutron Virtual Router L3 Forwarding (AVR). Accelerated forwarding is used for directly attached project networks and subnets, as well as for gateway, SNAT and floating IP functionality.
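The router functions accelerated by AVR are configured through the usual Neutron workflow; for example (the router, subnet, and network names are placeholders):

    # Create a router, attach a project subnet, and set the external gateway
    openstack router create router0
    openstack router add subnet router0 subnet0
    openstack router set router0 --external-gateway ext-net

    # Allocate a floating IP on the external network and associate it
    openstack floating ip create ext-net
    openstack server add floating ip vm0 203.0.113.10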

StarlingX OpenStack also supports direct guest access to NICs using PCI passthrough or SR-IOV, with enhanced NUMA scheduling options compared to standard OpenStack. This offers very high performance, but because access is not managed by StarlingX OpenStack or the vSwitch process, there is no support for live migration, StarlingX OpenStack-provided LAG, host interface monitoring, QoS, or ACLs. If VLANs are used, they must be managed by the guests.
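Passthrough of an entire NIC is typically requested through a Nova PCI alias. The sketch below assumes an alias named a1 has already been defined by the administrator in nova.conf; the flavor, network, and port names are placeholders:

    # Request one device matching the administrator-defined PCI alias "a1"
    openstack flavor set m1.small --property "pci_passthrough:alias"="a1:1"

    # Alternatively, request a passthrough port with vnic-type "direct-physical"
    openstack port create --network net0 --vnic-type direct-physical pt-port0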

For further performance improvements, StarlingX OpenStack supports direct access to PCI-based hardware accelerators, such as the Coleto Creek encryption accelerator from Intel. StarlingX OpenStack manages the allocation of SR-IOV VFs to VMs, and provides intelligent scheduling to optimize NUMA node affinity.
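In standard OpenStack, the NUMA affinity between an instance and its PCI devices can be influenced through a flavor extra spec; StarlingX scheduling enhancements may expose additional controls beyond this sketch (the flavor name is a placeholder):

    # Require the instance to land on the same NUMA node as its PCI device
    openstack flavor set m1.small --property hw:pci_numa_affinity_policy=required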