Spine‑and‑Leaf Network Topology Guide for Red Hat OpenStack Platform
This guide explains how to design and implement a spine‑leaf network topology for Red Hat OpenStack Platform environments, covering composable networking, routing, VLAN isolation, DHCP relay, supernet routes, and the requirements and limitations of deploying an overcloud on such a topology. It includes end‑to‑end scenarios and example files to help you replicate larger network topologies in your own environment.
1.1 Spine and Leaf Network
The composable network architecture in Red Hat OpenStack Platform lets you adapt your networking to the popular routed spine‑leaf data‑center topology. In a routed spine‑leaf deployment, a leaf typically represents a composable compute or storage role within a rack, as shown in Figure 1.1. The Leaf 0 rack contains an undercloud node, controllers, and compute nodes, and the composable networks are presented to the nodes assigned to the composable roles.
A Storage Leaf network serves the Ceph storage and compute nodes in each rack. A Network Leaf represents any additional network that you can compose.
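As a sketch, the composable networks for each leaf could be declared in a TripleO network_data.yaml file. The network names below are illustrative assumptions, and the subnets follow the per‑leaf /24 scheme described later in this guide:

```yaml
# Illustrative network_data.yaml fragment. Network names and subnets are
# assumptions, following the /24-per-leaf internal API scheme in this guide.
- name: InternalApi
  name_lower: internal_api
  vip: true
  ip_subnet: '172.18.0.0/24'   # Leaf 0
- name: InternalApi1
  name_lower: internal_api1
  vip: false
  ip_subnet: '172.18.1.0/24'   # Leaf 1
- name: InternalApi2
  name_lower: internal_api2
  vip: false
  ip_subnet: '172.18.2.0/24'   # Leaf 2
```

Each per‑leaf network is then mapped to the roles deployed in that rack, so that nodes only receive the networks local to their leaf.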
Figure 1.1 Routing spine‑leaf example
1.2 Network Topology
The routed spine‑leaf bare‑metal environment has one or more layer‑3‑capable switches that route traffic between isolated VLANs in separate layer‑2 broadcast domains.
The design isolates traffic by function. For example, if controller nodes host the API on an internal API network, compute nodes on another leaf should reach that API over their own local internal API network. This requires forcing internal API traffic onto the desired interface, which you can achieve with supernet routing.
For instance, if 172.18.0.0/24 is the internal API network for controller nodes, you could use 172.18.1.0/24 for a second internal API network, 172.18.2.0/24 for a third, and so on. On each node, a route to the larger 172.18.0.0/16 supernet then points to the gateway IP of its local internal API network.
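The key property of this scheme is that every per‑leaf /24 falls inside the /16 supernet, so a single supernet route on each node covers all remote leaves. A minimal Python sketch using the standard ipaddress module verifies this:

```python
import ipaddress

# Per-leaf internal API subnets, following the scheme above.
leaf_subnets = [
    ipaddress.ip_network("172.18.0.0/24"),  # Leaf 0
    ipaddress.ip_network("172.18.1.0/24"),  # Leaf 1
    ipaddress.ip_network("172.18.2.0/24"),  # Leaf 2
]

# One route to this supernet reaches every per-leaf subnet.
supernet = ipaddress.ip_network("172.18.0.0/16")

for net in leaf_subnets:
    # subnet_of() confirms the per-leaf /24 is contained in the /16.
    assert net.subnet_of(supernet)
    print(f"{net} is within {supernet}")
```

Because the supernet route points at the gateway of the node's local internal API network, traffic to any other leaf's internal API subnet leaves via the correct interface.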
The scheme uses the following networks:
Table 1.1 Leaf 0 Networks
Table 1.2 Leaf 1 Networks
Table 1.3 Leaf 2 Networks
Table 1.4 Supernet Routes
1.3 Spine‑Leaf Requirements
To deploy an overcloud on a network with a layer‑3 routing architecture, the following requirements must be met:
Layer‑3 Routing
The network infrastructure must be configured to route between different layer‑2 segments. This can be done statically or dynamically.
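With static routing, for example, each node can carry explicit routes to the remote leaf subnets via its local gateway. The interface name and gateway address below are illustrative assumptions, following the per‑leaf /24 scheme in this guide:

```shell
# Illustrative static routes on a Leaf 1 node (addresses and the vlan20
# interface name are assumptions). 172.18.1.1 is assumed to be the
# Leaf 1 internal API gateway.
ip route add 172.18.0.0/24 via 172.18.1.1 dev vlan20
ip route add 172.18.2.0/24 via 172.18.1.1 dev vlan20
```

In practice, a single supernet route (for example, to 172.18.0.0/16) replaces the per‑subnet routes, as described in the Network Topology section.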
DHCP Relay
Each layer‑2 segment other than the one containing the undercloud must provide a DHCP relay that forwards DHCP requests to the provisioning network segment where the undercloud is connected.
Note
The undercloud uses two DHCP servers: one for bare‑metal node introspection and another for overcloud node deployment. When configuring DHCP relay, make sure the relay configuration satisfies the requirements of both servers.
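As a sketch, a DHCP relay on a remote leaf could be provided by ISC dhcrelay on the leaf's router or a helper host. The interface name and undercloud address below are assumptions, not values from this guide:

```shell
# Illustrative ISC DHCP relay invocation (interface name and address are
# assumptions). Requests arriving on the leaf-facing interface are
# forwarded to the undercloud's provisioning network address, so that
# both undercloud DHCP servers can respond.
dhcrelay -i eth1 192.168.24.1
```

On switch hardware, the equivalent is typically a DHCP helper address configured on the leaf's gateway interface.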
1.4 Spine‑Leaf Limitations
Some roles, such as Controller, use virtual IP addresses and clustering, which require layer‑2 connectivity between the nodes; those nodes must therefore reside in the same leaf.
Networker nodes have a similar constraint: network services use VRRP to provide a highly available default route, and the VRRP primary and backup nodes must be on the same layer‑2 segment.
When using tenant or provider networks with VLAN segmentation, the specific VLAN must be shared across all Networker and compute nodes.
Note
Multiple groups of Networker nodes can be configured for network services. Each group shares routing, and VRRP provides HA within the group. All Networker nodes in a shared network must be on the same L2 segment.
Source: http://jiagoushi.pro/openstack-spine-leaf-networking-introduction