Neutron Networking 101

Neutron's pluggable Application Programming Interface (API) architecture enables the management of network services in cloud environments. The API allows users to interact with network constructs, such as routers and switches, to provide instance reachability. OpenStack networking was initially built into Nova but lacked the flexibility for advanced designs. It was adequate for large Layer 2 networks, but most environments require better multi-tenancy along with advanced load balancing and firewalling functionality. This limited networking functionality gave rise to Neutron, which offers a decoupled Layer 3 approach. Neutron operates an agent/database model: the API receives calls and passes them to agents installed locally on the hosts.

Logical network information is stored in the database. It is the role of plugins and agents to extract that information and carry out the low-level functions that plumb the virtual network and enable instance connectivity. For example, the Open vSwitch agent converts information in the Neutron database into Open vSwitch flows, while maintaining the local flows to match the network design as the topology changes. Agents and plugins work together to build the network based on the logical data model. The screenshot below illustrates which agents are installed on which hosts.
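As a quick illustration (not the original screenshot), the agents and the hosts they run on can be listed with the OpenStack client; the command below assumes the client and admin credentials are available on the controller.

# List the Neutron agents ( L3, DHCP, Open vSwitch, metadata ) and the hosts running them
openstack network agent list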

 

Neutron Agents

 

Network, Subnets, and Ports

Neutron consists of three elements that form the foundation of OpenStack networking: networks, subnets, and ports. A network is a standard Layer 2 broadcast domain in which subnets and ports are assigned. A subnet is an IPv4 or IPv6 address block (IPAM, IP Address Management) assigned to a network. A port is a connection point with properties similar to a physical port, except that it is virtual. Ports have Media Access Control (MAC) and IP addresses. All port information is stored in the Neutron database, which plugins and agents use to stitch together the virtual infrastructure.
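To make these entities concrete, here is a minimal sketch using the OpenStack client; the names demo-net, demo-subnet, and demo-port are illustrative.

# A network is a Layer 2 broadcast domain
openstack network create demo-net

# A subnet is an IPAM block assigned to the network
openstack subnet create --network demo-net --subnet-range 192.168.10.0/24 demo-subnet

# A port is a virtual connection point; Neutron allocates its MAC and IP
openstack port create --network demo-net demo-port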

 

 

Neutron Features

Neutron provides core networking and the potential for a lot more once the appropriate extensions and plugins are activated. Extensions enhance plugins to provide additional network functionality. Due to its pluggable architecture, Neutron can be extended with third-party open source or proprietary products, for example an SDN controller such as OpenDaylight for advanced centralized functionality.

While Neutron offers an API for interacting with network constructs, it does not offer an API for managing the network as a whole. Integrating an SDN controller with Neutron provides a centralized viewpoint and management entity for the entire network infrastructure, not just individual pieces. Some vendor plugins complement Neutron, while others completely replace it. Advancements have been made to make Neutron more “production ready”, but some of these features are still at the experimental stage. There are bugs in every platform, but generally early-release features should be kept in non-production environments.

 

Virtual Switches, Routing and Advanced Services

Virtual switches are software switches that connect VM instances together at Layer 2. Any communication outside that boundary requires a Layer 3 router, either physical or virtual. Neutron has built-in support for the Linux bridge and Open vSwitch virtual switches. Overlay networking, the foundation of multi-tenancy in cloud computing, is supported by both.

Layer 3 routing permits external connectivity and connectivity between VMs in different subnets. Routing is enabled through IP forwarding rules, iptables, and network namespaces. Iptables provides ingress/egress filtering at different points in the network (for example, the perimeter edge or the local compute node), namespaces provide network stack isolation, and IP forwarding rules provide the actual forwarding.
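The building blocks are plain Linux. The rough sketch below, using a made-up namespace name, shows the kind of primitives the Layer 3 agent drives rather than its exact implementation.

# Create an isolated network stack for a router
ip netns add qrouter-demo

# Enable IP forwarding inside the namespace
ip netns exec qrouter-demo sysctl -w net.ipv4.ip_forward=1

# Inspect the routing table and NAT rules the agent would manage in there
ip netns exec qrouter-demo ip route
ip netns exec qrouter-demo iptables -t nat -S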

Firewalling and security services are based on security groups and/or FWaaS (FireWall-as-a-Service). They can be used in conjunction with each other for better defense in depth. Both operate with iptables but differ in network placement. Security group iptables rules are configured locally on the ports of the compute node hosting the instance. The implementation is close to the actual workload, offering finer-grained filtering. FWaaS iptables rules sit at the edge of the network on Neutron routers (namespaces), filtering perimeter traffic.
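For example, a security group rule created through the API (names are illustrative) ultimately lands as iptables rules on the compute node hosting the instance's port.

# Allow inbound HTTP to any instance using this security group
openstack security group create web-sg
openstack security group rule create --ingress --protocol tcp --dst-port 80 web-sg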

Load balancing enables requests to be distributed across multiple instances. Dispersing load across multiple hosts offers similar advantages to load balancing in the traditional world. The plugin is based on the open source HAProxy. Finally, VPNs allow operators to extend the network securely with IPsec-based VPN tunnels.

 

 

Virtual Network Preparation

The diagram below displays the initial configuration and physical interface assignments for a standard Neutron deployment. The reference model consists of a controller node, network nodes, and compute nodes. The compute nodes are restricted to providing compute resources, while all other services run on the controller and network nodes, which may be combined or kept separate. Keeping these services off the compute nodes allows compute capacity to be scaled horizontally. It is quite common to see the controller and network node roles operate on a single host.


The number and type of interfaces depends on how comfortable you are combining control and data traffic. Networking can function with just one interface, but it is good practice to split the different types of network traffic across separate interfaces. OpenStack uses four types of traffic: management, API, external, and guest. If you are going to separate anything, it is recommended to physically separate management and API traffic from all other traffic types. Splitting the traffic onto different interfaces separates the control plane from the data plane, and is certainly a tick in the box for security auditors.

 

Neutron Reference Design

In the preceding diagram, Eth0 is used for both the management and API networks, Eth1 for overlay traffic, and Eth2 for either external or tenant networks, depending on the host. The tenant networks (Eth2) reside on the compute nodes, while the external network resides on the controller node (Eth2). The controller's Eth2 interface carries external network traffic to instances via Neutron routers. In certain Neutron Distributed Virtual Routing (DVR) scenarios, the external networks are extended to the compute nodes.

 

Plugins and Drivers

Neutron operates with the concept of plugins and drivers. Neutron's core plugin can be either ML2 or a vendor plugin. Prior to ML2, Neutron was limited to a single core plugin at any given time. The ML2 plugin introduces the concept of type and mechanism drivers. Type drivers represent type-specific network state and support local, flat, vlan, gre, and vxlan network types. Mechanism drivers take information from the type driver and ensure it is implemented correctly. There are agent-based, controller-based, and Top-of-Rack models of the mechanism driver. The most popular are the L2 population, Open vSwitch, and Linux bridge drivers. The mechanism driver arena is a popular space for vendor products.
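An illustrative ml2_conf.ini excerpt is shown below; the driver and range values are examples only and depend on the chosen design.

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population

[ml2_type_vxlan]
vni_ranges = 1:1000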

 

Linux Namespaces

The majority of environments require some kind of multi-tenancy. Cloud environments would be pretty straightforward if they were built for only one customer or department; in reality, this is never the case. Multi-tenancy within Neutron is based on Linux namespaces. A namespace offers a completely isolated network stack to do what you want with. Namespaces enable a logical copy of the network stack, supporting overlapping IP assignments. A lot of Neutron networking is made possible by namespaces and the ability to connect them together. There is a qdhcp namespace, a qrouter namespace, a qlbaas namespace, and additional namespaces for DVR functionality, such as fip and snat. Namespaces are present on the nodes running the respective agents. The following commands display the different routing tables for NAMESPACE-A and the global namespace, illustrating network stack isolation.
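The original screenshot is reproduced here as a sketch; NAMESPACE-A is a made-up name, while the Neutron-created namespaces follow the naming conventions mentioned above.

# Routing table inside an isolated namespace
ip netns add NAMESPACE-A
ip netns exec NAMESPACE-A ip route

# Routing table of the global namespace for comparison
ip route

# Namespaces created by the Neutron agents on this host
ip netns | grep -E 'qdhcp|qrouter|qlbaas|fip|snat'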

 


 

Virtual Network Infrastructure

 

Local, Flat, VLAN, VXLAN, and GRE networks

Neutron supports local, flat, VLAN, VXLAN, and GRE networks. Local networks are isolated networks. Flat networks do not incorporate any VLAN tagging. VLAN networks, on the other hand, use standard dot1Q tagging (IEEE 802.1Q) to segregate traffic. VXLAN networks encapsulate Layer 2 traffic over IP using VXLAN Tunnel Endpoints (VTEPs) and a VXLAN Network Identifier (VNI). GRE is another type of Layer 2 over Layer 3 overlay. GRE and VXLAN accomplish more or less the same goal of Layer 2 emulation over pure IP, but in different ways: VXLAN uses UDP, while GRE uses IP protocol 47. Essentially, Layer 2 data is encapsulated over IP at the ingress host and carried to the egress device, which decapsulates it and sends the data on to the destination host. Obviously, with an underlay and overlay approach you have two layers to debug when something goes wrong.
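Creating networks of a given type is an admin operation through the provider extension; a sketch with illustrative names and segment IDs follows.

# VLAN network using 802.1Q tag 100 on the physical network physnet1
openstack network create --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 100 vlan-100

# VXLAN network identified by VNI 2001
openstack network create --provider-network-type vxlan --provider-segment 2001 vxlan-2001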

GRE and VXLAN

Virtual Network Switches

The first step in building out a virtual network is to build the virtual switching infrastructure. This acts as the base for any network design, be it virtual or physical. Virtual switching provides the connectivity to and from virtual instances and lays the concrete for advanced networking services. The first pieces of the puzzle are the virtual network switches. Neutron includes built-in support for the Linux bridge and Open vSwitch. Both are virtual switches but operate with some major differences. The Linux bridge uses VLANs to tag traffic, while Open vSwitch uses flow rules to manipulate traffic before forwarding. Instead of mapping a local VLAN ID to a physical VLAN ID, the local ID is added to or stripped from the Ethernet header by flow rules.

The “brctl show” command displays the Linux bridges. The bridge ID is automatically generated based on the NIC, and the bridge name is based on the UUID of the corresponding Neutron network. The “ovs-vsctl show” command displays the Open vSwitch configuration. It has a slightly more complicated setup, with br-int (the integration bridge) acting as the main central point of connections.
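For reference, the inspection commands look like this (output omitted since it is deployment specific):

# Linux bridge view
brctl show

# Open vSwitch view; br-int is the integration bridge
ovs-vsctl show
ovs-vsctl list-ports br-int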

 

Open vSwitch and Linux Bridge

 

Neutron uses the bridge, 8021q, and vxlan kernel modules to connect instances with the Linux bridge. The bridge and Open vSwitch kernel modules are used for Open vSwitch. Additionally, Open vSwitch uses several userspace utilities to manage the OVS database. The majority of networking elements are connected with virtual cables, known as veth pairs. “What goes in one end must come out the other” best describes a virtual cable. Veths connect many elements: namespace to namespace, Open vSwitch to Linux bridge, and Linux bridge to Linux bridge are all connected with veth cables. Open vSwitch additionally uses special patch ports to connect Open vSwitch bridges together; the Linux bridge does not use patch ports.
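A rough sketch of the primitives involved, with made-up names; the patch port pair mirrors the way OVS bridges are typically stitched together.

# A veth pair: what goes in one end comes out the other
ip link add veth-a type veth peer name veth-b

# Move one end into a namespace to wire it to the global stack
ip netns add ns-demo
ip link set veth-b netns ns-demo

# Patch ports connect two Open vSwitch bridges ( e.g. br-int and br-tun )
ovs-vsctl add-port br-int patch-tun -- set interface patch-tun type=patch options:peer=patch-int
ovs-vsctl add-port br-tun patch-int -- set interface patch-int type=patch options:peer=patch-tun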

The Linux bridge and Open vSwitch can also be used to complement each other. For example, when Neutron security groups are enabled, an instance connects to a Linux bridge, which in turn connects to the Open vSwitch integration bridge with a veth cable. This workaround is caused by the inability to apply iptables rules (needed by security groups) to tap interfaces connected directly to Open vSwitch bridge ports.

 

Network Address Translation (NAT)

Neutron employs Network Address Translation (NAT) to provide inbound and outbound translations. The concept of NAT stays the same in the virtual world: modify the source or destination address of an IP packet. Neutron employs two types of translation, one-to-one and many-to-one. One-to-one translations use floating IP addresses, while many-to-one is a Port Address Translation (PAT) style design where floating IPs are not used. Floating IP addresses are externally routable IP addresses that provide a direct mapping between an instance and an external IP address. The term “floating” comes from the fact that they can be moved on the fly between instances. They are associated with a Neutron port, which is then logically mapped to an instance. Ports can have multiple IP addresses assigned.

 

  • SNAT refers to source NAT, which changes the source IP address of outbound traffic so it appears to come from the externally connected interface.
  • Floating IPs use destination NAT (DNAT), which changes the destination IP address on inbound traffic and the source IP address on outbound traffic (a rough iptables sketch follows this list).
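In plain iptables terms the two translations look roughly as follows; the addresses are made up and the real rules live in Neutron-managed chains inside the router namespace.

# Many-to-one source NAT for instances without a floating IP
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -j SNAT --to-source 203.0.113.2

# One-to-one floating IP: DNAT inbound, SNAT outbound for the same instance
iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 192.168.10.5
iptables -t nat -A POSTROUTING -s 192.168.10.5  -j SNAT --to-source 203.0.113.10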

 

The external network connected to the virtual router serves as the network from which floating IPs are derived. The default behavior is to source NAT traffic from instances that lack a floating IP. Instances that use source NAT cannot accept traffic initiated externally. If you want externally initiated traffic to reach an instance, you have to use a one-to-one mapping with a floating IP.
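Allocating and associating a floating IP with the OpenStack client looks like this; ext-net, my-instance, and the address are illustrative.

openstack floating ip create ext-net
openstack server add floating ip my-instance 203.0.113.10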

 

 

Neutron High Availability

 

Standalone Router

The easiest type of router to create in Neutron is a standalone router. As the name suggests, it lacks high availability. Routers created with Neutron exist as namespaces on the nodes running the L3 agent. It is the role of the Layer 3 agent to create the network namespace representing the routing function. A virtual router is essentially a network namespace called the qrouter namespace. The qrouter namespace uses routing tables to forward traffic and iptables rules to dictate how traffic is translated.
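A standalone router built with the OpenStack client, using illustrative names; the final command shows the namespace the L3 agent creates on the hosting node.

openstack router create router1
openstack router set --external-gateway ext-net router1
openstack router add subnet router1 demo-subnet

# On the node running the L3 agent
ip netns | grep qrouter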

Standalone Router

A virtual router can connect to two different types of network: a single external provider network and one or more tenant networks. The interface to the external provider bridge network is a “qg” interface and the interface to a tenant network bridge is a “qr” interface. Traffic from tenant networks is routed in the “qr” interface and out the “qg” interface for onward external forwarding.

 

Virtual Router Redundancy Protocol

VRRP is pretty simple and offers highly available, redundant default gateways or next hops for a route. Essentially, the router namespaces are spread across multiple hosts running the Layer 3 agent: multiple router namespaces are created and distributed among the Layer 3 agents. Each runs a keepalived instance, a Linux tool that uses VRRP internally, to detect the availability of the others. It is the role of the L3 agent to start the keepalived instance in every namespace. A dedicated HA network is used for the routers to talk to each other. There are split-brain and MAC-flapping issues, and as far as I understand it is still very much an experimental feature.
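Assuming the deployment has L3 HA enabled, an HA router is created with an extra flag; the second command lists the processes attached to the router namespace, among which keepalived should appear (the UUID is a placeholder).

openstack router create --ha ha-router1

ip netns pids qrouter-<router-uuid>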

 

Distributed Virtual Routing

DVR eliminates the bottleneck caused by the centralized Layer 3 agent and distributes most of the routing function across multiple compute nodes. This helps isolate failure domains and increases the availability of the virtual network infrastructure. With DVR, the routing function is no longer centralized but is decentralized to the compute nodes; the compute nodes themselves collectively become one big router. DVR routers are spawned on the compute nodes and routing is now done closer to the workload, which is much better than having a central element perform the routing function.

There are two agent modes: dvr and dvr_snat. Mode dvr_snat handles north-south SNAT traffic. Mode dvr handles north-south DNAT traffic (floating IPs) and all east-west traffic.
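The mode is set per node in the L3 agent configuration, and a router is created as distributed with an extra flag; the excerpt below is illustrative.

# /etc/neutron/l3_agent.ini
#   compute nodes:              [DEFAULT] agent_mode = dvr
#   controller / network node:  [DEFAULT] agent_mode = dvr_snat

# Create a distributed router ( admin, assuming DVR is enabled )
openstack router create --distributed dvr-router1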

 

Key Points

  • East-west traffic (server to server) previously went through the centralized network node. DVR pushes this down to the compute nodes hosting the VMs.
  • North-south traffic with floating IPs (DNAT) is routed directly by the compute nodes hosting the VMs.
  • North-south traffic without floating IPs (SNAT) is still routed through a central network node; there are complications in distributing the SNAT function to the local compute nodes.
  • DNAT requires that compute nodes hosting instances with floating IPs have direct external connectivity.

 

East – West Traffic between instances

East-west traffic (traditionally server to server) refers to local communication, for example traffic between a frontend and a backend application tier. DVR enables each compute node to host a copy of the same router. The router namespace created on each compute node has the same interfaces, MAC addresses, and IP settings.

DVR East to West


The qr interfaces within the namespaces on each compute node share the same IP and MAC addresses. But how is this possible? One would assume that distributing identical ports would cause IP clashes and MAC flapping. To enable this type of forbidden sharing, Neutron makes clever use of routing tables and Open vSwitch flow rules. Neutron allocates a unique MAC address to each compute node, which is used whenever traffic leaves the node. Once traffic leaves the virtual router, Open vSwitch rules rewrite the source MAC address with the unique MAC address allocated to the source node. All the manipulation is done before traffic leaves or after it enters the node, so the VM is unaware of any rewriting and operates as normal.
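On an ML2/Open vSwitch deployment, the rewrite can be observed by dumping the flows on the tunnel bridge of a compute node and looking for the per-node DVR MAC; this is an inspection sketch, not the full flow logic.

ovs-ofctl dump-flows br-tun | grep mod_dl_src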

 

Centralized SNAT

Source NAT is used when instances do not have a floating IP address. Neutron decided not to distribute SNAT to the compute nodes but to keep it central, similar to the legacy model. Why do this when DVR distributes floating IPs for north-south traffic? Decentralizing SNAT would require an address from the external network on every node providing the SNAT service, which would consume a lot of addresses on your external network.


The Layer 3 agent configured as dvr_snat acts as the centralized SNAT function. Two namespaces get created for the same router: the regular qrouter namespace and an SNAT namespace. Both are created on the centralized node, either the controller or the network node. The qrouter namespaces on the controller and compute nodes are identical. However, even though the router is attached to an external network, there are no qg interfaces in it; the qg interfaces now live inside the SNAT namespace. There is also a new interface called sg, which is used as an extra hop.
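On the node running the dvr_snat agent, both namespaces for the router can be seen side by side:

ip netns | grep -E 'qrouter|snat'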

 

Packet Flow

  • A VM without a floating IP sends a packet to an external destination.
  • Traffic arrives at the regular qrouter namespace on the local node and is redirected to the SNAT namespace on the central node.
  • The redirect from the qrouter namespace to the SNAT namespace is carried out with source routing and multiple routing tables (see the sketch after this list).
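A generic sketch of that mechanism, using made-up addresses and table number: a policy rule steers traffic from the tenant subnet into a dedicated table whose default route points at the sg interface in the SNAT namespace.

ip rule add from 192.168.10.0/24 table 16
ip route add default via 192.168.10.30 table 16

ip rule show
ip route show table 16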

 

North to South Traffic with Neutron Floating IP

In the legacy world, floating IPs are configured as /32 prefixes on the router's external device. The one-to-one mapping between the VM IP address and the floating IP address allows external devices to initiate traffic to the internal instance. With DVR, north-south traffic with floating IPs is handled by yet another namespace, called the fip namespace. The fip namespace is created by the Layer 3 agent and represents the external network that the floating IP belongs to.

DVR and Floating IP

Every router on the compute node is hooked into the fip namespace with a veth pair; as already mentioned, veth pairs are commonly used to connect namespaces together. One end of the pair sits in the router namespace (rfp) and the other end belongs to the fip namespace (fpr). Whenever the Layer 3 agent creates a new floating IP, a new IP rule specific to that IP is added: Neutron adds the fixed IP of the VM to the rules table along with an additional routing table.
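The rfp/fpr pair and the per-floating-IP rules can be inspected inside the namespaces (the UUIDs are placeholders):

ip netns exec qrouter-<router-uuid> ip addr | grep rfp
ip netns exec fip-<external-net-uuid> ip addr | grep fpr
ip netns exec qrouter-<router-uuid> ip rule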

 

Packet Flow

  • When a VM with a floating IP sends traffic to an external destination, it arrives at the qrouter namespace.
  • The IP rules are consulted, which show a default route for that source to the next hop. Iptables rules kick in and the source IP is translated to the floating IP.
  • Traffic is forwarded out the rfp interface and arrives on the fpr interface in the fip namespace.
  • The fip namespace uses a default route to forward traffic out the “fg” device to its external destination.

 

Traffic in the reverse direction requires proxy ARP, so the fip namespace answers ARP requests for the floating IPs configured in the router's qrouter namespace (not in the fip namespace itself). Proxy ARP enables a host (the fip namespace) to answer ARP requests intended for another host (the qrouter namespace).
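As a generic illustration of the mechanism (not necessarily how the agent wires it), proxy ARP can be enabled on a Linux interface with a sysctl; eth0 stands in for the fg device.

sysctl -w net.ipv4.conf.eth0.proxy_arp=1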

 

 
