OpenStack Networking with Neutron

OpenStack Networking offers virtual networking services and connectivity to and from instances, and it plays a significant role in OpenFlow and SDN adoption. The Neutron API manages the network configuration for individual networks, subnets and ports. Neutron is an enhancement to the original Nova-network implementation and introduced support for third-party plugins, such as Open vSwitch (OVS) and LinuxBridge. Both OVS and LinuxBridge provide Layer 2 connectivity with VLANs or overlay encapsulation technologies, such as GRE or VXLAN. Neutron is still fairly basic, but its capability is gaining momentum with each distribution release.

Neutron supports a wide range of network types, including Local, Flat, VLAN and VXLAN/GRE-based networks. Local networks are isolated and local to the Compute node. In a Flat network, there is no VLAN tagging. VLAN-capable networks implement 802.1Q tagging; segmentation is based on VLAN tags. As in the physical world, hosts in the same VLAN are in the same broadcast domain, and inter-VLAN communication must pass through a Layer 3 device. GRE and VXLAN encapsulation technologies create the concept known as overlay networking. Network overlays interconnect Layer 2 segments over an underlay network, commonly an IP fabric, although the underlay could also be a Layer 2 fabric. Their use case derives from multi-tenancy requirements and the scale limitations of VLAN-based networks.
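As a rough sketch with the Havana-era neutron CLI, the following creates a VLAN-backed and a VXLAN-backed network. The physical network label "physnet1" and the segmentation IDs are illustrative and must match your plugin configuration.

# VLAN-based provider network, segmented by 802.1Q tag 100 (illustrative)
neutron net-create vlan-net --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 100

# VXLAN-based overlay network, segmented by VNI 1000 (illustrative)
neutron net-create vxlan-net --provider:network_type vxlan \
    --provider:segmentation_id 1000

# Attach a subnet so instances can receive addresses from the qdhcp namespace
neutron subnet-create vxlan-net 10.10.10.0/24 --name vxlan-subnet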

Both the Open vSwitch and LinuxBridge plugins are monolithic and cannot be used at the same time. A newer plugin, introduced in Havana and called Modular Layer 2 (ML2), allows multiple Layer 2 mechanisms to be used simultaneously. It works with the existing OVS and LinuxBridge agents and is intended to replace the monolithic plugins associated with those agents. OpenStack's foundations are flexible: OVS and other vendor appliances can be used in parallel to manage virtual networks in an OpenStack deployment, and plugins are also available that replace OVS with a physical managed switch to handle the virtual networks.
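A minimal ML2 sketch using the same crudini tool referenced later in this post; the driver lists are illustrative and the path assumes a stock RDO-style layout.

# Enable several type drivers and both reference mechanism drivers under ML2
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,gre,vxlan
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,linuxbridge
# Illustrative VNI range for tenant VXLAN networks
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1000:2000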

 

 

The ability to provide networks and network resources on demand via software is known as Network-as-a-Service (NaaS).

 

Open vSwitch

OVS is a popular software-based switch used to orchestrate the underlying virtualized networking infrastructure. It consists of a kernel module, a vSwitch daemon, and a database server. The kernel module is the data plane and is similar to an ASIC on a physical switch. The vSwitch daemon is a Linux process that creates the controls so the kernel can forward traffic. The database server is the Open vSwitch Database Server (OVSDB) and is local on every host. OVS-based networking involves five distinct elements: Tap devices, Linux bridges, virtual Ethernet cables, OVS bridges and OVS patch ports. Virtual Ethernet cables, known as veth pairs, mimic network patch cords. They are used to make connections to other bridges and namespaces (namespaces are discussed later). An OVS bridge is the virtualized switch; it behaves similarly to a physical switch and maintains MAC addresses.
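To poke around an OVS-backed node, something like the following is a useful starting point. The bridge names br-int, br-ex and br-tun follow the common OVS agent convention and may differ in your deployment.

ovs-vsctl show                 # daemon version plus bridges, ports and interfaces held in OVSDB
ovs-vsctl list-br              # e.g. br-int, br-ex, br-tun
ovs-vsctl list-ports br-int    # tap devices, veth ends and patch ports on the integration bridge
ovs-ofctl dump-flows br-int    # OpenFlow rules programmed into the kernel data path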

 


Deployment Note

There are a few OpenStack deployment methods, such as MaaS, Mirantis Fuel, Kickstack, and Packstack. They all have their advantages and disadvantages. Packstack is good for small deployments, proofs of concept and other test environments. It's a simple Puppet-based installer that uses SSH to connect to the nodes and invokes a Puppet run to install OpenStack. Additional configuration can be passed to Packstack via an answer file.
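A typical Packstack run looks something like the following; the answer-file name is illustrative.

packstack --gen-answer-file=answers.txt   # generate a template answer file
# edit answers.txt (node IPs, passwords, which components to install), then:
packstack --answer-file=answers.txt       # Packstack SSHes to each node and runs Puppet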

As part of the Packstack run, a file called keystonerc_admin is created. Keystone is the identity management component of OpenStack, and each OpenStack component registers with it. It's easiest to source the file so that the credential values it contains are placed in the shell environment. Cat the file to see its contents and get the login credentials; you will need this information to authenticate and interact with OpenStack.
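Something along these lines gets you a working CLI session; the variables are the standard OS_* credentials written by Packstack.

cat keystonerc_admin        # OS_USERNAME, OS_TENANT_NAME, OS_PASSWORD, OS_AUTH_URL
source keystonerc_admin     # export them into the current shell
keystone service-list       # verify authentication against the Keystone API
neutron net-list            # the same environment variables drive the Neutron CLI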


Neutron Networking

OpenStack is a multi-tenant platform; each tenant can have multiple private networks and network services, isolated through the use of network namespaces. Network namespaces allow tenants to have networks that overlap with those of other tenants. Consider a namespace as an enhanced VRF instance connected to one or more virtual switches. Neutron uses “qrouter”, “qlbaas” and “qdhcp” namespaces. Regardless of the network plugins installed, at a minimum you need to install the neutron-server service. This service exposes the Neutron API for external administration. By default, it is configured to listen for API calls on all addresses; this can be changed in the neutron.conf file by editing the bind_host setting, which defaults to 0.0.0.0.

“Neutron configuration file is found at /etc/neutron/neutron.conf”
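For example, to bind the API to a single management address, crudini can edit the setting in place. The IP is illustrative, and the service name may vary by distribution.

crudini --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.1.10
service neutron-server restart    # pick up the new bind address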

OpenStack Networking provides extensions that allow the creation of virtual routers and virtual load balancers. Virtual routers are created with the neutron-l3-agent. They perform Layer 3 forwarding and NAT. By default, a router performs Source NAT on traffic originating from an instance and destined to an external service. Source NAT modifies the source of the packet so that, to upstream devices, it appears to come from the router's external interface. When users want direct inbound access to an instance, Neutron uses what is known as a Floating IP address. It is analogous to static NAT: a one-to-one mapping of an external address to an internal address.
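A typical workflow looks something like the following; the network and subnet names and the ID placeholders are illustrative.

neutron router-create tenant-router
neutron router-gateway-set tenant-router ext-net          # enables source NAT towards the external network
neutron router-interface-add tenant-router private-subnet

# One-to-one (static NAT style) mapping for inbound access to an instance
neutron floatingip-create ext-net
neutron floatingip-associate FLOATING_IP_ID INSTANCE_PORT_ID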

“Neutron stores its L3 configuration in the l3_agent.ini files”

The L3 agent must first be associated with an interface driver before you can start it. The interface driver must correspond to the chosen network plugin, for example, LinuxBridge or OVS. The crudini command below sets this.

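A sketch of wiring the L3 agent to the OVS interface driver, mirroring the lbaas_agent command quoted further down; swap in the LinuxBridge driver if that is your plugin, and note the service name may differ by distribution.

crudini --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
# For the LinuxBridge plugin the driver would be neutron.agent.linux.interface.BridgeInterfaceDriver
service neutron-l3-agent start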

The neutron-lbaas-agent leverages the open-source HAProxy to load balance traffic destined to VIPs. HAProxy is a free, open-source load balancer. Third-party drivers are supported by LBaaS and will be discussed in later posts. Load Balancing as a Service enables tenants to scale their applications programmatically through the Neutron API. It supports basic load balancing algorithms and monitoring capabilities. The load balancing algorithms are restricted to round robin, least connections and source IP. For monitoring, it can do basic TCP connect tests as well as full Layer 7 tests that check HTTP status codes. As far as I'm aware, it doesn't support SSL offloading. By default, the HAProxy driver is installed in one-arm mode, meaning it uses the same interface for ingress and egress traffic. As it is not the default gateway for instances, it relies on Source NAT for proper return traffic forwarding. Neutron stores the agent's configuration in the lbaas_agent.ini file. Similar to the L3 agent, it must be associated with an interface driver before you can start it – “crudini --set /etc/neutron/lbaas_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver”.
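Putting it together, a basic HTTP pool might be built along these lines; the names, member addresses and subnet ID placeholder are illustrative.

neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id PRIVATE_SUBNET_ID
neutron lb-member-create --address 10.0.0.11 --protocol-port 80 web-pool
neutron lb-member-create --address 10.0.0.12 --protocol-port 80 web-pool
neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
    --subnet-id PRIVATE_SUBNET_ID web-pool
neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate HEALTH_MONITOR_ID web-pool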

Both agents use network namespaces to provide isolated forwarding and load balancing contexts.
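You can see these namespaces on the network node with the usual iproute2 tooling; the UUIDs are per-deployment.

ip netns list                                      # e.g. qrouter-<uuid>, qdhcp-<uuid>, qlbaas-<uuid>
ip netns exec qrouter-<uuid> ip addr               # interfaces and addresses inside the virtual router
ip netns exec qrouter-<uuid> iptables -t nat -L    # the SNAT and floating IP rules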

 

 

 
