Neutron-Based Clouds: Network-as-a-Service
The power of open source cloud environments is driven by the Liberty release of OpenStack and its Neutron networking components. OpenStack can now be used with many advanced technologies, such as Kubernetes clusters and Docker container networking. By default, Neutron handles all the networking aspects of OpenStack cloud deployments and allows the creation of network objects such as routers, subnets, and ports. For example, with a standard multi-tier application consisting of a front-end, middle, and back-end tier, Neutron creates three subnets and defines the conditions for tier interaction. Filtering is done either centrally or in a distributed fashion with tenant-level security groups acting as firewalls.
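The subnet layout for such a three-tier application can be sketched with Python's standard ipaddress module. This is only a conceptual illustration of the addressing Neutron would manage for you; the tenant CIDR and per-tier prefixes are hypothetical values, not defaults from any OpenStack release.

```python
import ipaddress

# Hypothetical tenant address space; real values come from your deployment.
tenant_cidr = ipaddress.ip_network("10.0.0.0/22")

# Carve one /24 per application tier, mirroring the three subnets Neutron
# would create for the front-end, middle, and back-end tiers.
front, middle, back = list(tenant_cidr.subnets(new_prefix=24))[:3]

print(front)   # 10.0.0.0/24
print(middle)  # 10.0.1.0/24
print(back)    # 10.0.2.0/24
```

Each tier then gets its own Layer 2 broadcast domain, and the conditions for tier interaction (for example, the middle tier reaching the back-end database port) are expressed as security group rules between the subnets.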
OpenStack is very modular, which allows it to be enhanced by commercial and open source network technologies. The plugin architecture allows different vendors to enhance networking and security with advanced routers, switches, and SDN controllers. Every OpenStack component manages a resource made available and virtualized to the user as a consumable service, whether it be creating a network or permitting traffic with ingress/egress rule chains. Everything is done in software – a powerful abstraction for cloud environments.
Control, Network, and Compute components
The OpenStack architecture for Neutron-based clouds divides into Control, Network, and Compute components. At a very high level, the control tier runs the Application Programming Interfaces (APIs), the compute tier runs the hypervisors along with various agents, and the network component provides network service control. All of these components use a database and a message bus: examples of databases include MySQL, PostgreSQL, and MariaDB, while RabbitMQ and Qpid are examples of message buses. The default plugins are Modular Layer 2 (ML2) and Open vSwitch.
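The ML2/Open vSwitch defaults are typically expressed in the ML2 plugin configuration. The fragment below is illustrative only; the exact values (driver lists, VNI ranges) vary per deployment:

```ini
# ml2_conf.ini (illustrative values, not deployment defaults)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000
```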
Ports, Networks, and Subnets
Neutron's core, and the base for its API, is very simple. It consists of Ports, Networks, and Subnets. Ports hold the IP and MAC addresses and define how a VM connects to the network; they are an abstraction for VM connectivity. A network is a Layer 2 broadcast domain, represented as an external network (reachable from the internet), a provider network (mapped to an existing physical network), or a tenant network, created by users of the cloud and isolated from other tenant networks. Layer 3 routers connect networks together, and subnets are the IP address blocks attached to networks.
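The relationships between these three core resources can be sketched as a toy data model. This is not Neutron's actual schema, just the containment described above: a port attaches to a subnet, and a subnet attaches to a network.

```python
from dataclasses import dataclass

# A minimal, hypothetical model of Neutron's core resources -- not the
# real Neutron data model, only the relationships described in the text.

@dataclass
class Network:          # a Layer 2 broadcast domain
    name: str

@dataclass
class Subnet:           # an IP address block attached to a network
    cidr: str
    network: Network

@dataclass
class Port:             # a VM attachment point holding MAC and fixed IP
    mac: str
    fixed_ip: str
    subnet: Subnet

web_net = Network("tenant-web")
web_subnet = Subnet("10.0.0.0/24", web_net)
vm_port = Port("fa:16:3e:aa:bb:cc", "10.0.0.5", web_subnet)

# A port resolves to its Layer 2 domain through its subnet.
print(vm_port.subnet.network.name)  # tenant-web
```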
OpenStack Neutron Components
OpenStack networking with Neutron provides an API to create a variety of network objects. This is a very powerful abstraction that allows the creation of networks in software and the ability to attach multiple subnets to a single network. Neutron networks are isolated or connected together with Layer 3 routers for inter-network connectivity. Neutron employs the concept of the floating IP, best understood as a 1:1 NAT translation. The term “floating” comes from the fact that it can be remapped on the fly between instances. It may seem that floating IPs are assigned to instances, but they are actually assigned to ports. Everything gets assigned to ports: fixed IPs, security groups, and MAC addresses. Inbound and outbound traffic to and from tenants is enabled by either SNAT (source NAT) or DNAT (destination NAT). DNAT modifies the destination IP address in the packet header, and SNAT modifies the source IP address.
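The 1:1 NAT behaviour of a floating IP can be sketched as a simple mapping table. In a real deployment this translation is done by iptables rules inside the router namespace; the addresses below are hypothetical documentation-range values.

```python
# Toy sketch of a floating IP as 1:1 NAT (hypothetical addresses).
# Key: floating IP on the external network; value: the port's fixed IP.
floating_to_fixed = {"203.0.113.10": "10.0.0.5"}

def dnat(dst_ip: str) -> str:
    """Inbound traffic: rewrite the destination (floating -> fixed)."""
    return floating_to_fixed.get(dst_ip, dst_ip)

def snat(src_ip: str) -> str:
    """Outbound traffic: rewrite the source (fixed -> floating)."""
    fixed_to_floating = {v: k for k, v in floating_to_fixed.items()}
    return fixed_to_floating.get(src_ip, src_ip)

# "Floating" means remappable on the fly: point the same external
# address at a different instance's port without touching the instance.
floating_to_fixed["203.0.113.10"] = "10.0.0.9"
```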
Open vSwitch and the Linux bridge
For switching functionality, Neutron can be integrated with Open vSwitch or the Linux bridge. By default, it integrates with the ML2 plugin and Open vSwitch. Both Open vSwitch and the Linux bridge are virtual switches, orchestrating the network infrastructure. For enhanced networking, the virtual switch can be controlled outside of Neutron by third-party network products and SDN controllers via the use of plugins. Open vSwitch may also be totally replaced or used in parallel. Recently, there have been many enhancements to classic forwarding with Open vSwitch and the Linux bridge. We now have several high availability options, including L3 High Availability with VRRP and the Distributed Virtual Routing (DVR) feature. DVR essentially moves routing from the Layer 3 agent to the compute nodes. However, it only works with tunnels and L2 population (l2pop) enabled, and it requires that the compute nodes have external network connectivity. For production environments, these HA features are a welcome update.
The following shows three bridges created in Open vSwitch: br-ex, br-ens3, and br-int. The br-int bridge is the main integration bridge, and the others connect to it via special patch ports.
Neutron has several parts, backed by a relational database. The Neutron server exposes the API and talks to the various agents (L2 agent, L3 agent, DHCP agent, etc.) via RPC over the message queue. The Layer 2 agent runs on the compute nodes and communicates with the Neutron server via RPC. Some deployments don’t have an L2 agent, for example, if you are using an SDN controller. Also, if deploying the Linux bridge instead of Open vSwitch, you wouldn’t have the Open vSwitch agent; instead, you would use the standard Linux bridge utilities. The Layer 3 agent runs on the Neutron network node and uses Linux namespaces to implement multiple copies of the IP stack. It also runs the metadata agent and supports static routing.
An integral part of Neutron networking is the Linux namespace, used for object isolation. Namespaces enable multi-tenancy and allow overlapping IP address assignment between tenants, a key requirement for many cloud environments. Every network and network service created by a user is represented as a namespace. For example, a qdhcp namespace represents the DHCP service, a qrouter namespace represents a router, and a qlbaas namespace represents the load-balancing service based on HAProxy. The qrouter namespaces provide routing among networks, handling both north-south and east-west traffic. They also perform SNAT and DNAT in classic non-DVR scenarios. In certain cases with DVR, dedicated snat namespaces perform SNAT for north-south network traffic.
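Why namespaces permit overlapping addressing can be illustrated with a toy routing table keyed by namespace. Each tenant's routes live in its own isolated table, so identical CIDRs never collide; the namespace and device names below are hypothetical.

```python
# Toy illustration of per-namespace route isolation (hypothetical names).
# In Linux, each namespace has its own routing table; modelling the
# namespace as part of the key shows why identical CIDRs don't conflict.
routes = {}  # (namespace, cidr) -> router-side device

def add_route(namespace: str, cidr: str, device: str) -> None:
    routes[(namespace, cidr)] = device

# Two tenants reuse the exact same subnet without any conflict.
add_route("qrouter-tenantA", "10.0.0.0/24", "qr-aaaa")
add_route("qrouter-tenantB", "10.0.0.0/24", "qr-bbbb")
```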
Neutron Security Groups
OpenStack has the concept of security groups. They are a type of tenant-level firewall enabling Neutron to provide distributed security filtering. Due to limitations in how iptables interacts with Open vSwitch, the Linux bridge enforces the security groups. Neutron security groups are not applied directly on the integration bridge; they are actually implemented on the Linux bridge that connects to the integration bridge. The reliance on the Linux bridge stems from the fact that Neutron cannot place iptables rules on tap interfaces connected to Open vSwitch. Once a security group has been applied to a Neutron port, the rules are translated into iptables rules, which are then applied on the node hosting the respective instance. Neutron also has the ability to protect instances with perimeter firewalls, known as Firewall-as-a-Service (FWaaS). These firewall rules are implemented with iptables within a Neutron router’s namespace, as opposed to being configured on every compute host. The following diagram displays ingress and egress rules for the default security group. Instances that don’t have a security group assigned are placed in the default security group.
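Conceptually, the iptables rules Neutron generates implement a default-deny match over the group's rules. The sketch below is a simplified illustration of that evaluation, not Neutron's actual rule format, and the SSH/HTTP rules shown are hypothetical rather than the real default group's contents.

```python
# Simplified sketch of ingress security group evaluation (default deny).
# Hypothetical rules: allow inbound SSH and HTTP from anywhere.
ingress_rules = [
    {"protocol": "tcp", "port": 22, "remote_prefix": "0.0.0.0/0"},
    {"protocol": "tcp", "port": 80, "remote_prefix": "0.0.0.0/0"},
]

def allowed(protocol: str, port: int, rules) -> bool:
    """A packet is admitted only if some rule in the group matches."""
    return any(r["protocol"] == protocol and r["port"] == port for r in rules)

print(allowed("tcp", 22, ingress_rules))  # True
print(allowed("udp", 53, ingress_rules))  # False
```

The important property is the direction of the default: anything not explicitly permitted by a rule in the group is dropped, which is exactly what the generated iptables chains enforce on the Linux bridge in front of each instance's tap interface.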