
Container Based Virtualization

Container-based virtualization, or containerization, is a popular technology revolutionizing how we deploy and manage applications. In this blog post, we will explore what container-based virtualization is, why it is gaining traction, and how it differs from traditional virtualization techniques.

Container-based virtualization is a lightweight alternative to traditional methods such as hypervisor-based virtualization. Unlike virtual machines (VMs), which require a separate operating system (OS) instance for each application, containers share the host OS. This means containers can be more efficient regarding resource utilization and faster to start and stop.

Container-based virtualization, also known as operating system-level virtualization, is a lightweight virtualization method that allows multiple isolated user-space instances, known as containers, to run on a single host operating system. Unlike traditional virtualization techniques, which rely on hypervisors and full-fledged guest operating systems, containerization leverages the host operating system's kernel to provide resource isolation and process separation. This streamlined approach eliminates the need for redundant operating system installations, resulting in improved performance and efficiency.

Enhanced Portability: Containers encapsulate all the dependencies required to run an application, making them highly portable across different environments. Developers can package their applications with all the necessary libraries, frameworks, and configurations, ensuring consistent behavior regardless of the underlying infrastructure.

Scalability and Resource Efficiency: Containers enable efficient resource utilization by sharing the host's operating system and kernel. With their lightweight nature, containers can be rapidly provisioned, scaled up or down, and migrated across hosts, ensuring optimal resource allocation and responsiveness.

Isolation and Security: Containers provide isolation at the process level, ensuring that each application runs in its own isolated environment. This isolation prevents interference and minimizes security risks, making container-based virtualization an attractive choice for multi-tenant environments and cloud-native applications.

Container-based virtualization has gained significant traction across various industries and use cases. Some notable examples include:

Microservices Architecture: Containerization seamlessly aligns with the principles of microservices, allowing applications to be broken down into smaller, independent services. Each microservice can be encapsulated within its own container, enabling rapid development, deployment, and scaling.

DevOps and Continuous Integration/Continuous Deployment (CI/CD): Containers play a crucial role in modern DevOps practices, streamlining the software development lifecycle. With container-based virtualization, developers can easily package, test, and deploy applications across different environments, ensuring consistency and reducing deployment complexities.

Hybrid and Multi-Cloud Environments: Containers facilitate hybrid and multi-cloud strategies by abstracting away the underlying infrastructure dependencies. Applications can be packaged as containers and seamlessly deployed across different cloud providers or on-premises environments, enabling flexibility and avoiding vendor lock-in.

Traditional Deployment Models

So, how do containers facilitate virtualization? Firstly, traditional application deployment was based on a single-server approach. As a result, one application was installed per physical server, wasting resources; components such as RAM and CPU were never fully utilized. There was also considerable vendor lock-in, making it hard to move applications from one hardware vendor to another.

Then, the world of hypervisor-based virtualization was introduced, and the concept of a virtual machine (VM) was born. Soon after, we had container-based applications. Container-based virtualization introduced container networking, and new principles arose for security around containers, specifically, Docker container security.


Introducing hypervisors

We still deployed physical servers but introduced hypervisors on the physical host, enabling the installation of multiple VMs on a single server. Each VM is isolated with its own operating system. Hypervisor-based virtualization introduced better resource pooling, as one physical server could now be divided into multiple VMs, each hosting a different application type. This was a significant improvement over single-server deployments and opened the door to open networking.

The VM deployment approach increased agility and scalability, as applications within a VM are scaled by simply spinning up more VMs on any physical host. While hypervisor-based virtualization was a step in the right direction, a guest operating system for each application is resource-intensive. Each VM requires RAM, CPU, storage, and an entire guest OS, all of which consume resources.

Introducing Virtualization

Another advantage of virtualization is the ability to isolate applications or services. Each virtual machine operates independently, with its resources and configurations. This enhances security and stability, as issues in one virtual machine do not affect others. It also allows for easy testing and development, as virtual machines can be quickly created and discarded.


Virtualization also offers improved disaster recovery and business continuity. By encapsulating the entire virtual machine, including its operating system, applications, and data, into a single file, organizations can quickly back up, replicate, and restore virtual machines. This ensures that critical systems and data are protected and can rapidly recover during a failure or disaster.

Furthermore, virtualization enables workload balancing and dynamic resource allocation. Virtual machines can be dynamically migrated between physical servers to optimize resource utilization and performance. This allows for better utilization of computing resources and the ability to respond to changing workload demands.

Related: You may find the following posts helpful before proceeding to how containers facilitate virtualization.

  1. Docker Default Networking 101
  2. Kubernetes Networking 101
  3. Kubernetes Network Namespace
  4. WAN Virtualization
  5. OVS Bridge
  6. Remote Browser Isolation



Container Virtualization.

Key Container Based Virtualization Discussion points:


  • Introduction to containerized technologies.

  • The role of container-based applications.

  • Discussion on container networking and Linux kernel. 

  • A final note on microsegmentation.

Back to Basics: Containers and Container Virtualization

The Traditional World

Before we address how containers facilitate virtualization, let’s address the basics. In the past, we could run only one application per server. The open-systems world of Windows and Linux simply didn’t have the technologies to safely and securely run multiple applications on the same server.

So, every time we needed a new application, we would buy a new server. The virtual machine (VM) was introduced to solve this waste of resources. With the VM, we had a technology that permitted us to safely and securely run applications on a single server. Unfortunately, the VM model also brought additional challenges.

Migrating VMs

For example, VMs are slow to boot, and portability isn’t great: migrating and moving VM workloads between hypervisors and cloud platforms is more complicated than it needs to be. All of this drove the need for a new technology, containers with container virtualization.

How do containers facilitate virtualization? We needed a lightweight tool that retained the scalability and agility benefits of the VM-based application approach. That lightweight tool is container-based virtualization, with Docker at the forefront. Containers offer a capability similar to object-oriented programming: they let you build composable, modular building blocks, making it easier to design distributed systems.

Diagram: Docker Container. Source: Docker.

1st Lab Guide on Container Networking

In the following example, we have one Docker host. We can list the available networks for this Docker host with the command docker network ls. These are not WAN or VPN networks; they are only Docker networks.

Docker networks are virtual networks that allow containers to communicate with each other and the outside world. They provide isolation, security, and flexibility to manage network traffic flow between containers. By default, when you create a new Docker container, it is connected to a default bridge network, which allows communication with other containers on the same host.

Notice the subnet assigned: 172.17.0.0/16. The default gateway (exit point) is set to the docker0 bridge.
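As a sketch of what this looks like on a fresh host (network IDs vary, but the defaults below are typical):

```bash
# List the networks Docker creates by default
docker network ls
# NETWORK ID     NAME      DRIVER    SCOPE
# ...            bridge    bridge    local
# ...            host      host      local
# ...            none      null      local

# Inspect the default bridge for its subnet and gateway (the docker0 bridge)
docker network inspect bridge --format '{{json .IPAM.Config}}'
# [{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]
```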

Diagram: Docker networking.

Types of Docker Networks:

Docker offers various types of networks, each serving a specific purpose:

1. Bridge Network:

The bridge network is the default network that enables communication between containers on the same host. Containers connected to the bridge network can communicate using IP addresses or container names. It provides a simple way to connect containers without exposing them to the outside world.

2. Host Network:

In the host network mode, a container shares the network stack with the host, using its network interface directly. This mode provides maximum network performance as no network address translation (NAT) is involved. However, it also means the container is directly exposed to the host’s network, potentially introducing security risks.

3. Overlay Network:

The overlay network allows containers to communicate across multiple Docker hosts, even in different physical or virtual networks. It achieves this by encapsulating network packets and routing them to the appropriate destination. Overlay networks are essential for creating distributed and scalable applications.

4. Macvlan Network:

The Macvlan network mode allows containers to have MAC addresses and appear as separate devices. This mode is useful when assigning IP addresses to containers and making them accessible from the external network. It is commonly used when containers must be treated as physical devices.

5. None Network:

The none network mode isolates a container from all networking. It effectively disables all networking capabilities and prevents the container from communicating with other containers or the outside world. This mode is typically used when networking is not required or desired. A sketch of selecting these modes at run time follows.
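As a minimal sketch, the network type is chosen with the --network flag when starting a container; the image and container names here are just examples:

```bash
# Default bridge network (implicit when --network is omitted)
docker run -d --name web nginx

# Host network: share the host's network stack, no NAT
docker run -d --name metrics --network host nginx

# None: fully isolated, no networking at all
docker run -d --name batch --network none alpine sleep 3600
```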

  • A key point: Lab Guide on Container Networking

You can attach as many containers as you like to a bridge. They will be assigned IP addresses within the same subnet, meaning they can communicate by default. You can have a container with two Ethernet interfaces (virtual interfaces) connected to two different bridges on the same host and have connectivity to two networks simultaneously.

Also, remember that the scope here is local. Even if the Docker hosts sit on the same underlying network, containers on different hosts won’t have IP reachability by default. In that case, you may need a VXLAN overlay network to connect containers on different Docker hosts.
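Here is a minimal sketch of the dual-bridge setup described above; the network and container names are illustrative:

```bash
# Create two separate bridge networks on the same host
docker network create frontnet
docker network create backnet

# Start a container on the first bridge, then attach it to the second;
# it now has a virtual interface (and an IP) on each bridge
docker run -d --name app --network frontnet nginx
docker network connect backnet app

# Confirm the container is attached to both networks
docker inspect app --format '{{json .NetworkSettings.Networks}}'
```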

Diagram: Inspecting container networks.

Container-based Virtualization

One critical benefit of container-based virtualization is its portability. Containers encapsulate the application and all its dependencies, allowing it to run consistently across different environments, from development to production. This portability eliminates the “it works on my machine” problem and makes it easier to maintain and scale applications.

Scalability

Another advantage of containerization is its scalability. Containers can be easily replicated and distributed across multiple hosts, making it straightforward to scale applications horizontally. Furthermore, container orchestration platforms, like Kubernetes, provide automated management and scaling of containers, simplifying the deployment and management of complex applications.

Security

Security is crucial to any virtualization technology, and container-based virtualization is no exception. Containers provide isolation between applications, preventing them from interfering with each other. However, it is essential to note that containers share the same kernel as the host OS, which means a compromised container can potentially impact other containers. Proper security measures, such as regular updates and vulnerability scanning, are essential to ensure the security of containerized applications.

Tooling

Container-based virtualization also offers various tools and platforms for application development and deployment. Docker, for example, is a popular containerization platform that provides a user-friendly interface for building, running, and managing containers. It simplifies container image creation and enables developers to package their applications and dependencies.

Applications of Container-Based Virtualization:

1. DevOps and Continuous Integration/Continuous Deployment (CI/CD): Containerization enables developers to package applications, libraries, and configurations into portable and reproducible containers. This simplifies the deployment process and ensures consistency across different environments, facilitating faster software delivery.

2. Microservices Architecture: Container-based virtualization aligns well with the microservices architectural pattern. Organizations can develop, deploy, and scale each service independently using containers by breaking down complex applications into smaller, loosely coupled services. This approach enhances modularity, scalability, and fault tolerance.

3. Hybrid Cloud and Multi-Cloud Environments: Containers provide a unified platform for deploying applications across hybrid and multi-cloud environments. With container orchestration tools, organizations can leverage the benefits of multiple cloud providers while ensuring consistent deployment and management practices.

Application Landscape Changes

The application landscape has changed from a monolithic design to a design consisting of microservices. Today, applications are constantly developed. Updates usually patch only certain parts of the application, and the entire application is built from loosely coupled components rather than tightly coupled ones.

The entire application stack is broken into components and spread over multiple servers and locations, all requiring cross-communication.

For example, users connect to a presentation layer, the presentation layer then connects to some shopping cart, and the shopping cart connects to a catalog library. These components are potentially stored on different servers, maybe different data centers.

The application is built from several small parts, known as microservices. Each component or microservice can now be put into a lightweight container.

Diagram: Container-based virtualization.

How do containers facilitate virtualization?

  • Container-Based Applications

Now, we have complex distributed software stacks based on microservices. Their base consists of loosely coupled components that may change, and the software runs on various hardware, including test machines, in-house clusters, and cloud deployments. The web front end may include the following:

  • Ruby on Rails.
  • API endpoints with Python 2.7.
  • Stack website with Nginx.
  • A variety of databases.

We have a very complex stack on top of various hardware devices. While the traditional monolithic application will likely remain for some time, containers still present a use case for modernizing the operational model of conventional stacks. Both monolithic and container-based applications can live together.

Container-based virtualization

The application’s complexity, scalability, and agility requirements have led us to the market of container-based virtualization. Container-based virtualization uses the host’s kernel to run multiple guest instances. Now, we can run multiple guest instances (containers), and each container will have its own root file system, process space, and network stack.

Containers allow you to package up an application with all its parts in an isolated environment. It is a complete abstraction and does not need its dependencies installed on the host. Docker, a container platform (first based on Linux Containers but now powered by runC), separates the application from infrastructure using container technologies.

Similar to how VMs separate the operating system from bare metal, containers let you build a layer of isolation in software that reduces the burden of human communication and specific workflows. An excellent way to understand containers is to accept that they are not VMs—they are simple wrappers around a single Unix process. Containers contain everything they need to run (runtime, code, libraries, etc.).

Linux kernel namespaces

Isolation, or variants of it, has been around for a while. Mount namespaces appeared in the 2.4 kernels and user namespaces in 3.8. These technologies allow the kernel to create partitions and isolate PIDs. Linux Containers (LXC) started in 2008, and Docker was introduced in January 2013, with a public release of 1.0 in 2014. We are now at version 1.9, which has some new networking enhancements.

Docker uses Linux kernel namespaces and control groups, providing an isolated workspace, which offers the starting grounds for the Docker security options. Namespaces offer an isolated workspace that we call a container. They fool the container into thinking it has the host to itself.

We have PID for process isolation, MNT for storage isolation, and NET for network-level isolation. Each NET namespace gets its own instance of the Linux network subsystem.
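One way to see these namespaces first-hand is the unshare utility from util-linux. A minimal sketch, assuming a Linux host with root access:

```bash
# Start a shell in fresh PID, mount, and network namespaces
# (--mount-proc remounts /proc so process listings reflect the new PID namespace)
sudo unshare --pid --net --fork --mount-proc bash

# Inside the new namespaces:
ip link show   # only an isolated loopback interface is visible
echo $$        # the shell sees itself as PID 1
```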

Diagram: How do containers facilitate virtualization.

Container based application: Container operations

Containers use schedulers. A scheduler starts containers on the correct host and then connects them. It also needs to manage container failover and handle container scalability when there is too much data for a single instance to process. Popular container schedulers include Docker Swarm, Apache Mesos, and Kubernetes.

The correct host is selected depending on the type of scheduler used. For example, Docker Swarm has three strategies: spread, binpack, and random. Spread selects the node with the fewest containers, disregarding their states. Binpack selects the host with the fewest free resources, i.e., the most packed. Finally, the random strategy picks a host at random.
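As a toy illustration of the spread idea only (not Swarm’s actual implementation), you could rank hosts by running-container count; host1 through host3 are hypothetical Docker contexts:

```bash
# Pick the host with the fewest running containers ("spread")
for h in host1 host2 host3; do
  echo "$(docker --context "$h" ps -q | wc -l) $h"
done | sort -n | head -n 1
```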

Containers are quick to start.

How do containers facilitate virtualization? First, they are quick. Starting a container is much faster than starting a VM—lightweight containers can be started in as little as 300ms. Initial tests on Docker revealed that a newly created container from an existing image takes up only 12 kilobytes of disk space.

A VM could take up thousands of megabytes. The container is lightweight because it points to a layered filesystem image. Container deployment is also swift and network-efficient.

Less data needs to travel across the network and storage fabrics. Elastic applications that have frequent state changes can be built more efficiently. Both Docker and Linux containers fundamentally change application consumption.
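You can get a rough feel for container start-up cost on any Docker host; timings vary widely with hardware and image cache state:

```bash
# Time a full create/start/stop/remove cycle for a tiny image
time docker run --rm alpine true
```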

As a side note, not all workloads are suitable for containers, and heavy loads like databases are put into VMs to support multi-cloud environments. 

Docker networking

Docker networking is an essential aspect of containerization that allows containers to communicate with each other and external networks. In this document, we will explore the different networking options available in Docker and how they can facilitate seamless communication between containers.

By default, Docker provides three networking options: bridge, host, and none. The bridge network is the default network created when Docker is installed. It allows containers to communicate with each other using IP addresses. Containers within the same bridge network can communicate with each other directly without the need for port mapping.

As the name suggests, the host network allows containers to share the network namespace with the host system. This means that containers using the host network can directly access the host system’s network interfaces. This option is helpful for scenarios where containers must bind to specific network interfaces on the host.

On the other hand, the none network option completely isolates the container from the network. Containers using the none network cannot communicate with other containers or external networks. This option is useful when running a container in complete isolation.

Creating custom networks

In addition to these default networking options, Docker also provides the ability to create custom networks. Custom networks allow containers to communicate with each other, even if they are not in the same network namespace. Custom networks can be made using the `docker network create` command, specifying the desired driver (bridge, overlay, macvlan, etc.) and any additional options.
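A minimal sketch of a user-defined network follows; the names and subnet are illustrative. Unlike the default bridge, user-defined bridges also give containers DNS-based name resolution:

```bash
# Create a custom bridge network with an explicit subnet
docker network create --driver bridge --subnet 10.10.0.0/24 appnet

# Containers attached to it can reach each other by name
docker run -d --name api --network appnet nginx
docker run --rm --network appnet alpine ping -c 2 api
```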

One of the main benefits of using custom networks is network-level access control. Because only containers attached to the same network can communicate directly, attaching and detaching containers from networks effectively controls which containers can talk to each other and which ports are reachable.

Closing Points on Docker networking

Networking is very different in Docker than what we are used to. Networks are domains that interconnect sets of containers. So, if you give access to a network, you give access to all the containers on it. However, if you want external access to other networks or containers, you must specify rules and port mapping.

A driver backs every network, be it a bridge or overlay driver. These Docker-based drivers can be swapped out with any ecosystem driver. The team at Docker views them as pluggable batteries.

Docker utilizes the concept of scope: local (the default) and global. The local scope is a local network, and the global scope has visibility across the entire cluster. If your driver is a global-scope driver, your network belongs to the global scope; a local-scope driver corresponds to the local scope.
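The SCOPE column of docker network ls makes the distinction visible. This sketch assumes a host where swarm mode can be initialized, since overlay networks require it:

```bash
# Local scope: visible only on this host
docker network create -d bridge local_net

# Swarm (global) scope: spans the cluster
docker swarm init
docker network create -d overlay --attachable cluster_net

docker network ls   # the SCOPE column shows "local" vs "swarm"
```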

Containers and Microsegmentation

Microsegmentation is a security technique that divides a network into smaller, isolated segments, allowing organizations to create granular security policies. This approach provides enhanced control and visibility over network traffic, preventing lateral movement and limiting the impact of potential security breaches.

Microsegmentation offers organizations a proactive approach to network security, allowing them to create an environment more resilient to cyber threats. By implementing microsegmentation, organizations can enhance their security posture, minimize the risk of lateral movement, and protect their most critical assets. As the cyber threat landscape continues to evolve, microsegmentation is an effective strategy to safeguard network infrastructure in an increasingly interconnected world.

  • Docker and Micro-segmentation

docker0 is the default bridge. Docker has since extended this into bundles of multiple networks, each with an independent bridge. Different bridges cannot directly talk to each other. Each is a private, isolated network, offering micro-segmentation and multi-tenancy features.

The only way for them to communicate is via the host namespace and port mapping, which is administratively controlled. Docker multi-host networking is a new feature in 1.9. A multi-host network comprises several Docker hosts that form a cluster.

Several containers in each host form the cluster by pointing to the same key-value (KV) store, for example, ZooKeeper. The KV store that you point to defines your cluster. Multi-host networking enables the creation of different topologies and lets a container belong to several networks. The KV store may also be another container, allowing you to stay in a 100% container world.

Final points on container-based virtualization

In recent years, container-based virtualization has become a popular way to deploy and manage applications. Unlike traditional virtualization, which relies on hypervisors to run multiple virtual machines on a single physical server, container-based virtualization leverages lightweight, isolated containers to run applications.

So, what exactly is container-based virtualization, and why is it gaining traction in the technology industry? In this blog post, we will explore the concept of container-based virtualization, its benefits, and how it differs from traditional virtualization.

Operating system-level virtualization

Container-based virtualization, also known as operating system-level virtualization, is a form of virtualization that allows multiple containers to run on a single operating system kernel. Each container is isolated from the others, ensuring that applications and their dependencies are encapsulated within their runtime environment. This isolation eliminates application conflicts and provides a consistent environment across deployment platforms.

Diagram: Docker default networking 101.

Critical advantages of container virtualization

One of the critical advantages of container-based virtualization is its lightweight nature. Containers are designed to be portable and efficient, allowing for rapid deployment and scaling of applications. Unlike virtual machines, which require an entire operating system to run, containers share the host operating system kernel, reducing resource overhead and improving performance.

Another benefit of container-based virtualization is its ability to facilitate microservices architecture. By breaking down applications into smaller, independent services, containers enable developers to build and deploy applications more efficiently. Each microservice can be encapsulated within its container, making it easier to manage and update without impacting other application parts.

Greater flexibility and scalability

Moreover, container-based virtualization offers greater flexibility and scalability. Containers can be easily replicated and distributed across hosts, allowing for seamless horizontal scaling. This ability to scale quickly and efficiently makes container-based virtualization ideal for modern, dynamic environments where applications must adapt to changing demands.

Container virtualization is not a complete replacement.

It’s important to note that container-based virtualization is not a replacement for traditional virtualization. Instead, it complements it. While traditional virtualization is well-suited for running multiple operating systems on a single physical server, container-based virtualization is focused on maximizing resource utilization within a single operating system.

In conclusion, container-based virtualization has revolutionized application deployment and management. Its lightweight nature, isolation capabilities, and scalability make it a compelling choice for modern software development and deployment. As technology continues to evolve, container-based virtualization will likely play a significant role in shaping the future of application deployment.

Container-based virtualization has transformed how we develop, deploy, and manage applications. Its lightweight nature, scalability, portability, and isolation capabilities make it an attractive choice for modern software development. By adopting containerization, organizations can achieve greater efficiency, agility, and cost savings in their software development and deployment processes. As container technologies continue to evolve, we can expect even more exciting possibilities in virtualization.

Summary: Container Based Virtualization

In recent years, container-based virtualization has emerged as a game-changer in technology. This innovative approach offers numerous advantages over traditional virtualization methods, providing enhanced flexibility, scalability, and efficiency. This blog post delved into container-based virtualization, exploring its key concepts, benefits, and real-world applications.

Understanding Container-Based Virtualization

Container-based virtualization, or operating system-level virtualization, is a lightweight alternative to traditional hypervisor-based virtualization. Unlike the latter, where each virtual machine runs on a separate operating system, containerization allows multiple containers to share the same OS kernel. This approach eliminates redundant OS installations, resulting in a more efficient and resource-friendly system.

Benefits of Container-Based Virtualization

2.1 Enhanced Performance and Efficiency

Containers are lightweight and have minimal overhead, enabling faster deployment and startup times than traditional virtual machines. Additionally, the shared kernel architecture reduces resource consumption, allowing for higher density and better utilization of hardware resources.

2.2 Improved Scalability and Portability

Containers are highly scalable, allowing applications to be easily replicated and deployed across various environments. With container orchestration platforms like Kubernetes, organizations can effortlessly manage and scale their containerized applications, ensuring seamless operations even during periods of high demand.

2.3 Isolation and Security

Containers provide isolation between applications and the host operating system, enhancing security and reducing the risk of malicious attacks. Each container operates within its own isolated environment, preventing interference from other containers and mitigating potential vulnerabilities.

Real-World Applications

3.1 Microservices Architecture

Container-based virtualization aligns perfectly with the microservices architectural pattern. By breaking down applications into smaller, decoupled services, organizations can leverage the agility and scalability containers offer. Each microservice can be encapsulated within its container, enabling independent development, deployment, and scaling.

3.2 DevOps and Continuous Integration/Continuous Deployment (CI/CD)

Containerization has become a cornerstone of modern DevOps practices. By packaging applications and their dependencies into containers, development teams can ensure consistent and reproducible environments across the entire software development lifecycle. This facilitates seamless integration, testing, and deployment processes, leading to faster time-to-market and improved collaboration between development and operations teams.

Conclusion:

Container-based virtualization has revolutionized how we build, deploy, and manage applications. Its lightweight nature, scalability, and efficient resource utilization make it an ideal choice for modern software development and deployment. As organizations continue to embrace digital transformation, containerization will undoubtedly play a crucial role in shaping the future of technology.


OVS Bridge and Open vSwitch (OVS) Basics

 


 

Open vSwitch: What is OVS Bridge?

Open vSwitch (OVS) is an open-source multilayer virtual switch that provides a flexible and robust solution for network virtualization and software-defined networking (SDN) environments. Its versatility and extensive feature set make it an invaluable tool for network administrators and developers. In this blog post, we will explore the world of Open vSwitch, its key features, benefits, and use cases.

Open vSwitch is a software switch designed for virtualized environments, enabling efficient network virtualization and SDN. It operates at layer 2 (data link layer) and layer 3 (network layer), offering advanced networking capabilities that enhance performance, security, and scalability.

 

Highlights: Open vSwitch

  • Barriers to Network Innovation

There are many barriers to network innovation, which make it difficult for outsiders to drive features and innovate. Until recently, technologies were largely proprietary and controlled by a few vendors. The lack of available tools limited network virtualization and network resource abstraction. Many new initiatives are now challenging this space, and the Open vSwitch project with the OVS bridge is one of them. The Open Networking Foundation (ONF), a non-profit organization, promotes the adoption of software-defined networking through open standards and open networking.

  • The Role of OVS Switch

Since its release, the OVS switch has gained popularity and is now the de-facto open standard cloud networking switch. It changes the network landscape and moves the network edge to the hypervisor. The hypervisor is the new edge of the network. It resolves the problem of network separation; cloud users can now be assigned VMs with flexible configurations. It brings new challenges to networking and security, some of which the OVS network can alleviate in conjunction with OVS rules.

 

For pre-information, before you proceed, you may find the following posts of interest:

  1. Container Networking
  2. OpenStack Neutron
  3. OpenStack Neutron Security Groups
  4. Neutron Networks
  5. Neutron Network

 



Open vSwitch.

Key OVS Bridge Discussion Points:


  • Introduction to OVS Bridge and how it can be used.

  • Discussion on virtual network bridges and flow rules.

  • Discussion on how the Open vSwitch works and the components involved.

  • Highlighting Flow Forwarding.

  • Programming the OVS switch with OVS rules.

  • A final note on OpenFlow and the OVS Bridge.

 

Back to Basics With Open vSwitch

The virtual switch

A virtual switch is a software-defined networking (SDN) device that enables the connection of multiple virtual machines within a single physical host. It is a Layer 2 device that operates within the virtualized environment and provides the same functionalities as a physical switch.

Virtual switches can be used to improve the performance and scalability of the network and are often used in cloud computing and virtualized environments. Virtual switches provide several advantages over their physical counterparts, including flexibility, scalability, and cost savings. In addition, as virtual switches are software-defined, they can be easily configured and managed by administrators.

Virtual switches are software-based switches that reside in the hypervisor kernel, providing local network connectivity between virtual machines (and now containers). They deliver functions like MAC learning and features like link aggregation, SPAN, and sFlow, just as their physical switch counterparts have been doing for years. While these virtual switches are often found in more comprehensive SDN and network virtualization solutions, they are simply switches that happen to run in software.

Diagram: Virtual Switch. Source: Fujitsu.

 

Network virtualization

Network virtualization can also enable organizations to improve their network performance by allowing them to create multiple isolated networks. This can be particularly helpful when an organization’s network is experiencing congestion due to multiple applications, users, or customers. By segmenting the network into multiple isolated networks, each network can be optimized for the specific needs of its users.

In summary, network virtualization is a powerful tool that can enable organizations to control better and manage their network resources while still providing the flexibility and performance needed to meet the demands of their users. Network virtualization can help organizations improve their networks’ security, privacy, scalability, and performance by allowing organizations to create multiple isolated networks.

Diagram: Network and Server virtualization. Source: Parallels.

 

Highlighting the OVS bridge

Open vSwitch is an open-source software switch designed for virtualized environments. It provides a multi-layer virtual switch designed to enable network connectivity and communication between virtual machines running within a single host or across multiple hosts. In addition, Open vSwitch fully complies with the OpenFlow protocol, allowing it to be integrated with other OpenFlow-compatible software components.

The software switch can also manage various virtual networking functions, including VLANs, routing, and port mirroring. Open vSwitch is highly configurable and can construct complex virtual networks. It supports a variety of features, including multiple VLANs, network isolation, and dynamic port configurations. As a result, Open vSwitch is a critical component of many virtualized environments, providing an essential and powerful tool for managing the network environment.

 

  • A simple flow-based switch

Open vSwitch originates from academic labs, from a project known as Ethane (SIGCOMM 2007). Ethane created a simple flow-based switch with a central controller. The central controller has end-to-end visibility, allowing policies to be applied in one place while affecting many data plane devices. In addition, central controllers make orchestrating the network much more accessible. That work led to the OpenFlow protocol (SIGCOMM CCR 2008), and the first Open vSwitch (OVS) release followed in early 2009.

 

Key Features of Open vSwitch:

Virtual Switching: Open vSwitch allows the creation of virtual switches, enabling network administrators to define and manage multiple isolated networks on a single physical machine. This feature is particularly useful in cloud computing environments, where virtual machines (VMs) require network connectivity.

Flow Control: Open vSwitch supports flow-based packet processing, allowing administrators to define rules to handle network traffic efficiently. This feature enables fine-grained control over network traffic, implementing Quality of Service (QoS) policies, and enhancing network performance.

Network Virtualization: Open vSwitch enables network virtualization by supporting network overlays such as VXLAN, GRE, and Geneve. This allows the creation of virtual networks that span physical infrastructure, simplifying network management and enabling seamless migration of virtual machines across different hosts.

SDN Integration: Open vSwitch seamlessly integrates with SDN controllers, such as OpenDaylight and OpenFlow, enabling centralized network management and programmability. This integration empowers administrators to automate network provisioning, optimize traffic routing, and implement dynamic policies.
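To illustrate the overlay support mentioned above, a tunnel is simply another port on an OVS bridge. A minimal sketch, with an illustrative bridge name and peer IP:

```bash
# Add a VXLAN tunnel port to br0, pointing at a peer hypervisor
sudo ovs-vsctl add-port br0 vxlan0 -- \
  set interface vxlan0 type=vxlan options:remote_ip=192.168.1.20
```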

Benefits of Open vSwitch:

Flexibility: Open vSwitch offers a wide range of features and APIs, providing flexibility to adapt to various network requirements. Its modular architecture allows administrators to customize and extend functionalities per their needs, making it highly versatile.

Scalability: Open vSwitch scales effortlessly as network demands grow, efficiently handling large numbers of virtual machines and network flows. Its distributed nature enables load balancing and fault tolerance, ensuring high availability and performance.

Cost-Effectiveness: Being an open-source solution, Open vSwitch eliminates the need for expensive proprietary hardware. This reduces costs and enables organizations to leverage the benefits of software-defined networking without a significant investment.

Use Cases:

Cloud Computing: Open vSwitch plays a crucial role in cloud computing environments, enabling network virtualization, multi-tenant isolation, and seamless VM migration. It facilitates the creation and management of virtual networks, enhancing the agility and efficiency of cloud infrastructure.

SDN Deployments: Open vSwitch integrates seamlessly with SDN controllers, making it an ideal choice for SDN deployments. It allows for centralized network management, dynamic policy enforcement, and programmability, enabling organizations to achieve greater control and flexibility over their networks.

Network Testing and Development: Open vSwitch provides a powerful tool for testing and development. Its extensive feature set and programmability allow developers to simulate complex network topologies, test network applications, and evaluate network performance under different conditions.

 

Open vSwitch (OVS)

The OVS bridge is a multilayer virtual switch implemented in software. It uses virtual network bridges and flow rules to forward packets between hosts. It behaves like a physical switch, only virtualized. Namespaces and instance tap interfaces connect to what is known as OVS bridge ports.

Like a traditional switch, OVS maintains information about connected devices, such as MAC addresses. In addition, it enhances the monolithic Linux Bridge plugin and includes overlay networking (GRE & VXLAN), providing multi-tenancy in cloud environments. 
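Creating and inspecting an OVS bridge takes a few ovs-vsctl commands. A minimal sketch, assuming Open vSwitch is installed and eth1 is a spare interface:

```bash
# Create a bridge and attach a physical port to it
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth1

# Show the bridge, its ports, and interfaces
sudo ovs-vsctl show
```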

Diagram: The Open vSwitch basic layout.

 

Programming the Open vSwitch and OVS rules

The OVS switch can also be integrated with hardware and serve as the control plane for switching silicon. Programming flow rules works differently in the OVS switch than in the standard Linux bridge. The OVS plugin does not use VLANs to tag traffic. Instead, it programs OVS flow rules on the virtual switches that dictate how traffic should be manipulated before being forwarded to the exit interface. The OVS rules essentially determine how inbound and outbound traffic should be treated.

OVS has two fail modes: a) standalone and b) secure. Standalone is the default mode and acts as a learning switch. Secure mode relies on the controller element to insert flow rules; therefore, secure mode has a dependency on the controller.
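A short sketch of both knobs, reusing the illustrative br0 bridge from earlier:

```bash
# Require a controller for forwarding decisions (no learning-switch fallback)
sudo ovs-vsctl set-fail-mode br0 secure

# Manually program a flow: anything arriving on port 1 goes out port 2
sudo ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
sudo ovs-ofctl dump-flows br0
```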

Diagram: OVS Bridge. Source: Open vSwitch.

 

Open vSwitch Flow Forwarding.

Kernel mode, known as “fast path” processing, is where the switching happens. If you relate this to hardware components on a physical device, the kernel mode maps to the ASIC. User mode is known as the “slow path.” If a new flow arrives that the kernel doesn’t know about, user mode is instructed to engage. Once the flow is active, user mode should not be invoked again. So you may take a hit the first time.

The first packet in a flow goes to the userspace ovs-vswitchd, and subsequent packets hit cached entries in the kernel. When the kernel module receives a packet, the cache is inspected to determine if there is a flow entry. The associated action is carried out on the packet if a corresponding flow entry is found in the cache.

This could be forwarding the packet or modifying its headers. If no cache entry is found, the packet is passed to the userspace ovs-vswitchd process for processing. Subsequent packets are processed in the kernel without userspace interaction. The processing speed of OVS is now faster than the original Linux bridge. It also has good support for megaflows and multithreading.
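Both paths can be observed directly on an OVS host with an active bridge (br0 here is illustrative):

```bash
# Kernel flow cache ("fast path"): entries installed after the first packet
sudo ovs-dpctl dump-flows

# Userspace OpenFlow tables ("slow path" decision logic)
sudo ovs-ofctl dump-flows br0
```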

Diagram: OVS rules and traffic flow.

 

OVS component architecture

There are several CLI tools to interface with the various components:

  • ovs-vsctl: manages the state in the ovsdb-server
  • ovs-appctl: sends commands to the ovs-vswitchd
  • ovs-dpctl: configures the kernel module
  • ovs-ofctl: works with the OpenFlow protocol

 

Diagram: What is OVS bridge? The components involved.

 

You may have an off-host component, such as a controller. It communicates with and manages a set of OVS components in a cluster. The controller has a global view and manages all the components. An example controller is OpenDaylight. OpenDaylight promotes the adoption of SDN and serves as a platform for Network Function Virtualization (NFV).

NFV virtualizes network services instead of using physical, function-specific hardware. A northbound interface exposes the network applications, and southbound interfaces communicate with the OVS components.

  • RYU provides a framework for SDN controllers and allows you to develop controllers. It is written in Python. It supports OpenFlow, Netconf, and OF-config.

There are many interfaces used to communicate across and between components. The database has a management protocol known as OVSDB (RFC 7047). OVS has a local database server on every physical host; it maintains the configuration of the virtual switches. Netlink communicates between user and kernel modes and between different userspace processes. It is used between ovs-vswitchd and openvswitch.ko and is designed to transfer miscellaneous networking information.

 

OpenFlow and the OVS bridge

OpenFlow can also be used to talk to and program the OVS. The ovsdb-server interfaces with an external controller (if used) and with ovs-vswitchd. Its purpose is to store information for the switches. Its state is persistent.

The central CLI tool is ovs-vsctl. The ovs-vswitchd interfaces with an external controller, the kernel via Netlink, and the ovsdb-server. Its purpose is to manage multiple bridges, and it is involved in the data path. It is a core system component of the OVS. Two CLI tools, ovs-ofctl and ovs-appctl, are used to interface with it.
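A few quick commands show which tool talks to which component; this assumes a running OVS installation:

```bash
# Talk to the ovs-vswitchd daemon via ovs-appctl
sudo ovs-appctl version
sudo ovs-appctl dpif/show

# Talk to the kernel datapath via ovs-dpctl
sudo ovs-dpctl show

# Query the ovsdb-server via ovs-vsctl
sudo ovs-vsctl get-manager
```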

 

Linux containers and networking

OVS can make use of Linux and Docker containers. Containers provide a layer of isolation that reduces the need for human coordination. They make it easy to build out example scenarios. Starting a container takes milliseconds, compared to minutes for a virtual machine.

Deploying container images is much faster if less data needs to travel across the fabric. Elastic applications with frequent state changes and dynamic resource allocation can be built more efficiently with containers. 

Linux and Docker containers represent a fundamental shift in how we consume and manage applications. Libvirt is a tool for working with containers; it is a virtualization API for Linux. Linux containers rely on process isolation in the kernel: instead of running a full-blown VM, you run a container that shares the host’s kernel yet remains entirely isolated.

Each container has its own view of networking and processes. Containers isolate instances without the overhead of a VM; they are a lightweight way of doing things on a host, building on mechanisms in the kernel.

 

Source versus package install

There are two paths for installation: a) source code and b) package installation based on your Linux distribution. The source code install is primarily used by developers and is helpful if you are working on an extension or focusing on hardware component integration. Before building from the repo, install any build dependencies, such as git, autoconf, and libtool.

Then you pull the source from GitHub with the clone command: git clone https://github.com/openvswitch/ovs. Running from source code is a lot more difficult than installing through a distribution; all the dependencies are handled for you when you install from packages.
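A condensed sketch of both paths; the package name follows Debian/Ubuntu conventions and varies by distribution:

```bash
# Package path (Debian/Ubuntu)
sudo apt-get install openvswitch-switch

# Source path (assumes git, autoconf, automake, and libtool are present)
git clone https://github.com/openvswitch/ovs
cd ovs
./boot.sh && ./configure && make && sudo make install
```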

Conclusion:

Open vSwitch is a feature-rich and highly flexible virtual switch that empowers network administrators and developers to build efficient and scalable networks. Its support for network virtualization, flow control, and SDN integration makes it a valuable tool in cloud computing environments, SDN deployments, and network testing and development. By leveraging Open vSwitch, organizations can unlock the full potential of network virtualization and software-defined networking, enhancing their network capabilities and driving innovation in the digital era.

 
