Computer Networks


Computer networking has revolutionized how we communicate and share information in today’s digital age. Computer networking offers many possibilities and opportunities, from the Internet to local area networks. This blog post will delve into the fascinating world of computer networking and discover its key components, benefits, and prospects.

Computer networking is essentially the practice of connecting multiple devices to share resources and information. It involves using protocols, hardware, and software to establish and maintain these connections. Understanding networking fundamentals, such as IP addresses, routers, and switches, is crucial for anyone venturing into this field.

Highlights: Computer Networking

  • Network Components

Creating a computer network requires careful preparation and knowledge of the right components. One of the first steps in computer networking is identifying which components to use and where to place them. This includes selecting the proper hardware, such as Layer 3 routers, Layer 2 switches, and, on older networks, Layer 1 hubs, along with the right software, such as operating systems, applications, and network services. You should also decide whether any advanced computer networking techniques, such as virtualization, are required.

  • Network Structure

Once the components are identified, it's time to plan the network's structure. This involves deciding where each piece will be placed and how the pieces will be connected. The majority of networks you will see today are Ethernet-based. Larger networks call for a formal design process, but for smaller networks, such as your home network, you are ready once everything is physically connected: the network services are already set up for you on the WAN router by the local service provider.



Related: Additional links to internal content for pre-information:

  1. Data Center Topologies
  2. Distributed Firewalls
  3. Internet of Things Access Technologies
  4. LISP Protocol and VM Mobility
  5. Port 179
  6. IP Forwarding
  7. Forwarding Routing Protocols
  8. Technology Insight for Microsegmentation
  9. Network Security Components
  10. Network Connectivity


Computer Networks

Key Computer Networking Discussion Points:

  • Introduction to computer networks and what is involved.

  • Highlighting the details of how you connect up networks.

  • Technical details on approaching computer networking and the importance of security.

  • Scenario: The main network devices are Layer 2 switches and Layer 3 routers.

  • Notes on the different types of protocols used in computer networks.


Back to Basics: Computer Networks

A network is a collection of interconnected systems that share resources. IoT (Internet of Things) devices, desktop computers, laptops, and mobile phones are all connected by networks. A computer network will consist of standard devices such as APs, switches, and routers, the essential network components.

Network services

You can connect your network's devices to other computer networks and the Internet, a global system of interconnected networks. So when we connect to the Internet, we connect the Local Area Network (LAN) to the Wide Area Network (WAN). As we move between computer networks, we must consider security.

You will need a security device between these segments, such as a stateful inspection firewall. You are probably running IPv4, so you will also need a network service called Network Address Translation (NAT). IPv6, the latest version of the IP protocol, does not need NAT but may need a translation service to communicate with IPv4-only networks.
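As a rough illustration of what NAT does, here is a toy port-address-translation table in Python; the class name, port range, and all addresses are invented for the sketch.

```python
# Toy sketch of source NAT (PAT): maps private (ip, port) pairs to
# unique ports on a single public IP. Illustrative only, not a real API.

class NatTable:
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.mappings = {}  # (private_ip, private_port) -> public_port

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.mappings:          # new flow: allocate a public port
            self.mappings[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.mappings[key])

nat = NatTable("203.0.113.5")
print(nat.translate("192.168.1.10", 51000))   # first flow gets port 40000
print(nat.translate("192.168.1.11", 51000))   # same inside port, new public port
```

Existing flows keep their mapping, which is why return traffic from the Internet can be translated back to the right inside host.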



Types of Networks

There are various types of computer networks, each serving different purposes. Local Area Networks (LANs) connect devices within a limited geographical area, such as homes or offices. Wide Area Networks (WANs) span larger areas, connecting multiple LANs. The internet itself can be considered the most extensive WAN, connecting countless networks across the globe.

Computer networking brings numerous benefits to individuals and businesses. It enables seamless communication, file sharing, and resource access among connected devices. In industry, networking enhances productivity and collaboration, allowing employees to work together efficiently regardless of physical location. Moreover, networking facilitates the growth and expansion of companies by providing access to global markets.


Computer Networking Main Components 

  • A network is a collection of interconnected systems that share resources. The earliest use case for networks was sharing printers.

  • A network must offer a range of network services, such as NAT.

  • There are various types of computer networks, each serving different purposes. LAN vs WAN.

  • Protecting sensitive data, preventing unauthorized access, and mitigating potential threats are constant challenges.

Security and Challenges

With the ever-increasing reliance on computer networks, security becomes a critical concern. Protecting sensitive data, preventing unauthorized access, and mitigating potential threats are constant challenges. Network administrators employ various security measures such as firewalls, encryption, and intrusion detection systems to safeguard networks from malicious activities.

As technology continues to evolve, so does computer networking. Emerging trends such as cloud computing, Internet of Things (IoT), and Software-Defined Networking (SDN) are shaping the future of networking. The ability to connect more devices, handle massive amounts of data, and provide faster and more reliable connections opens up new possibilities for innovation and advancement.

Local Area Network

A Local Area Network (LAN) is a computer network that connects computers and other devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Ethernet cables typically connect LANs but may also be connected through wireless connections. LANs are usually used within a single organization or business but may connect multiple locations. The equipment in your LAN is in your control.

Wide Area Network

Then, we have the Wide Area Network (WAN). In contrast to the LAN, a WAN is a computer network covering a wide geographical area, typically connecting multiple locations. Your LAN may only consist of Ethernet and a few network services.

However, a WAN may consist of various communications equipment, protocols, and media that provide access to multiple sites and users. WANs usually use private leased lines, such as T-carrier lines, to connect geographically dispersed locations. The equipment in the WAN is out of your control.

Computer Networks
Diagram: Computer Networks with LAN and WAN.

Virtual Private Network ( VPN )

To connect LAN networks over a WAN, we use a VPN. A virtual private network (VPN) is a secure and private connection between two or more devices over a public network such as the Internet. The purpose of a VPN is to provide fast, encrypted communication over an untrusted network, such as the Internet.

VPNs are commonly used by businesses and individuals to protect sensitive data from prying eyes. One of the primary benefits of using a VPN is that it can protect your online privacy by masking your IP address and encrypting your internet traffic. This means that your online activities are hidden from your internet service provider (ISP), hackers, and other third parties who may be trying to eavesdrop on your internet connection.

Example: VPN Technology

An example of a VPN technology is Cisco DMVPN. DMVPN operates in phases, from Phase 1 to Phase 3. For a true hub-and-spoke design, you would implement Phase 1; today, however, Phase 3 is the most popular, offering spoke-to-spoke tunnels. The screenshot below is an example of DMVPN Phase 1 running an OSPF network type of broadcast.




Configuring the Network

Once the components and structure of the network have been determined, the next step is the configuration of computer networking. This involves setting up network parameters, such as IP addresses and subnets, and configuring routing tables.

Remember that security is paramount, especially when connecting to the Internet, an untrusted network with a lot of malicious activity. Firewalls help you create boundaries and secure zones for your networks. Different firewall types exist for different parts of the network, making for a layered approach to security.

Once the computer networking is completed, the next step is to test the network. This can be done using tools such as network analyzers, which can detect any errors or issues present. You can also conduct manual tests using Internet Control Message Protocol (ICMP) tools, such as ping and traceroute. Testing for performance is only half of the picture. It's also imperative to regularly monitor the network for potential security vulnerabilities. So, you must have antivirus software, a computer firewall, and other endpoint security controls.
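As a small illustration of the kind of check a monitoring script performs, here is a sketch that summarises a ping run into a packet-loss percentage and average round-trip time; the sample RTT values are made up.

```python
# Minimal sketch: summarise a ping test from raw results, the way a
# monitoring script might after parsing ping output.

def summarise(sent, rtts_ms):
    received = len(rtts_ms)                       # one RTT per reply received
    loss_pct = 100.0 * (sent - received) / sent   # lost probes as a percentage
    avg = sum(rtts_ms) / received if received else None
    return {"sent": sent, "received": received,
            "loss_pct": loss_pct, "avg_ms": avg}

stats = summarise(sent=5, rtts_ms=[10.2, 9.8, 11.0, 10.0])  # one probe lost
print(stats)  # 20% loss, average just over 10 ms
```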

Finally, it’s critical to keep the network updated. This includes updating the operating system and applications and patching any security vulnerabilities as soon as possible. It’s also crucial to watch for upcoming or emerging technologies that may benefit the network.

packet loss testing
Diagram: Packet loss testing.


Lab Guide: Endpoint Networking and Security

Address Resolution Protocol (ARP)

The first command you will want to become familiar with is arp.

At its core, ARP is a protocol that maps an IP address to a corresponding MAC address. It enables devices within a local network to communicate with each other by resolving the destination MAC address for a given IP address. By maintaining an ARP table, devices store these mappings for efficient and quick communication.

Analysis: What you see are five column headers, explained as follows:

  • Address: The IP address of a device on the network identified through the ARP protocol, resolved to the hostname.

  • HWtype: Describes the type of hardware connection facilitating the network connection. In this case, it is an Ethernet interface rather than a Wi-Fi interface.

  • HW address: The MAC address assigned to the hardware interface responding to ARP requests.

  • Flags Mask: A hex value, translated into ASCII, that defines how the entry was set.

  • Iface: Lists the interface’s name associated with the hardware and IP address.


Analysis: The output contains the same columns and information, with additional information about the contents of the cache. The -v flag is for verbose mode and provides additional information about the entries in the cache. Focus on the Address. The -n flag tells the command not to resolve the address to a hostname; the result is seeing the Address as an IP.

Note: The IP and MAC address returned belong to an additional VM running Linux on this network. This is significant because if a device is within the same subnet, or Layer 2 broadcast domain, as a device identified in its local ARP cache, it will simply address traffic to the designated MAC address. In this way, if you can change the ARP cache, you can change where the device sends traffic within its subnet.

Locally, you can change the ARP cache directly by adding entries yourself.  See the screenshot above:

Analysis: Now you see the original entry and the entry you just set within the local ARP cache. When your device attempts to send traffic to the address, the packets will be addressed at Layer 2 to the corresponding MAC address from this table. Generally, MAC-to-IP address mappings are learned dynamically through ARP activity, indicated by the "C" under the Flags Mask column. The "CM" value reflects that the entry was manually added.

Note: Additional Information on ARP

  • ARP Request and Response

When a device needs to communicate with another device on the same network, it initiates an ARP request. The requesting device broadcasts an ARP request packet containing the target IP address for which it seeks the MAC address. The device with the matching IP address responds with an ARP reply packet, providing its MAC address. This exchange allows the requesting device to update its ARP table and establish a direct communication path.

  • ARP Cache Poisoning

While ARP serves a critical purpose in networking, it is vulnerable to attacks like ARP cache poisoning. In this type of attack, a malicious entity spoofs its MAC address, tricking devices on the network into associating an incorrect MAC address with an IP address. This can lead to various security issues, including interception of network traffic, data manipulation, and unauthorized access.

  • Address Resolution Protocol in IPv6

While ARP is predominantly used in IPv4 networks, IPv6 networks utilize a similar protocol called Neighbor Discovery Protocol (NDP). NDP performs functions similar to ARP but with additional features such as stateless address autoconfiguration and duplicate address detection. Although NDP differs in several ways from ARP, its purpose of mapping IP addresses to link-layer addresses remains the same.
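The request/reply exchange described above can be sketched in a few lines of Python; the hosts table and all addresses are invented, and real ARP of course works with broadcast frames on the wire, not function calls.

```python
# Hedged simulation of the ARP exchange: a broadcast "who-has" request,
# a reply from the owner of the IP, and the requester updating its cache.

hosts = {                       # IP -> MAC: what each device would answer
    "10.0.0.2": "aa:bb:cc:00:00:02",
    "10.0.0.3": "aa:bb:cc:00:00:03",
}

def arp_request(target_ip):
    """Broadcast: every host sees the request; only the owner replies."""
    for ip, mac in hosts.items():
        if ip == target_ip:
            return mac          # the ARP reply carries the owner's MAC
    return None                 # no owner on the segment: request times out

arp_cache = {}
mac = arp_request("10.0.0.3")
if mac:
    arp_cache["10.0.0.3"] = mac   # requester caches the mapping
print(arp_cache)
```

The cache is exactly what makes ARP cache poisoning possible: whoever answers the request controls where subsequent frames are sent.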


Computer Networking & Data Traffic

Computer networking aims to carry data traffic so we can share resources. The first use case of computer networks was to share printers; now, we have a variety of use cases that evolve around data traffic. Data traffic can be generated from online activities such as streaming videos, downloading files, surfing the web, and playing online games. It is also generated from behind-the-scenes activities such as system updates and background and software downloads.

The Importance of Data Traffic

Data traffic is the amount of data transmitted over a network or the Internet. It is typically measured in bits, bytes, or packets per second. Data traffic can be both inbound and outbound. Inbound traffic is data coming into a network or computer, and outbound traffic is data leaving a network or computer. Inbound data traffic should be inspected by a security device, such as a firewall, which can sit either at the network's perimeter or on your computing device. Outbound traffic, by contrast, is generally unfiltered.
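The units mentioned above relate by simple arithmetic; a quick sketch, with arbitrary sample counts:

```python
# Converting a byte count measured over an interval into the usual
# traffic-rate units: bits/s, bytes/s, and packets/s.

def rates(byte_count, seconds, packet_count):
    bps = byte_count * 8 / seconds        # bits per second (8 bits per byte)
    Bps = byte_count / seconds            # bytes per second
    pps = packet_count / seconds          # packets per second
    return bps, Bps, pps

bps, Bps, pps = rates(byte_count=1_500_000, seconds=10, packet_count=1000)
print(f"{bps:.0f} bit/s, {Bps:.0f} B/s, {pps:.0f} pkt/s")
```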

To keep up with the increasing demand, companies must monitor data traffic to ensure the highest quality of service and prevent network congestion. With the right data traffic monitoring tools and strategies, organizations can improve network performance and ensure their data is secure.

The Issues of Best Efforts or FIFO

Network devices don’t care what kind of traffic they have to forward. Ethernet frames are received by your switch, which looks for the destination MAC address before forwarding them. Your router does the same thing: it gets an IP packet, checks the routing table for the destination, and forwards the packet.

Whether the frame or packet contains data from a user downloading the latest songs from Spotify or delay-sensitive speech traffic from a VoIP phone doesn't matter to the switch or router. This forwarding logic is called best effort or FIFO (First In, First Out). Sometimes, this can be an issue when applications are hungry for bandwidth.

Example: Congestion

The serial link will likely become congested when the host and IP phone transmit data and voice packets to the host and IP phone on the other side. The router cannot hold packets queued for transmission indefinitely.

When the queue is full, how should the router proceed? Should it drop data packets? Voice packets? Dropping voice packets will result in poor voice quality complaints on the other end, while users may complain about slow transfer speeds when data packets are dropped.

You can change how the router or switch handles packets using QoS tools. For example, the router can prioritize voice traffic over data traffic.
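A toy simulation hints at the difference between FIFO with tail drop and a priority queue that favours voice; the queue limit and packet names are invented for the sketch.

```python
# Contrast FIFO (tail drop when the queue is full) with a simple
# priority scheme that services the voice queue before the data queue.

from collections import deque

def fifo_enqueue(queue, pkt, limit=3):
    if len(queue) >= limit:
        return False              # queue full: packet dropped, voice or not
    queue.append(pkt)
    return True

def priority_dequeue(voice_q, data_q):
    if voice_q:
        return voice_q.popleft()  # voice always serviced first
    return data_q.popleft() if data_q else None

q = deque()
for pkt in ["data1", "voice1", "data2", "voice2"]:
    fifo_enqueue(q, pkt)          # "voice2" is tail-dropped at the limit
print(list(q))

voice, data = deque(["voice1"]), deque(["data1", "data2"])
print(priority_dequeue(voice, data))
```

In the FIFO case the dropped packet happened to be voice, which is exactly the behaviour QoS tools exist to avoid.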


The Role of QoS

Quality of Service (QoS) is a popular technique used in computer networking. QoS can segment applications so that different types have different priority levels. For example, voice traffic is often considered more critical than web surfing traffic, especially as it is sensitive to packet loss and delay. So, when there is congestion on the network, QoS allows administrators to prioritize network traffic so users have the best experience.

Quality of Service (QoS) refers to techniques and protocols prioritizing and managing network traffic. By allocating resources effectively, QoS ensures that critical applications and services receive the necessary bandwidth, low latency, and minimal packet loss while maintaining a stable network connection. This optimization process considers factors such as data type, network congestion, and the specific requirements of different applications.

Expedited Forwarding (EF)

Expedited Forwarding (EF) is a model of network traffic management that provides preferential treatment to certain types of traffic. The EF model is a way to prioritize traffic, specifically real-time traffic such as voice, video, and streaming media, over other types of traffic, such as email and web browsing. This allows these real-time applications to function more reliably and efficiently by reducing latency and jitter.

The EF model works by assigning a traffic class to each data packet. Each packet is assigned a class based on the type of data it contains, and the assigned class dictates how the network treats the packet. The EF model has two categories: EF for real-time traffic and Best Effort (BE) for other traffic. EF traffic is given preferential treatment, meaning it is prioritized over BE traffic, resulting in a higher quality of service for the EF traffic.

The EF model is an effective and efficient way to manage computer network traffic. By prioritizing real-time traffic, the EF model allows these applications to function more reliably, with fewer delays and a higher quality of service. Additionally, the EF model is more efficient, reducing the amount of traffic that needs to be managed by the network.


Lab Guide: QoS and Marking Traffic

TOS ( Type of Service )

In this lab, we'll take a look at marking packets. Marking means we set the TOS (Type of Service) byte with an IP Precedence or DSCP value.
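The bit positions involved are easy to check with a little arithmetic; this sketch assumes the standard layout in which DSCP occupies the top six bits of the TOS byte and IP Precedence the top three.

```python
# The TOS byte carries the DSCP value in its top six bits; the legacy
# IP Precedence field is the top three bits. EF (Expedited Forwarding)
# is DSCP 46.

def tos_from_dscp(dscp):
    return dscp << 2            # DSCP occupies bits 7..2 of the TOS byte

def precedence_from_tos(tos):
    return tos >> 5             # IP Precedence is the top three bits

tos = tos_from_dscp(46)         # Expedited Forwarding
print(tos, precedence_from_tos(tos))  # 184 5
```

This is why a packet marked EF shows up as TOS 184 (0xB8) in a capture, and why EF maps back to precedence 5 on older, precedence-only devices.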

Marking and classification take place on R2. R1 is the source of the ICMP and HTTP traffic, and R3 has an HTTP server installed. As the ICMP and HTTP packets sent from R1 traverse R2, classification takes place.


To ensure each application gets the treatment it requires, we have to implement QoS (Quality of Service). The first step when implementing QoS is classification.



Once the traffic has been classified, we will mark it and apply a QoS policy. Marking and configuring QoS policies are a whole different story, so in this lesson we'll stick to classification.

On IOS routers, there are a couple of methods we can use for classification:

  • Header inspection
  • Payload inspection

We can use some fields in our headers to classify applications. For example, telnet uses TCP port 23, and HTTP uses TCP port 80. Using header inspection, you can look for:

  • Layer 2: MAC addresses
  • Layer 3: source and destination IP addresses
  • Layer 4: source and destination port numbers and protocol
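A classifier based on these header fields can be sketched as a simple lookup; the class names and the RTP port range used here are assumptions for illustration, not a vendor's defaults.

```python
# Sketch of header-based classification: pick a traffic class from the
# Layer 4 protocol and destination port, the same logic an IOS class-map
# with match statements expresses.

def classify(protocol, dst_port):
    if protocol == "tcp" and dst_port == 23:
        return "telnet"
    if protocol == "tcp" and dst_port in (80, 443):
        return "web"
    if protocol == "udp" and 16384 <= dst_port <= 32767:
        return "voice"          # a typical RTP range, assumed for the sketch
    return "best-effort"        # everything unmatched stays best effort

print(classify("tcp", 80))
print(classify("udp", 20000))
```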


Benefits of Quality of Service

A) Bandwidth Optimization:

One of the primary advantages of implementing QoS is the optimized utilization of available bandwidth. By classifying and prioritizing traffic, QoS ensures that bandwidth is allocated efficiently, preventing congestion and bottlenecks. This translates into smoother and uninterrupted network experiences, especially when multiple users or devices access the network simultaneously.

B) Enhanced User Experience:

With QoS, users can enjoy a seamless experience across various applications and services. Whether streaming high-quality video content, engaging in real-time online gaming, or participating in video conferences, QoS helps maintain low latency and minimal jitter, resulting in a smooth and immersive user experience.

Implementing Quality of Service

To implement QoS effectively, network administrators need to understand the specific requirements of their network and its users. This involves:

A) Traffic Classification:

Different types of network traffic require different levels of priority. Administrators can allocate resources by classifying traffic based on its nature and importance.

B) Traffic Shaping and Prioritization:

Once traffic is classified, administrators can shape and prioritize it using various QoS mechanisms such as traffic shaping, packet queuing, and traffic policing. These techniques ensure critical applications receive the necessary resources while preventing high-bandwidth applications from monopolizing the network.

C) Monitoring and Fine-Tuning:

Regular monitoring and fine-tuning of QoS parameters are essential to maintain optimal network performance. By analyzing network traffic patterns and adjusting QoS settings accordingly, administrators can adapt to changing demands and ensure a consistently high level of service.


Computer Networking Components – Devices:

Firstly, the devices. Together with the media that interconnects them, devices provide the channel over which data travels from source to destination. Many devices are virtualized today, meaning they no longer exist as separate hardware units.

One physical device can emulate multiple end devices. An emulated computer system has its own operating system and required software and operates as if it were a separate physical unit. Devices can be further divided into endpoints and intermediary devices.


An endpoint is a device that forms part of a computer network; endpoints include PCs, laptops, tablets, smartphones, video game consoles, and televisions. Endpoints can be physical hardware units, such as file servers, printers, sensors, cameras, manufacturing robots, and smart home components. Nowadays, we also have virtualized endpoints.


Computer Networking Components – Intermediate Devices

Layer 2 Switches:

These devices enable multiple endpoints, such as PCs, file servers, printers, sensors, cameras, and manufacturing robots, to connect to the network. Switches allow devices to communicate on the same network. A switch attempts to forward messages from the sender so that only the destination receives them, unlike a hub, which floods traffic out of all ports. The switch operates with MAC addresses and works at Layer 2 of the OSI model.
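The learn-and-forward behaviour can be modelled in a few lines; this toy Switch class is illustrative only, not how real switch hardware is implemented.

```python
# Toy model of a Layer 2 switch: learn source MACs per port, forward to
# the known port, and flood (hub-style) when the destination is unknown.

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}     # MAC -> port, learned from source addresses

    def frame_in(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port               # learn the sender
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # forward out one port
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = Switch(ports=[1, 2, 3])
print(sw.frame_in(1, "aa:01", "bb:02"))  # unknown destination: flood to 2 and 3
print(sw.frame_in(2, "bb:02", "aa:01"))  # aa:01 was learned on port 1
```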

Usually, all the devices that connect to a single switch or a group of interconnected switches belong to the same network. They can, therefore, exchange information directly with each other. If an end device wants to communicate with a device on a different network, it requires the services of a device known as a router, which connects different networks and works higher up the OSI model, at Layer 3. Routers work with the IP protocol.


Routers’ primary function is to route traffic between computer networks. For example, you need a router to connect your office network to the Internet. Routers connect computer networks, intelligently select the best paths between them, and hold destinations in what is known as a routing table. There are different routing protocols for different-sized networks, and each has its own routing convergence time.
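Routing-table lookups select the most specific matching prefix, known as longest-prefix match; here is a sketch using Python's standard ipaddress module, with made-up routes and interface names.

```python
# Sketch of a routing-table lookup with longest-prefix match: among all
# prefixes that contain the destination, the longest one wins.

import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "wan0",   # default route
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("10.1.2.3"))   # the more specific /16 beats the /8
print(lookup("8.8.8.8"))    # only the default route matches
```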

More recently, Layer 2 and Layer 3 functionality has been combined. So we can have a Layer 3 router with a Layer 2 switch module inserted, or what's known as a multilayer switch, which combines Layer 3 routing and Layer 2 switching on a single device.

Computer Networks
Diagram: Computer Networks with Switch and Routers.


Wi-Fi access points

These devices allow wireless devices to connect. They usually connect to switches but can also be integrated into routers. My WAN router has everything in one box: Wi-Fi, Ethernet LAN, and network services such as NAT and WAN. Wi-Fi access points provide wireless internet access within a specified area.

Wi-Fi access points are typically found in coffee shops, restaurants, libraries, and airports in public settings. These access points allow anyone with a Wi-Fi-enabled device to access the Internet without needing additional hardware. 

WLAN controllers: 

WLAN controllers are devices used to automate the configuration of wireless access points. A controller provides centralized management of wireless networks and acts as a gateway between wireless and wired networks. Administrators can monitor and manage the entire WLAN, set up security policies, and configure access points through the controller. WLAN controllers also authenticate users, allowing them to access the wireless network.

In addition, the WLAN controller can also detect and protect against malicious activities such as unauthorized access, denial-of-service attacks, and interference from other wireless networks. By using the controller, administrators can also monitor the usage of the wireless network and make sure that the network is secure.

Network firewalls:

Then, we have firewalls, the cornerstone of security. There are different firewall types depending on your requirements. Firewalls range from basic packet filtering to advanced next-generation firewalls and come in virtual and physical forms.

Generally, a firewall monitors and controls incoming and outgoing traffic according to predefined security rules. A firewall ships with a default rule set in which some interfaces are more trusted than others, with a blanket restriction on traffic from outside to inside, but you still need to set up a policy for the firewall to do useful work.

A firewall typically establishes a barrier between a trusted, secure internal network and another outside network, such as the Internet, which is assumed not to be secure or trusted. Firewalls are typically deployed in a layered approach, meaning multiple security measures are used to protect the network. Firewalls provide application, protocol, and network layer protection.

data center firewall
Diagram: The data center firewall.


  • Application layer protection:

The application layer is designed to protect the network from malicious applications, such as viruses and malware. It also includes software, such as firewalls, that detects and blocks malicious traffic.

  • Protocol layer protection: 

The protocol layer focuses on ensuring that data traveling over the network is encrypted and cannot be modified or corrupted in any way. This layer also includes authentication protocols that prevent unauthorized users from accessing the network.

  • Network Layer protection

Finally, there is network layer protection. This layer focuses on controlling access to the network and ensuring that users cannot reach resources or applications they are not authorized to use.

A network intrusion protection system (IPS): 

An IPS or IDS analyzes network traffic to search for signs that a particular behavior is suspicious or malicious. The IPS can take protective action immediately if it detects such behavior. In addition, the IPS and firewall can work together to protect a network. So, if an IPS detects suspicious behavior, it can trigger a policy or rule for the firewall to implement.

An intrusion protection system can alert administrators of suspicious activity, such as attempts to gain unauthorized access to confidential files or data. Additionally, it can block malicious activity if necessary; it provides a layer of defense against malicious actors and cyber attacks. Intrusion protection systems are essential to any organization’s security plan.

Cisco IPS
Diagram: Traditional intrusion detection with Cisco IPS.

Computer Networking Components – Media

Next, we have the media. The media connects network devices. Different media have different characteristics, and selecting the most appropriate medium depends on the circumstances, including the environment in which the media is used and the distances that need to be covered.

The media will need connectors. A connector is a plug attached to each end of the cable, making it much easier to connect wired media to network devices. The RJ-45 connector is the most common type on an Ethernet LAN.

Ethernet: Wired LAN technology.

The term Ethernet refers to an entire family of standards. Some standards define how to send data over a particular type of cabling and at a specific speed. Other standards define protocols or rules that the Ethernet nodes must follow to be a part of an Ethernet LAN. All these Ethernet standards come from the IEEE and include 802.3 as the beginning of the standard name.

Introducing Copper and Fiber

Ethernet LANs use cables for the links between nodes on a computer network, and because many types of cables use copper wires, Ethernet LANs are often called wired LANs. Ethernet LANs also use fiber-optic cabling, which has a glass core that devices use to send data as light.

Materials inside the cable: UTP and Fiber

The most fundamental cabling choice concerns the materials used inside the cable to transmit bits physically: either copper wires or glass fibers. 

  • Unshielded twisted pair (UTP) cabling devices transmit data over electrical circuits via the copper wires inside the cable.
  • Fiber-optic cabling, the more expensive alternative, allows Ethernet nodes to send light over glass fibers in the cable’s center. 

Although more expensive, optical cables typically allow longer cabling distances between nodes. So you have UTP cabling in your LAN and Fiber-optic cabling over the WAN.

UTP and Fiber

The most common copper cabling for Ethernet is UTP. Unshielded twisted pair (UTP) is cheaper than shielded copper or fiber alternatives and is easier to install and troubleshoot. Because many UTP-based Ethernet standards support cable lengths of up to 100 meters, most Ethernet cabling in an enterprise uses UTP cables.

The distance from an Ethernet switch to every endpoint on the floor of a building will likely be less than 100m. In some cases, however, an engineer might prefer first to use fiber cabling for some links in an Ethernet LAN to reach greater distances.

Fiber Cabling

Then we have fiber-optic cabling, which uses a glass core that carries light pulses and is immune to electrical interference. Fiber-optic cabling is typically used as a backbone between buildings, and fiber cables are high-speed transmission mediums. They contain tiny glass or plastic filaments as the medium through which light passes.

Cabling types: Multimode and Single Mode

There are two main types of fiber-optic cables: single-mode fiber (SMF) and multimode fiber (MMF). MMF is used for shorter distances and SMF for longer distances. Multimode improves the maximum distances over UTP and uses less expensive transmitters than single-mode. Standards vary; for instance, the criteria for 10 Gigabit Ethernet over multimode fiber allow for distances up to 400m, often enough to connect devices in different buildings in the same office park.


Network Services and Protocols

We need to follow these standards, the rules of the game, and we also need protocols so that we have a means to communicate. If you use your web browser, you are using the HTTP protocol, and if you send an email, you are using other protocols, such as IMAP and SMTP.

A protocol establishes a set of rules that determine how data is transmitted between different devices in the network. Both ends must speak the same protocol, such as HTTP at one end and HTTP at the other.

Think of a protocol as a spoken language: we need to speak the same language to communicate. Then, we have the standards that we need to follow for computer networking, such as the TCP/IP suite.
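To see what "speaking the same protocol" means concretely, here is a minimal HTTP/1.1 request built and parsed by hand; both ends must agree on this exact framing, down to the CRLF line endings.

```python
# Illustration of a shared protocol: a minimal HTTP/1.1 GET request,
# built by a client and parsed back the way a server would.

def build_request(host, path="/"):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n")                       # blank line ends the headers

def parse_request_line(raw):
    # The request line is everything before the first CRLF.
    method, path, version = raw.split("\r\n", 1)[0].split(" ")
    return method, path, version

req = build_request("example.com")
print(parse_request_line(req))
```

If either side deviated, say by using bare newlines, the other side would reject the message, which is exactly the "same language" requirement in practice.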


Types of protocols

We have different types of protocols. The following are the main types of protocols used in computer networking.

  • Communication Protocols

For example, the routing protocols on our routers that help forward traffic are communication protocols: they allow different devices to exchange information with each other. Another example of a communication protocol is instant messaging.

Instant messaging is near-instantaneous, text-based communication you have probably used on your smartphone, and several network protocols support it. For example, the Short Message Service (SMS) is a communications protocol created to send and receive text messages over cellular networks.

  • Network Management

Network management protocols define and describe the various operating procedures of a computer network. These protocols affect multiple devices on a single network — including computers, routers, and servers — to ensure each one and the network, as a whole, perform optimally.

  • Security Protocols

Security protocols, also called cryptographic protocols, ensure that the network and the data sent over it are protected from unauthorized users. Security protocols are implemented everywhere, not just on dedicated network security devices. A standard function of security protocols is encryption: encryption protocols protect data and secure areas by requiring users to supply a secret key or password to access that information.

The following screenshot is an example of an IPsec tunnel offering end-to-end encryption. Notice that the first packet in the ping (ICMP request) was lost while ARP resolution was working in the background. Five pings are sent, but only four are encapsulated/decapsulated.


Site to Site VPN


Characteristics of a network

Network Topology:

In a carefully designed network, data flows are optimized, and the network performs as intended based on the network topology. Network topology is the arrangement of a computer network’s elements (links, nodes, etc.). It can be used to illustrate a network’s physical and logical layout and how it functions. 

Bitrate or Bandwidth:

Bitrate measures the data rate of a given link in the network in bits per second (bps); in device configurations it is often referred to as bandwidth or speed. What matters is the number of bits transmitted per second, not how fast a single bit propagates over the link, which is determined by the physical properties of the medium carrying the signal. Many link bit rates are commonly encountered today, including 1 and 10 gigabits per second (1 and 10 billion bits per second), and some links reach 100 or even 400 gigabits per second.
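Bitrate translates directly into transfer time. As a rough sketch, ignoring protocol overhead and latency, the time to move a file is its size in bits divided by the link's bits per second:

```python
def transfer_seconds(size_bytes: int, link_bps: int) -> float:
    """Idealized transfer time: total bits divided by the link bitrate."""
    return size_bytes * 8 / link_bps

# A 1 GB (10^9-byte) file over a 1 Gbps link takes 8 seconds;
# the same file over a 10 Gbps link takes 0.8 seconds.
print(transfer_seconds(10**9, 10**9))   # 8.0
print(transfer_seconds(10**9, 10**10))  # 0.8
```

Real transfers are slower because of framing overhead, retransmissions, and round-trip delays, but the back-of-the-envelope figure is a useful baseline.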

Network Availability: 

Network availability is determined by several factors, including the type of network being used, the number of users, the complexity of the network, the physical environment, and the availability of network resources. Network availability should also be addressed in terms of redundancy and backup plans. Redundancy helps to ensure that the system is still operational even if one or more system components fail. Backup plans should also be in place in the event of a system failure.

A network’s availability is calculated as the percentage of time it is accessible and operational. To calculate this percentage, divide the number of minutes the network was available by the total number of minutes in the agreed measurement period, then multiply by 100. In other words, availability is the ratio of uptime to total time, expressed as a percentage.
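That ratio is easy to express in code. A minimal sketch (the function name is ours):

```python
def availability_pct(uptime_minutes: float, period_minutes: float) -> float:
    """Availability = uptime / total time, expressed as a percentage."""
    return uptime_minutes / period_minutes * 100

# A 30-day month has 43,200 minutes; about 43 minutes of downtime
# corresponds to roughly 99.9% availability ("three nines").
month = 30 * 24 * 60
print(round(availability_pct(month - 43, month), 2))
```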


Gateway Load Balancing Protocol
Diagram: Gateway Load Balancing Protocol (GLBP)

Network High Availability: 

High availability is a critical component of a successful IT infrastructure. It ensures that systems and services remain available and accessible to users and customers. High availability is achieved by using redundancies, such as multiple servers, systems, and networks, to ensure that if one component fails, a backup component is available.

High availability is also achieved through fault tolerance, which involves designing systems that respond to failures without losing data or becoming unavailable. High availability can be achieved through various strategies like clustering, virtualization, and replication.

Network Reliability:

Network reliability can be achieved by implementing a variety of measures, most often redundancy. Redundancy is a crucial factor in ensuring a reliable network: it means having multiple components in place to provide a backup in case of failure, such as multiple servers, routers, switches, and other hardware devices. Redundancy can also involve multiple power sources, such as redundant power supplies or batteries, and multiple paths for data to travel through the network.

For adequate network reliability, you also need to consider network monitoring. Network monitoring uses software and hardware tools to track the network’s performance continuously, and it can detect and alert administrators to potential performance issues or failures. A newer term, observability, better reflects this kind of tracking in today’s environments: inferring the internal state of the network from the data it exposes.

Network Characteristics
Diagram: Network Characteristics

Network Scalability:

A network’s scalability indicates how easily it can accommodate more users and increased data transmission requirements without affecting performance. If you design and optimize the network only for current conditions, it can be costly and difficult to meet new needs as the network grows.

In terms of network scalability, several factors must be taken into account. First and foremost, the network must be designed with the expectation that the number of devices or users will increase over time. This includes hardware and software components, as the network must support the increased traffic. Additionally, the network must be designed to be flexible so that it can easily accommodate changes in traffic or user count. 

Network Security: 

Network security is protecting the integrity and accessibility of networks and data. It involves a range of protective measures designed to prevent unauthorized access, misuse, modification, or denial of a computer network and its processing data. These measures include physical security, technical security, and administrative security. A network’s security tells you how well it protects itself against potential threats.

The subject of security is essential, and defense techniques and practices are constantly evolving. The network infrastructure and the information transmitted over it should also be protected. Whenever you take actions to affect the network, you should consider security. An excellent way to view network security is to take a zero-trust approach.

zero trust security


Network Virtualization:

Virtualization can be done at the hardware, operating system, and application levels. At the hardware level, physical hardware can be divided into multiple virtual machines, each running its own operating system and applications.

At the operating system level, virtualization can run multiple operating systems on the same physical server, allowing for more efficient use of resources. Multiple applications can run on the same operating system at the application level, allowing for better resource utilization and scalability. 

Overall, virtualization can provide several benefits, including improved efficiency, utilization, and flexibility, as well as improved security and scalability. It can consolidate and manage hardware or simplify application movement between different environments. Virtualization can also make it easier to manage other settings and provide better security by isolating various applications.


Computer Networking and Network Topologies

Physical and logical topologies exist in networks. The physical topology describes the physical layout of the devices and cables. A physical topology may be the same in two networks but may differ in distances between nodes, physical connections, transmission rates, or signal types.

There are various types of physical topologies you may encounter in wired networks. Identifying the type of cabling used is essential when describing the physical topology. Physical topology can be categorized into the following categories:

  • Bus Topology:

In a bus topology, every workstation is connected to a common transmission medium: a single cable called a backbone or bus. In legacy bus topologies, computers and other network devices were connected directly to a central coaxial cable via connectors.

  • Ring Topology:

In a ring topology, computers and other network devices are cabled in succession, with the last device connected to the first to form a circle or ring. Every device in the network has exactly two neighbors and no direct connection to any other node. When one node sends data to another, it passes through each node between them until it reaches its destination.

  • Star Topology

A star topology is the most common physical topology: network devices are connected to a central device through point-to-point connections. It is also known as the hub-and-spoke topology. A spoke device has no direct physical connection to another spoke. A variation is the extended star topology, in which one or more spoke devices are replaced by a device with spokes of its own.

  • Mesh Topology

In a mesh topology, one device can be connected to more than one other device, so multiple paths are available between any pair of nodes. Redundant links enhance reliability and allow self-healing. In a full mesh topology, every node connects to every other node; in a partial mesh, some nodes do not connect to all other nodes.
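The cost of a full mesh grows quickly, which is why partial meshes exist: fully meshing n nodes requires n(n-1)/2 point-to-point links. A quick illustration:

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links needed to fully mesh n nodes."""
    return n * (n - 1) // 2

# 4 nodes need 6 links, 10 nodes need 45, and 50 nodes already need 1225,
# which is why large networks usually settle for a partial mesh.
for n in (4, 10, 50):
    print(n, full_mesh_links(n))
```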


Introducing Switching Technologies

Devices on a LAN connect to switches to communicate with one another. Switches work at layer two of the Open Systems Interconnection (OSI) model, the data link layer. Switches are ready to use right out of the box: in contrast to a router, a switch requires no configuration by default to perform its role, which is to provide connectivity for all devices on your network. After powering on the switch and connecting the systems, the switch will forward traffic to each connected device as needed.

Switch vs. Hubs

Switches have replaced hubs because they provide more advanced capabilities and are better suited to today’s computer networks. This advanced functionality includes filtering traffic by sending data only to the destination port, whereas a hub always sends data to all ports.

Full Duplex vs. Half Duplex

With full duplex, both parties can talk and listen simultaneously, making it more efficient than half-duplex communication, where only one party can speak at a time. Full-duplex transmission is also more reliable, since it is less likely to experience interference or distortion. Until switches became available, hub-connected devices communicated in half duplex: a half-duplex device can send and receive, but not both at the same time.

VLAN: Logical LANs

Virtual Local Area Networks (VLANs) are computer networks that divide a single physical local area network (LAN) into multiple logical networks. This partitioning allows for the segmentation of broadcast traffic, which helps to improve network performance and security.

VLANs enable administrators to set up multiple networks within a single physical LAN without needing separate cables or ports. This benefits businesses that need to separate data and applications between various teams, departments, or customers.

In a VLAN, each segment is identified by a unique identifier or VLAN ID. The VLAN ID is used to associate traffic with a particular VLAN segment. For example, if a user needs to access an application on a different VLAN, the packet must be tagged with the VLAN ID of the destination segment to be routed correctly.

In the screenshot below, we have an overlay with VXLAN. VXLAN, short for Virtual Extensible LAN, is an overlay network technology that enables the creation of virtual Layer 2 networks over an existing Layer 3 infrastructure. It addresses the limitations of traditional VLANs by extending the scalability and flexibility of network virtualization. By encapsulating Layer 2 frames within UDP packets, VXLAN allows for the creation of up to 16 million logical networks, overcoming the limit imposed by the 12-bit VLAN identifier field.
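The scaling difference follows directly from the widths of the identifier fields: a 12-bit VLAN ID versus a 24-bit VXLAN Network Identifier (VNI):

```python
VLAN_ID_BITS = 12    # 802.1Q VLAN identifier field
VXLAN_VNI_BITS = 24  # VXLAN Network Identifier field

vlan_ids = 2 ** VLAN_ID_BITS      # 4096 possible values (some are reserved)
vxlan_vnis = 2 ** VXLAN_VNI_BITS  # 16,777,216 possible values

print(vlan_ids, vxlan_vnis)  # 4096 16777216
```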


Diagram: Changing the VNI

VLANs also provide security benefits. A VLAN can help prevent malicious traffic from entering a segment by segmenting traffic into logical networks. This helps prevent attackers from gaining access to the entire network. Additionally, VLANs can isolate critical or confidential data from other users on the same network. VLANs can be implemented on almost any network, including wired and wireless networks. They can also be combined with other network technologies, such as routing and firewalls, to improve security further.

Overall, VLANs are a powerful tool to help improve performance and security in a local area network. With the right implementation and configuration, businesses can enjoy improved performance and better protection when using VLANs.


IP Routing Process

IP routing works by examining the IP address of each packet and determining where it should be sent. Routers are responsible for this task and use routing protocols such as RIP, OSPF, EIGRP, and BGP to decide the best route for each packet. In addition, each router contains a routing table, which includes information on the best path to a given destination.

When a router receives a packet, it looks up the destination in its routing table. If a route matches the destination, the router makes a forwarding decision based on that route. If no specific route matches, the router forwards the packet using its default route, if one exists.

Routing Protocol
Diagram: Routing Protocol. IS-IS.

To route packets successfully, routers must be configured appropriately and must be able to communicate with one another. Routers must also be able to detect any changes to the network, such as link failures or changes in network topology.

IP routing is an essential component of any network and ensures that packets are routed as efficiently as possible. Therefore, ensuring routers are correctly configured and maintained to route packets successfully is essential.

IP Forwarding Example
Diagram: IP Forwarding Example.


Routing Table

A routing table is a data table stored in a router or a networked computer that lists the possible routes a packet of data can take when traversing a network. The routing table contains information about the network’s topology and decides which route a packet should take when leaving the router or computer. Therefore, the routing table must be updated to ensure data packets are routed correctly.

The routing table usually contains entries that specify which interface to use when forwarding a packet. Each entry may have network destination addresses and the associated metrics, such as the cost or hop count of the route. In addition to the destination address, each entry can include a subnet mask, a gateway address, and a list of interface addresses.

Routers use the routing table to determine which interface to use when forwarding packets. When a router receives a packet, it looks at the packet’s destination address and compares it to the entries in the routing table. Once it finds a match, it forwards the packet to the corresponding interface.
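When more than one entry matches, the router prefers the most specific one: this is the longest-prefix-match rule. A simplified sketch using Python's standard ipaddress module (the table contents are invented for illustration):

```python
import ipaddress

# (prefix, next hop) pairs; 0.0.0.0/0 is the default route.
ROUTES = [
    ("0.0.0.0/0", "203.0.113.1"),
    ("10.0.0.0/8", "10.255.0.1"),
    ("10.1.0.0/16", "10.1.255.1"),
]

def lookup(destination: str) -> str:
    """Return the next hop of the longest-prefix route matching destination."""
    dst = ipaddress.ip_address(destination)
    matches = [
        (ipaddress.ip_network(prefix), next_hop)
        for prefix, next_hop in ROUTES
        if dst in ipaddress.ip_network(prefix)
    ]
    # The most specific (longest) prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))  # 10.1.255.1 (the /16 beats the /8)
print(lookup("8.8.8.8"))   # 203.0.113.1 (only the default route matches)
```

Real routers do this lookup in hardware with specialized data structures, but the selection rule is the same.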


Lab Guide: Networking and Security

Routing Tables and Netstat

Routing tables are essentially databases stored within networking devices, such as routers. These tables contain valuable information about the available paths and destinations within a network. Each entry in a routing table consists of various fields, including the destination network address, next-hop address, and interface through which the data packet should be forwarded.

One of the fundamental features of Netstat is its ability to display active connections. Using the appropriate flags, you can view the list of established connections, their local and remote IP addresses, ports, and the protocol being used. This information is invaluable for identifying suspicious or unauthorized connections.

Get started by running the route command.

Analysis: Seem familiar? Yet another table with the following column headers:

    • Destination: This refers to the destination of any traffic from this device. Default refers to anything not explicitly set.

    • Gateway: The next hop for traffic headed to the specific destination.

    • Genmask: The netmask of the destination.

      Note: For more detailed explanations of all the columns and results, run man route.

Run netstat to get a stream of information relating to network socket connections and UNIX domain sockets.

Note: UNIX domain sockets are a mechanism that allows processes local to the devices to exchange information.

  1. To clean this up, you can view just the network traffic using netstat -at.

    • -a displays all ports, including IPv4 & IPv6

    • -t displays only TCP sockets

Analysis: When routes are created in different ways, they display differently. In the most recent rule, you can see that no metric is listed, and the scope is different from the other automatic routes. That is the kind of information we can use for detection.

The route table will send traffic to the designated gateway regardless of the route’s validity. Threat actors can use this to intercept traffic destined for another location, and it is a crucial location to look for indicators of compromise.

How Routing Tables Work

Routing tables utilize various routing protocols, such as OSPF (Open Shortest Path First) or BGP (Border Gateway Protocol), to gather information about network topology and make informed decisions about the best paths for data packets. These protocols exchange routing information between routers, ensuring that each device has an up-to-date understanding of the network’s structure.

Routing Table Entries and Metrics

Each entry in a routing table contains specific metrics that determine the best path for forwarding packets. Metrics can include hop count, bandwidth, delay, or reliability. By evaluating these metrics, routers can select the most optimal route based on network conditions and requirements.


SD WAN Overlay

In today’s digital age, businesses rely on seamless and secure network connectivity to support their operations. Traditional Wide Area Network (WAN) architectures often struggle to meet the demands of modern companies due to their limited bandwidth, high costs, and lack of flexibility. A revolutionary SD-WAN (Software-Defined Wide Area Network) overlay has emerged to address these challenges, offering businesses a more efficient and agile network solution. This blog post will delve into SD-WAN overlay, exploring its benefits, implementation, and potential to transform how businesses connect.

SD-WAN overlay is a network architecture that enhances traditional WAN infrastructure by leveraging software-defined networking (SDN) principles. Unlike traditional WAN, where network management is done manually and requires substantial hardware investments, SD-WAN overlay centralizes network control and management through software. This enables businesses to simplify network operations and reduce costs by utilizing commodity internet connections alongside existing MPLS networks.


SD WAN Overlay 

Overlay Types

  • Tunnel-Based Overlays

  • Segment-Based Overlays

  • Policy-Based Overlays

  • Hybrid Overlays

  • Cloud-Enabled Overlays

  • Internet-Based SD-WAN Overlay

  • MPLS-Based SD-WAN Overlay

Highlights: SD-WAN Overlay

So, what exactly is an SD-WAN overlay?

In simple terms, it is a virtual layer added to the existing network infrastructure. These network overlays connect different locations, such as branch offices, data centers, and the cloud, by creating a secure and reliable network.

1. Tunnel-Based Overlays:

One of the most common types of SD-WAN overlays is tunnel-based overlays. This approach encapsulates network traffic within a virtual tunnel, allowing it to traverse multiple networks securely. Tunnel-based overlays are typically implemented using IPsec or GRE (Generic Routing Encapsulation) protocols. They offer enhanced security through encryption and provide a reliable connection between the SD-WAN edge devices.
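Conceptually, tunneling just wraps the original packet inside a new outer header addressed between the two tunnel endpoints, and the far endpoint strips that header off again. A toy sketch of that encapsulate/decapsulate round trip (not a real GRE or IPsec implementation; all names and addresses here are invented):

```python
def encapsulate(inner_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    """Wrap an inner packet in an outer header addressed between tunnel endpoints."""
    return {
        "src": tunnel_src,       # outer source: near tunnel endpoint
        "dst": tunnel_dst,       # outer destination: far tunnel endpoint
        "proto": "GRE",          # encapsulation protocol marker
        "payload": inner_packet, # original packet carried unmodified
    }

def decapsulate(outer_packet: dict) -> dict:
    """Strip the outer header at the far tunnel endpoint."""
    return outer_packet["payload"]

branch_packet = {"src": "10.1.0.5", "dst": "10.2.0.9", "data": b"app traffic"}
on_the_wire = encapsulate(branch_packet, "198.51.100.1", "198.51.100.2")
assert decapsulate(on_the_wire) == branch_packet  # original packet is intact
```

In a real tunnel-based overlay, IPsec would additionally encrypt the payload so intermediate networks see only the outer header.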

2. Segment-Based Overlays:

Segment-based overlays are designed to segment the network traffic based on specific criteria such as application type, user group, or location. This allows organizations to prioritize critical applications and allocate network resources accordingly. By segmenting the traffic, SD-WAN can optimize the performance of each application and ensure a consistent user experience. Segment-based overlays are particularly beneficial for businesses with diverse network requirements.

3. Policy-Based Overlays:

Policy-based overlays enable organizations to define rules and policies that govern the behavior of the SD-WAN network. These overlays use intelligent routing algorithms to dynamically select the most optimal path for network traffic based on predefined policies. By leveraging policy-based overlays, businesses can ensure efficient utilization of network resources, minimize latency, and improve overall network performance.
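A policy in this sense boils down to rules such as "voice needs latency under 30 ms and loss under 1%", evaluated against live link measurements. A minimal sketch of that selection logic (the link data and thresholds are invented for illustration):

```python
# Measured state of each WAN link (values are hypothetical).
LINKS = [
    {"name": "mpls",     "latency_ms": 20, "loss_pct": 0.1},
    {"name": "internet", "latency_ms": 45, "loss_pct": 0.5},
    {"name": "lte",      "latency_ms": 80, "loss_pct": 1.5},
]

def select_link(max_latency_ms: float, max_loss_pct: float):
    """Return the lowest-latency link meeting the app's policy, or None."""
    candidates = [
        link for link in LINKS
        if link["latency_ms"] <= max_latency_ms
        and link["loss_pct"] <= max_loss_pct
    ]
    if not candidates:
        return None  # no link satisfies the policy
    return min(candidates, key=lambda link: link["latency_ms"])["name"]

print(select_link(30, 1.0))  # mpls: the only link under 30 ms
print(select_link(10, 1.0))  # None: no link meets this strict policy
```

An actual SD-WAN controller re-evaluates such rules continuously, steering flows to a different path as link conditions change.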

4. Hybrid Overlays:

Hybrid overlays combine the benefits of both public and private networks. This overlay allows organizations to utilize multiple network connections, including MPLS, broadband, and LTE, to create a robust and resilient network infrastructure. Hybrid overlays intelligently route traffic through the most suitable connection based on application requirements, network availability, and cost. Businesses can achieve high availability, cost-effectiveness, and improved application performance by leveraging mixed overlays.

5. Cloud-Enabled Overlays:

As more businesses adopt cloud-based applications and services, seamless connectivity to cloud environments becomes crucial. Cloud-enabled overlays provide direct and secure connectivity between the SD-WAN network and cloud service providers. These overlays ensure optimized performance for cloud applications by minimizing latency and providing efficient data transfer. Cloud-enabled overlays simplify the management and deployment of SD-WAN in multi-cloud environments, making them an ideal choice for businesses embracing cloud technologies.


Related: For additional pre-information, you may find the following helpful:

  1. Transport SDN
  2. SD WAN Diagram 
  3. Overlay Virtual Networking


SD-WAN Overlay

Key SD WAN Overlay Discussion Points:

  • WAN transformation.

  • The issues with traditional networking.

  • Introduction to Virtual WANs.

  • SD-WAN and SDN discussion.

  • SD-WAN overlay core features.

  • Drivers for SD-WAN.


Back to Basics: SD-WAN Overlay

Overlay Networking

Overlay networking is an approach to computer networking that builds a layer of virtual networks on top of an existing physical network. This approach improves the scalability, performance, and security of the underlying infrastructure. It also allows for the creation of virtual networks that span multiple physical networks, giving greater flexibility in how traffic is routed.

At the core of overlay networking is the concept of virtualization. This involves separating the physical infrastructure from the virtual networks, allowing greater control over allocating resources. This separation also allows the creation of virtual network segments that span multiple physical networks. This provides an efficient way to route traffic, as well as the ability to provide additional security and privacy measures.

Underlay network

A network underlay is a physical infrastructure that provides the foundation for a network overlay, a logical abstraction of the underlying physical network. The network underlay provides the physical transport of data between nodes, while the overlay provides logical connectivity.

The network underlay can comprise various technologies, such as Ethernet, Wi-Fi, cellular, satellite, and fiber optics. The network underlay is the foundation of a network overlay, and it is essential for the proper functioning of the network. It provides the transport of data and the physical connections between nodes. It also provides the physical elements that make up the infrastructure, such as routers, switches, and firewalls.

Overlay networking
Diagram: Overlay networking. Source Researchgate.


SD-WAN with SDWAN overlay.

SD-WAN leverages a transport-independent fabric technology that is used to connect remote locations. This is achieved by using overlay technology. The SDWAN overlay works by tunneling traffic over any transport between destinations within the WAN environment.

This gives genuine flexibility to route applications across any portion of the network regardless of the circuit or transport type. This is the definition of transport independence. With a fabric SD-WAN overlay network, every remote site, regardless of physical or logical separation, is always a single hop away from any other.

SD-WAN overlays offer several advantages over traditional WANs, including improved scalability, reduced complexity, and better control over traffic flows. They also provide better security, as each site is protected by its dedicated security protocols. Additionally, SD-WAN overlays can improve application performance and reliability and reduce latency.

We need more bandwidth.

Modern businesses demand more bandwidth than ever to connect their data, applications, and services. As a result, we have many things to consider with the WAN, such as regulations, security, visibility, branch and data center sites, remote workers, internet access, cloud, and traffic prioritization. These demands are driving the need for SD-WAN.

The concepts and design principles of creating a wide area network (WAN) to provide resilient and optimal transit between endpoints have continuously evolved. But the driver behind building a better WAN has remained the same: to support applications that demand performance and resiliency.


SD WAN Overlay 

Key SD WAN Features

Full stack observability 

Not all traffic treated equally

Combining all transports

Intelligent traffic steering 

Controller-based policy


WAN Innovation

The WAN is the entry point between inside the perimeter and outside. An outage in the WAN has a large blast radius, affecting many applications and branch-site connectivity. Yet the WAN saw little innovation until now, with the advent of SD-WAN and SASE. SASE combines network and security functions.

If you look at the history of the WAN, there have been several stages of WAN virtualization. Most WAN transformation projects went from basic hub-and-spoke topologies based on services such as leased lines to fully meshed MPLS-based WAN services. Cost was the main driver for this evolution, not agility.


wide area network
Diagram: Wide Area Network: WAN Technologies.


Issues with the Traditional Network

As the world of I.T. becomes dispersed, the network and security perimeters are dissolving and becoming less predictable. Before, it was easy to know what was internal and external, but now we live in a world of micro-perimeters with a considerable change in the focal point.

The perimeter is now the identity of the user and device – not the fixed point at an H.Q. site. As a result, applications require a WAN to support distributed environments, flexible network points, and a change in the perimeter design.

Suboptimal traffic flow

The optimal route will be the fastest or most efficient and, therefore, preferred to transfer data. Sub-optimal routes will be slower and, therefore, not the preferred route. Centralized-only designs resulted in suboptimal traffic flow and increased latency, which will degrade application performance.

A key point to note is that traditional networks focus on centralized points in the network that all applications, network, and security services must adhere to. These network points are fixed and cannot be changed.

Network point intelligence

However, the network should be evolved to have network points positioned where it makes the most sense for the application and user. Not based on, let’s say, a previously validated design for a different application era. For example, many branch sites do not have local Internet breakouts.

So, for this reason, we backhauled internet-bound traffic to secure, centralized internet portals at the H.Q. site. As a result, we sacrificed the performance of Internet and cloud applications. Designs that place the H.Q. site at the center of connectivity requirements inhibit the dynamic access requirements for digital business.

Hub and spoke drawbacks.

Simple spoke-type networks are sub-optimal because you always have to go to the center point of the hub and then out to the machine you need rather than being able to go directly to whichever node you need. As a result, the hub becomes a bottleneck in the network as all data must go through it. With a more scattered network using multiple hubs and switches, a less congested and more optimal route could be found between machines.


  • A key point on MPLS agility

Multiprotocol Label Switching, or MPLS, is a networking technology that routes traffic using the shortest path based on “labels,” rather than network addresses, to handle forwarding over private wide area networks. As a protocol-independent solution, MPLS assigns labels to each data packet, controlling the path the packet follows. As a result, MPLS significantly improves traffic speed yet has some drawbacks.

Diagram: MPLS VPN

MPLS topologies, once provisioned, are challenging to modify. Community tagging and matching do provide some degree of flexibility and are commonly used: customers set BGP communities on prefixes for specific applications, and the service provider matches these communities to set traffic-engineering parameters such as MED and Local Preference. However, the network topology essentially remains fixed.


digital transformation
Diagram: Networking: The cause of digital transformation.


Connecting remote sites to cloud offerings such as SaaS or IaaS is far more efficient over the public Internet. Backhauling traffic to a central data center when it is not required has many drawbacks; it is more efficient to go direct. SD-WAN shares similar concepts with DMVPN phases, allowing your branch sites to reach cloud-based applications directly without backhauling to the central H.Q.


Introducing the SD WAN Overlay

A software-defined wide area network is a WAN that uses software-defined networking technology, such as communicating over the Internet using SD-WAN overlay tunnels that are encrypted when destined for internal organization locations. In short, SD-WAN is software-defined networking applied to the wide area network.

SD-WAN decouples (separates) the WAN infrastructure, be it physical or virtual, from its control plane mechanism and allows applications or application groups to be placed into virtual WAN overlays.

Types of SD WAN and the SD WAN overlay: The virtual WANs 

The separation allows us to bring many enhancements and improvements to a WAN that has had very little innovation in the past compared to the rest of the infrastructure, such as server and storage modules. With server virtualization, several virtual machines create application isolation on a physical server.

For example, applications placed in separate VMs operate in isolation from each other, even though the VMs are installed on the same physical host.

Consider SD-WAN to operate with similar principles. Each application or group can operate independently when traversing the WAN to endpoints in the cloud or other remote sites. These applications are placed into a virtual SDWAN overlay.

Diagram: Cisco SD-WAN overlay. Source Network Academy


SD-WAN overlay and SDN combined

  • The Fabric

The word fabric comes from the fact that there are many paths from one server to another, which eases load balancing and traffic distribution. SDN centralizes the control logic that distributes flows over all the fabric paths. This is the role of the SDN controller device, which can also control several fabrics simultaneously, managing intra- and inter-data-center flows.

  • SD-WAN overlay includes SDN

SD-WAN is used to control and manage a company’s multiple WANs. There are different types of WAN transport: Internet, MPLS, LTE, DSL, fiber, wired network, circuit link, etc. SD-WAN uses SDN technology to control the entire environment. Like SDN, the data plane and control plane are separated. A centralized controller is added to manage flows, routing and switching policies, packet priority, network policies, etc. SD-WAN technology is overlay-based, with virtual nodes and tunnels built on top of the underlying networks.

  • Centralized logic

In a traditional network, the transport functions and control logic are resident on each device. This is why any configuration or change must be done box-by-box. Configuration was carried out manually or, at best, with an Ansible script. SD-WAN brings Software-Defined Networking (SDN) concepts to the enterprise branch WAN.

Software-defined networking (SDN) is an architecture, whereas SD-WAN is a technology that can be purchased and built on SDN’s foundational concepts. The centralized logic that SD-WAN offers stems from software-defined networking. SDN separates the control from the data plane and uses a central controller to make intelligent decisions, similar to the design that most SD-WAN vendors operate.

  • A holistic view

The controller has a holistic view. Same with the SD-WAN overlay. The controller supports central policy management, enabling network-wide policy definitions and traffic visibility. The SD-WAN edge devices perform the data plane. The data plane is where the simple forwarding occurs, and the control plane, which is separate from the data plane, sets up all the controls for the data plane to forward.

Like SDN, the SD-WAN overlay abstracts network hardware into a control plane with multiple data planes to make up one large WAN fabric. As the control layer is abstracted and decoupled above the physicals and running in software, services can be virtualized and delivered from a central location to any point on the network.

Diagram: SD-WAN technology: The old WAN vs the new WAN.


Types of SD WAN and SD-WAN Overlay Features

Enterprises that employ SD-WAN solutions for their network architecture will simplify the complexity of their WAN. Enterprises should look at the SD-WAN options available in various deployment models, ranging from thin devices with most of the functionality in the cloud to thicker devices at the branch location performing most of the work. Whichever SD-WAN vendor you choose, there will be similar features.

Today’s WAN environment requires us to manage many elements: numerous physical components that include both network and security devices, complex routing protocols, configurations, complex high availability designs, and various path optimizations and encryption techniques. 


  • A key point: Gaining the SD-WAN benefits

By employing the features discussed below, you will gain the benefits of SD-WAN: its higher capacity bandwidth, centralized management, network visibility, and multiple connection types. In addition, SD-WAN technology allows organizations to use cheaper connection types than MPLS.

Diagram: SD-WAN features: Virtual Private Network (VPN).


Types of SD WAN: Combining the transports

At its core, SD-WAN shapes and steers application traffic across multiple WAN means of transport. Building off the concept of link bonding to combine multiple means of transport and transport types, the SD-WAN overlay improves the concept by moving the functionality up the stack. First, SD-WAN aggregates last-mile services, representing them as a single pipe to the application.

SD-WAN allows you to combine all transport links into one big pipe. SD-WAN is transport agnostic. As it works by abstraction, it does not care what transport links you have. Maybe you have MPLS, private Internet, or LTE. It can combine all these or use them separately.
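The "one big pipe" idea above can be sketched in a few lines. This is a minimal illustration; the transport names and bandwidth figures are assumptions, not measurements.

```python
# Sketch: representing several last-mile transports as one logical pipe.
# The transport names and bandwidth figures are illustrative assumptions.

transports = [
    {"name": "mpls",      "mbps": 50,  "up": True},
    {"name": "broadband", "mbps": 200, "up": True},
    {"name": "lte",       "mbps": 40,  "up": False},  # currently down
]

def aggregate_capacity(links):
    """The 'single pipe' the application sees: the sum of healthy links."""
    return sum(link["mbps"] for link in links if link["up"])

print(aggregate_capacity(transports))  # 250
```

Because SD-WAN is transport agnostic, adding or removing a transport only changes the list; the application continues to see a single aggregate pipe.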

Types of SD WAN: Central location

From a central location, SD-WAN pulls all of these WAN resources together, creating one large WAN fabric which allows administrators to slice up the WAN to match the application requirements that sit on top. Different applications traverse the WAN, so we need the WAN to react differently.

For example, if you’re running a call center, you want low delay, low jitter, and high availability for voice traffic. You may want this traffic to use a path with an excellent service-level agreement.

Diagram: SD-WAN traffic steering. Source Cisco.

Types of SD WAN: Traffic steering

There may also be a requirement for traffic steering: moving voice traffic to another path if, for example, the first path is experiencing high latency. If traffic cannot be steered automatically to a better-performing link, a series of path remediation techniques can be run to improve performance. File transfer differs from real-time voice: it can tolerate more delay but needs more bandwidth.

Here, you may want to use a combination of WAN transports (such as customer broadband and LTE) and combine them for higher aggregate bandwidth. SD-WAN also allows you to automatically steer traffic over different WAN transports when there is degradation on one link. With the SD-WAN overlay, we must start thinking about paths, not links.

SD-WAN overlay makes intelligent decisions

At its core, SD-WAN enables real-time application traffic steering over any link, such as broadband, LTE, and MPLS, assigning pre-defined policies based on business intent. Steering policies support many application types, making intelligent decisions about how WAN links are utilized and which paths are taken.

Diagram: Computer networking: Overlay technology.


Types of SD WAN: Steering traffic

The concepts of an underlay and an overlay are not new, and SD-WAN borrows these designs. First, the underlay is the physical or virtual world, such as the physical infrastructure. Then we have the overlay, where all the intelligence can be set. The SDWAN overlay represents the virtual WANs that hold your different applications.

A virtual WAN overlay enables us to steer traffic and combine all bandwidths. Similar to how applications are mapped to VMs in the server world, with SD-WAN, each application is mapped to its own virtual SDWAN overlay. Each virtual SDWAN overlay can have its own SD-WAN security policies, topologies, and performance requirements.

SD-WAN overlay path monitoring

SD-WAN monitors the paths and the application performance on each link (Internet, MPLS, LTE ) and then chooses the best Path based on real-time conditions and the policy set by the business. In summary, the underlay network is the physical or virtual infrastructure above which the overlay network is built. An SDWAN overlay network is a virtual network built on top of an underlying Network infrastructure/Network layer (the underlay).
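The monitor-then-choose loop described above might be sketched as follows. The link names, probe values, and SLA thresholds are invented for illustration; a real solution would feed this from continuous probes.

```python
# Sketch of policy-driven path selection: pick the best link for an
# application class given live loss/latency measurements. All link names,
# probe values, and SLA thresholds below are illustrative assumptions.

paths = {
    "internet": {"loss_pct": 2.0, "latency_ms": 60},
    "mpls":     {"loss_pct": 0.1, "latency_ms": 25},
    "lte":      {"loss_pct": 1.0, "latency_ms": 80},
}

# Business policy: the per-class SLA a path must meet to carry the traffic.
sla = {
    "voice": {"loss_pct": 1.0, "latency_ms": 150},
    "bulk":  {"loss_pct": 5.0, "latency_ms": 500},
}

def best_path(app_class):
    """Among paths meeting the SLA, prefer the lowest latency."""
    target = sla[app_class]
    eligible = [(name, m) for name, m in paths.items()
                if m["loss_pct"] <= target["loss_pct"]
                and m["latency_ms"] <= target["latency_ms"]]
    if not eligible:
        return None  # no compliant path: trigger remediation instead
    return min(eligible, key=lambda nm: nm[1]["latency_ms"])[0]

print(best_path("voice"))  # mpls: internet fails the 1% loss SLA
```

Note the two inputs: real-time conditions (the probe table) and business policy (the SLA table). Standard routing protocols see neither.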

Types of SD WAN: Controller-based policy

An additional layer of information is needed to make more intelligent decisions about how and where to forward application traffic. This is the controller-based policy approach that SD-WAN offers, incorporating a holistic view.

A central controller can now make decisions based on global information, not solely on a path-by-path basis with traditional routing protocols.  Getting all the routing information and compiling it into the controller to make a decision is much more efficient than making local decisions that only see a limited part of the network.

The SD-WAN Controller provides physical or virtual device management for all SD-WAN Edges associated with the controller. This includes but is not limited to configuration and activation, IP address management, and pushing down policies onto SD-WAN Edges that are located at the branch sites.
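A minimal sketch of this controller/edge split is below. The site names and policy fields are hypothetical; the point is only that one global policy table is pushed down, replacing box-by-box configuration.

```python
# Minimal sketch of the controller/edge split: the controller holds one
# global policy table and pushes the relevant slice to each branch edge.
# Site names and policy fields are assumptions for illustration.

class Edge:
    def __init__(self, site):
        self.site = site
        self.policies = {}          # the data plane forwards using this

    def install(self, policies):
        self.policies = policies    # replaces box-by-box configuration

class Controller:
    def __init__(self):
        self.global_policy = {}     # site -> policy dict
        self.edges = []

    def register(self, edge):
        self.edges.append(edge)

    def push(self):
        for edge in self.edges:
            edge.install(self.global_policy.get(edge.site, {}))

ctrl = Controller()
branch = Edge("branch-1")
ctrl.register(branch)
ctrl.global_policy["branch-1"] = {"voice": "mpls", "web": "internet"}
ctrl.push()
print(branch.policies)  # {'voice': 'mpls', 'web': 'internet'}
```

A policy change is made once, on the controller, and `push()` distributes it to every registered edge.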


 SD-WAN Overlay Case Study

I recently consulted for a private enterprise. Like many enterprises, they have many applications, both legacy and new. No one had a clear picture of the applications running over the WAN; visibility was at an all-time low. For the network design, the H.Q. has MPLS and direct Internet access.

So nothing new here; this design has been in place for the last decade. All traffic is backhauled to the HQ/MPLS headend for security screening, as the H.Q. was where the security stack was located. This included firewalls, IDS/IPS, and anti-malware. The remote sites have high latency and limited connectivity options.

Diagram: WAN transformation: Network design.


More importantly, they are transitioning their ERP system to the cloud. As apps move to the cloud, they want to avoid fixed WAN, a big driver for a flexible SD-WAN solution. They also have remote branches. These branches are hindered by high latency and poorly managed I.T. infrastructure.

But they don’t want an I.T. representative at each site location. They have heard that SD-WAN has centralized logic and can view the entire network from one central location. These remote sites must receive large files from the H.Q., yet the branch sites’ transport links are only single customer broadband links.

The cost of remote sites

Some remote sites have LTE, and the bills are getting larger. The company wants to reduce costs with dedicated Internet access or customer/business broadband. They have heard that you can combine different transports with SD-WAN and apply path remediation to degraded transports for better performance. So they decided to roll out SD-WAN. From this new architecture, they gained several benefits.


SD WAN Visibility

When your business-critical applications operate over different provider networks, it gets harder to troubleshoot and find the root cause of problems. So visibility is critical to business. SD-WAN allows you to see network performance data in real time and is critical for determining where packet loss, latency, and jitter are occurring so you can resolve the problem quickly.

You also need to be able to see who or what is consuming bandwidth so you can spot intermittent problems. For all these reasons, SD-WAN visibility needs to go beyond network performance metrics and provide greater insight into the delivery chains that run from applications to users.

Understand your baselines

Visibility is needed to complete the network baseline before the SD-WAN is deployed. This enables the organization to understand existing capabilities, the norm, what applications are running, the number of sites connected, which service providers are used, and whether they’re meeting their SLAs.

Visibility is a critical phase in getting a complete picture, so teams understand how to optimize the infrastructure for the business. SD-WAN gives you an intelligent edge so you can see all the traffic and do something with the traffic immediately.

First, look at the visibility of the various flows, what links are used, and any issues on those links. Then, if necessary, you can tweak the bonding policy to optimize the traffic flow. Before the rollout of SD-WAN, there was no visibility into the types of traffic or how much bandwidth different apps used. They had limited knowledge of WAN performance.

SD-WAN offers higher visibility

With SD-WAN, they have the visibility to control and classify traffic on layer 7 values, such as the URL being requested and the domain being accessed, along with the standard port and protocol.

All applications are not equal; some applications run better on different links. You can route to a different circuit if a particular application is not performing correctly. With the SD-WAN orchestrator, you have complete visibility across all locations, all links, and into the different traffic across all circuits. 
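The layer 7 classification just described can be sketched as rules that match on domain or URL first and fall back to port. The application names and patterns here are illustrative; real SD-WAN classifiers use DPI signatures rather than simple substring rules.

```python
# Sketch of layer-7 classification: match on domain/URL first, then fall
# back to port/protocol. Application names and patterns are illustrative.

L7_RULES = [
    ("salesforce", lambda f: f.get("domain", "").endswith("salesforce.com")),
    ("video",      lambda f: "/stream/" in f.get("url", "")),
]
PORT_RULES = {443: "https", 5060: "sip"}

def classify(flow):
    # Layer 7 rules win over plain port matching.
    for app, match in L7_RULES:
        if match(flow):
            return app
    return PORT_RULES.get(flow.get("port"), "unknown")

print(classify({"domain": "eu.salesforce.com", "port": 443}))  # salesforce
print(classify({"port": 5060}))                                # sip
```

Once a flow is classified by application rather than port, it can be routed to whichever circuit that application performs best on.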


SD WAN High Availability

The goal of any high-availability solution is to ensure that all network services are resilient to failure. Such a solution aims to provide continuous access to network resources by addressing the potential causes of downtime through functionality, design, and best practices.

The previous high-availability design was active and passive with manual failover. It was hard to maintain, and there was a lot of unused bandwidth. Now they have more efficient use of resources and are no longer tied to the bandwidth of the first circuit.

There is a more granular application failover mechanism: you can select which apps are prioritized if there is a link failure or when a certain congestion ratio is hit. For example, an LTE backup can be very expensive, so applications marked high priority are steered over the backup link while guest wifi traffic is not.
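The priority failover described above can be sketched as a per-application link decision. The application names and priorities are assumptions for illustration.

```python
# Sketch of priority failover: when the primary link fails, only
# high-priority applications are steered onto the expensive LTE backup.
# Application names and priority labels are illustrative assumptions.

APP_PRIORITY = {"erp": "high", "voice": "high", "guest-wifi": "low"}

def select_link(app, primary_up):
    if primary_up:
        return "broadband"
    # Primary down: spend the LTE budget only on high-priority traffic.
    return "lte" if APP_PRIORITY.get(app) == "high" else "drop"

print(select_link("erp", primary_up=False))         # lte
print(select_link("guest-wifi", primary_up=False))  # drop
```

In a real deployment the "drop" branch might instead queue or rate-limit low-priority traffic; the sketch only shows the prioritization decision.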

Flexible topology

Before, they had a hub and spoke MPLS design for all applications. They wanted a full mesh architecture for some applications while keeping the existing hub and spoke for others. However, the service provider couldn’t accommodate that level of granularity.

Now, with SD-WAN, they can choose topologies better suited to each application type. As a result, the network design is more flexible and matches the application, rather than the application being forced to match a network design that doesn’t suit it.

Diagram: SD-WAN Topologies.

Going Deeper on the SD-WAN Overlay Components

SD-WAN combines transports, SDWAN overlay, and underlay

Look at it this way. With an SD-WAN topology, there are different levels of networking. There is an underlay network, the physical infrastructure, and an SDWAN overlay network. The physical infrastructure is the router, switches, and WAN transports; the overlay network is the virtual WAN overlays.

The SDWAN overlay presents a different network to the application. For example, applications in the voice overlay see only the voice overlay. The logical virtual pipe the overlay creates, and which the application sees, differs from the underlay.

An SDWAN overlay network is a virtual or logical network created on top of an existing physical network. The Internet itself, which began as an overlay running on top of the telephone network, is a classic example. An overlay network is any virtual layer on top of physical network infrastructure.

Consider an SDWAN overlay as a flexible tag.

This may be as simple as a virtual local area network (VLAN), but it typically refers to more complex virtual layers from SDN or SD-WAN. Think of an SDWAN overlay as a tag: building the overlays is not expensive or time-consuming. In addition, you don’t need to buy physical equipment for each overlay, as the overlay is virtualized in software.

Similar to software-defined networking (SDN), the critical part is that SD-WAN works by abstraction. All the complexities are abstracted into application overlays. For example, application type A can use this SDWAN overlay, and application type B can use that SDWAN overlay. 

I.P. and port number, orchestrations, and end-to-end

Recent application requirements drive a new type of WAN that more accurately supports today’s environment with an additional layer of policy management. The world has moved away from using I.P. addresses and port numbers to identify applications and make forwarding decisions.


Types of SD WAN

The market for branch office wide-area network functionality is shifting from dedicated routing, security, and WAN optimization appliances to feature-rich SD-WAN. As a result, WAN edge infrastructure now incorporates a widening set of network functions, including secure routers, firewalls, SD-WAN, WAN path control, and WAN optimization, along with traditional routing functionality. Therefore, consider the following approach to deploying SD-WAN.


SD-WAN Overlay Approach

SD-WAN features:

  • Application-orientated WAN
  • Holistic visibility and decisions
  • Central logic
  • Independent topologies
  • Application mapping


1. Application-based approach

With SD-WAN, we are shifting from a network-based approach to an application-based approach. The new WAN no longer looks solely at the network to forward packets. Instead, it looks at the business requirements and decides how to optimize the application with the correct forwarding behavior. This new way of forwarding would be problematic when using traditional WAN architectures.

Making business logic decisions with I.P. and port number information is challenging. Standard routing is the most common way to forward application traffic today, but it only assesses part of the picture when making its forwarding decision. 

These devices have routing tables to perform forwarding. Still, with this model, they operate and decide on their little island, losing the holistic view required for accurate end-to-end decision-making.  

2. SD-WAN: Holistic decision

The WAN must start to make decisions holistically. The WAN should not be viewed as a single module in the network design. Instead, it must incorporate several elements it previously did not, to capture the correct per-application forwarding behavior. The ideal WAN should be automatable, forming a comprehensive end-to-end solution centrally orchestrated from a single pane of glass.

Managed and orchestrated centrally, this new WAN fabric is transport agnostic. It offers application-aware routing, regional-specific routing topologies, encryption on all transports regardless of link type, and high availability with automatic failover. All of these will be discussed shortly and are the essence of SD-WAN.  

3. SD-WAN and central logic        

Besides the virtual SDWAN overlay, another key SD-WAN concept is centralized logic. In a standard router, local routing tables are computed by an algorithm to forward a packet to a given destination.

It receives routes from its peers or neighbors but computes paths locally and makes local routing decisions. The critical point to note is that everything is computed locally. SD-WAN functions on a different paradigm.

Rather than using distributed logic, it utilizes centralized logic. This allows you to view the entire network holistically and with a distributed forwarding plane that makes real-time decisions based on better metrics than before.

This paradigm enables SD-WAN to see how the flows behave along the path. This is because they are taking the fragmented control approach and centralizing it while benefiting from a distributed system. 

The SD-WAN controller, which acts as the brain, can set different applications to run over different paths based on business requirements and performance SLAs, not on a fixed topology. So, for example, if one path does not have acceptable packet loss and latency is high, we can move to another path dynamically.

4. Independent topologies

SD-WAN has different levels of networking and brings the concepts of SDN into the Wide Area Network. Similar to SDN, we have an underlay and an overlay network with SD-WAN. The WAN infrastructure, either physical or virtual, is the underlay, and the SDWAN overlay is in software on top of the underlay where the applications are mapped.

This decoupling or separation of functions allows a different application or group overlays. Previously, the application had to work with a fixed and pre-built network infrastructure. With SD-WAN, the application can choose the type of topology it wants, such as a full mesh or hub and spoke. The topologies with SD-WAN are a lot more flexible.


  • A key point: SD-WAN abstracts the underlay

The virtual WAN overlays are abstracted from the physical device’s underlay with SD-WAN. Therefore, the virtual WAN overlays can take on topologies independent of each other without being pinned to the configuration of the underlay network. SD-WAN changes how you map application requirements to the network, allowing for the creation of independent topologies per application.

For example, mission-critical applications may use expensive leased lines, while lower-priority applications can use inexpensive best-effort Internet links. This can all change on the fly if specific performance metrics are unmet.

Previously, the application had to match and “fit” into the network with the legacy WAN, but with an SD-WAN, the application now controls the network topology. Multiple independent topologies per application are a crucial driver for SD-WAN.
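The per-application topologies described above can be sketched by deriving different tunnel sets from the same sites. The site names, and which application gets which topology, are illustrative assumptions.

```python
# Sketch: deriving independent per-application tunnel topologies from the
# same set of sites. Site names and the per-application topology choices
# are illustrative assumptions.

from itertools import combinations

sites = ["hq", "branch-a", "branch-b", "branch-c"]

def full_mesh(nodes):
    """Any-to-any tunnels between all sites."""
    return sorted(combinations(nodes, 2))

def hub_and_spoke(nodes, hub="hq"):
    """Every branch tunnels only to the hub."""
    return sorted((hub, n) for n in nodes if n != hub)

# Voice gets any-to-any tunnels; bulk data hairpins through the hub.
topologies = {"voice": full_mesh(sites), "bulk": hub_and_spoke(sites)}
print(len(topologies["voice"]), len(topologies["bulk"]))  # 6 3
```

Both topologies are computed from the same underlay; changing an application's topology is a recomputation in software, not a change to the physical network.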


Diagram: SD-WAN Link Bonding.

5. The SD-WAN overlay

SD-WAN optimizes traffic over multiple available connections. It will dynamically steer traffic to the best available link. Suppose the available links show any transmission issues. In that case, it will immediately transfer to a better path or apply remediation to a link if, for example, you only have a single link. SD-WAN delivers application flows from a source to a destination based on the configured policy and best available network path. A core concept of SD-WAN is the overlays.

SD-WAN solutions provide the software abstraction to create the SDWAN overlay and decouple network software services from the underlying physical infrastructure. Multiple virtual overlays may be defined to abstract the underlying physical transport services, each supporting different quality of service, preferred transport, and high availability characteristics.

6. Application mapping

Application mapping also allows you to steer traffic over different WAN transports. This steering is automatic and can be implemented when specific performance metrics are unmet. For example, if Internet transport has a 15% packet loss, the policy can be set to steer all or some of the application traffic over to better-performing MPLS transport.
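The 15% packet-loss example above can be sketched as a simple threshold check. The link names mirror the text; the policy shape itself is an assumption for illustration.

```python
# Sketch of threshold-based steering: if Internet transport exceeds the
# configured loss threshold, steer the listed applications to MPLS.
# The policy shape and application names are illustrative assumptions.

LOSS_THRESHOLD_PCT = 15  # the 15% figure from the example above

def steer(apps, internet_loss_pct):
    link = "mpls" if internet_loss_pct >= LOSS_THRESHOLD_PCT else "internet"
    return {app: link for app in apps}

print(steer(["erp", "voice"], internet_loss_pct=16))
# {'erp': 'mpls', 'voice': 'mpls'}
```

A real policy would typically steer only some applications and re-evaluate continuously; the sketch shows just the threshold decision.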

Applications are mapped to different overlays based on business intent, not infrastructure details like I.P. addresses. When you think about overlays, it’s common to have, on average, four overlays. For example, you may have a gold, platinum, and bronze SDWAN overlay and then map the applications to these overlays.

The applications will have different networking requirements, and overlays allow you to slice and dice your network if you have multiple application types. 

Diagram: Technology design: SDWAN overlay application mapping.


  • A key point: SD-WAN & WAN metrics

SD-WAN captures metrics that go far beyond the standard WAN measurements. The traditional way would be to measure packet loss, latency, and jitter to determine path quality. These measurements are insufficient because routing protocols make the packet flow decision only at layer 3 of the OSI model.

As we know, layer 3 of the OSI model lacks application intelligence and misses the overall user experience. Rather than relying on bits, bytes, jitter, and latency alone, we must start to look at application transactions.

SD-WAN incorporates better metrics that look beyond those considered by a standard WAN edge router. These metrics may include application response time, network transfer time, and service response time. Some SD-WAN solutions monitor each flow’s RTT, sliding windows, and ACK delays, not just the I.P. or TCP headers. This creates a more accurate view of application performance.
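As a small illustration of flow-level measurement beyond layer 3, RTT and application response time can be estimated from per-flow timestamps. The timestamp values here are invented for illustration.

```python
# Sketch of flow-level measurement beyond layer 3: estimate network RTT
# and application response time from per-flow timestamps (in seconds).
# The timestamp values used below are illustrative.

def flow_metrics(t_syn, t_synack, t_request, t_first_byte):
    return {
        # Network RTT: the TCP handshake round trip.
        "rtt_ms": round((t_synack - t_syn) * 1000, 1),
        # Application response time: request sent until the first byte back.
        "app_response_ms": round((t_first_byte - t_request) * 1000, 1),
    }

m = flow_metrics(t_syn=0.000, t_synack=0.030,
                 t_request=0.050, t_first_byte=0.250)
print(m)
```

Here the network itself is fast (30 ms RTT) while the application is slow to respond (200 ms), a distinction that loss/latency probes alone cannot make.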


SD-WAN Features and Benefits

      • Leverage all available connectivity types.

All SD-WAN vendors can balance traffic across all transports regardless of transport type, per flow or per packet. This ensures that redundant links that previously sat idle are now used. SD-WAN creates an active-active network and eliminates the need to use and maintain traditional routing protocols for active-standby setups.

      • App-aware routing capabilities 

As we know, application visibility is critical to forwarding efficiently over any transport. Still, we also need to go one step further, examine deep inside the application, and understand what sub-applications exist, such as distinguishing Facebook chat from regular Facebook. This allows you to balance loads across the WAN based on sub-applications.

      • Regional-specific routing topologies

Available topologies include hub and spoke, full mesh, and Internet PoP topologies. Each organization will have different requirements when choosing a topology. For example, voice should use a full mesh design, while data may use a hub and spoke design connecting to a central data center.

As we move heavily into cloud applications, local internet access (internet breakout) is a better strategic option than backhauling traffic to a central site when it doesn’t need to go there. SD-WAN abstracts the details of the WAN, enabling application-independent topologies. Each application can have its own topology, which can be changed dynamically, all managed by an SD-WAN control plane.

      • Centralized device management & policy administration 

With the controller-based approach that SD-WAN takes, you are not embedding the control plane in the network. This allows you to centrally provision devices and push policy instructions down to the data plane from a central location. This simplifies management and increases scale. The manual box-by-box approach to policy enforcement is not the way forward.

The ability to tie everything to a template and automate enables rapid branch deployments, security updates, and other policy changes. It’s much better to manage it all in one central place with the ability to dynamically push out what’s needed, such as updates and other configuration changes. 

      • High availability with automatic failovers 

You cannot apply a single viewpoint to high availability. Many components are involved in creating a high availability plan, such as device-, link-, and site-level requirements, and these should be addressed in an end-to-end solution. In addition, traditional WANs require additional telemetry information to detect failures and brown-out events.

      • Encryption on all transports, irrespective of link type 

Regardless of link type, MPLS, LTE, or the Internet, we need the capacity to encrypt all those paths without the excess baggage and complications that IPsec brings. Encryption should happen automatically, and the complexity of IPsec should be abstracted.
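As a small illustration of the local internet breakout mentioned in the topology feature above, a hypothetical edge might decide per destination whether to break out locally or backhaul to H.Q. The SaaS domain list is an assumption for illustration.

```python
# Sketch of the local-breakout decision: send known SaaS destinations
# straight out the local internet link instead of backhauling to H.Q.
# The domain list is an illustrative assumption.

SAAS_DOMAINS = {"office365.com", "salesforce.com", "zoom.us"}

def next_hop(domain):
    # Compare on the registrable part of the domain (last two labels).
    root = ".".join(domain.split(".")[-2:])
    return "local-internet" if root in SAAS_DOMAINS else "hq-tunnel"

print(next_hop("mail.office365.com"))   # local-internet
print(next_hop("intranet.corp.local"))  # hq-tunnel
```

Internal destinations still ride the encrypted tunnel to H.Q., while trusted SaaS traffic breaks out locally, avoiding the backhaul hairpin.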


Closing Comments on SD WAN Overlay

In today’s digital landscape, businesses increasingly rely on cloud-based applications, remote workforces, and data-driven operations. As a result, the demand for a more flexible, scalable, and secure network infrastructure has never been greater. This is where SD-WAN overlay comes into play, revolutionizing how organizations connect and operate.

SD-WAN overlay is a network architecture that allows organizations to abstract and virtualize their wide area networks, decoupling them from the underlying physical infrastructure. It utilizes software-defined networking (SDN) principles to create an overlay network that runs on top of the existing WAN infrastructure, enabling centralized management, control, and optimization of network traffic.

Key benefits of SD-WAN overlay 

1. Enhanced Performance and Reliability:

SD-WAN overlay leverages multiple network paths to distribute traffic intelligently, ensuring optimal performance and reliability. By dynamically routing traffic based on real-time conditions, businesses can overcome network congestion, reduce latency, and maximize application performance. This capability is particularly crucial for organizations with distributed branch offices or remote workers, as it enables seamless connectivity and productivity.

2. Cost Efficiency and Scalability:

Traditional WAN architectures can be expensive to implement and maintain, especially when organizations need to expand their network footprint. SD-WAN overlay offers a cost-effective alternative by utilizing existing infrastructure and incorporating affordable broadband connections. With centralized management and simplified configuration, scaling the network becomes a breeze, allowing businesses to adapt quickly to changing demands without breaking the bank.

3. Improved Security and Compliance:

In an era of increasing cybersecurity threats, protecting sensitive data and ensuring regulatory compliance are paramount. SD-WAN overlay incorporates advanced security features to safeguard network traffic, including encryption, authentication, and threat detection. Businesses can effectively mitigate risks, maintain data integrity, and comply with industry regulations by segmenting network traffic and applying granular security policies.

4. Streamlined Network Management:

Managing a complex network infrastructure can be a daunting task. SD-WAN overlay simplifies network management with centralized control and visibility, enabling administrators to monitor and manage the entire network from a single pane of glass. This level of control allows for faster troubleshooting, policy enforcement, and network optimization, resulting in improved operational efficiency and reduced downtime.

5. Agility and Flexibility:

In today’s fast-paced business environment, agility is critical to staying competitive. SD-WAN overlay empowers organizations to adapt rapidly to changing business needs by providing the flexibility to integrate new technologies and services seamlessly. Whether adding new branch locations, integrating cloud applications, or adopting emerging technologies like IoT, SD-WAN overlay offers businesses the agility to stay ahead of the curve.

Implementation of SD-WAN Overlay:

Implementing SD-WAN overlay requires careful planning and consideration. The following steps outline a typical implementation process:

1. Assess Network Requirements: Evaluate existing network infrastructure, bandwidth requirements, and application performance needs to determine the most suitable SD-WAN overlay solution.

2. Design and Architecture: Create a network design incorporating SD-WAN overlay while considering factors such as branch office connectivity, data center integration, and security requirements.

3. Vendor Selection: Choose a reliable and reputable SD-WAN overlay vendor based on their technology, features, support, and scalability.

4. Deployment and Configuration: Install the required hardware or virtual appliances and configure the SD-WAN overlay solution according to the network design. This includes setting up policies, traffic routing, and security parameters.

5. Testing and Optimization: Thoroughly test the SD-WAN overlay solution, ensuring its compatibility with existing applications and network infrastructure. Optimize the solution based on performance metrics and user feedback.

Conclusion: SD-WAN overlay is a game-changer for businesses seeking to optimize their network infrastructure. By enhancing performance, reducing costs, improving security, streamlining management, and enabling agility, SD-WAN overlay unlocks the true potential of connectivity. Embracing this technology allows organizations to embrace digital transformation, drive innovation, and gain a competitive edge in the digital era. In an ever-evolving business landscape, SD-WAN overlay is the key to unlocking new growth opportunities and future-proofing your network infrastructure.


Software Defined Perimeter (SDP): A Disruptive Technology




Software Defined Perimeter

In today’s digital landscape, where the security of sensitive data is paramount, traditional security measures are no longer sufficient. The ever-evolving threat landscape demands a more proactive and robust approach to protecting valuable assets. Enter the Software Defined Perimeter (SDP), a revolutionary concept changing how organizations secure their networks. In this blog post, we will delve into the world of SDP and explore its benefits, implementation, and prospects.

Software Defined Perimeter, also known as a Zero Trust Network, is a security framework that provides secure access to applications and resources, regardless of the user’s location or network. Unlike traditional perimeter-based security models, which rely on firewalls and VPNs, SDP takes a more dynamic and adaptive approach.


Highlights: Software Defined Perimeter

  • A Disruptive Technology

There has been tremendous growth in the adoption of software-defined perimeter solutions and the zero trust network design over the last few years. This has resulted in SDP VPN becoming a disruptive technology, especially when replacing or working alongside the existing virtual private network. Why? Because the steps the software-defined perimeter proposes (authenticate first, then connect) are exactly what today’s networks need.

  • Challenge With Today’s Security

Today’s network security architectures, tools, and platforms are lacking in many ways when trying to combat current security threats. From a bird’s eye view, the stages of zero trust software defined perimeter are relatively simple. SDP requires that endpoints, both internal and external to an organization, must authenticate and then be authorized before being granted network access. Once these steps occur, two-way encrypted connections between the requesting entity and the intended protected resource are created.
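The authenticate-authorize-connect sequence described above can be sketched in a few lines of Python. This is purely illustrative; the user store, entitlements, and messages are invented placeholders:

```python
# Illustrative sketch of the SDP sequence: an endpoint must authenticate and
# then be authorized before any connection is created. All data is placeholder.

USERS = {"alice": "s3cret"}                   # authentication store (assumed)
ENTITLEMENTS = {"alice": {"crm", "mail"}}     # authorization store (assumed)

def request_access(user: str, password: str, resource: str) -> str:
    if USERS.get(user) != password:                    # step 1: authenticate
        return "denied: authentication failed"
    if resource not in ENTITLEMENTS.get(user, set()):  # step 2: authorize
        return "denied: not authorized"
    # step 3: only now is the two-way encrypted connection created
    return f"encrypted tunnel established to {resource}"

print(request_access("alice", "s3cret", "crm"))
print(request_access("alice", "s3cret", "hr-db"))
print(request_access("mallory", "guess", "crm"))
```

Note the order: no connection exists until both checks pass, which is the reverse of the classic TCP model discussed later in this post.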


For pre-information, you may find the following post helpful:

  1. SDP Network
  2. Software Defined Internet Exchange
  3. SDP VPN


Software-Defined Perimeter.

Key Software Defined Perimeter Discussion points:

  • The issues with traditional security and networking constructs.

  • Identity-driven access.

  • Discussing Cloud Security Alliance (CSA).

  • Highlighting Software Defined Perimeter capabilities.

  • Dynamic Tunnelling. 


Back to basics with Software Defined Perimeter

A software-defined perimeter constructs a virtual boundary around company assets, separating them from access-based controls that restrict user privileges yet still allow broad network access. A software-defined perimeter is built on fundamental pillars:

  • Zero Trust: It leverages micro-segmentation to apply the principle of least privilege to the network, ultimately reducing the attack surface.

  • Identity-centric: It’s designed around the user identity and additional contextual parameters, not the IP address.


Benefits of Software-Defined Perimeter:

1. Enhanced Security: SDP employs a Zero Trust approach, ensuring that only authorized users and devices can access the network. This eliminates the risk of unauthorized access and reduces the attack surface.

2. Scalability: SDP allows organizations to scale their networks without compromising security. It seamlessly accommodates new users, devices, and applications, making it ideal for expanding businesses.

3. Simplified Management: With SDP, managing access controls becomes more straightforward. IT administrators can easily assign and revoke permissions, reducing the administrative burden.

4. Improved Performance: By eliminating the need for backhauling traffic through a central gateway, SDP reduces latency and improves network performance, enhancing the overall user experience.


Implementing Software-Defined Perimeter:

Implementing SDP requires a systematic approach and careful consideration of various factors. Here are the key steps involved in deploying SDP:

1. Identify Critical Assets: Determine the applications and resources that require enhanced security measures. This could include sensitive data, intellectual property, or customer information.

2. Define Access Policies: Establish granular access policies based on user roles, device types, and locations. This ensures that only authorized individuals can access specific resources.

3. Implement Authentication Mechanisms: Incorporate strong authentication measures such as multi-factor authentication (MFA) or biometric authentication to verify user identities.

4. Implement Encryption: Encrypt all data in transit to prevent eavesdropping or unauthorized interception.

5. Continuous Monitoring: Regularly monitor network activity and analyze logs to identify suspicious behavior or anomalies.
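Step 2 above (granular access policies based on user roles, device types, and locations) can be sketched as follows. The rule set and attribute names are assumptions for illustration only:

```python
# Hedged sketch of granular SDP access policies: a rule must match on user
# role, device type, AND location before a resource becomes reachable.
# The resources, roles, and locations below are invented placeholders.

POLICIES = [
    {"resource": "finance-db", "roles": {"finance"},
     "devices": {"managed-laptop"}, "locations": {"hq", "vpn"}},
    {"resource": "wiki", "roles": {"finance", "engineering"},
     "devices": {"managed-laptop", "byod"}, "locations": {"hq", "vpn", "remote"}},
]

def is_allowed(resource: str, role: str, device: str, location: str) -> bool:
    """Default deny: access requires an explicit rule matching all attributes."""
    return any(p["resource"] == resource and role in p["roles"]
               and device in p["devices"] and location in p["locations"]
               for p in POLICIES)

print(is_allowed("finance-db", "finance", "managed-laptop", "hq"))  # True
print(is_allowed("finance-db", "finance", "byod", "hq"))            # False
```

The key design point is the default-deny posture: anything not explicitly matched by a policy is refused.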


The Software-Defined Perimeter Proposition

Security policy flexibility is offered with fine-grained access control that dynamically creates and removes inbound and outbound access rules. Therefore, a software-defined perimeter minimizes the attack surface available to bad actors. A small attack surface results in a small blast radius, so less damage can occur.

A VLAN has a relatively large attack surface, mainly because the VLAN contains different services. SDP eliminates the broad network access that VLANs exhibit. SDP has a separate data and control plane. A control plane sets up the controls necessary for data to pass from one endpoint to another. Separating the control from the data plane renders protected assets “black,” thereby blocking network-based attacks. You cannot attack what you cannot see.


The IP Address Is Not a Valid Hook

We should recognize that IP addresses lose their meaning in today’s hybrid environment. SDP provides a connection-based security architecture instead of an IP-based one. This allows for many things. For one, security policies follow the user regardless of location. Say you are doing forensics on an event from 12 months ago for a specific IP.

However, that IP address now belongs to a component in a test DevOps environment. Do you care? Tying anything to an IP address is unreliable, as we don’t have the right hook on which to hang security policy enforcement.


Software-defined perimeter; Identity-driven access

Identity-driven network access control is more precise in measuring the actual security posture of the endpoint. Access policies tied to IP addresses cannot offer identity-focused security. SDP enables the control of all connections based on pre-vetting who can connect and to what services.

If you do not meet this level of trust, you can’t, for example, access the database server, but you can access public-facing documents. Users are granted access only to authorized assets preventing lateral movements that will probably go unnoticed when traditional security mechanisms are in place.



Information and infrastructure hiding

SDP does a great job of information and infrastructure hiding. The SDP architectural components (the SDP controller and gateways) are “dark,” providing resilience against high- and low-volume DDoS attacks. A low-bandwidth DDoS attack may often bypass traditional DDoS security controls. However, the SDP components do not respond to connections until the requesting clients are authenticated and authorized, allowing only good packets through.

A suitable security protocol that can be used here is single packet authorization (SPA). Single Packet Authorization, or Single Packet Authentication, gives the SDP components a default “deny-all” security posture.

The “default deny” can be achieved because if an accepting host receives any packet other than a valid SPA packet, it assumes it is malicious. The packet will get dropped, and a notification will not get sent back to the requesting host. This stops reconnaissance at the door by silently detecting and dropping bad packets.
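A minimal, illustrative version of this SPA default-deny behavior can be sketched with Python's standard-library HMAC support. Real SPA implementations such as fwknop also add encryption, timestamps, and replay protection; this sketch shows only the silent-drop idea, and the shared key and client ID are placeholders:

```python
# Illustrative single packet authorization (SPA) check using stdlib HMAC.
# Anything other than a valid SPA packet is silently dropped: no reply is
# ever sent, so a scanning bad actor learns nothing about the host.

import hashlib
import hmac
import time

SHARED_KEY = b"pre-provisioned-client-key"  # placeholder secret

def build_spa_packet(client_id: str) -> bytes:
    """Client side: payload plus an HMAC tag over it (hex, so no '|' bytes)."""
    payload = f"{client_id}:{int(time.time())}".encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def handle_packet(packet: bytes) -> str:
    """Accepting host: validate, or drop silently with no notification."""
    try:
        payload, tag = packet.rsplit(b"|", 1)
    except ValueError:
        return "drop"  # malformed: assumed malicious, dropped silently
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return "drop"
    return "open"      # only now may the requester attempt a connection

print(handle_packet(build_spa_packet("client-42")))  # open
print(handle_packet(b"SYN scan probe"))              # drop
```

Because invalid packets produce no response at all, reconnaissance is stopped at the door, exactly the "you cannot attack what you cannot see" posture described above.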


Sniffing a SPA packet

However, SPA can be subject to Man-in-the-Middle (MITM) attacks. If a bad actor can sniff a SPA packet, they can establish the TCP connection to the controller or AH. But there is another level of defense: the bad actor cannot complete the mutually authenticated TLS (mTLS) connection without the client’s certificate.

SDP brings in the concept of mutually authenticated connections, also known as two-way authentication. In the usual configuration of TLS, only the client authenticates the server, but mTLS ensures that both parties are authenticated. Only validated devices and users can become authorized members of the SDP architecture.

We should also remember that the SPA is not a security feature that can be implemented to protect all. It has its benefits but does not take over from existing security technologies. SPA should work alongside them. The main reason for its introduction to the SDP world is to overcome the problems with TCP. TCP connects and then authenticates. With SPA, you authenticate first and only then connect.
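A server-side mTLS configuration can be sketched with Python's standard-library ssl module. The certificate file paths are placeholders; the point is that with CERT_REQUIRED set, the server authenticates the client as well, so an attacker who sniffed a SPA packet but holds no valid client certificate cannot complete the handshake:

```python
# Sketch of a mutual-TLS (mTLS) server context using only Python's stdlib.
# The commented-out file paths are hypothetical placeholders.

import ssl

def server_mtls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED  # server must also authenticate client
    # ctx.load_cert_chain("server.pem", "server.key")  # server identity
    # ctx.load_verify_locations("client-ca.pem")       # CA for client certs
    return ctx

ctx = server_mtls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

In a real SDP deployment, both the gateway and the client would load certificates issued by the organization's private PKI, discussed later in this series.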


SPA Use Case
Diagram: SPA Use Case. Source: mrash GitHub.


The World of TCP & SDP

When clients want to access an application with TCP, they must first set up a connection. There needs to be direct connectivity between the client and the applications. So this requires the application to be reachable and is carried out with IP addresses on each end. Then once the connect stage is done, there is an authentication phase.

Once the authentication stage is done, we can pass data. Therefore, the order is: connect, then authenticate, then pass data. SDP reverses this.

zero trust security
Diagram: Zero trust security. The opposite of TCP: connect first and then authenticate.



At the center of the software-defined perimeter is trust.

In Software-Defined Perimeter, we must establish trust between the client and the application before the client can set up the connection. The trust is bi-directional between the client and the SDP service and the application to the SDP service. Once trust has been established, we move into the next stage, authentication.

Once this has been established, we can connect the user to the application. This flips the entire security model and makes it more robust. The user has no idea of where the applications are located. The protected assets are hidden behind the SDP service, which in most cases is the SDP gateway, or some call this a connector.


  • Cloud Security Alliance (CSA) SDP

With the Cloud Security Alliance SDP architecture, we have several components:

Firstly, we have the IH and AH: the initiating hosts (IH) are the clients, and the accepting hosts (AH) serve the protected services. The IH devices can be any endpoint device that can run the SDP software, including user-facing laptops and smartphones. Many SDP vendors have remote browser isolation-based solutions that work without SDP client software. The IH, as you might expect, initiates the connections.

With an SDP browser-based solution, the user uses a web browser to access the applications and only works with applications that can speak across a browser. So it doesn’t give you the full range of TCP and UDP ports, but you can do many things that speak natively across HTML5.

Most browser-based solutions don’t give you the additional security posture checks of the end-user device that an endpoint with the client installed provides.


Software-Defined Perimeter: Browser-based solution

The AHs accept connections from the IHs and provide a set of services protected securely by the SDP service. The AHs are under the administrative control of the enterprise domain. They do not acknowledge communication from any other host and will not respond to non-provisioned requests. This architecture enables the control plane to remain separate from the data plane, achieving a scalable security system.

The IH and AH devices connect to an SDP controller that secures access to isolated assets by ensuring that the users and their devices are authenticated and authorized before granting network access. After authenticating an IH, the SDP controller determines the list of AHs to which the IH is authorized to communicate. The AHs are then sent a list of IHs that should accept connections.

Aside from the hosts and the controller, we have the SDP gateway component that provides authorized users and devices access to protected processes and services. The protected assets are located behind the gateway that can be architecturally positioned in multiple locations such as the cloud or on-premise. The gateways can exist in multiple locations in parallel.
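The control-plane flow described above can be sketched as a toy controller: after an IH authenticates, the controller computes which AHs it may reach and provisions each AH's allow list. Host names and the entitlement table are invented for illustration:

```python
# Toy sketch of the CSA SDP control plane: the controller authorizes an
# initiating host (IH) and pushes its identity to the relevant accepting
# hosts (AH). All names and the entitlement table are placeholders.

AUTHORIZED_AHS = {"ih-laptop-1": ["ah-gateway-eu", "ah-gateway-us"]}

class Controller:
    def __init__(self):
        self.ah_allow_lists = {}            # AH name -> set of permitted IHs

    def authorize(self, ih: str) -> list:
        """Return the AHs this IH may reach and provision their allow lists."""
        ahs = AUTHORIZED_AHS.get(ih, [])
        for ah in ahs:
            self.ah_allow_lists.setdefault(ah, set()).add(ih)
        return ahs

ctrl = Controller()
print(ctrl.authorize("ih-laptop-1"))   # ['ah-gateway-eu', 'ah-gateway-us']
print(ctrl.authorize("unknown-ih"))    # []  (AHs never learn of this host)
```

An AH consults only its provisioned allow list, which is why it can ignore every non-provisioned request and stay dark.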


Dynamic Tunnelling

In the real world, a user is expected to have multiple tunnels to multiple gateways. It’s not a static path or a one-to-one relationship but a user-to-application relationship. The applications can exist anywhere, and the tunnels are dynamic and ephemeral.

For a client to connect to a gateway, latency or SYN/SYN-ACK round-trip time (RTT) testing should be performed to determine each Internet link’s performance. This ensures that the application access path always uses the best gateway, improving application performance.

Remember that the gateway only connects outbound on TCP port 443 (mTLS), and as it acts on behalf of the internal applications, it needs access to the internal apps. As a result, depending on where you position the gateway, either internal to the LAN, private virtual private cloud (VPC) or in the DMZ protected by local firewalls, ports may need to be opened on the existing firewall.
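The gateway-selection step can be sketched as follows. In practice the client would measure SYN/SYN-ACK RTTs itself; here the probe results are supplied directly, and the gateway names are invented:

```python
# Illustrative gateway selection: steer the dynamic tunnel toward the
# gateway with the lowest measured round-trip time. Names are placeholders.

def best_gateway(rtt_ms: dict) -> str:
    """Return the gateway with the lowest measured RTT in milliseconds."""
    if not rtt_ms:
        raise ValueError("no reachable gateways")
    return min(rtt_ms, key=rtt_ms.get)

probes = {"gw-dublin": 18.0, "gw-frankfurt": 29.5, "gw-nyc": 86.2}
print(best_gateway(probes))  # gw-dublin
```

Because tunnels are per-user and per-application, this selection can be re-run as conditions change, keeping the access path on the best-performing gateway.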


Future of Software-Defined Perimeter:

As the digital landscape evolves, secure network access becomes even more crucial. The future of SDP looks promising, with advancements in technologies like Artificial Intelligence and Machine Learning enabling more intelligent threat detection and mitigation.

In an era where data breaches are a constant threat, organizations must stay ahead of cybercriminals by adopting advanced security measures. Software Defined Perimeter offers a robust, scalable, and dynamic security framework that ensures secure access to critical resources.

By embracing SDP, organizations can significantly reduce their attack surface, enhance network performance, and protect sensitive data from unauthorized access. The time to leverage the power of Software Defined Perimeter is now.



SDP Network




In today’s interconnected world, where threats to data security are becoming increasingly sophisticated, businesses are constantly searching for ways to protect their sensitive information. Traditional network security measures are no longer sufficient in safeguarding against cyber attacks. Enter the Software Defined Perimeter (SDP), a revolutionary approach to network security that offers enhanced protection and control. In this blog post, we will delve into the concept of SDP and explore its various benefits.

Software Defined Perimeter, also known as a Zero Trust Network, is an architecture that focuses on secure access to resources based on the user’s identity and device. It moves away from the traditional method of relying on a static perimeter defense and instead adopts a dynamic approach that adapts to the ever-changing threat landscape.


Highlights: SDP Network

  • Creating a Zero Trust Environment

Software-Defined Perimeter is a security framework that shifts the focus from traditional perimeter-based network security to a more dynamic and user-centric approach. Instead of relying on a fixed network boundary, SDP creates a “Zero Trust” environment, where users and devices are authenticated and authorized individually before accessing network resources. This approach ensures that only trusted entities gain access to sensitive data, regardless of their location or network connection.

  • Zero trust framework

The zero-trust framework for networking and security is here for a good reason. There are various bad actors, ranging from the opportunistic and targeted to state-level, and all are well prepared to find ways to penetrate a hybrid network. As a result, there is now a compelling reason to implement the zero-trust model for networking and security.

An SDP network brings SDP security, also known as the software-defined perimeter, which is heavily promoted as a replacement for the virtual private network (VPN) and, in some cases, firewalls, thanks to its ease of use and end-user experience.

  • A dynamic tunnel of one

It also provides a solid SDP security framework utilizing a dynamic tunnel of one per app, per user. This offers segmentation at the micro level, providing a secure enclave for entities requesting network resources. These micro-perimeters and zero trust networks can be hardened with technologies such as SSL security and single packet authorization.


For pre-information, you may find the following useful:

  1. Remote Browser Isolation
  2. Zero Trust Network


SDP Network.

Key SDP Network Discussion points:

  • The role of SDP security. Authentication and Authorization.

  • SDP and the use of Certificates.

  • SDP and Private Key storage.

  • Public Key Infrastructure (PKI).

  • A final note on Certificates.


Back to basics with an SDP network

A software-defined perimeter is a security approach that controls resource access and forms a virtual boundary around networked resources. Think of an SDP network as a 1-to-1 mapping, unlike a VLAN, which can contain many hosts, each potentially of a different security level.

Also, with an SDP network, we create a security perimeter via software rather than hardware; an SDP can hide an organization’s infrastructure from outsiders, regardless of location. We now have a security architecture that is location-agnostic. As a result, employing SDP architectures will decrease the attack surface and mitigate internal and external bad actors. The SDP framework is based on the U.S. Department of Defense’s Defense Information Systems Agency (DISA) need-to-know model from 2007.


Benefits of Software-Defined Perimeter:

1. Enhanced Security: SDP provides an additional layer of security by ensuring that only authenticated and authorized users can access the network. By implementing granular access controls, SDP reduces the attack surface and minimizes the risk of unauthorized access, making it significantly harder for cybercriminals to breach the system.

2. Improved Flexibility: Traditional network architectures often struggle to accommodate the increasing number of devices and the demand for remote access. SDP enables businesses to scale their network infrastructure effortlessly, allowing seamless connectivity for employees, partners, and customers, regardless of location. This flexibility is particularly valuable in today’s remote work environment.

3. Simplified Network Management: SDP simplifies network management by centralizing access control policies. This centralized approach reduces complexity and streamlines granting and revoking access privileges. Additionally, SDP eliminates the need for VPNs and complex firewall rules, making network management more efficient and cost-effective.

4. Mitigated DDoS Attacks: Distributed Denial of Service (DDoS) attacks can cripple an organization’s network infrastructure, leading to significant downtime and financial losses. SDP mitigates the impact of DDoS attacks by dynamically rerouting traffic and preventing the attack from overwhelming the network. This proactive defense mechanism ensures that network resources remain available and accessible to legitimate users.

5. Compliance and Regulatory Requirements: Many industries are bound by strict regulatory requirements, such as healthcare (HIPAA) or finance (PCI-DSS). SDP helps organizations meet these requirements by providing a secure framework that ensures data privacy and protection. Implementing SDP can significantly simplify the compliance process and reduce the risk of non-compliance penalties.


Feature 1: Dynamic Access Control

One of the primary features of SDP is its ability to dynamically control access to network resources. Unlike traditional perimeter-based security models, which grant access based on static rules or IP addresses, SDP employs a more granular approach. It leverages context-awareness and user identity to dynamically allocate access rights, ensuring that only authorized users can access specific resources. This feature eliminates the risk of unauthorized access, making SDP an ideal solution for securing sensitive data and critical infrastructure.

Feature 2: Zero Trust Architecture

SDP embraces the concept of Zero Trust, a security paradigm that assumes no user or device can be trusted by default, regardless of their location within the network. With SDP, every request to access network resources is subject to authentication and authorization, regardless of whether the user is inside or outside the corporate network. By adopting a Zero Trust architecture, SDP eliminates the concept of a network perimeter and provides a more robust defense against both internal and external threats.

Feature 3: Application Layer Protection

Traditional security solutions often focus on securing the network perimeter, leaving application layers vulnerable to targeted attacks. SDP addresses this limitation by incorporating application layer protection as a core feature. By creating micro-segmented access controls at the application level, SDP ensures that only authenticated and authorized users can interact with specific applications or services. This approach significantly reduces the attack surface and enhances the overall security posture.

Feature 4: Scalability and Flexibility

SDP offers scalability and flexibility to accommodate the dynamic nature of modern business environments. Whether an organization needs to provide secure access to a handful of users or thousands of employees, SDP can scale accordingly. Additionally, SDP seamlessly integrates with existing infrastructure, allowing businesses to leverage their current investments without the need for a complete overhaul. This adaptability makes SDP a cost-effective solution with a low barrier to entry.


SDP Security

Authentication and Authorization

So when it comes to creating an SDP network and SDP security, what are the ways to authenticate and authorize?

Well, firstly, trust is the main element within an SDP network. Therefore, mechanisms that can associate themselves with authentication and authorization to trust at a device, user, or application level are necessary for zero-trust environments.

When something presents itself to a zero-trust network, it must go through several SDP security stages before access is granted. Essentially, the entire network is dark, meaning that resources drop all incoming traffic by default, providing an extremely secure posture. On this simple premise, a more secure, robust, and dynamic network of geographically dispersed services and clients can be built.


  • A key point: The difference between Authentication and Authorization.

Before we go any further, it’s essential to understand the difference between authentication and authorization. When we examine an end host in the zero-trust world, we have a device and a user forming an agent. Device and user authentication are carried out before agent formation.

The device is authenticated first and the user second. After these steps, authorization is performed against the agent. Authentication means confirming your identity, while authorization means granting access to the system.
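The ordering described above can be sketched in a few lines. The stores and identifiers are illustrative placeholders; a real deployment would validate X.509 certificates for the device and query a directory such as LDAP for the user:

```python
# Sketch of the zero-trust ordering: authenticate the device, then the user,
# form the agent from the pair, and only then authorize the agent.
# All stores and identifiers below are invented placeholders.

TRUSTED_DEVICES = {"laptop-7f3a"}      # e.g. validated certificate thumbprints
USER_DIRECTORY = {"bob": "hunter2"}    # e.g. backed by LDAP in practice
AGENT_GRANTS = {("laptop-7f3a", "bob"): {"intranet"}}

def authorize_agent(device_id: str, user: str, password: str, resource: str) -> bool:
    if device_id not in TRUSTED_DEVICES:        # 1. device authentication
        return False
    if USER_DIRECTORY.get(user) != password:    # 2. user authentication
        return False
    agent = (device_id, user)                   # 3. agent formation
    return resource in AGENT_GRANTS.get(agent, set())  # 4. authorization

print(authorize_agent("laptop-7f3a", "bob", "hunter2", "intranet"))  # True
print(authorize_agent("unknown-dev", "bob", "hunter2", "intranet"))  # False
```

Note that a valid user on an untrusted device is refused just as firmly as an invalid user, since the agent is never formed.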


The consensus among SDP network vendors

Generally, with most zero-trust and SDP VPN vendors, the agent is only formed once valid device and user authentication have been carried out. The authentication methods used to validate the device and the user can be separate. A device that needs to identify itself to the network can be authenticated with X.509 certificates.

A user can be authenticated by other means, such as against an LDAP server, if the zero trust solution has that as an integration point. The authentication methods for the device and the user don’t have to be tightly coupled, providing flexibility.

zero trust networks
Diagram: Zero trust networks. Some of the zero trust components are involved.


SDP Security with SDP Network: X.509 certificates

IP addresses are used for connectivity, not authentication, and don’t have any fields to implement authentication. The authentication must be handled higher up the stack. So we need to use something else to define identity, and that would be the use of certificates. X.509 certificates are a digital certificate standard that allows identity to be verified through a chain of trust and is commonly used to secure device authentication. X.509 certificates can carry a wealth of information within the standard fields that can fulfill the requirements to carry particular metadata.

To provide identity and bootstrap encrypted communications, X.509 certificates use two cryptographic keys, mathematically-related pairs consisting of public and private keys. The most common are RSA (Rivest–Shamir–Adleman) key pairs.

The private key is secret and held by the certificate’s owner, and the public key, as the names suggest, is not secret and distributed. The public key can encrypt the data that the private key can decrypt and vice versa. If the correct private key is not held, it is impossible to decrypt encrypted data using the public key.
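The public/private key relationship can be demonstrated with a textbook-sized RSA example using only the Python standard library. These tiny primes are for illustration only; real X.509 keys are 2048 bits or more:

```python
# Textbook RSA sketch of the key-pair property: what the public key
# encrypts, only the private key can decrypt. Toy-sized numbers only.

p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent (part of the public key)
d = pow(e, -1, phi)        # 2753, the private exponent (kept secret)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)

print(recovered == message)  # True
```

Without knowing d (which requires factoring n at real key sizes), the ciphertext cannot be recovered, which is exactly why protecting the private key, covered next, matters so much.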


SDP Security with SDP Network: Private key storage

Before we discuss the public key, let us examine how we secure the private key. If bad actors get their hands on the private key, it’s lights out for device authentication.

Once the device presents a signed certificate, one way to secure the private key would be to configure some access rights to the key. However, if a compromise occurs, we are left in the undesirable world of elevated access, exposing the unprotected key.

The best way to secure and store private device keys is to use crypto processors such as a trusted platform module (TPM). The cryptoprocessor is essentially a chip that is embedded in the device.

The private keys are bound to the hardware without being exposed to the system’s operating system, which is far more vulnerable to compromise than the actual hardware. The TPM binds the private key to the hardware, creating very robust device authentication.


SDP Security with SDP Network: Public Key Infrastructure (PKI)

How do we ensure that we have the correct public key? This is the role of the public key infrastructure (PKI). There are many types of PKI, with certificate authorities (CA) being the most popular. In cryptography, a certificate authority is an entity that issues digital certificates.

A certificate is a pointless piece of paper unless it is somehow trusted. This is done by digitally signing the certificate to endorse its validity. It is the responsibility of the certificate authority to ensure all details of the certificate are correct before signing it. PKI is a framework that defines a set of roles and responsibilities used to securely distribute and validate public keys in an untrusted network.

For this, a PKI leverages a registration authority (RA). You may wonder what the difference between an RA and a CA is. The RA interacts with the subscribers to provide CA services. The RA is subsumed by the CA, which takes total responsibility for all actions of the RA.

The registration authority accepts requests for digital certificates and authenticates the entity making the request. This binds the identity to the public key embedded in the certificate, cryptographically signed by the trusted 3rd party.


Not all certificate authorities are secure!

However, not all certificate authorities are bulletproof against attack. Back in 2011, DigiNotar suffered a security breach. The bad actor took complete control of all eight certificate-issuing servers and issued rogue certificates, some of which had not yet been identified. It is estimated that over 300,000 users had their private data exposed by rogue certificates.

Browsers immediately blacklisted DigiNotar’s certificates, but the incident highlights the risk of relying on a 3rd party. While Public Key Infrastructure backing X.509 certificates is used at large on the public internet, it’s not recommended for zero trust SDP. At the end of the day, you are still trusting a 3rd party with a critically important task. You should look to implement a private PKI system for a zero-trust approach to networking and security.

You could also implement a time-based one-time password (TOTP) if you are not looking for a fully automated process. This allows for human control over the signing of certificates. Remember that much trust must be placed in whoever is responsible for this step.



As businesses continue to face increasingly sophisticated cyber threats, the importance of implementing robust network security measures cannot be overstated. Software Defined Perimeter offers a comprehensive solution that addresses the limitations of traditional network architectures.

By adopting SDP, organizations can enhance their security posture, improve network flexibility, simplify management, mitigate DDoS attacks, and meet regulatory requirements. Embracing this innovative approach to network security can safeguard sensitive data and provide peace of mind in an ever-evolving digital landscape.

Organizations must adopt innovative security solutions as cyber threats evolve to protect their valuable assets. Software-Defined Perimeter offers a dynamic and user-centric approach to network security, providing enhanced protection against unauthorized access and data breaches.

With enhanced security, granular access control, simplified network architecture, scalability, and regulatory compliance, SDP is gaining traction as a trusted security framework in today’s complex cybersecurity landscape. Embracing SDP can help organizations stay one step ahead of the ever-evolving threat landscape and safeguard their critical data and resources.




Viptela Software Defined WAN (SD-WAN)



Viptela SD WAN

Why can’t enterprise networks scale like the Internet? What if you could virtualize the entire network?

Wide Area Network (WAN) connectivity models follow a hybrid approach, and companies may have multiple types – MPLS and the Internet. For example, branch A has remote access over the Internet, while branch B employs private MPLS connectivity. Internet and MPLS have distinct connectivity models, and different types of overlay exist for the Internet and MPLS-based networks.

The challenge is to combine these overlays automatically and provide a transport-agnostic overlay network. The data consumption model in enterprises is shifting: around 70% of data is now Internet-bound, and it is expensive to trombone traffic through defined DMZ points. Customers are looking for topological flexibility, causing a shift in security perimeters. Topological flexibility forces us to rethink WAN solutions for tomorrow’s networks and leads towards Viptela SD-WAN.


Before you proceed, you may find the following helpful:

  1. SD WAN Tutorial
  2. SD WAN Overlay
  3. SD WAN Security 
  4. WAN Virtualization
  5. SD-WAN Segmentation


Solution: Viptela SD WAN

Viptela created a new overlay network called the Secure Extensible Network (SEN) to address these challenges. For the first time, encryption is built into the solution, and security and routing are combined into one product. It enables you to span environments, anywhere-to-anywhere, in a secure deployment. This type of architecture is not possible with today’s traditional networking methods.

Founded in 2012, Viptela is a Virtual Private Network (VPN) company utilizing concepts of Software Defined Networking (SDN) to transform end-to-end network infrastructure. Based in San Jose, they are developing an SDN Wide Area Network (WAN) product offering any-to-any connectivity with features such as application-aware routing, service chaining, virtual Demilitarized Zone (DMZ), and weighted Equal Cost Multipath (ECMP) operating on different transports.

The key benefit of Viptela is its any-to-any connectivity offering. Previously, this kind of connectivity was found in Multiprotocol Label Switching (MPLS) networks, which work purely on a connectivity model, not a security framework, although they can influence traffic paths to and from security services.



Ubiquitous data plane

MPLS was attractive because it had a single control plane and a ubiquitous data plane. As long as you are in the MPLS network, a connection to anyone is possible, provided you have the correct Route Distinguisher (RD) and Route Target (RT) configurations. So why not take this model to the Wide Area Network and invent a technology that creates a similar model, offering ubiquitous connectivity regardless of transport type (Internet, MPLS)?
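To see how RD/RT configuration controls that any-to-any reachability, here is a minimal sketch of route-target filtering: a route is imported into a VRF only when its export RTs intersect the VRF's import RTs. The function and data names are illustrative assumptions, not any vendor API.

```python
# Sketch of MPLS VPN route-target filtering: a prefix is imported into a
# VRF only when its export RTs overlap the VRF's configured import RTs.
# Names and values are illustrative only.

def import_routes(advertised, import_rts):
    """Return the prefixes whose export RTs overlap the VRF's import RTs."""
    return [r["prefix"] for r in advertised
            if set(r["export_rts"]) & set(import_rts)]

advertised = [
    {"prefix": "10.1.0.0/16", "export_rts": {"65000:100"}},
    {"prefix": "10.2.0.0/16", "export_rts": {"65000:200"}},
]

# A VRF importing only 65000:100 sees only the first prefix.
print(import_routes(advertised, {"65000:100"}))  # → ['10.1.0.0/16']
```

With the right import/export RT pairs, any site can be made reachable from any other, which is the "ubiquitous data plane" property the text describes.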


Why Viptela SDN WAN?

The business today wants different types of connectivity models. When you map a service to business logic, the network/service topology is already laid out; it's defined, and services have to follow this topology. Viptela changes this concept by altering the data and control plane connectivity model, using SDN to create an SDN WAN technology.

SDN is all about taking intensive network algorithms out of the hardware. Traditionally, this computation lived in individual hardware devices, with control plane points sitting in the data path. As a result, control points may become congested (for example, when OSPF's maximum neighbor count is reached): customers lose capacity on the control plane front even though the data plane still has headroom. SDN moves this intensive computation to off-the-shelf servers. MPLS networks attempt the same concept with Route Reflector (RR) designs.

Providers started to move route reflectors off the data plane to compute the best-path algorithms. Route reflectors can be positioned anywhere in the network and do not have to sit on the data path. With a controller-based SDN approach, you are not embedding the control plane in the network; the controller is off the path. You can now scale out, while the SDN framework centrally provisions and pushes policy down to the data plane.

Viptela can take any circuit and provide the ubiquitous connectivity MPLS provided, but now it is policy-based with a central controller. Remote sites can use arbitrary transports: one leg could be the Internet, the other MPLS. As long as there is an IP path between an endpoint and the controller, Viptela can provide the ubiquitous data plane.


Viptela SD WAN and Secure Extensible Network (SEN)

Managed overlay network

If you look at the existing WAN, it has two parts: routing and security. Routing connects sites, and security secures transmission. The current model has too many network security and policy configuration points. SEN allows you to centralize control plane security and routing, resulting in data path fluidity. The controller takes care of routing and security decisions.

It passes the relevant information between endpoints. Endpoints can pop up anywhere in the network; all they have to do is set up a control channel to the central controller. This approach does not build excessive control channels, as the control channel runs between the controller and each endpoint, not from endpoint to endpoint. The data plane can then flow based on the policy held in the center of the network.
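The scaling benefit of controller-to-endpoint control channels can be shown with a back-of-the-envelope comparison, assuming n endpoints: one channel per endpoint with a controller, versus a full mesh without one. This is an illustrative sketch, not Viptela internals.

```python
# Back-of-the-envelope sketch: with a central controller, each endpoint
# needs a single control channel; endpoint-to-endpoint control would
# require a full mesh of channels. Illustrative only.

def controller_channels(n: int) -> int:
    return n  # one control channel per endpoint, terminating on the controller

def full_mesh_channels(n: int) -> int:
    return n * (n - 1) // 2  # every endpoint pairs with every other

for n in (10, 100, 1000):
    print(n, controller_channels(n), full_mesh_channels(n))
```

At 1,000 endpoints the full mesh needs 499,500 channels versus 1,000 with a controller, which is why the control plane can scale out while the data plane stays policy-driven.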



Viptela SD WAN: Deployment considerations

Deployment of separate data plane nodes at the customer site integrates with existing infrastructure at Layer 2 or Layer 3, so you can deploy incrementally, starting with one node and growing to thousands. It is scalable because it is based on routed technology. The model allows you to deploy, for example, a guest network first and then integrate further into your network over time. Internally, Viptela uses Border Gateway Protocol (BGP). On the data plane, it uses standard IPsec between endpoints. It also works across Network Address Translation (NAT), meaning IPsec over UDP.

When attackers gain access to your network, it is easy for them to establish a beachhead and hop from one segment to another. Viptela enables per-segment encryption, so even if they compromise one segment, they cannot jump to another. Key management at a global scale has always been a challenge; Viptela solves this with a proprietary distributed key manager based on a priority system. Currently, their key management solution is not open to the industry.
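Viptela's distributed key manager is proprietary, so the details are not public, but the idea of independent per-segment keys can be sketched with a standard HMAC-based derivation. The function and segment names here are assumptions for illustration, not Viptela's actual scheme.

```python
# Sketch of per-segment key derivation (HKDF-like): each segment gets an
# independent key derived from a master secret, so compromising one
# segment's key reveals nothing about another's. Illustrative only.
import hashlib
import hmac

def segment_key(master_key: bytes, segment_id: str) -> bytes:
    """Derive an independent encryption key for one segment."""
    return hmac.new(master_key, segment_id.encode(), hashlib.sha256).digest()

master = b"site-master-secret"  # placeholder secret for the sketch
guest = segment_key(master, "vpn-guest")
corp = segment_key(master, "vpn-corp")

# Per-segment keys differ, so traffic in one segment cannot decrypt another's.
assert guest != corp
```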


SDN controller

You have a controller and VPN termination points, i.e., data plane points. The controller is the central management piece that assigns policy. The data plane points are modules shipped to customer sites. The controller allows you to dictate different topologies for individual endpoint segments, similar to how you influence routing tables with RTs in MPLS.

The control plane is at the controller.


Data plane module

Data plane modules are located at the customer site. The module, which could sit at a PE hand-off, connects to the internal side of the network and must be in the data plane path at the customer site. On the internal side, it discovers the routing protocols and participates in prefix learning; at Layer 2, it discovers the VLANs. The module can either be the default gateway or form router neighbor relationships. On the WAN side, the data plane module registers its uplink IP address with the WAN controller/orchestration system, and the controller builds encrypted tunnels between the data plane endpoints. Encrypted control channels are only needed when you build over untrusted third parties.
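The WAN-side registration described above can be pictured as a toy controller that records each module's uplink IP and hands back the existing peers so tunnels can be built towards them. All class and site names are illustrative assumptions, not the Viptela API.

```python
# Toy sketch of WAN-side registration: each data plane module registers
# its uplink IP with the controller, which then knows every endpoint it
# must build encrypted tunnels between. Illustrative only.

class Controller:
    def __init__(self):
        self.endpoints = {}  # site name -> registered uplink IP

    def register(self, site: str, uplink_ip: str) -> dict:
        """Record a module's uplink IP; return the peers it should tunnel to."""
        peers = dict(self.endpoints)  # peers known before this registration
        self.endpoints[site] = uplink_ip
        return peers

ctrl = Controller()
ctrl.register("branch-a", "203.0.113.10")
peers = ctrl.register("branch-b", "198.51.100.20")
print(peers)  # → {'branch-a': '203.0.113.10'}
```

Because the controller holds the full endpoint registry, a new site only needs to reach the controller to learn every peer it should build a data plane tunnel to.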

If a problem occurs with controller connectivity, the on-site module can stop being the default gateway and continue to participate in Layer 3 forwarding with existing protocols; it backs off from being the primary router for off-net traffic. The model is like creating a VRF for each business with a default route per VRF and a single peering point to the controller, plus Policy-Based Routing (PBR) per VRF for data plane activity. The PBR is based on information coming from the controller, and each control segment can have a separate policy (for example, modifying the next hop). From a configuration point of view, you only need an IP address on the data plane module and the remote controller's IP; the controller pushes down the rest.
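The per-VRF policy model above can be sketched as a simple lookup: the module applies whatever next hop the controller pushed down for that VRF, and falls back to its local default when no policy exists. VRF names and addresses here are illustrative assumptions.

```python
# Sketch of controller-pushed per-VRF Policy-Based Routing: each VRF can
# carry its own next-hop policy, with a local default as fallback when
# controller policy is absent. Names and addresses are illustrative.

def next_hop(vrf: str, dest: str, controller_policy: dict, local_default: str) -> str:
    """Pick a next hop: controller-pushed per-VRF policy wins, else local default."""
    return controller_policy.get(vrf, {}).get(dest, local_default)

policy = {
    "vrf-guest": {"0.0.0.0/0": "192.0.2.1"},     # guest traffic via a DMZ hop
    "vrf-corp":  {"0.0.0.0/0": "198.51.100.1"},  # corp traffic via an MPLS PE
}

print(next_hop("vrf-guest", "0.0.0.0/0", policy, "10.0.0.1"))  # → 192.0.2.1
print(next_hop("vrf-lab", "0.0.0.0/0", policy, "10.0.0.1"))    # → 10.0.0.1
```

The fallback branch mirrors the behavior described above: if the controller is unreachable, forwarding continues with locally known routes.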


Viptela SD WAN: Use cases

For example, you have a branch office with three distinct segments, and you want each endpoint to have its own independent topology. The topology should be service-driven; the service should not have to follow an existing, predefined topology. Each business should define how it wants to connect to the network; the network team should not dictate a fixed topology that everyone must obey.

From a carrier’s perspective, they can expand their MPLS network into areas where they have no physical presence, bringing customers via this secure overlay to the closest PoP with an MPLS peering. If an MPLS provider has customers in region X and wants to connect a customer in region Y, they can use Viptela. Ideally, traffic from these data plane endpoints passes through a security framework before entering the MPLS network.

Viptela allows you to steer traffic based on the SLA requirements of the application, aka Application-Aware Routing. For example, if you have two sites with dual connectivity to MPLS and the Internet, the data plane modules at the customer sites can steer traffic over either transport based on end-to-end latency or drops. They do this by maintaining real-time loss, latency, and jitter characteristics and applying policies from the centralized controller. As a result, critical traffic is always steered to the most reliable link. This architecture can scale to 1,000 nodes in a full mesh topology.
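A minimal sketch of that SLA-based steering, assuming the per-transport probes have already measured loss, latency, and jitter: choose a transport that meets the application's SLA, preferring the lowest latency, and fall back to the best available path when nothing qualifies. Thresholds and names are illustrative assumptions, not Viptela defaults.

```python
# Minimal sketch of application-aware routing: prefer the lowest-latency
# transport that meets the app's SLA; if none qualifies, fall back to
# the lowest-latency path. Metrics and thresholds are illustrative.

def pick_transport(metrics: dict, sla: dict) -> str:
    qualifying = [t for t, m in metrics.items()
                  if m["loss"] <= sla["loss"]
                  and m["latency"] <= sla["latency"]
                  and m["jitter"] <= sla["jitter"]]
    candidates = qualifying or list(metrics)
    return min(candidates, key=lambda t: metrics[t]["latency"])

metrics = {
    "mpls":     {"loss": 0.1, "latency": 40, "jitter": 2},   # % loss, ms
    "internet": {"loss": 2.0, "latency": 25, "jitter": 10},
}
voice_sla = {"loss": 1.0, "latency": 150, "jitter": 5}

# The Internet leg is lower latency, but its loss and jitter break the
# voice SLA, so the critical traffic is steered onto MPLS.
print(pick_transport(metrics, voice_sla))  # → mpls
```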

