
ACUNETIX – Web Application Security


Hello, I created a tailored package for ACUNETIX. We split a number of standard blogs into smaller ones for SEO. There are many ways to improve web application security, so we covered quite a lot of bases in the package.

“So why is there a need for true multi-cloud capability? The upsurge of modern applications demands multi-cloud scenarios. Firstly, organizations require application portability among multiple cloud providers. Application uptime is a necessity, and IT organizations cannot rely on a single cloud provider to host their critical applications. Secondly, there is lock-in: IT organizations don’t want their applications locked into specific cloud frameworks. Hardware vendors have been doing this since the beginning of time, locking you into their specific life cycles. When a cloud environment is locked into one provider, you cannot easily move your application from one provider to another.

Thirdly, cost is one of the dominant factors. Clouds are not a cheap resource, and pricing models vary among providers, even for the same instance size and type. With a multi-cloud strategy in place, you are in a much better position to negotiate on price.”

“The World Wide Web (WWW) has transformed from serving simple static content to powering the dynamic world of today. The remodel has fundamentally changed the way we communicate and do business. Now we are experiencing another wave of innovation in these technologies. The cloud is becoming an even more diverse technology than its former framework. The cloud has entered its second decade of existence, which formulates and drives a new world of cloud computing and application security. After all, it has to overtake traditional IT by offering an on-demand, elastic environment. It largely affects how organizations operate and has become a critical component for new technologies.

The new shift in cloud technologies is the move to ‘multi-cloud designs’, which is a big game-changer for application security. Undoubtedly, multi-cloud will become a necessity in the future, but at this time it is far from a simple move. In fact, not many have started their multi-cloud journey, so few lessons have been learned, and this can expose your application stack to security risks. One alternative is to hire a professional web application company to develop and maintain the security of your new application within the cloud; opting for this route means having a dedicated IT specialist firm on hand should anything go awry.

Reference architecture guides are a great starting point; however, there are many unknowns when it comes to multi-cloud environments. To take advantage of these technologies, you need to move with application safety in mind. Applications don’t care which cloud technology they reside in. What is significant is that they need to be operational and hardened with appropriate security.”

“In the early 2000s, we had simple shell scripts created to take down a single web page. Usually, one attack signature was used from a single source IP address. This was known as a classic bot-based attack, which was effective in taking down a single web page. However, this type of threat needed a human to launch every single attack. For example, if you wanted to bring ten web applications to a halt, you would need to hit “enter” on the keyboard ten times.

We then encountered the introduction of simple scripts compiled with loops. Under this improved attack, instead of hitting the keyboard every time they wanted to bring down a web page, the bad actor would simply add a loop to the script. The attack still used only one source IP address and was known as the classic denial of service (DoS).

Thus, the cat-and-mouse game continued between web application developers and bad actors. Patches were quickly released. If you patched the web application and web servers in time, and as long as a good design was in place, you could prevent these types of known attacks.”

“The speed at which cybersecurity has evolved over the last decade has taken everyone by surprise. Different types of threats and methods of attack have been surfacing consistently, hitting web applications at an alarming rate. Unfortunately, the foundations of web application design were not laid with security in mind. Therefore, the dispersed design of web applications and web servers continues to pose challenges to security professionals.

If the correct security measures are not in place, the existing well-known threats that have been around for years will cause application downtime and data breaches. The prime concern here is: if security professionals are unable to protect against today’s web application attacks, how will they fortify against the unknown threats of tomorrow?

The challenges that we see today are compounded by the use of Artificial Intelligence (AI) by cybercriminals. Cybercriminals already have an extensive arsenal at their disposal, but to make matters worse, they now have the capability to combine their existing toolkits with the unknown power of AI.”

“The cloud API management plane is one of the most significant differences between traditional computing and cloud computing. It offers an interface, which is often public, to connect to cloud assets. In the past, we followed a box-by-box configuration mentality, where we configured physical hardware strung together with wires. Now, however, our infrastructure is controlled with application programming interface (API) calls.

The abstraction of virtualization is aided by the use of APIs, which are the underlying communication methods for assets within a cloud. As a result of this shift in management paradigm, compromising the management plane is like gaining unfiltered access to your data center, unless proper security controls down to the application level are in place.”

“As we delve deeper into the digital world of communication, from the perspective of privacy, the impact of personal data changes in proportion to the way we examine security. As organizations enter this world, the normal methods that were employed to protect data have become obsolete. This forces security professionals to shift their thinking from protecting the infrastructure to protecting the actual data. Also, the magnitude at which we are engaged in digital business makes traditional security tools outdated. Security teams must be equipped with real-time visibility to fathom what’s happening all the way up at the web application layer. It is a constant challenge to map all the connections we are building and the personal data that is spreading literally everywhere. This challenge must be addressed not just from the technical standpoint but also from the legal and legislative context.

With the arrival of the new General Data Protection Regulation (GDPR) legislation, security professionals must become data-centric. As a result, they can no longer rely on traditional practices to monitor and protect data, along with the web applications that act as a front door to users’ personal data. GDPR is just the beginning when it comes to data governance and has more far-reaching implications than one might think. It has been predicted that by the end of 2018, more than 50% of the organizations affected by GDPR will not be in full compliance with its requirements.”

“Cloud computing is the technology that enables organizations to build products and services for both internal and external usage. It is one of the exceptional shifts in the IT industry that many of us are likely to witness in our lifetimes. However, to align both business and operational goals, cloud security issues must be addressed by governance and not just treated as a technical issue. Essentially, the cloud combines resources such as central processing units (CPUs), memory, and hard drives and places them into a virtualized pool. Consumers of the cloud can access the virtualized pool of resources and allocate them as required. Upon completion of the task, the assets are released back into the pool for future use.

Cloud computing represents a shift from a server-based to a service-based approach, eventually offering significant gains to businesses. However, these gains are often eroded when the business’s valuable assets, such as web applications, become vulnerable to the plethora of cloud security threats, which are like a fly in the ointment.”

“Firewall Designs & the Evolving Security Paradigm

The firewall has weathered a number of design changes. Initially, we started with a single chunky physical firewall and prayed that it wouldn’t fail. We then moved to a variety of firewall design models, such as active-active and active-backup mode. The active-active design isn’t truly active-active due to certain limitations, while active-backup leaves one device, possibly quite an expensive one, sitting idle, waiting to take over in the event of primary firewall failover.

We now have the ability to put firewalls in containers. At the same time, some vendors claim they can cluster up to eight firewalls, creating one big active firewall. While these introductions are technically remarkable, they are complex as well. Any complexity involved in security makes for a volatile place to dock a critical business application.”

“Introduction

Internet Protocol (IP) networks provide services to customers and businesses across the globe. Everything and everyone is practically connected in some form or another. As a result, the stability and security of the network, and of the services that ride on top of IP, are of paramount importance for successful service delivery. The connected world banks on IP networks, and as that reliance mushrooms, so does the level of network and web application attacks. New technologies may offer services that simplify life and help businesses function more efficiently, but in certain scenarios they change the security paradigms, introducing oodles of complexity.

Alloying complexity with security is like stirring water into oil, which will eventually result in a crash. We operate in a world where we need multiple layers of security and updated security paradigms in order to meet the latest application requirements. The significant questions to ponder here are: can we trust the new security paradigms? Are we comfortable withdrawing from the traditional security model of well-defined component tiers? How does the new paradigm appear from a security auditor’s perspective?”

“Part One in this two-part series looked at the evolution of network architecture and how it affects security. Here we will take a deeper look at the security tools needed to deal with these changes.

The Firewall is not enough

Firewalls in three-tier or leaf-and-spine designs are not lacking features; that is not the actual problem. They are fully feature-rich. The problem is the management of firewall policies, which leaves the door wide open. This invites a bad actor to infiltrate the network and move laterally through it, searching for valuable assets to compromise on a silver platter. The central firewall is often referred to as a “holy cow” because it contains so many unknown configured policies that no one knows what they are all used for. Have you ever heard of a 20-year-old computer that is still pingable, but no one knows where it is or whether it has received any security patches in the last decade?

A poorly configured firewall, no matter how feature-rich, poses the exact same threat as a 20-year-old unpatched computer. It is nothing less than a fly in the ointment. Over the years, a physical firewall will have had many different security administrators. Security professionals change jobs every couple of years, and each year the number of configured policies on the firewall increases. When a security administrator leaves his or her post, the firewall policy stays configured but is often undocumented, even though the rule may not be active anymore. Therefore, we are left with central security devices holding thousands of rules that no one fully understands but that remain parked like deadwood.”

“The History of Network Architecture

The goal of any network and its underlying infrastructure is simple: to securely transport the end user’s traffic to support an application of some kind, without any packet drops that may trigger application performance problems. A key point to consider is that the metrics used to achieve this goal, and the design of the underlying infrastructure, come in many different forms. Therefore, it is crucial to tread carefully and fortify the many types of web applications under an umbrella of hardened security. Network design has evolved over the last 10 years to support new web application types and ever-changing connectivity models such as remote workers and Bring Your Own Device (BYOD).”

“Part 1 in this series looked at online security and the flawed protocols it rests upon. Online security is complex, and its underlying fabric was built without security in mind. Here we shall explore aspects of application security testing. We live in a world of complicated application architectures compounded by poor visibility, leaving the door wide open to compromise.

Web Applications Are Complex

The application has transformed from a single-server design to a multi-tiered architecture, which has rather opened Pandora’s box.

To complicate application security testing further, multi-tiered designs have both firewalling and load balancing between tiers, implemented with either virtualized or physical appliances. Containers and microservices introduce an entirely new wave of application complexity. Individual microservices require cross-communication, yet they are potentially located in geographically dispersed data centers.”

“A plethora of valuable solutions now run on web-based applications. One could argue that web applications are at the forefront of the world. More importantly, we must equip them with appropriate online security tools to guard against rising web vulnerabilities. With the right toolset at hand, any website can absorb known and unknown attacks. Today, the average volume of encrypted internet traffic is greater than the average volume of unencrypted traffic. Hypertext Transfer Protocol Secure (HTTPS) is good, but it’s not invulnerable. We saw evidence of its shortcomings in the Heartbleed bug, where the compromise of secret keys was made possible. Users may assume that because they see HTTPS in the web browser, the website is secure.”

WAN Design Requirements

Network Stretch


Network stretch refers to the capability of a network to extend its reach, connecting users and devices across geographical boundaries. This can be achieved through various technologies such as virtual private networks (VPNs), wide-area networks (WANs), or cloud-based networking solutions.

Network stretch goes beyond the traditional limitations of physical infrastructure and geographical boundaries. It refers to the ability of a network to expand, adapt, and connect diverse devices and systems across various locations. This flexibility allows for enhanced communication, collaboration, and access to resources.

Highlights: Network Stretch

Understanding Network Stretch Techniques

Network stretch techniques involve extending the boundaries of a network, enabling seamless communication across multiple locations, and enhancing connectivity beyond traditional limitations. Whether it’s through physical or virtual means, these techniques empower businesses to establish secure and reliable connections, regardless of geographic distances.

Organizations must adopt robust strategies tailored to their specific requirements to implement network stretch techniques successfully. Some key strategies include leveraging virtual private networks (VPNs), utilizing software-defined networking (SDN) solutions, and implementing hybrid cloud architectures. Each approach offers unique advantages, such as enhanced security, scalability, and flexibility.

Network routing forms the backbone of data transmission, guiding packets of information from source to destination. It involves selecting the most suitable path for data to travel through a network of interconnected devices. By efficiently navigating the network, data reaches its destination promptly, ensuring a smooth user experience.

Factors Influencing Network Path Selection

– Network Congestion: High network congestion can lead to data packet loss, delays, and poor quality of service. Routing algorithms consider network congestion levels to avoid congested paths and select alternative routes for optimal performance.

– Bandwidth Availability: Bandwidth availability along different network paths affects the speed and reliability of data transmission. Routing protocols consider the bandwidth capacity of various paths to choose the one that can efficiently handle the data volume.

– Latency and Delay: Reducing latency and minimizing delays are crucial for real-time applications such as video streaming, online gaming, and VoIP. Network routing algorithms consider latency measurements to choose paths with minimal delay, ensuring smooth and responsive user experiences.

Example: EIGRP and LFA

EIGRP LFA utilizes a pre-computed table, the topology table, which stores information about feasible successors, i.e., loop-free alternate paths. When a primary path fails, EIGRP refers to the topology table to quickly identify a backup path, avoiding potential loops.

EIGRP LFA offers numerous benefits, including reduced convergence time, improved network stability, and optimized resource utilization. It is particularly useful in environments where fast and reliable rerouting is critical, such as data centers, large enterprise networks, or service provider networks.
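On platforms that support EIGRP named mode, LFA-style fast reroute can be switched on per topology. A minimal IOS-XE-style sketch, assuming a hypothetical process name and autonomous system number:

    router eigrp STRETCH-LAB
     ! Named-mode instance; AS 100 is hypothetical
     address-family ipv4 unicast autonomous-system 100
      topology base
       ! Pre-compute a loop-free alternate for every prefix
       fast-reroute per-prefix all

With this in place, the router promotes a pre-computed backup path the moment the primary fails, rather than waiting for a full convergence cycle.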


Understanding BGP Route Reflection

BGP route reflection is a method that allows for efficient and scalable distribution of routing information within an Autonomous System (AS). It reduces the full mesh requirement between BGP speakers, providing a more streamlined approach for propagating routing updates.

One of the primary objectives of network redundancy is to ensure uninterrupted connectivity in the event of link or router failures. BGP route reflection plays a crucial role in achieving redundancy by allowing the distribution of routing information to multiple reflector routers. In case of a failure, the reflector routers can continue forwarding traffic to the remaining operational routers, ensuring seamless connectivity.
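A minimal IOS-style sketch of a route reflector, assuming a hypothetical AS number and peer addresses; the reflector simply marks its iBGP neighbors as clients, which removes the full-mesh requirement between them:

    router bgp 65000
     ! iBGP peers configured as route-reflector clients
     neighbor 10.0.0.2 remote-as 65000
     neighbor 10.0.0.2 route-reflector-client
     neighbor 10.0.0.3 remote-as 65000
     neighbor 10.0.0.3 route-reflector-client

The clients need no special configuration; they peer with the reflector as ordinary iBGP neighbors.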

Enhancing connectivity

One of the critical advantages of network stretch is enhanced connectivity. By extending the network to different locations, businesses can seamlessly connect their employees, customers, and partners, regardless of location. This improves collaboration and communication and enables organizations to tap into new markets and expand their customer base.

End users’ perception

Defining and engineering the optimal network path is critical to network architecture. The value of the network is most evident in the end users’ perception of application quality. Application quality, and the perception of quality, will vary from user to user.

For example, one user may view a 5-second interrupt to a voice call as acceptable, while another could classify this as unacceptable. To maintain a high-quality perception for all users, you must engineer a packet to reach its destination as quickly as possible. This is where the concept of “network stretch” comes into play. 

Software-defined networking (SDN)

Software-defined networking (SDN) is a crucial technology driving network stretch. SDN enables centralized control and management of network infrastructure, making it easier to scale and extend networks across multiple locations. By decoupling the network control plane from the underlying hardware, SDN offers greater flexibility, agility, and scalability, making it an ideal solution for network stretch.

Diagram: Software-Defined Networking (SDN). Source: Opennetworking.

Virtual private network (VPN) and GRE

Another critical technology is virtual private networks (VPNs), which provide secure and encrypted connections over public networks. VPNs play a crucial role in network stretch by enabling organizations to connect their various locations and remote workers securely. By utilizing VPNs, businesses can ensure that their data remains protected while allowing employees to access company resources anywhere in the world.

Example: GRE Configuration
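Since the lab guides in this post lean on Cisco-style configuration, here is a minimal point-to-point GRE tunnel sketch; the addresses and interface names are hypothetical:

    interface Tunnel0
     ! Overlay addressing; a /30 between the two tunnel endpoints
     ip address 172.16.0.1 255.255.255.252
     ! Outer header: local exit interface and remote peer
     tunnel source GigabitEthernet0/0
     tunnel destination 203.0.113.2

The peer router mirrors this configuration with its own tunnel address and the source and destination reversed.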

Google Data Centers

Understanding Network Tiers

Network tiers, in the context of Google Cloud, refer to different levels of network performance and cost. Google Cloud offers two tiers: Premium and Standard. Each tier is designed to cater to specific requirements and budgets. By understanding the characteristics of each tier, businesses can make informed decisions to align their network spend with their operational needs.

The Premium tier is Google Cloud’s highest-performing network tier. It offers low latency, high throughput, and extensive global coverage. This tier is ideal for businesses that require real-time data synchronization, such as streaming services, gaming platforms, and financial institutions. While the Premium tier comes at a higher cost than the Standard tier, its unmatched performance justifies the investment for businesses that prioritize speed and reliability.

The Standard tier strikes a balance between performance and cost-effectiveness. It provides reliable network connectivity for a wide range of use cases, including web applications, enterprise workloads, and database management systems. The Standard tier offers competitive pricing while maintaining reasonable latency and throughput. For businesses seeking a cost-efficient yet dependable network solution, the Standard tier is a compelling choice.

Understanding Google Cloud CDN

Google Cloud CDN is a global network of servers designed to deliver content with low latency and high availability. It caches static assets, such as images, videos, and documents, at strategically placed edge locations worldwide. When a user requests content from your website, Google Cloud CDN serves it from the nearest edge location, drastically reducing latency and improving overall performance.

a. Accelerated Content Delivery: By caching content at edge locations, Google Cloud CDN reduces the distance data must travel, resulting in faster content delivery. This translates to an improved user experience, decreased bounce rates, and increased conversions.

b. Scalability and Global Reach: With Google’s vast network infrastructure, Cloud CDN can effortlessly handle sudden spikes in traffic. Whether you have a local or global audience, Google Cloud CDN ensures consistent and reliable content delivery worldwide.

c. Cost Optimization: Google Cloud CDN optimizes cost by reducing bandwidth usage and offloading traffic from your origin server. With intelligent caching policies and efficient content delivery, you can save on infrastructure costs without compromising performance.

Understanding Google HA VPN

Google HA VPN is a highly scalable and fully managed service that allows you to securely connect your on-premises network to your Google Cloud Virtual Private Cloud (VPC) network. It provides a seamless and encrypted connection over the public internet, ensuring the confidentiality and integrity of your data. With HA VPN, you can establish a high-availability connection between your on-premises network and Google Cloud, enabling secure access to your cloud resources.

a. Enhanced Security: HA VPN utilizes robust encryption protocols, such as IPsec, to protect your data in transit. This ensures that your sensitive information remains secure from threats and unauthorized access.

b. Scalability: Google HA VPN is designed to handle high traffic volumes, allowing your network to grow seamlessly as your business expands. It provides ample bandwidth and can accommodate increased traffic demands without compromising performance.

c. Automated Failover: HA VPN offers built-in redundancy and failover capabilities, ensuring uninterrupted connectivity even during a network failure or outage. This feature guarantees high availability and minimizes downtime, enhancing your network’s reliability.

Related: For pre-information, you may find the following useful:

  1. Observability vs Monitoring
  2. Virtual Device Context
  3. Redundant Links
  4. SDN Data Center
  5. LISP Hybrid Cloud
  6. Ansible Architecture

Network Stretch

Understanding Stretch LAN

Stretch LAN, also known as Extended LAN or Stretched LAN, is an innovative networking approach that enables seamless connectivity across multiple geographical locations. Unlike traditional LANs, which are typically confined to a specific physical area, Stretch LAN extends the network coverage to distant places, creating a unified and expanded network infrastructure. This breakthrough technology has revolutionized how organizations establish and manage their networks, providing unprecedented flexibility and scalability.

Benefits of Stretch LAN

Enhanced Connectivity: Stretch LAN eliminates distance limitations, enabling seamless communication and data sharing across multiple locations. It promotes collaboration, improves productivity, and fosters a cohesive work environment even when teams are geographically dispersed.

Cost-Effective: By leveraging existing network infrastructure and extending it to new locations, Stretch LAN eliminates the need for costly hardware investments. This cost-effectiveness makes it attractive for businesses looking to expand their operations without breaking the bank.

Scalability and Flexibility: Stretch LAN offers unparalleled scalability, allowing organizations to add or remove locations as needed quickly. It provides the flexibility to accommodate evolving business needs, ensuring the network can grow alongside the organization.

Implementing Stretch LAN

Network Architecture: Implementing Stretch LAN requires careful planning and a well-designed network architecture. It involves deploying specialized equipment, such as stretch switches and routers, which facilitate the seamless extension of the LAN.

Configuration and Security: Proper configuration and security measures are essential to ensure the integrity and confidentiality of data transmitted across the Stretch LAN. Encryption protocols, firewalls, and robust access controls must be implemented to safeguard against potential threats.

Applications of Stretch LAN

Multi-Site Organizations: Stretch LAN is particularly advantageous for businesses with multiple locations, such as retail chains, educational institutions, or healthcare facilities. It provides a unified network infrastructure, enabling seamless site communication and resource sharing.

Disaster Recovery: Stretch LAN plays a crucial role in disaster recovery scenarios, where maintaining network connectivity is vital. By extending the LAN to a remote backup site, organizations can ensure uninterrupted access to critical data and applications, even in the event of a disaster at the primary location.

Lab Guide: Router on a stick configuration

A router on a stick is a networking setup where a single physical interface on a router is used to communicate with multiple VLANs (Virtual Local Area Networks). Instead of dedicating a separate port to each VLAN, a trunk port is utilized. This trunk port carries traffic from multiple VLANs to the router, where it is processed and forwarded accordingly. By leveraging this configuration, network administrators can effectively manage and control traffic flow within their network infrastructure.

Note: 

VLAN 10 and VLAN 20 are configured on the switch, and a single cable connects the router and switch. The router needs access to both VLANs, so the switch and router will share the same trunk!

We can create subinterfaces on the router and configure an IP address on each of these virtual interfaces.

Here are the IP addresses I assigned to my two sub-interfaces. The default gateway for computers in VLAN 10 will be 192.168.10.254, while the default gateway for computers in VLAN 20 will be 192.168.20.254.

Encapsulation dot1Q is an important command. Our router cannot tell on its own which VLAN belongs to which sub-interface, so we must use this command. Fa0/0.10 will belong to VLAN 10, and Fa0/0.20 will belong to VLAN 20.
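Putting the pieces together, a minimal sketch of the router side of this lab; the /24 masks are an assumption, since only the gateway addresses are given above:

    interface FastEthernet0/0
     ! Physical interface carries the 802.1Q trunk; no IP of its own
     no ip address
     no shutdown
    !
    interface FastEthernet0/0.10
     ! Sub-interface for VLAN 10; gateway for 192.168.10.0/24
     encapsulation dot1Q 10
     ip address 192.168.10.254 255.255.255.0
    !
    interface FastEthernet0/0.20
     ! Sub-interface for VLAN 20; gateway for 192.168.20.0/24
     encapsulation dot1Q 20
     ip address 192.168.20.254 255.255.255.0

On the switch side, the port facing the router is simply configured as a trunk carrying VLANs 10 and 20.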


To grasp the concept of the router on a stick, we must first delve into its fundamental principles. Essentially, a router on a stick involves using a single physical interface on a router to handle traffic between multiple VLANs. By utilizing subinterfaces and 802.1Q tagging, network administrators can achieve efficient inter-VLAN routing without requiring dedicated router interfaces for each VLAN.

Benefits and Use Cases

A router on a stick offers several advantages, making it an attractive option for various scenarios. First, it saves costs by reducing the number of physical router interfaces required. Second, it simplifies network management by centralizing routing configurations. This technique is beneficial in environments where VLANs are prevalent, such as educational institutions, large enterprises, or multi-tenant buildings.

Deploying Stretched VLANs/LAN Extensions

Migration of virtual machines to another data center is critical for virtual workload mobility. During and after migration, virtual machines and their applications must still be able to communicate and be identified on the network, so that services can continue to run.

Stretched VLANs, which span multiple physical data centers, are typically required for this to work. In multisite data center topologies, a Layer 3 WAN connects the locations. This is the most straightforward configuration, removing many complex considerations from the environment.

A native Layer 3 environment requires migrated devices to change their IP addresses to match the addressing scheme at the other site, or all resources on the VLAN subnet must be moved at once. This approach severely restricts the ability to move resources from one site to another and does not provide flexibility.

Therefore, it is necessary to implement stretched VLANs to facilitate live migration over distance since they can extend beyond a single site and enable resources to communicate as if they were local.

Diagram: Stretched VLAN. Source: VMware.

Overlay Networking

Overlay networking is a virtual network infrastructure that operates on top of an existing physical network. It allows for creating logical networks decoupled from the underlying hardware infrastructure. Organizations can achieve greater flexibility, scalability, and security by encapsulating data packets within a separate overlay network.

Benefits of Overlay Networking

Overlay networking offers a multitude of benefits for businesses. Firstly, it simplifies network management by enabling seamless integration of different network types, such as virtual private networks (VPNs) and software-defined networks (SDNs). Secondly, overlay networks empower organizations to scale their infrastructure effortlessly, as new devices and services can be added without disrupting the existing network. Lastly, overlay networking enhances security by isolating and encrypting traffic within the overlay, protecting sensitive data from unauthorized access.


Implementation of Overlay Networking

Implementing overlay networking requires a robust and flexible software-defined network controller. This controller manages the creation, configuration, and maintenance of overlay networks. The underlying physical network must also support the necessary protocols, such as Virtual Extensible LAN (VXLAN) or Generic Routing Encapsulation (GRE). Organizations can leverage these technologies to establish overlay networks across data centers, cloud environments, and geographically dispersed locations.
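As one concrete illustration, VXLAN’s original flood-and-learn multicast mode maps each VNI to a multicast group for broadcast, unknown-unicast, and multicast (BUM) traffic. A minimal NX-OS-style sketch, with hypothetical VLAN, VNI, and group values:

    feature nv overlay
    feature vn-segment-vlan-based
    !
    vlan 10
     ! Map local VLAN 10 to VXLAN Network Identifier (VNI) 10010
     vn-segment 10010
    !
    interface nve1
     no shutdown
     source-interface loopback0
     ! Flood-and-learn: BUM traffic for this VNI rides a multicast group
     member vni 10010 mcast-group 239.1.1.10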

Example: GRE over IPsec
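Where a GRE overlay crosses an untrusted transport, it is commonly wrapped in IPsec. A minimal IOS-style sketch, assuming a hypothetical pre-shared key and peer address, and reusing the Tunnel0 interface from the GRE example earlier:

    crypto isakmp policy 10
     encryption aes 256
     authentication pre-share
     group 14
    crypto isakmp key MySharedSecret address 203.0.113.2
    !
    crypto ipsec transform-set TSET esp-aes 256 esp-sha-hmac
     ! Transport mode suffices; GRE already supplies the tunnel header
     mode transport
    !
    crypto ipsec profile GRE-PROTECT
     set transform-set TSET
    !
    interface Tunnel0
     ! Encrypt everything entering the GRE tunnel
     tunnel protection ipsec profile GRE-PROTECT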

Network modularity. Different designs and approaches.

Layered hub-and-spoke topologies are more widely used because they provide better network convergence than ring topologies. What about building a full mesh of modules?

Although a full mesh design might work well for a network with a small set of modules, it does not have stellar scaling characteristics because it requires an additional (and increasingly more extensive) set of ports and links for each module added to the network. 

Additionally, full mesh designs don’t lend themselves to efficient policy implementation; each link between every pair of modules must have policy configured and managed, a job that can become demanding as the network expands.

Diagram: Network modularity. Source: Networkdirection.

The Value of Network Modularity

Modular network design is an approach to architecture that divides the entire network into small, independent units or modules. These modules can be connected to form a more extensive network, enabling organizations to create a custom network tailored to their specific needs. Organizations can customize their network using modular network design to meet performance and scalability requirements while providing a cost-effective solution.

The value of viewing stretch in a modular network is that changes affect only certain parts of the network, so you can design around the concept. A modular network separates the network into various functional modules, each targeting a specific place or purpose in the network.

This brings a lot of value from a security and performance perspective. In a leaf and spine data center design, you could consider a network module, a pod, or a group of pods. So, the stretched network concepts must first be addressed with a bird’s eye view in the network design.

Network Stretch and Route Path Selection

Network stretch is the difference between the best possible path and the actual path the traffic takes through the network. The concept of a stretched network relates to both Layers 2 and 3.

For instance, if the shortest actual path available is 2 hops, but the traffic follows a 3-hop path, the stretch is 1. An increase in network stretch always represents sub-optimal use of available resources. To fully understand the concept of network stretch, first, consider the basics of route path selection and route aggregation.
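Stated compactly, with $h$ denoting the hop count of a path:

$$\text{stretch} = h_{\text{actual}} - h_{\text{shortest}}$$

In the example above, 3 − 2 = 1.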

Diagram: The basics of routing: destination-based routing.

The diagram above illustrates the basics of routing. We have three routers in the network topology. Router 1 has two outbound connections—one to Router 2 and another to Router 3, each with different routing metrics. Router 1 to Router 2 costs 10, and Router 1 to Router 3 costs 20. Destination-based routing for the same prefix length always prefers the path with the lower cost, resulting in traffic following the path to Router 2.

Route path selection

One critical aspect of a router’s functionality is its ability to determine the most efficient route for these packets. This process, known as route path selection, ensures data is transmitted optimally and reliably.

Factors Influencing Route Path Selection:

1. Network Topology:

The underlying network topology significantly impacts the route path selection process. Routers have a routing table containing information about the available paths to different destinations. Based on this information, a router determines the best path to forward packets. Factors such as the number of hops, link capacity, and network congestion are considered to ensure efficient data transmission.

2. Administrative Distance:

Administrative distance is a metric routers use to determine the reliability of a particular routing protocol or route source. Each routing protocol or source is assigned a numerical value indicating its preference level. When multiple routing protocols or sources offer a route to the same destination, the router selects the one with the lowest administrative distance. For example, a router might prefer a directly connected network over a network learned through a dynamic routing protocol.

3. Routing Metrics:

Routing metrics are used to quantify the performance characteristics of a route. Different routing protocols utilize various metrics to determine the best path. Standard metrics include hop count, bandwidth, delay, reliability, and load. By analyzing these metrics, routers can select the most suitable path based on network requirements and priorities. In the lab below, take note of the metric assigned to the individual routes once the summary route has been configured on R1: a metric of 16 is assigned (in RIP, a metric of 16 means unreachable), so they are not used while the summary route is in place.

Example: RIP Configuration
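A minimal sketch of interface-level RIP summarization, assuming hypothetical prefixes; the single summary is advertised out of the interface while the more specific routes are suppressed:

    router rip
     version 2
     network 192.168.0.0
     no auto-summary
    !
    interface FastEthernet0/0
     ! Advertise one /22 instead of the contributing /24s
     ip summary-address rip 192.168.0.0 255.255.252.0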

Routing Algorithms:

1. Shortest Path First (SPF) Algorithm:

The SPF algorithm, also known as Dijkstra’s algorithm, is widely used for route path selection. It calculates the shortest path between the source and destination based on link costs. The algorithm maintains a routing table that stores the shortest path to each destination. By iteratively updating the routing table, routers can dynamically adapt to changes in the network topology.

2. Border Gateway Protocol (BGP):

BGP is a routing protocol used in large-scale networks like the Internet. Unlike interior routing protocols, BGP focuses on inter-domain routing. BGP routers exchange routing information to determine the best path for data transmission. BGP considers path length, AS (Autonomous System) path, and routing policies to select routes.

Route aggregation

Next, we have route aggregation. Route summarization, also known as route aggregation, is a method to minimize the number of entries in the routing tables of an IP network. It consolidates selected multiple routes into a single route advertisement, which serves two purposes in the network:

  1. Breaking the network into multiple failure domains and
  2. Reducing the amount of information the routing protocol must deal with when converging.

In our case, without route aggregation, Router 1 must install all individual routes, including their metrics, tags, and other information. The best path to a particular destination must be recalculated every time the topology changes.

Route aggregation is crucial in simplifying the routing process and optimizing network performance in networking. By consolidating multiple network routes into a single entry, route aggregation reduces the size of routing tables, improves scalability, and enhances overall network efficiency. In this blog post, we will explore the concept of route aggregation, its benefits, and its implementation in modern networking environments.


1st Lab guide: EIGRP Summary Address

In the following lab guide, we have a DMVPN network. R11 is the hub, and R31 and R41 are the spokes. We are running EIGRP over the DMVPN tunnel, which is an mGRE tunnel. EIGRP has been configured to send a summary route to the spoke sites.

Notice below in the screenshot that after the configuration, we have a Null0 route on the hub where the summarization was configured, and also, the spokes now only have one route, i.e., the summary route, in their routing tables.

Remember that when you have a hub-and-spoke topology and a distance vector protocol, we have issues with split horizon at the hub site. However, as we are sending a summary route from the hub, this is not an issue.
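A minimal sketch of the hub-side configuration, assuming a hypothetical AS number and summary prefix; the command sits on the mGRE tunnel interface facing the spokes:

    interface Tunnel0
     ! Send the spokes a single summary; EIGRP AS 100 is assumed
     ip summary-address eigrp 100 192.168.0.0 255.255.0.0

IOS automatically installs a matching discard route to Null0 on the hub, which is the Null0 entry visible in the screenshot.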

Diagram: EIGRP Summary Address.

What is Route Aggregation?

Route aggregation, also known as route summarization or supernetting, is a technique for consolidating multiple network routes into a more concise representation. Instead of advertising individual routes, network administrators can advertise a summarized route, which encompasses several smaller routes. This consolidation allows routers to make more efficient routing decisions, reducing the complexity of routing tables.

Benefits of Route Aggregation:

1. Reduced Routing Table Size: The primary advantage of route aggregation is the reduction in routing table size. By summarizing multiple routes into a single entry, the number of entries in the routing table drops significantly, leading to faster routing lookups and improved scalability.

2. Enhanced Network Efficiency: Smaller routing tables allow routers to process routing updates more quickly, improving network efficiency. The reduced size of routing tables also reduces memory and processing requirements, enabling routers to handle higher traffic loads without performance degradation.

3. Improved Convergence: Route aggregation helps to improve network convergence, which refers to the time it takes for routers to reach a consistent view of the network topology after a change occurs. Consolidating routes expedites the convergence process, as routers have fewer individual routes to process and update.

4. Enhanced Security: Using route aggregation, network administrators can help protect network resources by concealing internal network details. By advertising summarized routes instead of specific routes, potential attackers find it more challenging to gain insight into the network’s internal structure.

Implementation of Route Aggregation:

Route aggregation can be implemented using various routing protocols, such as Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols allow network administrators to configure route summarization at specific points within the network, optimizing routing efficiency.

When implementing route aggregation, it is essential to balance aggressive summarization against maintaining the necessary level of network granularity. Over-aggregation can lead to suboptimal routing decisions and potential connectivity issues. Network administrators must carefully design and configure route aggregation to ensure optimal performance.

Route Aggregation: A networking technique

Route aggregation is a networking technique that reduces the number of routes in a routing table. It is based on summarizing multiple IP addresses into a single IP address prefix. The method reduces the size of routing tables, thereby reducing the memory and bandwidth required for network communication.

Route aggregation, also known as route summarization or supernetting, groups multiple IP addresses into a single IP address prefix by selecting the bit pattern common to those addresses and advertising it as one shorter prefix. For example, 192.168.0.0/24 through 192.168.3.0/24 share their first 22 bits and can therefore be summarized as 192.168.0.0/22. This reduces the number of routes, reducing the router’s total memory and bandwidth requirements.

Route aggregation can be used in both interior and exterior routing protocols. In interior protocols, the router can use route aggregation to reduce the number of routes in the routing table, thus reducing the total memory and bandwidth requirements.

In exterior protocols, route aggregation can reduce the number of routes sent to other network routers. This reduces the overall network traffic and the time it takes for the routing information to be propagated throughout the network.

Route aggregation and performance problems

Maintaining many routes can cause performance problems, especially if the network has a high rate of state change. Whenever the network topology changes, the router’s control plane must go through the convergence process steps (detect, describe, switch, find) and recalculate the best path to the affected destinations. If the rate of change is faster than the control plane can calculate new best paths, the network will never converge. One method used to overcome this is route aggregation.

Route aggregation creates separate failure domains and boundaries in the network. Routing nodes on different sides of the boundary will not query each other. It is essentially slicing the network. In addition, if a specific link frequently alternates between Up and Down states, the links uninvolved in the route summarization will not be affected. This prevents route flapping and improves network stability.

Route aggregation example:

So, in summary, route aggregation lets you take several specific routes and combine them into one inclusive route. As a result, route aggregation can reduce the number of routes a given protocol advertises, because the aggregates are activated by their contributing routes. Routing protocols have different route aggregation methods; consider the one used in OSPF. When an ABR sends routing information to other areas, it originates Type 3 LSAs for each network segment.

If any contiguous segments exist in an area, the abr-summary command summarizes these segments into one. The ABR then sends just one summarized LSA to other areas, and no LSAs belonging to the summarized network segments specified by this command. Therefore, the routing table size is reduced, and router performance is improved.
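The abr-summary command above is platform-specific; on Cisco IOS, the equivalent is the area range command under the OSPF process. A minimal sketch with hypothetical area and prefix values:

    router ospf 1
     ! Summarize contiguous area 1 segments into one Type 3 LSA
     area 1 range 192.168.0.0 255.255.252.0

The critical point in the diagram below is the two separate failure domains: failure domain A and failure domain B.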

Diagram: Route aggregation.

State versus stretch

Summarization has benefits and drawbacks, in that packets can follow a less optimal path to reach their destination. When you summarize at the edge of the network, the receiving router loses complete network visibility, which can increase network stretch in some cases. What happens to traffic entering Router 1 and traveling to destination 192.168.1.1/24?

Diagram: The issues of route summarization.

Loss of visibility and state results in suboptimal traffic flow

Without aggregation on Router 3, this traffic would flow Router 1 – Router 3 – Router 6. However, with route aggregation configured on both Router 2 and Router 3, this traffic will take the path with the better apparent cost, Router 1 – Router 2 – Router 3 – Router 6, adding one hop. As a result, the path from Router 1 to the destination 192.168.1.1/24 has stretched by one hop – the stretch of the network has increased by 1.

Understanding Suboptimal Traffic Flow:

Suboptimal traffic flow occurs when data packets transmitted through routers take longer than necessary to reach their destination. This issue arises from the complex nature of router operations, congestion, and routing protocols. Simply put, the path the data packets take is inefficient, resulting in delays, packet loss, and degraded network performance.

    • Causes of Suboptimal Traffic Flow:

Several factors contribute to routers’ suboptimal traffic flow. One significant factor is the inefficient routing algorithms employed by routers. These algorithms determine the best path for data packets to travel through a network. However, due to limitations in these algorithms, they may choose suboptimal paths, such as congested or longer routes, resulting in delays.

Another cause of suboptimal traffic flow is network congestion. Congestion occurs when multiple devices are connected to a router and the data traffic exceeds its capacity. This congestion leads to packet loss, increased latency, and inefficient traffic flow.

    • Impact on Online Experiences:

The suboptimal traffic flow in routers can significantly impact our online experiences. Slow-loading web pages, buffering videos, and laggy online gaming sessions are just a few examples. Beyond these inconveniences, businesses relying on efficient data transfer may suffer from decreased productivity and customer dissatisfaction. It is, therefore, crucial to address this issue to ensure a seamless online experience for all users.

    • Solutions to Improve Traffic Flow:

Several approaches can improve suboptimal traffic flow in routers. One solution is investing in routers with advanced algorithms that optimize the path-selection process. These algorithms can consider network congestion, latency, and packet loss to choose the most efficient route for data packets.

Additionally, implementing Quality of Service (QoS) techniques can help prioritize critical traffic, ensuring that it receives higher bandwidth and lower latency. By giving priority to time-sensitive applications such as video streaming or VoIP calls, QoS can significantly improve the overall traffic flow.

Regular router maintenance and firmware updates are also crucial to maintaining optimal traffic flow. By keeping the router’s software current, manufacturers can address any known issues and improve the device’s overall performance and efficiency.

    • Network Performance and CDN

Moreover, network performance can be impacted when the network is stretched over long distances. Latency and bandwidth limitations can affect the user experience, particularly for applications that require real-time data transmission. To overcome these challenges, businesses must carefully design their network architecture, leveraging technologies like content delivery networks (CDNs) and edge computing.

    • State reduction (blocking links) comes at the cost of increased stretch.

Consider the example of the Spanning Tree Protocol regarding state/stretch trade-offs. A spanning tree works by selecting one switch as the root of the tree and selecting specific links within the tree structure that lead toward the root. This reduces state to an absolute minimum by forcing all traffic along a single tree and blocking redundant links that don’t belong to that tree. However, this state reduction (blocking links) increases the stretch through the network to the maximum possible.

This has led to the introduction of TRILL and Cisco’s FabricPath. These technologies allow you to have active/active paths, thereby increasing the state of the network while decreasing the stretch. Looking at the data center transition, the default way to create scalable designs for Layers 2 and 3 is to use an overlay, such as VXLAN. Layer 2 and 3 traffic is differentiated by the VNI in the VXLAN header. All of this operates over a routed Layer 3 underlay.

Diagram: VXLAN benefits: scale and loop-free networks.

A closing point on the stretch network

You can’t hide state information for free: doing so decreases the network’s overall efficiency by increasing the stretch. However, if all your traffic flows north/south, reducing state will not impact the stretch, as the traffic can only follow one direction. But if you have a combination of traffic patterns (north/south and east/west), reducing state will cause traffic to take a sub-optimal path through the network – thus increasing the stretch.

Summary: Network Stretch

In this fast-paced digital age, the concept of network stretch has emerged as a game-changer. Network stretch refers to expanding and optimizing networks to meet the increasing demands of connectivity. This blog post explored the various aspects of network stretch and how it can revolutionize how we connect and communicate.

Understanding Network Stretch

Network stretch is more than expanding physical infrastructure. It involves leveraging advanced technologies, such as software-defined networking (SDN) and network function virtualization (NFV), to enhance network capabilities. By embracing network stretch, organizations can achieve scalability, flexibility, and improved performance.

The Benefits of Network Stretch

Network stretch offers a myriad of benefits. Firstly, it enables seamless connectivity across various locations, allowing businesses to expand their reach without compromising network performance. Secondly, it enhances disaster recovery capabilities by creating redundant pathways and ensuring business continuity. Lastly, network stretch empowers organizations to adopt cloud-based services and leverage the power of the Internet of Things (IoT).

Implementing Network Stretch Strategies

Implementing network stretch requires careful planning and execution. Organizations need to assess their current network infrastructure, identify areas for improvement, and leverage technologies like SDN and NFV. Working with experienced network providers can also help design and deploy robust network stretch solutions tailored to specific business needs.

Overcoming Challenges

While network stretch offers immense potential, it comes with its challenges. Ensuring security across stretched networks becomes paramount, as it involves a broader attack surface. Proper encryption, authentication protocols, and network segmentation are crucial to mitigate risks. Additionally, organizations must address potential latency issues and ensure seamless integration with existing network infrastructure.

Conclusion:

In conclusion, network stretch presents a remarkable opportunity for organizations to unlock new connectivity, scalability, and performance levels. By embracing advanced technologies and implementing sound strategies, businesses can revolutionize their networks and stay ahead in the digital era. Whether expanding geographical reach, improving disaster recovery capabilities, or embracing emerging technologies, network stretch is the key to a more connected future.