Internet of things

Internet of Things Access Technologies

In today's interconnected world, the Internet of Things (IoT) has become a significant part of our lives. From smart homes to industrial automation, IoT devices rely on access technologies to establish connectivity. This blog post aims to delve into the world of IoT access technologies, understanding their importance, and exploring the various options available.

IoT access technologies serve as the foundation for connecting devices to the internet. These technologies enable seamless communication between IoT devices and the cloud, facilitating data transfer and control. They can be categorized into wired and wireless access technologies.

Wired Access Technologies: Wired access technologies offer reliable and secure connectivity for IoT devices. Ethernet, for instance, has been a longstanding wired technology used in both industrial and residential settings. Power over Ethernet (PoE) further enhances the capabilities of Ethernet by providing power alongside data transmission. This section will explore the advantages and considerations of wired access technologies.

Wireless Access Technologies: Wireless access technologies have gained immense popularity due to their flexibility and scalability. Wi-Fi, Bluetooth, and Zigbee are among the most widely used wireless technologies in the IoT landscape. Each technology has its own strengths and limitations, making them suitable for different IoT use cases. This section will discuss the key features and applications of these wireless access technologies.

Cellular Connectivity for IoT: As IoT devices continue to expand, cellular connectivity has emerged as a critical access technology. Cellular networks provide wide coverage, making them ideal for remote and mobile IoT deployments. This section will explore cellular technologies such as 2G, 3G, 4G, and the emerging 5G, highlighting their capabilities and potential impact on IoT applications.

Security Considerations: With the proliferation of IoT devices, security becomes a paramount concern. This section will shed light on the security challenges associated with IoT access technologies and discuss measures to ensure secure and robust connectivity for IoT devices. Topics such as authentication, encryption, and device management will be explored.

Conclusion: In conclusion, IoT access technologies play a vital role in enabling seamless connectivity for IoT devices. Whether through wired or wireless options, these technologies bridge the gap between devices and the internet, facilitating efficient data exchange and control. As the IoT landscape continues to evolve, staying updated with the latest access technologies and implementing robust security measures will be key to harnessing the full potential of the Internet of Things.

Highlights: Internet of Things Access Technologies

A practitioner in the tech industry should be familiar with the term Internet of Things (IoT). IoT continues to grow as industries rely ever more on the internet, and even though we often don’t realize it, IoT is everywhere.

The Internet of Things plays an important role in the development of smart cities through implementations such as real-time sensor data retrieval and automated task execution. Its impact on the urban landscape is increasingly evident: systems equipped with numerous sensors take action when specific thresholds are reached.

The Internet of Things, a term coined by computer scientist Kevin Ashton in 1999, is a network of connected physical objects that send and receive data. It ranges from everyday objects, such as smart fridges and mobile phones, to smart agriculture and smart cities, deployments that span entire towns and industries.

IoT has revolutionized how we live and interact with our environment in just a few years. Households can benefit from IoT in the following ways:

  • Convenience: Devices can be programmed and controlled remotely. Think about being able to adjust your home’s lighting or heating even when you’re miles away using your smartphone.
  • Efficiency: Smart thermostats and lighting systems can optimize operations based on your usage patterns, saving energy and lowering utility bills.
  • Security: IoT-enabled security systems and cameras can alert homeowners to potential breaches, and smart locks can grant access to recognized users.
  • Health monitoring: Smart wearables can track health metrics and provide real-time feedback, potentially alerting users or medical professionals to concerning changes.
  • Improved user experience: Devices can learn and adapt to users’ preferences, ensuring tailored and improved interactions over time.

Wi-Fi:

One of the most widely used IoT access technologies is Wi-Fi. With high data transfer speeds and widespread availability, Wi-Fi enables devices to connect effortlessly to the internet. Wi-Fi allows for convenient and reliable connectivity, whether controlling your thermostat remotely or monitoring security cameras. However, its range limitations and power consumption can be challenging in specific IoT applications.

Cellular Networks:

Cellular networks, such as 3G, 4G, and now the emerging 5G technology, play a vital role in IoT connectivity. These networks offer broad coverage areas, making them ideal for IoT deployments in remote or rural areas. With the advent of 5G, IoT devices can now benefit from ultra-low latency and high bandwidth, paving the way for real-time applications like autonomous vehicles and remote robotic surgery.

Bluetooth:

Bluetooth technology has long been synonymous with wireless audio streaming, but it also plays a significant role in IoT connectivity. Bluetooth Low Energy (BLE) is designed for IoT devices, offering low power consumption and short-range communication. This makes it perfect for applications like wearable devices, healthcare monitoring, and intelligent home automation, where battery life and proximity are crucial.

Zigbee:

Zigbee is a low-power wireless communication standard designed for IoT devices. It operates on the IEEE 802.15.4 standard and offers low data rates and long battery life. Zigbee is commonly used for home automation systems, such as smart lighting, temperature control, and security systems. Its mesh networking capabilities allow devices to form a network and communicate with each other, extending the overall range and reliability.

LoRaWAN:

LoRaWAN (Long Range Wide Area Network) is a low-power, wide-area network technology for long-range communication. It enables IoT devices to transmit data over long distances, making it suitable for applications like smart agriculture, asset tracking, and environmental monitoring. LoRaWAN operates on unlicensed frequency bands, enabling cost-effective and scalable IoT deployments.


Key IoT Access Technologies Discussion points:

  • Introduction to data and analytics.
  • Fog, edge, and cloud computing.
  • Access technology types.
  • Comments on TCP and UDP with IoT.
  • A final note on ethical challenges.

Back to Basics: Internet of Things (IoT).

The Internet of Things consists of a network of devices, including a range of digital and mechanical objects, each able to transfer information over a network. The word “thing” can also represent an individual with a heart-monitor implant or even a pet with an IoT-based collar.

The term “thing” reflects the extension of the “internet” to devices that were previously disconnected. For example, the alarm clock was never meant to be internet-enabled, but now you can connect it to the Internet. With IoT, the options are endless.

Then, we have IoT access technologies. The three major categories of network access technology for IoT connectivity are standard wireless access (Wi-Fi, 2G, 3G, and standard LTE); private long range (LoRa-based platforms, Zigbee, and Sigfox); and mobile IoT technologies (LTE-M, NB-IoT, and EC-GSM-IoT).

IoT Access Technologies

The Internet that began in the 1960s looks entirely different from what we have today. It is no longer a luxury but a necessity. It started with basic messaging, grew to hold the elasticity and dynamic nature of the cloud, and has since made a significant technological shift into the world of the Internet of Things (IoT). It is no longer about buying, selling, and connecting computers; it is about data, analytics, and new solutions such as event stream processing.

Internet of Things Access Technologies: A New World

A World with the Right Connections

IoT represents a new world where previously unconnected devices have new communication paths and reachability points. This marks IoT as the next evolutionary phase of the Internet, building better customer solutions.

This evolutionary phase is not just technical; ethical challenges now face organizations, society, and governments. In the past, computers relied on input from humans: we entered keystrokes, and the machine performed an action based on that input.

The computer had no sensors that could automatically detect the world around it and act accordingly. IoT ties these functions together. The function could be behavioral, making the object carry out a particular task or provide other information. IoT brings a human element to technology, connecting the physical world to logic.

It’s all about data and analytics.

The Internet of Things is not just about connectivity. The power of IoT comes from how all these objects are connected and the analytics they provide. New analytics lead to new use cases that will lead to revolutionary ideas enhancing our lives. Sensors are not just put on machines and objects but also on living things. Have you heard of the connected cow? Maybe we should start calling this “cow computing” instead of “cloud computing.” For example, an agricultural firm called Chitale Group uses IoT technologies to keep tabs on herds to monitor their thirst.

These solutions will form a new kind of culture, intertwining various connected objects and dramatically shifting how we interact with our surroundings. This connectivity will undoubtedly shape how we live and lay the base of a new culture of connectivity and communication.

 

IoT Access Technologies: Data management – Edge, fog, and cloud computing

In the traditional world of IT networks, data management is straightforward: it is based on IP with a client/server model, and the data sits in a central location. IoT brings new data management concepts such as edge, fog, and cloud computing. However, the sheer scale of IoT data management presents many challenges. The low bandwidth of the last mile leads to high latency, so new IoT architectural concepts are needed, such as fog computing, where you analyze data close to where it is generated.

As the cloud is in the sky, fog is on the ground: fog computing is best suited for constrained networks that need contextual awareness and quick reaction. Edge computing is another term, where the processing is carried out at the furthest point, the IoT device itself. Edge computing is often called mist computing.
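
The division of labor can be sketched in a few lines of Python. Everything here is an assumption for illustration: the threshold, the sample values, and the idea that an edge node forwards only threshold-crossing readings upstream rather than streaming every sample to the cloud.

```python
# Sketch of edge-side filtering. The threshold and sample values are
# assumptions for illustration: the edge node evaluates every reading
# locally and forwards only threshold-crossing events to the cloud.
THRESHOLD_C = 30.0  # assumed alert threshold in degrees Celsius

def filter_at_edge(readings):
    """Return only the readings an edge node would send upstream."""
    return [r for r in readings if r > THRESHOLD_C]

samples = [21.5, 29.9, 30.2, 33.1, 25.0]
upstream = filter_at_edge(samples)
print(f"{len(upstream)} of {len(samples)} samples sent upstream: {upstream}")
```

The interesting traffic crosses the last mile; the routine samples never leave the edge, which is exactly the bandwidth and latency saving fog and edge architectures are after.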

Diagram: IoT Access Technologies. Source: Cisco.

Cloud computing is not everything.

IoT highlights the concept that “cloud computing is not everything.” IoT will require on-site data processing; some data must be analyzed in real time. This form of edge computing is essential when near-real-time results are needed and there isn’t time to send visual, speed, and location information to the cloud for instructions on what to do next. For example, if a dog runs out in front of your car, the application does not have time for several round trips to the cloud.

Cloud services have also had a rough few years. After the iCloud data breach in 2014, businesses worried about how secure their data would be in the many cloud-based services available. However, with the rise of cloud security solutions, many businesses are starting to see the benefits of cloud technology, as data security no longer has to be the overriding concern.

Internet is prone to latency.

The Internet is prone to latency, and we cannot fix it unless we shorten the links or change the speed of light. Connected cars require the capability to “think” and make decisions on the ground without additional hops to the cloud.

Fog computing is a distributed computing infrastructure located between the edge of a network and the cloud, designed to address the latency and bandwidth constraints introduced by traditional cloud computing. Rather than relying on a single, centralized data center to store and process data, fog computing decentralizes the computing process, enabling data to be processed closer to the network’s edge.

Fog computing

Fog computing meets the demands of emerging paradigms better than cloud computing. However, batch processing is still preferred for high-end jobs in the business world, so fog can only partially replace the cloud. In short, fog computing and cloud computing complement one another, each with its own advantages and disadvantages, and edge computing is equally crucial in the Internet of Things (IoT).

Security, confidentiality, and system reliability remain open research topics for the fog computing platform. Cloud computing will continue to meet the needs of the business community with its lower cost and utility pricing model. In contrast, fog computing will grow by serving the emerging network paradigms that require faster processing with less delay and delay jitter.

Internet of Things Access Technologies: Architectural Standpoint

From an architectural point of view, one must determine the technologies that allow these “things” to communicate with each other, and the technologies chosen depend on how the object is classified. IT network architecture has matured over the last decade, but IoT architectures bring new paradigms and a fresh approach. For example, traditional security designs assume physical devices arranged in well-defined modules.

New technologies such as VM NIC firewalls and other distributed firewalls have already dissolved the perimeter, but IoT takes this to a new level: dispersed sensors now sit outside the protected network entirely.

When evaluating the type of network needed to connect IoT smart objects, one must address transmission range, frequency bands, power consumption, topology, and constrained devices and networks. The technologies used for these topologies include IEEE 802.15.4, IEEE 802.15.4g and IEEE 802.15.4e, IEEE 1901.2a, IEEE 802.11ah, LoRaWAN, and NB-IoT; the majority are wireless. Like IT networks, IoT networks follow a layered model: Layer 1 (PHY), Layer 2 (MAC), Layer 3 (IP), and so on. Some of these layers require optimizations to support IoT smart objects.
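
To make those evaluation criteria concrete, the sketch below shortlists a few access technologies against a range floor and a power ceiling. The figures are rough, illustrative assumptions for the example, not vendor specifications.

```python
# Illustrative comparison of IoT access technologies. The figures are rough,
# assumed values for the sake of the sketch, not vendor specifications.
TECHNOLOGIES = {
    # name: (approx. range in km, relative power draw, approx. data rate kbps)
    "Wi-Fi":   (0.05, "high", 54_000),
    "BLE":     (0.05, "low",  1_000),
    "Zigbee":  (0.10, "low",  250),
    "LoRaWAN": (10.0, "low",  50),
    "NB-IoT":  (10.0, "low",  60),
    "LTE":     (10.0, "high", 10_000),
}

def shortlist(min_range_km, max_power):
    """Return technologies that meet a range floor and a power ceiling."""
    rank = {"low": 0, "high": 1}
    return sorted(
        name
        for name, (rng, power, _rate) in TECHNOLOGIES.items()
        if rng >= min_range_km and rank[power] <= rank[max_power]
    )

# A battery-powered sensor several kilometres from its gateway:
print(shortlist(min_range_km=5, max_power="low"))
```

Note the trade-off the table encodes: the long-range, low-power options survive the filter only by giving up data rate, which is exactly why LPWAN technologies target small, infrequent payloads.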

IP/TCP/UDP in an IoT world

The Internet Protocol (IP) is an integral part of IoT because of its versatility in dealing with the large array of Layer 1 and Layer 2 variations found in last-mile IoT access technologies. IP connects billions of networks, comes with a well-understood knowledge base, and everyone knows how to troubleshoot it.

It has proven robust and scalable, providing a solid framework for bidirectional or unidirectional communication between IoT devices. Sometimes, though, the full IP stack may not be necessary, as protocol overhead may exceed the device’s data.
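
Simple arithmetic shows why. The header sizes below are the protocol minimums (a 40-byte IPv6 header, an 8-byte UDP header, a 20-byte TCP header with no options); the 4-byte payload is an assumed sensor reading.

```python
# Header overhead versus payload for a tiny IoT transaction. Header sizes
# are the protocol minimums; the 4-byte payload is an assumed sensor reading.
IPV6_HEADER = 40   # bytes, fixed IPv6 header
UDP_HEADER = 8     # bytes
TCP_HEADER = 20    # bytes, minimum with no options
PAYLOAD = 4        # e.g. one 32-bit temperature value

udp_packet = IPV6_HEADER + UDP_HEADER + PAYLOAD
tcp_packet = IPV6_HEADER + TCP_HEADER + PAYLOAD

print(f"UDP packet: {udp_packet} bytes, {PAYLOAD / udp_packet:.0%} payload")
print(f"TCP packet: {tcp_packet} bytes, {PAYLOAD / tcp_packet:.0%} payload")
```

On such a packet, more than nine out of every ten bytes on the wire are headers, before counting TCP's handshake and acknowledgements, which is precisely the motivation for header compression and adaptation layers such as 6LoWPAN.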

IPv6 matters even more. Using IP for the last mile of constrained networks requires new mechanisms, such as adaptation layers and routing protocols like RPL, to handle the constrained environment. In addition, routing protocol optimizations must occur for constrained devices and networks. This is where the IPv6 RPL protocol comes in: a distance-vector routing protocol designed specifically for IoT smart objects.

Optimizations are needed at various levels, and control-plane traffic must be kept to a minimum, which leads to reactive algorithms such as Ad hoc On-Demand Distance Vector (AODV). Standard routing algorithms learn paths in advance and store the information for future use; AODV, by contrast, does not send a routing message until a route is actually needed.

Both TCP and UDP have their place in IoT.

TCP and UDP will both play a significant role at the IoT transport layer: TCP for guaranteed delivery, or UDP when the handling is left to a higher layer. The additional activity that makes TCP a reliable transport protocol comes at an overhead cost per packet and per session.

UDP, on the other hand, is connectionless and is often used where performance matters more than packet retransmission. A Low-Power and Lossy Network (LLN) may therefore be better suited to UDP, while a more robust cellular network can carry TCP.

Session overhead may not be a problem on everyday IT infrastructure, but it can stress IoT-constrained networks and devices, especially when a device needs to send only a few bytes of data per transaction. IoT Class 0 devices that send only a few bytes do not need to implement a full network protocol stack; small payloads can be transported directly on top of the MAC layer without TCP or UDP.
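
A minimal sketch of that connectionless exchange, using Python's standard socket module over loopback. The two-byte payload stands in for a constrained device's reading; no session is set up or torn down at any point.

```python
import socket

# Minimal sketch of a connectionless (UDP) exchange over loopback. The
# two-byte payload stands in for a constrained device's sensor reading;
# the port is chosen by the OS. No session is set up or torn down.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))                  # OS picks a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"\x01\x42", ("127.0.0.1", port))  # one datagram, no handshake

data, addr = receiver.recvfrom(1024)
print(len(data), "payload bytes received without any session state")

sender.close()
receiver.close()
```

Contrast this with TCP, where the same two bytes would first cost a three-way handshake and later a teardown, exactly the per-session overhead the text describes.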

Ethical challenges

What are the ethical ramifications of IoT? In the Cold War, everyone was freaked out about nuclear war; now, it’s all about data science. We are creating billions of connected “things,” and we don’t know what’s to come. The responsibilities around the ethical framework for IoT and the data it generates fall broadly on individual governments and society.

These are not technical choices; they are socially driven. The uncertainty might scare people and hold them back from IoT, but if you look at technology’s record, it has been a fantastic force for good. One should not resist IoT and the use cases it will offer our lives; resisting it rarely works out well.

New technologies will always carry risks and challenges. But if you look at earlier technologies, such as the wheel, they created new jobs rather than destroying old ones. IoT is the same. It’s about being on the right side of history and accepting it.

Cisco and IoT

One technology that has played a significant role in the advancement of IoT is LoRaWAN, which Cisco has built a portfolio around. LoRaWAN, short for Long Range Wide Area Network, is a low-power, wide-area network protocol for long-range communication between IoT devices.

It operates in the sub-gigahertz frequency bands, providing an extended communication range while consuming minimal power. This makes it ideal for applications that require long-distance connectivity, such as smart cities, agriculture, asset tracking, and industrial automation.

Cisco’s Contribution:

Cisco, a global leader in networking solutions, has embraced LoRaWAN technology and has been at the forefront of driving its adoption. The company has developed a comprehensive suite of LoRaWAN solutions, including gateways, sensors, and network infrastructure, enabling businesses to leverage the power of IoT.

Example Use Case: Cisco’s LoRaWAN-compliant solution.

With Cisco’s LoRaWAN-compliant solution, IoT sensors and endpoints can be connected cost-effectively across cities and rural areas, and low power consumption extends battery life to several years. The solution operates in the sub-gigahertz (800–900 MHz) ISM bands used around the globe.

It is included in the Cisco Industrial Asset Vision solution and is also available as a stand-alone component. Monitoring equipment, people, and facilities with LoRaWAN sensors improves business resilience, safety, and efficiency.

Diagram: Cisco IoT Solution. Source: Cisco.

Benefits of Cisco’s LoRaWAN:

1. Extended Range: With its long-range capabilities, Cisco’s LoRaWAN enables devices to communicate over several kilometers, surpassing the limitations of traditional wireless networks.

2. Low Power Consumption: LoRaWAN devices consume minimal power, allowing them to operate on batteries for an extended period. This makes them ideal for applications where the power supply is limited or impractical to install.

3. Scalability: Cisco’s LoRaWAN solutions are highly scalable, accommodating thousands of devices and ensuring seamless communication between them. This scalability makes it suitable for large-scale deployments, such as smart cities or industrial IoT applications.

4. Secure Connectivity: Security is a top priority in any IoT deployment. Cisco’s LoRaWAN solutions incorporate robust security measures, ensuring data integrity and protection against unauthorized access.

Use Cases:

1. Smart Agriculture: LoRaWAN allows farmers to monitor soil moisture, temperature, and humidity, optimize irrigation, and reduce water consumption. Cisco’s LoRaWAN solutions provide reliable connectivity to enable efficient farming practices.

2. Asset Tracking: From logistics to supply chain management, tracking assets in real-time is crucial. Cisco’s LoRaWAN solutions enable accurate and cost-effective asset tracking, enhancing operational efficiency.

3. Smart Cities: LoRaWAN is vital in building smart cities. It allows municipalities to monitor and manage various aspects, such as parking, waste management, and street lighting. Cisco’s LoRaWAN solutions provide the necessary infrastructure to support these initiatives.

As the IoT ecosystem expands, the choice of access technologies becomes critical to ensure seamless connectivity and efficient data exchange. Wi-Fi, cellular networks, Bluetooth, Zigbee, and LoRaWAN are examples of today’s diverse IoT access technologies. By understanding the strengths and limitations of each technology, businesses and individuals can make informed decisions about which access technology best suits their IoT applications. As we embrace the connected future, IoT access technologies will continue to evolve, enabling us to unlock the full potential of the Internet of Things.

Summary: Internet of Things Access Technologies

The Internet of Things (IoT) has become a pervasive force in our ever-connected world, transforming how we live, work, and interact with technology. As the IoT continues to expand, it is crucial to understand the access technologies that enable its seamless integration. This blog post delved into IoT access technologies, highlighting their importance, benefits, and potential challenges.

Section 1: What are IoT Access Technologies?

IoT access technologies encompass the various means through which devices connect to the internet and communicate with each other. These technologies provide the foundation for IoT ecosystems, enabling devices to exchange data and perform complex tasks. From traditional Wi-Fi and cellular networks to emerging technologies like LPWAN (Low Power Wide Area Network) and 5G, the landscape of IoT access technologies is diverse and constantly evolving.

Section 2: Traditional Access Technologies

Wi-Fi and cellular networks have long been the go-to options for connecting IoT devices. Wi-Fi offers high bandwidth and reliable connectivity within a limited range, making it suitable for home and office environments. On the other hand, cellular networks provide wider coverage but may require a subscription and can be costlier. Both technologies have strengths and limitations, depending on the specific use case and requirements.

Section 3: The Rise of LPWAN

LPWAN technologies have emerged as a game-changer in IoT connectivity. These low-power, wide-area networks offer long-range coverage, low energy consumption, and cost-effective solutions. LPWAN technologies like LoRaWAN and NB-IoT are ideal for applications that require battery-powered devices and long-range connectivity, such as smart cities, agriculture, and asset tracking.

Section 4: The Promise of 5G

The advent of 5G technology is set to revolutionize IoT access. With its ultra-low latency, high bandwidth, and massive device connectivity, 5G opens up a world of possibilities for IoT applications. Supporting many devices in real-time with near-instantaneous response times unlocks new use cases like autonomous vehicles, remote healthcare, and smart industries. However, the deployment of 5G networks and the associated infrastructure pose challenges that must be addressed for widespread adoption.

Conclusion:

Internet of Things access technologies form the backbone of our interconnected world. From traditional options like Wi-Fi and cellular networks to emerging technologies like LPWAN and 5G, each has unique features and suitability for IoT applications. As the IoT expands, it is essential to leverage these technologies effectively, ensuring seamless connectivity and bridging the digital divide. By understanding and embracing IoT access technologies, we can unlock the full potential of the Internet of Things, creating a smarter and more connected future.

SD-WAN Static Network-Based

In today's fast-paced digital world, businesses are constantly seeking innovative solutions to enhance network connectivity and efficiency. One such technology that has emerged is static network-based SD-WAN. This blog post delves into the concept, its benefits, and its potential impact on the future of networking.

SD-WAN, or Software-Defined Wide Area Networking, is a transformative approach that simplifies the management and operation of a wide area network. By separating the network hardware from its control mechanism, SD-WAN enables businesses to utilize various underlying transport technologies, including broadband internet, to securely connect users to applications.

Dynamic vs. Static Network-Based SD-WAN: While dynamic SD-WAN provides agility and flexibility by dynamically selecting the best path for each application, static network-based SD-WAN takes a different approach. It leverages predetermined paths and preconfigured policies to ensure reliable and predictable performance. This approach can be particularly beneficial for businesses with specific compliance or performance requirements.
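
A static, network-based policy can be pictured as a fixed lookup table pushed to every edge device. The application names and link labels below are hypothetical, chosen only to illustrate how predetermined paths yield predictable forwarding.

```python
# Sketch of a static, policy-based path table. The application names and
# link labels are hypothetical; a real SD-WAN controller would push an
# equivalent preconfigured policy to every edge device.
STATIC_POLICY = {
    "voip":   "mpls",       # latency-sensitive: pinned to the private link
    "erp":    "mpls",
    "backup": "broadband",  # bulk traffic: pinned to the cheaper link
    "saas":   "broadband",
}
DEFAULT_PATH = "broadband"

def path_for(app: str) -> str:
    """Return the preconfigured path for an application; no dynamic probing."""
    return STATIC_POLICY.get(app, DEFAULT_PATH)

print(path_for("voip"))     # always the same, predictable answer
print(path_for("unknown"))  # anything unclassified takes the default path
```

Because the table never changes at runtime, behavior is predictable and auditable, which is the appeal for compliance-driven environments; the flip side is that it cannot react when the pinned link degrades, which is what dynamic path selection addresses.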

Enhanced Security: With static network-based SD-WAN, businesses can implement advanced security measures across their network infrastructure, protecting critical data and minimizing vulnerabilities.

Improved Performance: By utilizing predetermined paths and policies, SD-WAN static network-based ensures consistent application performance, reducing latency and packet loss, and enhancing user experience.

Cost Optimization: Static network-based SD-WAN enables businesses to make the most of cost-effective transport options, such as broadband internet, while still maintaining a high level of performance and reliability.

SD-WAN static network-based has found applications across various industries. From finance to healthcare, businesses are leveraging this technology to streamline operations, enhance productivity, and improve customer experience.

While SD-WAN static network-based offers numerous benefits, it's crucial to consider potential challenges. These may include network complexity during implementation, ensuring compatibility with existing infrastructure, and the need for adequate network monitoring and management.

Conclusion: SD-WAN static network-based represents a significant shift in the networking landscape. Its ability to provide secure, reliable, and optimized connectivity opens up new possibilities for businesses across industries. As organizations continue to embrace digital transformation, SD-WAN static network-based stands as a powerful tool to revolutionize network connectivity and drive future innovation.

Highlights: SD-WAN Static Network-Based

Change in Landscape

We are now in the full swing of a new era of application communication, with more businesses looking for an SD-WAN Static Network-Based solution that suits them. Digital communication now shapes our culture and drives organizations into a new world, improving productivity around the globe.

The dramatic changes in application consumption introduce new paradigms while reforming how we approach networking. Networking around the Wide Area Network (WAN) must change as the perimeter dissolves.

Recent Application Requirements

Recent application requirements drive a new type of SD WAN Overlay that more accurately supports today’s environment with an additional layer of policy management. It’s not just about IP addresses and port numbers anymore; it’s the entire suite of business logic that drives the correct forwarding behavior. The WAN must start to make decisions holistically.

It is not just a single module in the network design and must touch all elements to capture the correct per-application forwarding behavior. It should be automatable and rope all components together to form a comprehensive end-to-end solution orchestrated from a single pane of glass. Before we delve into the drivers for an SD-WAN Static Network-Based solution, you may want to recap SD-WAN with this SD WAN tutorial.


Key SD-WAN Static Network-Based Discussion points:

  • Introduction to an application-orientated WAN.
  • The drivers for SD-WAN.
  • Limitations of some protocols.
  • Discussion of WAN challenges.
  • A list of the SD-WAN core features.

Back to Basics with SD-WAN Static Network-Based

The traditional network

The networks we depend on for business are sensitive to many factors that can result in a slow and unreliable experience. We can experience latency, either the one-way delay between a data packet being sent and received, or the round-trip time, the time it takes for a packet to be sent and a reply to come back.

We can also experience jitter, the variance in the time delay between data packets, which is essentially a disruption in the steady sending and receiving of packets. And fixed-bandwidth networks can experience congestion: with five people sharing the same Internet link, each could enjoy a stable and swift network; add another 20 or 30 people onto the same link, and the experience will be markedly different.
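
Jitter can be measured directly from packet arrival times. The timestamps below are assumed values, and the estimator (the mean variation between consecutive inter-arrival gaps) is one simple way to quantify it, similar in spirit to the RFC 3550 approach.

```python
# Estimating jitter from packet arrival times. The timestamps (milliseconds)
# are assumed values; the estimator is the mean variation between consecutive
# inter-arrival gaps, similar in spirit to the RFC 3550 interarrival jitter.
arrivals_ms = [0.0, 20.1, 40.0, 61.5, 80.2, 100.0]

gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
jitter = sum(abs(b - a) for a, b in zip(gaps, gaps[1:])) / (len(gaps) - 1)

print("inter-arrival gaps:", [round(g, 1) for g in gaps])
print(f"mean gap-to-gap variation: {jitter:.2f} ms")
```

A steady stream shows near-identical gaps and a jitter close to zero; congestion shows up as widening variation, which is what voice and video traffic cannot tolerate.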

Benefits of SD-WAN Static Network-Based:

1. Enhanced Security:

With SD-WAN Static Network-Based, organizations gain greater control over their network security. By configuring static routes, administrators can ensure that traffic flows through secure paths, minimizing the risk of data breaches. Additionally, static routing reduces the exposure to potential threats by limiting the number of entry points into the network.

2. Improved Performance:

Static routing enables organizations to optimize network performance by establishing the most efficient paths for data transmission. Administrators can minimize latency and packet loss by carefully designing the network architecture, resulting in faster and more reliable application delivery. This is particularly crucial for organizations that rely on real-time applications such as video conferencing or VoIP.

3. Simplified Network Management:

One of the key advantages of SD-WAN Static Network-Based is its simplicity in network management. With static routing, administrators have complete visibility and control over the network infrastructure. They can easily configure, monitor, and troubleshoot the network, reducing the complexity associated with dynamic routing protocols. This simplification allows IT teams to focus on strategic initiatives rather than spending excessive time on network maintenance.

4. Cost Savings:

SD-WAN Static Network-Based can lead to significant cost savings for organizations. By leveraging existing network infrastructure and optimizing traffic flow, businesses can reduce the need for expensive bandwidth upgrades. Additionally, static routing eliminates the need for complex routing protocols, which can be costly to implement and maintain. These cost savings make SD-WAN Static Network-Based an attractive option for organizations seeking to maximize network efficiency while minimizing expenses.

SD-WAN Static Network-Based: Application-Oriented WAN

Push to the cloud.  

When geographically dispersed users connect back to central locations, the added distance introduces additional latency, degrading the application's performance. No one can get away from latency unless we find a way to change the speed of light. One option is to shorten the link by moving to cloud-based applications.

The push to the cloud is inevitable. Most businesses are now moving away from on-premises, in-house hosting to cloud-based management, and the benefits of doing so are manifold.

The ready-made global footprint enables the use of SaaS-based platforms, which negate the drawbacks of dispersed users tromboning to a central data center. This software is pivotal to millions of businesses worldwide, which explains why companies such as Capsifi are so popular.

Logically positioned cloud platforms are closer to the mobile user. It is increasingly more efficient, from both a technical and a business standpoint, to host these applications in the cloud, which makes them available over the public Internet.

Bandwidth-intensive applications

Richer applications, multimedia traffic, and growth in the cloud application consumption model drive the need for additional bandwidth. Unfortunately, we can only fit so much into a single link. The congestion leads to packet drops, ultimately degrading application performance and user experience. In addition, most applications ride on TCP, yet TCP was not designed with performance in mind.

Organic growth

Organic business growth is a significant driver of additional bandwidth requirements. The challenge is that existing network infrastructures are static and unable to respond adequately to this growth within a reasonable period. The last mile of MPLS locks you in and kills agility. Circuit lead times impede the organization's productivity and create an overall lag.

Costs

A WAN virtualization solution should be simple. To serve the new era of applications, we need to increase link capacity by buying more bandwidth. However, nothing is as easy as it seems. The WAN is one of the most expensive parts of the network, and employing link oversubscription to reduce congestion is prohibitively expensive.

Furthermore, bandwidth comes at a cost, and the existing TDM-based MPLS architectures cannot accommodate application demands. 

Traditional MPLS comes with a lot of benefits and is feature-rich. No one doubts this fact. MPLS will never die. However, it comes at a high cost for relatively low bandwidth. Unfortunately, MPLS's price and capabilities are not a perfect match.

  • Hybrid connectivity

Since no single template fits the entire world, similar applications will have different forwarding preferences. Application flows are dynamic and change depending on user consumption. Furthermore, the MPLS, LTE, and Internet links often complement each other, since they support different application types.

For example, storage and big data replication traffic is forwarded over the MPLS links, while cheaper Internet connectivity is used for standard web-based applications.

Limitations of protocols

When left to its defaults, IPsec struggles with hybrid connectivity. The IPsec architecture is point-to-point, not site-to-site. As a result, it does not natively support redundant uplinks, and complex configurations are required when sites have multiple uplinks to multiple providers.

By default, IPsec is not abstracted; one session cannot be used over multiple uplinks, causing additional issues with transport failover and path selection. IPsec is a Swiss Army knife of features, and much of its complexity should be abstracted. Secure tunnels should be set up and torn down immediately, and new sites should be incorporated into a secure overlay without much delay or manual intervention.
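The abstraction argued for here can be sketched as a logical secure session decoupled from any single uplink. The classes and names below are hypothetical, not any vendor's API:

```python
# A toy sketch of abstracting a secure session away from a single uplink,
# the behavior the text argues plain IPsec lacks. All names are hypothetical.
class Uplink:
    def __init__(self, name: str, up: bool = True):
        self.name, self.up = name, up

class AbstractedTunnel:
    """One logical secure session that can ride over any live uplink."""
    def __init__(self, uplinks):
        self.uplinks = uplinks

    def active_path(self) -> str:
        # Pick the first healthy uplink; a real product would also
        # weigh latency, loss, and policy before choosing.
        for link in self.uplinks:
            if link.up:
                return link.name
        raise RuntimeError("no transport available")

mpls, lte = Uplink("mpls"), Uplink("lte")
tunnel = AbstractedTunnel([mpls, lte])
print(tunnel.active_path())  # "mpls"

mpls.up = False              # primary transport fails
print(tunnel.active_path())  # session survives, now over "lte"
```

The point of the sketch is that the session object survives a transport failure; with plain point-to-point IPsec, the tunnel would have to be rebuilt on the surviving uplink.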

Internet of Things (IoT)

Security and bandwidth consumption are key issues when introducing IP-enabled objects and IoT access technologies. IoT is all about data and will bring a significant amount of additional overhead for networks to absorb. As millions of IoT devices come online, how do we segment traffic efficiently without complicating the network design further? Complex networks are hard to troubleshoot, and simplicity is the mother of all architectural success. Furthermore, much IoT processing requires communication with remote IoT platforms. How do we account for the increased signaling traffic over the WAN? The introduction of billions of IoT devices leaves many unanswered questions.

Branch NFV

There has been strong interest in infrastructure consolidation by deploying Network Function Virtualization (NFV) at the branch. Enabling on-demand services and chaining application flows are key drivers for NFV. However, traditional service chaining is static, since it is bound to a particular network topology. Moreover, it is typically built through manual configuration and is prone to human error.

Challenges to the existing WAN

Traditional WAN architectures consist of private MPLS links complemented with Internet links as a backup. Standard templates in most Service Provider environments are usually broken down into Bronze, Silver, and Gold SLAs. 

However, these types of SLAs do not fit all geographies and often need to be fine-tuned per location. Capacity, reliability, analytics, and security are all central parts of the WAN that should be available on demand. Traditional infrastructure is very static: bandwidth upgrades and service changes require considerable processing time, locking out agility for new sites.

It's not agile enough, and nothing can be performed on the fly to meet growing business needs. In addition, the cost per bit of a private connection is high, which is problematic for bandwidth-intensive applications, especially when upgrades are unaffordable.

 

  • A distributed world of dissolving perimeters

Perimeters are dissolving, and the world is becoming distributed. Applications require a WAN that supports distributed environments with flexible network points. Centralized-only designs result in suboptimal traffic engineering and increased latency. Increased latency disrupts application performance, and only certain types of content can be placed in a Content Delivery Network (CDN); a CDN cannot be used for everything.

Traditional WANs are operationally complex; different people typically perform the separate network and security functions. For example, you may have a DMVPN specialist, a security specialist, and a networking specialist. Some wear all the hats, but they are few and far between. Different hats have different ideas, and agreeing on even a minor network change can take ages.

The World of SD-WAN Static Network-Based

SD-WAN replaces traditional WAN routers and is agnostic to the underlying transport technology. You can use various link types, such as MPLS, LTE, and broadband, all combined. Based on policies generated by the business, SD-WAN enables load sharing across different WAN connections that more efficiently supports today's application environment.

It pulls policy and intelligence out of the network and places them into an end-to-end solution orchestrated by a single pane of glass. SD-WAN is not just about tunnels. It consists of components that work together to simplify network operations while meeting all bandwidth and resilience requirements.

Centralized points in the network are no longer adequate; we need network points positioned where they make the most sense for the application and user. It is illogical to backhaul traffic to a central data center when it is far more efficient to connect remote sites to a SaaS or IaaS model over the public Internet. The majority of enterprises prefer to connect remote sites directly to cloud services, so why not let them do this in the best possible way?

A new style of WAN and SD-WAN

We require a new style of WAN and a shift from a network-based approach to an application-based approach. The new WAN no longer looks solely at the network to forward packets. Instead, it looks at the business application and decides how to optimize it with the correct forwarding behavior. This new style of forwarding is problematic with traditional WAN architecture.

Making business logic decisions with only IP and port number information is challenging. Standard routing works packet by packet and can only see part of the picture. Routers have routing tables and perform forwarding, but each essentially operates on its own little island, losing out on the holistic view required for accurate end-to-end decision-making. An additional layer of information is needed.

A controller-based approach offers the necessary holistic view. We can now make decisions based on global information, not solely on a path-by-path basis. Getting all the routing information and compiling it into a dashboard to make a decision is much more efficient than making local decisions that only see parts of the network. 
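As a toy sketch of that controller-based decision, the snippet below picks a WAN path from global telemetry rather than hop-by-hop local state. All figures and path names are invented for illustration:

```python
# Toy sketch of a controller choosing a WAN path from global telemetry,
# rather than each router deciding locally. Figures are illustrative.
telemetry = {
    # path: (latency_ms, loss_pct)
    "mpls":      (30, 0.0),
    "broadband": (18, 1.5),
    "lte":       (45, 0.5),
}

def pick_path(realtime: bool) -> str:
    """Real-time traffic prefers low loss; bulk traffic prefers low latency."""
    if realtime:
        return min(telemetry, key=lambda p: (telemetry[p][1], telemetry[p][0]))
    return min(telemetry, key=lambda p: telemetry[p][0])

print(pick_path(realtime=True))   # voice/video -> "mpls" (zero loss)
print(pick_path(realtime=False))  # web traffic -> "broadband" (lowest latency)
```

A single router seeing only its local links could not make this comparison; the controller can, because it holds telemetry for every path end to end.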

From a customer’s viewpoint, what would the perfect WAN look like if you could roll back the clock and start again?   

SD-WAN Static Network-Based Components

SD-WAN key features:

  • App-Aware Routing Capabilities

Not only do we need application visibility to forward efficiently over either transport, but we also need the ability to examine deep inside the application and look at sub-applications. For example, we can distinguish Facebook chat from regular Facebook traffic. This removes the application's mystery and allows you to balance loads based on sub-applications. It is like configuring the network with a scalpel instead of a sledgehammer.
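A simplified sketch of that sub-application idea is shown below. The "signatures" here are invented placeholders, since real DPI engines match far richer patterns than a hostname:

```python
# Toy sketch of app-aware routing: classify a flow down to the
# sub-application, then map it to a forwarding policy.
# The "signatures" are invented placeholders, not real DPI rules.
SUB_APP_SIGNATURES = {
    "facebook-chat":  {"host": "chat.facebook.com"},
    "facebook-video": {"host": "video.facebook.com"},
    "facebook":       {"host": "www.facebook.com"},
}

POLICY = {
    "facebook-chat":  "low-latency-path",
    "facebook-video": "bulk-path",
    "facebook":       "default-path",
}

def classify(host: str) -> str:
    for sub_app, sig in SUB_APP_SIGNATURES.items():
        if host == sig["host"]:
            return sub_app
    return "unknown"

flow_host = "chat.facebook.com"
sub_app = classify(flow_host)
print(sub_app, "->", POLICY.get(sub_app, "default-path"))
```

The scalpel-versus-sledgehammer point is visible here: two flows to the "same" application can land on entirely different paths once the sub-application is known.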

  • Ease Of Integration With Existing Infrastructure

Introducing new technology carries risk, particularly with a disruptive implementation strategy. Loss of service damages more than the engineer's reputation; it hits all areas of the business. The ability to seamlessly insert new sites into existing designs is a vital criterion. With any network change, a critical evaluation is knowing how to balance risk with innovation while still meeting objectives.

How aligned is marketing content with what happens in reality? It is easy for marketing materials to claim a solution can be inserted at Layer 2 or 3; actually doing so is an entirely different ball game. SD-WAN calls for a certain amount of due diligence. One way to cut through the noise is to examine who has real-life deployments with proven proof-of-concept (POC) trials and validated designs. A proven POC will help guide your transition in a step-by-step manner.

  • Regional Specific Routing Topologies

Every company has different requirements for hub-and-spoke, full-mesh, and Internet PoP topologies. For example, voice should follow a full-mesh design, while data requires a hub-and-spoke topology connecting to a central data center. Nearest service availability is the key to improved performance, as there is little we can do about the latency gods except move services closer together.

  • Centralized Device Management & Policy Administration

The manual box-by-box approach to policy enforcement is not the way forward; it is like stepping back into the Stone Age. The ability to tie everything to a template and automate enables rapid branch deployments, security updates, and configuration changes. The optimal solutions have everything in one place and can dynamically push out upgrades.

  • Highly Available With Automatic Failover

You cannot apply a singular viewpoint to high availability. An end-to-end solution should address high-availability requirements at the device, link, and site levels. WAN links can fail quickly, and detecting failures and brownout events fast requires additional telemetry information.
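Brownout detection, meaning degradation on a link that is still nominally up, can be sketched from probe telemetry as follows. The thresholds and samples are illustrative, not recommendations:

```python
# Toy sketch of brownout detection: a link that is "up" but degraded.
# Thresholds and probe samples are illustrative only.
LOSS_THRESHOLD = 2.0      # percent
LATENCY_THRESHOLD = 150   # ms

def link_state(samples):
    """samples: list of (latency_ms, loss_pct) probe results."""
    avg_latency = sum(s[0] for s in samples) / len(samples)
    avg_loss = sum(s[1] for s in samples) / len(samples)
    if avg_loss >= 100:
        return "down"
    if avg_loss > LOSS_THRESHOLD or avg_latency > LATENCY_THRESHOLD:
        return "brownout"   # still up, but should trigger failover
    return "healthy"

print(link_state([(20, 0.0), (25, 0.1)]))   # healthy
print(link_state([(40, 3.5), (200, 4.0)]))  # brownout
```

A plain up/down check would miss the second case entirely, which is why the text calls for extra telemetry rather than simple liveness probes.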

  • Encryption On All Transports

Irrespective of link type, whether MPLS, LTE, or the Internet, we need the capability to encrypt on all of those paths without the excess baggage of IPsec. Encryption should happen automatically, and the complexity of IPsec should be abstracted.

Summary: SD-WAN Static Network-Based

The networking world has witnessed a significant transformation in recent years, and one technology that has been making waves is SD-WAN (Software-Defined Wide Area Network). In this blog post, we will delve into the details of SD-WAN, its benefits, and how it is revolutionizing the network landscape.

Understanding SD-WAN

SD-WAN is a technology that abstracts networking hardware and enables centralized control and management of wide-area networks (WANs). Unlike traditional WANs, which rely on static and complex configurations, SD-WAN introduces agility and flexibility by separating the control plane from the data plane.

The Benefits of SD-WAN

SD-WAN offers numerous advantages that are transforming the way businesses approach networking. Firstly, it enhances network performance by leveraging multiple transport technologies such as MPLS, broadband, and LTE, providing intelligent traffic routing and optimization. Secondly, it brings cost savings by utilizing cost-effective internet connections and reducing reliance on expensive private circuits. Additionally, SD-WAN simplifies network management through centralized policies and automation, improving overall operational efficiency.

Security Considerations with SD-WAN

While SD-WAN offers immense benefits, it is crucial to address security concerns. With SD-WAN adoption, organizations must ensure robust security measures are in place. Encryption, authentication, and secure connectivity between branch offices and data centers are paramount. Implementing next-generation firewalls, intrusion detection systems, and regular security audits are essential to mitigate risks.

Case Studies: Real-World SD-WAN Success Stories

To truly grasp the impact of SD-WAN, let’s explore a few real-world success stories. Company X, a multinational organization, improved network performance and reduced costs by implementing SD-WAN across its global offices. Similarly, Company Y streamlined its network management, achieving better scalability and agility in response to changing business needs.

Conclusion:

The rise of SD-WAN has revolutionized the network landscape, empowering organizations with enhanced performance, cost savings, and simplified management. However, it is crucial to prioritize security measures to protect sensitive data and maintain network integrity. As businesses continue to embrace digital transformation, SD-WAN emerges as a powerful tool to drive network efficiency and enable seamless connectivity.


ACUNETIX – Web Application Security

Web Application Security

Hello, I did a tailored package for ACUNETIX. We split a number of standard blogs into smaller ones for SEO. There are many ways to improve web application security, so we covered quite a lot of bases in the package.

“So why is there a need for true multi-cloud capability? The upsurge of new applications demands multi-cloud scenarios. Firstly, organizations require application portability among multiple cloud providers. Application uptime is a necessity, and I.T. organizations cannot rely on a single cloud provider to host critical applications. Secondly, there is lock-in: I.T. organizations don't want their applications locked into specific cloud frameworks. Hardware vendors have been doing this since the beginning of time, locking you into their specific life cycles. Being locked into one cloud provider means you cannot easily move your application from one provider to another.

Thirdly, cost is one of the dominant factors. Clouds are not a cheap resource, and pricing models vary among providers, even for the same instance size and type. With a multi-cloud strategy in place, you are in a much better position to negotiate on price.”

“The World Wide Web (WWW) has transformed from serving simple static content to serving the dynamic world of today. This remodel has fundamentally changed the way we communicate and do business. Now, however, we are experiencing another wave of innovation. The cloud has become a far more diverse technology than the former framework, and as it moves into its second decade of existence, it drives a new world of cloud computing and application security. After all, it has to overtake traditional I.T. by offering an on-demand, elastic environment. It greatly affects how organizations operate and has become a critical component of new technologies.

The new shift in cloud technologies is the move to multi-cloud designs, which is a big game-changer for application security. Multi-cloud will undoubtedly become a necessity in the future but, at this time, it is miles away from a simple move. In fact, not many have started their multi-cloud journey, so there are few lessons learned, which can expose your application stack to security risks. One option is to hire a professional web application company to develop and maintain the security of your new application in the cloud; this means having a dedicated I.T. specialist company that can be of service should anything go awry.

Reference architecture guides are a great starting point; however, there are many unknowns when it comes to multi-cloud environments. To take advantage of these technologies, you need to move with application safety in mind. Applications don't care what cloud technology they sit on. What is significant is that they need to be operational and hardened with appropriate security.”

“In the early 2000s, we had simple shell scripts created to take down a single web page. Usually, one attacking signature was used from one single source IP address. This was known as a classical Bot based attack, which was effective in taking down a single web page. However, this type of threat needed a human to launch every single attack. For example, if you wanted to bring ten web applications to a halt, you would need to hit “enter” on the keyboard ten times.

We then started to encounter the introduction of simple scripts compiled with loops. Under this improved attack, instead of hitting the keyboard every time they wanted to bring down a web page, the bad actor would simply add the loop to the script. The attack still used only one source IP address and was known as the classical denial of service (DoS).

Thus, the cat and mouse game continued between the web application developers and the bad actors. The patches were quickly released. If you patched the web application and web servers in time, and as long as a good design was in place, then you could prevent these types of known attacks.”

“The speed at which cybersecurity has evolved over the last decade has taken everyone by surprise. Different types of threats and methods of attack have been surfacing consistently, hitting the web applications at an alarming rate. Unfortunately, the foundations of web application design were not laid with security in mind. Therefore, the dispersed design and web servers continue to pose challenges to security professionals.

If the correct security measures are not in place, the existing well-known threats that have been around for years will cause application downtime and data breaches. The prime concern here is that if security professionals are unable to protect against today's web application attacks, how will they fortify against the unknown threats of tomorrow?

The challenges that we see today are compounded by the use of Artificial Intelligence (AI) by cybercriminals. Cybercriminals already have an extensive arsenal at their disposal but to make matters worse, they now have the capability to combine their existing toolkits with the unknown power of AI.”

“The cloud API management plane is one of the most significant differences between traditional computing and cloud computing. It offers an interface, often public, to connect the cloud assets. In the past, we followed a box-by-box configuration mentality, where we configured physical hardware strung together with wires. Now, however, our infrastructure is controlled with application programming interface (API) calls.

The abstraction of virtualization is aided by the use of APIs, which are the underlying communication methods for assets within a cloud. As a result of this shift of management paradigm, compromising the management plane is like winning unfiltered access to your data center, unless proper security controls to the application level are in place.”

“As we delve deeper into the digital world of communication, from the perspective of privacy, the impact of personal data changes in proportion to the way we examine security. As organizations enter this world, the normal methods once employed to protect data have become obsolete. This forces security professionals to shift their thinking from protecting the infrastructure to protecting the actual data. The magnitude at which we are engaged in digital business also makes traditional security tools outdated. Security teams must be equipped with real-time visibility to fathom what is happening all the way up at the web application layer. It is a constant challenge to map all the connections we are building and the personal data that is spreading literally everywhere. This challenge must be addressed not just from the technical standpoint but also in the legal and legislative context.

With the arrival of the new General Data Protection Regulation (GDPR) legislation, security professionals must become data-centric. They can no longer rely on traditional practices to monitor and protect data, along with the web applications that act as a front door to users' personal data. GDPR is the beginning of wisdom when it comes to data governance and has more far-reaching implications than one might think. It has been predicted that by the end of 2018, more than 50% of the organizations affected by GDPR will not be in full compliance with its requirements.”

“Cloud computing is the technology that equips organizations to fabricate products and services for both internal and external usage. It is one of the most exceptional shifts in the I.T. industry that many of us are likely to witness in our lifetimes. However, to align the business and operational goals, cloud security issues must be addressed by governance and not just treated as technical issues. Essentially, the cloud combines resources such as central processing units (CPUs), memory, and hard drives and places them into a virtualized pool. Consumers of the cloud can access this virtualized pool and allocate resources according to their requirements. Upon completion of the task, the assets are released back into the pool for future use.

Cloud computing represents a shift from a server-based to a service-based approach, offering significant gains to businesses. However, these gains are often eroded when the business's valuable assets, such as web applications, become vulnerable to the plethora of cloud security threats, which are like a fly in the ointment.”

“Firewall Designs & the Evolving Security Paradigm: The firewall has weathered a number of design changes. Initially, we started with a single chunky physical firewall and prayed that it wouldn't fail. We then moved to a variety of firewall design models, such as active-active and active-backup. Active-active really isn't true active-active due to certain limitations, while active-backup leaves one device, possibly quite expensive, sitting idle, waiting to take over in the event of a primary firewall failure.

We now have the ability to put firewalls in containers. At the same time, some vendors claim they can cluster up to eight firewalls, creating one big active firewall. While these introductions are technically remarkable, they are complex as well. Any complexity involved in security is certainly a volatile place to dock a critical business application.”

“Introduction: Internet Protocol (IP) networks provide services to customers and businesses across the globe. Everything and everyone is practically connected in some form or another. As a result, the stability and security of the network, and of the services that ride on top of IP, are of paramount importance for successful service delivery. The connected world banks on IP networks, and as that reliance mushrooms, so does the level of network and web application attacks. Although new technologies may offer services that simplify life and help businesses function more efficiently, in certain scenarios they change the security paradigms, introducing considerable complexity.

Mixing complexity with security is like stirring water into oil; it will eventually result in a crash. We operate in a world where we need multiple layers of security and updated security paradigms to meet the latest application requirements. The significant questions to ponder are: can we trust the new security paradigms? Are we comfortable withdrawing from the traditional security model of well-defined component tiers? How does the new security paradigm appear from a security auditor's perspective?”

“Part One in this two-part series looked at the evolution of network architecture and how it affects security. Here we will take a deeper look at the security tools needed to deal with these changes. The firewall is not enough: firewalls in three-tier or leaf-and-spine designs are not lacking features; that is not the actual problem. They are feature-rich. The problem is the management of firewall policies, which leaves the door wide open. This might invite a bad actor to infiltrate the network and move laterally through it, searching for valuable assets to compromise on a silver platter. The central firewall is often referred to as a “holy cow,” as it contains so many unknown configured policies that no one knows what they are used for. Have you ever heard of a 20-year-old computer that is still pingable, yet no one knows where it is or whether it has received any security patches in the last decade?

A poorly configured firewall, no matter how feature-rich, poses the exact same threat as a 20-year-old unpatched computer. It is nothing less than a fly in the ointment. Over the years, a physical firewall will have had many different security administrators. Security professionals change jobs every couple of years, and each year the number of configured policies on the firewall increases. When a security administrator leaves his or her post, the firewall policies stay configured but are often undocumented, and the rules may not even be active anymore. Therefore, we are left with central security devices holding thousands of rules that no one fully understands but that are still parked there like deadwood.”

“The History of Network Architecture: The goal of any network and its underlying infrastructure is simple. It is to securely transport the end user's traffic in support of an application of some kind, without any packet drops that might trigger application performance problems. A key point to consider is that the metrics used to achieve this goal and the design of the underlying infrastructure come in many different forms. Therefore, it is crucial to tread carefully and fortify the many types of web applications under an umbrella of hardened security. Network design has evolved over the last 10 years to support new web application types and ever-changing connectivity models such as remote workers and Bring Your Own Device (BYOD).”

“Part 1 in this series looked at online security and the flawed protocols it is built upon. Online security is complex, and its underlying fabric was built without security in mind. Here we explore aspects of application security testing. We live in a world of complicated application architectures compounded by poor visibility, leaving the door wide open for compromise. Web applications are complex: the application has transformed from a single-server design to a multi-tiered architecture, which has rather opened Pandora's box.

To complicate application security testing further, multiple tiers have both firewalling and load balancing between them, implemented with either virtualized or physical appliances. Containers and microservices introduce an entirely new wave of application complexity. Individual microservices require cross-communication, yet they are potentially located in geographically dispersed data centers.”

“A plethora of valuable solutions now run on web-based applications. One could argue that web applications are at the forefront of the world. More importantly, we must equip them with appropriate online security tools to barricade against rising web vulnerabilities. With the right toolset at hand, any website can shock-absorb known and unknown attacks. Today the average volume of encrypted Internet traffic is greater than the average volume of unencrypted traffic. Hypertext Transfer Protocol Secure (HTTPS) is good, but it is not invulnerable. We see evidence of its shortcomings in the Heartbleed bug, where the compromise of secret keys was made possible. Users may assume that because they see HTTPS in the web browser, the website is secure.”


Cachefly CDN

I recently completed a few guest posts for Cachefly CDN. Kindly follow the link – Post 1 – Matt Conran & Cachefly, Post 2 – Matt Conran & Anycast and Post 3 – Matt Conran & TCP Anycast.

“We are witnessing a hyperconnectivity era with everything and anything pushed to the Internet to take advantage of its broad footprint. Users are scattered everywhere and they all want consistent service independent of connected device or medium. Everyone has high expectations and no one is willing to wait for a slow page load or buffered video. Nowadays, service performance is critical but are today’s protocols prepared for this new age of connectivity?”



Corsa Technologies DDoS White Paper

I recently completed a white paper for Corsa Technologies on DDoS Mitigation. Kindly visit the link and register to download – Matt Conran & Corsa Technologies.

“This white paper addresses some of the key concerns with today's approach and with the technologies used to deal with the increasing volume of DDoS attacks. It introduces a radically simplified, yet high-performance approach to network security that shuts down even the most intense attacks. Today's threat landscape is intensifying, and it is not going away anytime soon. The number of infected Internet of Things (IoT) devices continues to surge with the release of the Mirai source code and the appearance of the Leet Botnet. Multi-vector sophisticated attacks are targeting popular domains, and new Android malware is swallowing the mobile world. The ubiquitous nature of the always-on cell phone introduces a lot of scary things.”


Multipath TCP

In today's interconnected world, a seamless and reliable internet connection is paramount. Traditional TCP/IP protocols have served us well, but they face challenges in handling modern network demands. Enter Multipath TCP (MPTCP), a groundbreaking technology that has the potential to revolutionize internet connections. In this blog post, we will explore the intricacies of MPTCP, its benefits, and its implications for the future of networking.

MPTCP, as the name suggests, allows a single data stream to be transmitted across multiple paths simultaneously. Unlike traditional TCP, which relies on a single path, MPTCP splits the data into subflows, distributing them across different routes. This enables the utilization of multiple network interfaces, such as Wi-Fi, cellular, or wired connections, to enhance performance, resilience, and overall user experience.

One of the key advantages of MPTCP lies in its ability to provide robustness and resilience. By utilizing multiple paths, MPTCP ensures that data transmission remains uninterrupted even if one path fails or experiences congestion. This redundancy significantly improves the reliability of connections, making it particularly valuable for critical applications such as real-time streaming and online gaming.

Implementing MPTCP requires both client and server-side support. Fortunately, MPTCP has gained significant traction, and numerous operating systems and network devices have begun incorporating native support for this protocol. From Linux to iOS, MPTCP is gradually becoming a standardized feature, empowering users with enhanced connectivity options.

The versatility of MPTCP opens up a plethora of possibilities for various applications. For instance, in the context of mobile devices, MPTCP can seamlessly switch between Wi-Fi and cellular networks, optimizing connectivity and ensuring uninterrupted service. Additionally, MPTCP holds promise for cloud computing, content delivery networks, and distributed systems, where the concurrent utilization of multiple paths can significantly improve performance and efficiency.

Highlights: Multipath TCP

TCP restricts communication

Although multiple paths often connect two hosts, TCP restricts each transport connection to a single path. Using several of those paths concurrently would maximize resource usage within the network, and the resulting resilience to network failures and higher throughput would enhance the user experience.

As the Internet has evolved, protocol constraints both on the end systems and within the network have left resources, particularly bandwidth, underutilized. The end-user experience could be significantly improved if these resources were used simultaneously.

A similar improvement in user experience could also be achieved without heavy expenditure on network infrastructure. With resource pooling, the available resources are combined into one logical resource for the user.

The goal of resource pooling

As part of multipath transport, disjoint (or partially disjoint) paths across a network are used simultaneously to achieve the goals of resource pooling. Multipath transport also increases resilience, protecting end hosts from failures on any one path, and network capacity can be increased through more efficient resource utilization. A multipath TCP connection achieves this by pooling multiple paths transparently within a single transport connection.

When one or both hosts are multihomed, multipath TCP uses multiple paths end-to-end. A host can also manipulate the network path by changing port numbers with Equal Cost MultiPath (ECMP), for example, to create multiple paths within the network.

Multipath TCP and TCP

Multipath TCP (MPTCP) is a protocol extension that allows for the simultaneous use of multiple network paths between two endpoints. Traditionally, TCP (Transmission Control Protocol) relies on a single path for data transmission, which can limit performance and reliability. With MPTCP, multiple paths can be established between the sender and receiver, enabling the distribution of traffic across these paths. This offers several advantages, including increased throughput, better load balancing, and improved resilience against network failures.

Automatically Set Up Multiple Paths

It is designed to automatically set up multiple paths between two endpoints and use those paths to send and receive data efficiently. It also provides mechanisms for detecting and recovering from packet loss and for low-latency communication. MPTCP is used in applications that require high throughput and low latency, such as streaming media, virtual private networks (VPNs), and networked gaming. MPTCP is an extension to the standard TCP protocol and has native support in several modern operating systems, including Linux, macOS, and iOS.
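On Linux, MPTCP support can be requested directly through the ordinary sockets API. Below is a minimal sketch, assuming a Python 3.10+ interpreter; the fallback constant 262 is the Linux protocol number for MPTCP, and the helper function name is mine:

```python
import socket

# IPPROTO_MPTCP is protocol number 262 on Linux; Python exposes the
# constant from 3.10 onward on platforms that define it.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream_socket():
    """Try to create an MPTCP socket; fall back to regular TCP."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             IPPROTO_MPTCP), "mptcp"
    except OSError:
        # Kernel without MPTCP support (or net.mptcp.enabled=0)
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM), "tcp"

sock, flavour = open_stream_socket()
print(flavour)
sock.close()
```

Everything after socket creation uses the same calls as regular TCP, which is precisely the backward compatibility MPTCP aims for.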

High Throughput & Low Latency

MPTCP is an attractive option for applications that require high throughput and low latency, as it can provide both. Additionally, it can provide fault tolerance and redundancy, allowing an application to remain operational even if one or more of its paths fail. This makes it useful for applications such as streaming media, where high throughput and low latency are essential, and reliability is critical.

Before you proceed, you may find the following helpful:

  1. Software Defined Perimeter
  2. Event Stream Processing
  3. Application Aware Networking



TCP Multipath

Key Multipath TCP Discussion points:


  • Introduction of multiple paths for a single TCP session

  • Discussion on TCP subflows

  • MP-TCP setup

  • Multipath networking use cases

  • The issues with TCP congestion control

Back to Basics: Reliability in Multipath TCP

Reliable byte streams

To start the discussion on multipath TCP, we must understand the basics of the Transmission Control Protocol (TCP) and its effect on IP forwarding. TCP offers applications a reliable byte stream with congestion control mechanisms that adjust flows to the current network load. Designed in the 1970s, TCP is the most widely used transport protocol, and its core design remains largely unchanged, unlike the networks it operates within. Even then, the designers understood that links could fail and decided to decouple the network layer (IP) from the transport layer (TCP).

This decoupling lets IP route around link failures without breaking the end-to-end TCP connection. Dynamic routing protocols such as BGP Multipath do this automatically, without any transport-layer knowledge. Yet despite its wide adoption, TCP does not fully meet the multipath networking requirements of today’s networks, driving the need for MP-TCP.

TCP delivers reliability using several distinct techniques. Because it provides a byte-stream interface, TCP must convert a sending application’s stream of bytes into a set of packets that IP can carry. This is called packetization. These packets contain sequence numbers, which in TCP represent the byte offset of the first byte in each packet within the overall data stream, rather than packet numbers. This allows packets to vary in size during a transfer and may also allow them to be combined, which is called repacketization.
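The byte-offset scheme above can be sketched in a few lines; `packetize` is a hypothetical helper for illustration, not a real TCP internals API:

```python
def packetize(stream: bytes, mss: int = 4):
    """Split an application byte stream into TCP-style segments.
    Each segment carries the byte offset of its first byte as its
    sequence number, not a packet counter, so segments can vary in
    size or be recombined (repacketization) without renumbering."""
    return [(offset, stream[offset:offset + mss])
            for offset in range(0, len(stream), mss)]

print(packetize(b"abcdefghij"))
# [(0, b'abcd'), (4, b'efgh'), (8, b'ij')]
```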

Diagram: The need for MP-TCP.

TCP’s main drawback is that it is a single-path-per-connection protocol. Once a stream is placed on a path (defined by the endpoints of the connection), it cannot be moved to another path, even though multiple paths may exist between the peers. This is suboptimal, as most of today’s networks and end hosts have multipath characteristics that could be exploited for better performance and robustness.

What is Multipath TCP?

Multipath TCP, also known as MPTCP, is an extension to the traditional TCP protocol that allows a single TCP connection to utilize multiple network paths simultaneously. Unlike conventional TCP, which operates on a single path, MPTCP offers the ability to distribute the traffic across multiple paths, enabling more efficient resource utilization and increased overall network capacity.

Critical Benefits of Multipath TCP:

1. Improved Performance: MPTCP can distribute the data traffic using multiple paths, enabling faster transmission rates and reducing latency. This enhanced performance is particularly beneficial for bandwidth-intensive applications such as streaming, file transfers, and video conferencing, where higher throughput and reduced latency are crucial.

2. Increased Resilience: MPTCP enhances network resilience by providing seamless failover capabilities. In traditional TCP, if a network path fails, the connection is disrupted, resulting in a delay or even a complete loss of service. However, with MPTCP, if one path becomes unavailable, the connection can automatically switch to an alternative path, ensuring uninterrupted communication.

3. Efficient Resource Utilization: MPTCP allows for better utilization of available network resources. Distributing traffic across multiple paths prevents congestion on a single path and optimizes the usage of available bandwidth. This results in more efficient utilization of network resources and improved overall performance.

4. Seamless Transition between Networks: MPTCP is particularly useful in scenarios where devices need to switch between different networks seamlessly. For example, when a mobile device moves from a Wi-Fi network to a cellular network, MPTCP can maintain the connection and seamlessly transfer the ongoing data traffic to the new network without interruption.

5. Compatibility with Existing Infrastructure: MPTCP is designed to be backward compatible with traditional TCP, making it easy to deploy and integrate into existing network infrastructure. It can coexist with legacy TCP connections and gradually adapt to MPTCP capabilities as more devices and networks support the protocol.

Main Multipath TCP Components

  • Allows a single Transmission Control Protocol (TCP) connection to use multiple network paths simultaneously.

  • Automatically sets up multiple paths between two endpoints.

  • Addresses TCP’s main drawback: a single path per connection.

  • Enhances network resilience by providing seamless failover capabilities.

Multiple Paths for a Single TCP Session

Using multiple paths for a single TCP session increases resource usage and resilience, optimizing TCP. All this is achieved with additional extensions added to regular TCP that enable a connection to be transported across multiple links simultaneously.

The core aim of Multipath TCP (MP-TCP) is to allow a single TCP connection to use multiple paths simultaneously by using abstractions at the transport layer. As it operates at the transport layer, the upper and lower layers are transparent to its operation. No network or link-layer modifications are needed.

There is no need to change the network or the end hosts. The end hosts use the same socket API calls, and the network continues to operate as before. No special configuration is required, as it’s a capability exchange between hosts. Multipath TCP is 100% backward compatible with regular TCP.

Diagram: Multipath TCP. Source is Cisco

TCP subflows

MPTCP achieves its goals through sub-flows of individual TCP connections forming an MPTCP session. These sub-flows can be established over different network paths, allowing for parallel data transmission. MPTCP also includes mechanisms for congestion control and data sequencing across the sub-flows, ensuring reliable packet delivery.

Unlike regular TCP, MP-TCP binds a TCP connection to two hosts rather than to two interfaces. Regular TCP connects two IP endpoints, establishing the connection by source/destination IP address and port number, so the application must commit to a single link for the connection. MPTCP instead creates new TCP connections known as sub-flows, allowing the application to use a different link for each subflow.

Subflows are set up the same as regular TCP connections. They consist of a flow of TCP segments operating over individual paths but are still part of the overall MPTCP connection. Subflows are never fixed and may fluctuate in number during the lifetime of the parent Multipath TCP connection.

Diagram: MP-TCP.

Multipath TCP use cases

The deployment of MPTCP has the potential to benefit various applications and use cases. For example, MPTCP can enable seamless handovers between cellular towers or Wi-Fi access points in mobile networks, providing uninterrupted connectivity. MPTCP can improve server-to-server communications in data centers by utilizing multiple links and avoiding congestion.

Multipath TCP is beneficial in multipath data centers and mobile environments. Mobile phones can connect via both Wi-Fi and a cellular (3G) network. MP-TCP enables combined throughput across both, and the switching of interfaces (Wi-Fi / 3G) without disrupting the end-to-end TCP connection.

For example, if you are on a 3G network with an active TCP stream, that stream is bound to the 3G interface. If you move to the Wi-Fi network, the connection must be reset, and all ongoing TCP connections reset with it. With MP-TCP, swapping interfaces is transparent.

Multipath networking: leaf-spine data center

Leaf and spine data centers are a networking architecture that has transformed connectivity in modern data centers. Unlike traditional hierarchical designs, leaf and spine networks are based on a non-blocking, fully meshed structure: the leaf switches act as access points, connecting directly to the spine switches to create a flat network topology.

Key Characteristics of Leaf and Spine Data Centers

One of the key characteristics of leaf and spine data centers is their scalability. With the non-blocking architecture, leaf and spine networks can easily accommodate the increasing demands of modern data centers without sacrificing performance. Additionally, they offer low latency, high bandwidth, and improved resiliency compared to traditional designs.

Next-generation leaf and spine data center networks are built with Equal-Cost Multipath (ECMP). Within the data center, any two endpoints are equidistant. For one endpoint to communicate with another, a TCP flow is placed on a single link rather than spread over multiple links. As a result, single-path TCP collisions may occur, reducing the throughput available to those flows.

Diagram: What is spine and leaf architecture? 2-Tier Spine Leaf Design

This is commonly seen with large elephant flows, not small mice flows. When a server starts a TCP connection in a data center, the flow is placed on a path and stays there. With MP-TCP, you can use many sub-flows per connection instead of a single path per connection; if some of those sub-flows become congested, you simply stop sending over them, improving traffic fairness and bandwidth utilization.

Hash-based distribution

The default behavior of spreading traffic through a LAG or ECMP next hops is based on the hash-based distribution of packets. First, an array of buckets is created, and each outbound link is assigned to one or more buckets. Next, fields such as source-destination IP address / MAC address are taken from the outgoing packet header and hashed based on this endpoint identification. Finally, the hash selects a bucket, and the packet is queued to the interface assigned to that bucket. 

Diagram: Redundant links with EtherChannel. Source is jmcritobal

The issue is that the load-balancing algorithm does not consider interface congestion or packet drops. With all mice flows this is fine, but once you mix mice and elephant flows, performance suffers. An algorithm is needed to identify congested links and reshuffle the traffic.
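The bucket selection described above can be sketched as follows; the link names and hash function are illustrative, not a specific vendor's algorithm:

```python
import hashlib

# Hypothetical set of equal-cost outbound links.
LINKS = ["eth0", "eth1", "eth2", "eth3"]

def pick_link(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the flow 5-tuple into a bucket; each bucket maps to a link.
    Every packet of a flow hashes identically, so the whole flow is
    pinned to one link regardless of how congested that link is."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(LINKS)
    return LINKS[bucket]

# The same elephant flow always lands on the same link:
a = pick_link("10.0.0.1", "10.0.0.2", 40000, 443)
b = pick_link("10.0.0.1", "10.0.0.2", 40000, 443)
print(a == b)  # True
```

Because the hash ignores link load, an unlucky pair of elephant flows can collide on one link while others sit idle, which is exactly the problem MP-TCP sub-flows mitigate.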

A good use case for MPTCP is a mix of mice and elephant flows; generally, MP-TCP does not improve performance in environments with only mice flows.

Small files, say 50 KB, see the same performance as regular TCP. As file size increases, multipath networking usually performs on par with link bonding. The benefits of MP-TCP appear with larger files (around 300 KB and up), where MP-TCP outperforms link bonding because its congestion control can better balance the load over the links.

MP-TCP connection setup

The aim is a single TCP connection with many sub-flows. The two endpoints using MPTCP are synchronized and hold connection identifiers for each sub-flow. MPTCP starts the same as regular TCP, and additional TCP subflow sessions are joined to the existing session if different paths are available. The original TCP session and the other subflow sessions appear as one to the application, and the primary Multipath TCP connection looks like a regular TCP connection. Identifying additional paths comes down to the number of IP addresses on the hosts.

Diagram: TCP Multipath.

The TCP handshake starts as expected, but the first SYN carries a new MP_CAPABLE option (value 0x0) and a unique connection identifier. This allows the client to indicate that it wants to use MPTCP. At this stage, the application layer creates a standard TCP socket, with additional variables indicating the intent to do MPTCP.

If the receiving server is MP_CAPABLE, it replies with a SYN/ACK carrying MP_CAPABLE and its own connection identifier. Once the connection is agreed upon, the client and server move it to the established state. Inside the kernel, a meta socket sits between the application and all the TCP sub-flows.

Under a multipath condition and when multiple paths are detected (based on IP addresses), the client starts a regular TCP handshake with the MP_JOIN option (value 0x1) and uses the connection identifier for the server. The server then replies with a subflow setup. New sub-flows are created, and the scheduler will schedule over each sub-flow as the data is sent from the application to the meta socket. 
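On the wire, these signals all ride in a single TCP option: kind 30 is reserved for MPTCP (RFC 8684), and a subtype nibble distinguishes MP_CAPABLE (0x0) from MP_JOIN (0x1). A minimal parser sketch follows; the example byte strings are simplified, not full RFC 8684 option layouts:

```python
MPTCP_OPTION_KIND = 30  # TCP option kind reserved for MPTCP
SUBTYPES = {0x0: "MP_CAPABLE", 0x1: "MP_JOIN", 0x2: "DSS"}

def parse_mptcp_subtype(option: bytes):
    """Return the MPTCP subtype name from a raw TCP option, or None.
    Layout: kind, length, then a payload whose high nibble in the
    first byte is the MPTCP subtype."""
    if len(option) < 3 or option[0] != MPTCP_OPTION_KIND:
        return None
    return SUBTYPES.get(option[2] >> 4)

# A SYN carrying MP_CAPABLE (subtype 0x0):
print(parse_mptcp_subtype(bytes([30, 12, 0x00, 0x81])))  # MP_CAPABLE
# An MP_JOIN opening a new subflow (subtype 0x1):
print(parse_mptcp_subtype(bytes([30, 12, 0x10, 0x01])))  # MP_JOIN
```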

TCP sequence numbers 

Regular TCP uses sequence numbers, enabling the receiving side to return packets in the correct order before sending them to the application. The sender can determine which packets are lost by looking at the ACK.

For MP-TCP, packets must travel multiple paths, so sequence numbers are needed to restore packet order before they are passed to the application. The sequence numbers also inform the sender of any packet loss on a path. When an application sends a packet, the segment is assigned a data sequence number.

TCP looks at the sub-flows to decide where to send each segment. When a segment ships on a subflow, it carries a subflow-level sequence number in the TCP header, while the data sequence number is carried in the TCP options.

The subflow sequence number in the TCP header reveals any packet loss on that path. The recipient uses the data sequence number to reorder packets before delivering them to the application.
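A toy reassembly routine shows the role of the data sequence number (DSN); the segment tuples are illustrative, not real kernel structures:

```python
import heapq

def reassemble(segments):
    """Reorder segments that arrived over different subflows by their
    connection-level data sequence number (DSN) before delivery."""
    heap = list(segments)            # (dsn, payload) tuples
    heapq.heapify(heap)              # orders by DSN
    data = b""
    while heap:
        _dsn, payload = heapq.heappop(heap)
        data += payload
    return data

# Segments arrive out of order across two subflows:
print(reassemble([(10, b"world"), (0, b"hello "), (6, b"tcp ")]))
# b'hello tcp world'
```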

Congestion control

Congestion control was never a problem in circuit switching. Resources are reserved at call setup to prevent congestion during data transfer, resulting in a lot of bandwidth underutilization due to the reservation of circuits. We then moved to packet switching, where we had a single link with no reservation, but the flows could use as much of the link as they wanted. This increases the utilization of the link and also the possibility of congestion.

To address this, congestion control mechanisms were added to TCP, and similar mechanisms are employed for MP-TCP. Standard TCP congestion control maintains a congestion window for each connection: the window grows on each ACK, and on a drop the window is halved.

MP-TCP operates similarly: you maintain one congestion window for each subflow path. As in standard TCP, a drop on a subflow halves the window for that subflow. The increase rules, however, differ from standard TCP behavior.

Sub-flows with larger windows receive a greater increase; a larger window means the path has lower loss. As a result, traffic moves dynamically from congested to uncongested links.
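This behavior can be illustrated with a toy model, loosely inspired by MPTCP's coupled ("linked increases") congestion control; the exact scaling here is simplified for illustration and is not the RFC 6356 formula:

```python
def on_ack(windows, i):
    """Grow subflow i's window, scaled by its share of the total
    window, so less lossy paths (larger windows) grow faster."""
    windows[i] += windows[i] / sum(windows)

def on_loss(windows, i):
    """A drop on subflow i halves that subflow's window only."""
    windows[i] = max(1.0, windows[i] / 2)

windows = [10.0, 10.0]
on_loss(windows, 0)             # path 0 saw a drop -> its window halves
for _ in range(5):
    on_ack(windows, 1)          # path 1 keeps receiving ACKs
print(windows[0] < windows[1])  # True: traffic shifts to path 1
```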

Summary: Multipath TCP

The networking world is constantly evolving, with new technologies and protocols being developed to meet the growing demands of our interconnected world. One such protocol that has gained significant attention recently is Multipath TCP (MPTCP). In this blog post, we dive into the fascinating world of MPTCP, its benefits, and its potential applications.

Section 1: Understanding Multipath TCP

Multipath TCP, often called MPTCP, is an extension of the traditional TCP protocol that allows for simultaneous data transmission across multiple paths. Unlike conventional TCP, which operates on a single path, MPTCP leverages multiple network interfaces, such as Wi-Fi and cellular connections, to improve overall network performance and reliability.

Section 2: Benefits of Multipath TCP

By utilizing multiple paths, MPTCP offers several key advantages. Firstly, it enhances throughput by aggregating the bandwidth of multiple network interfaces, resulting in faster data transfer speeds. Additionally, MPTCP improves resilience by providing seamless failover between different paths, ensuring uninterrupted connectivity even if one path becomes congested or unavailable.

Section 3: Applications of Multipath TCP

The versatility of MPTCP opens the door to a wide range of applications. One notable application is in mobile devices, where MPTCP can intelligently combine Wi-Fi and cellular connections to provide users with a more stable and faster internet experience. MPTCP also finds utility in data centers, enabling efficient load balancing and reducing network congestion by distributing traffic across multiple paths.

Section 4: Challenges and Future Developments

While MPTCP brings many benefits, it also presents challenges. One such challenge is ensuring compatibility with existing infrastructure and devices that may not support MPTCP. Additionally, optimizing MPTCP’s congestion control mechanisms and addressing security concerns are ongoing research and development areas.

Conclusion:

Multipath TCP is a groundbreaking protocol that has the potential to revolutionize the way we experience network connectivity. With its ability to enhance throughput, improve resilience, and enable new applications, MPTCP holds great promise for the future of networking. As researchers continue to address challenges and refine the protocol, we can expect even greater advancements in this exciting field.