What is Remote Browser Isolation

In today's digital landscape, remote browser isolation has emerged as a powerful solution for enhancing cybersecurity and protecting sensitive data. This innovative technology isolates web browsing activities from the local device, creating a secure environment that shields users from potential threats. In this blog post, we will dive into the world of remote browser isolation, exploring its benefits, implementation, and future prospects.

Remote browser isolation, also known as web isolation or browser isolation, is a security approach that separates web browsing activities from the user's device. By executing web sessions in a remote environment, remote browser isolation prevents potentially malicious code or content from infiltrating the user's system. It acts as a barrier between the user and the internet, ensuring a safe and secure browsing experience.

Enhanced Security: One of the primary advantages of remote browser isolation is its ability to mitigate web-based threats. By isolating web content and executing it in a separate environment, any malicious code or malware is contained and unable to affect the user's device. This significantly reduces the risk of cyberattacks such as phishing, drive-by downloads, and zero-day exploits.
Protection Against Zero-Day Vulnerabilities: Zero-day vulnerabilities are software vulnerabilities that are unknown to the vendor and, therefore, unpatched. Remote browser isolation provides a powerful defense against such vulnerabilities by executing web sessions in an isolated environment. Even if a website contains a zero-day exploit, it poses no threat to the user's device as the execution occurs remotely.

BYOD (Bring Your Own Device) Security: With the rise of remote work and the increasing use of personal devices for business purposes, remote browser isolation offers a robust security solution. It allows employees to access corporate resources and browse the internet securely, without the need for complex VPN setups or relying solely on endpoint security measures.
Cloud-Based Deployments: Cloud-based remote browser isolation solutions have gained popularity due to their scalability and ease of deployment. These solutions route web traffic to a remote virtual environment, where browsing sessions are executed. The rendered content is then transmitted back to the user's device, ensuring a seamless browsing experience.

On-Premises Deployment: For organizations with specific compliance requirements or highly sensitive data, on-premises remote browser isolation solutions provide an alternative. In this approach, the isolation environment is hosted within the organization's infrastructure, granting greater control and customization options.

As cyber threats continue to evolve, remote browser isolation is expected to play an increasingly important role in cybersecurity strategies. The adoption of this technology is likely to grow, driven by the need for robust protection against web-based attacks. Moreover, advancements in virtualization and cloud technologies will further enhance the performance and scalability of remote browser isolation solutions.

Remote browser isolation is a game-changer in the realm of cybersecurity. By creating a secure and isolated browsing environment, it provides effective protection against web-based threats, zero-day vulnerabilities, and enables secure BYOD practices. Whether implemented through cloud-based solutions or on-premises deployments, remote browser isolation is poised to shape the future of web security, ensuring safer digital experiences for individuals and organizations alike.

Highlights: What is Remote Browser Isolation

What is Browser Isolation

With browser isolation (remote browsing), internet browsing activity is separated from the local device, so webpages are not loaded and executed locally.

Most users load web content and code directly in their local browsers. That content and code often comes from unknown sources (e.g., cloud hosting and web servers), which makes browsing the Internet somewhat risky from a security perspective. With remote browser isolation (RBI), the technology that underpins browser isolation, web content is instead loaded and executed in the cloud.

Detecting hazardous web content is “outsourced” to the remote browser, much as machines are sent into hazardous environments to protect humans. Consequently, users, and the networks they connect to, are protected from malicious websites that carry malware.

Types of Remote Browser Isolation

Remote browser isolation (RBI), or cloud-hosted browser isolation, involves loading and executing webpages and code on a cloud server far from users’ local devices. Any malicious cookies or downloads are deleted after the user’s browsing session ends.

RBI technology protects users and corporate networks from untrusted browser activity. Users’ web browsing activities are typically conducted on a cloud server controlled by RBI vendors. Through RBI, users can interact with webpages normally without loading the entire website on their local device or browser. User actions, such as mouse clicks, keyboard inputs, and form submissions, are sent to a cloud server for further processing.
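
To make the “pixel pushing” model concrete, here is a minimal, hypothetical sketch in Python using the Playwright library (my own illustration, not how any particular RBI vendor works; commercial products use proprietary rendering engines and streaming protocols). The page is loaded and executed on the isolation host, and only a rendered image is produced for the endpoint:

```python
# Minimal sketch of remote rendering, assuming Playwright is installed:
#   pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def render_remotely(url: str, out_path: str = "rendered.png") -> str:
    """Load `url` in a server-side headless browser and return a screenshot path."""
    with sync_playwright() as p:
        browser = p.chromium.launch()        # browser process runs on the isolation host
        page = browser.new_page()
        page.goto(url, wait_until="load")    # all JavaScript executes here, not on the endpoint
        page.screenshot(path=out_path, full_page=True)
        browser.close()                      # session state is thrown away with the browser
    return out_path

if __name__ == "__main__":
    # The endpoint would receive only this image (a real product streams frames continuously).
    print(render_remotely("https://example.com"))
```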

1. Enhanced Security: The primary advantage of remote browser isolation is its ability to provide enhanced web security. By isolating the browsing activity in a remote environment, any potential malware, zero-day exploits, or malicious websites are contained within the isolated environment, ensuring they cannot reach the user’s device. This dramatically reduces the risk of successful cyber attacks, as the user’s device remains protected even if a website is compromised.

2. Protection Against Phishing Attacks: Phishing attacks are a significant concern for individuals and organizations. Remote browser isolation offers a robust defense against such attacks. By isolating the browsing session, any attempts to trick users into revealing sensitive information through fraudulent websites or email links are ineffective, as the malicious code is contained within the isolated environment.

3. Mitigation of Web-Based Threats: Remote browser isolation effectively mitigates web-based threats by preventing the execution of potentially malicious code on the user’s device. Whether it’s malware, ransomware, or drive-by downloads, all potentially harmful elements are executed within the isolated environment, ensuring the user’s device remains unharmed. This approach significantly reduces the attack surface and minimizes the potential impact of web-based threats.

4. Compatibility and Ease of Use: One key advantage of remote browser isolation is its compatibility with various platforms and devices. Users can access isolated browsing sessions from any device, including desktops, laptops, and mobile devices, without compromising security. This flexibility ensures a seamless user experience while maintaining high security.

Example Technology: Browser Caching

Understanding Browser Caching

Browser caching is a mechanism that allows web browsers to store static files locally, such as images, CSS files, and JavaScript scripts. When a user revisits a website, the browser can retrieve these cached files from the local storage instead of making a new request to the server. This significantly reduces page load time and minimizes bandwidth usage.

Nginx, a popular web server, provides the headers module (ngx_http_headers_module), which enables fine-grained control over HTTP response headers. Using its expires and add_header directives, we can easily configure browser caching and control cache expiration for different types of files.

Implementing Browser Caching

To start leveraging browser caching with Nginx, we need to modify the server configuration. First, we define the types of files we want to cache, such as images, CSS, and JavaScript. Then, we set the desired expiration time for each file type, specifying how long the browser should keep the cached versions before checking for updates.
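
As a rough illustration, a server block along these lines sets per-file-type cache lifetimes with the expires and add_header directives (the paths and expiry values are assumptions for the example, not recommendations):

```nginx
# Sketch of browser-cache headers in Nginx (ngx_http_headers_module).
server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    # Static assets that change rarely: cache for 30 days.
    location ~* \.(png|jpg|jpeg|gif|ico|svg|woff2?)$ {
        expires 30d;
        add_header Cache-Control "public";
    }

    # CSS and JavaScript: shorter lifetime, revalidate more often.
    location ~* \.(css|js)$ {
        expires 7d;
        add_header Cache-Control "public";
    }

    # HTML: always revalidate so users pick up new asset references.
    location ~* \.html$ {
        add_header Cache-Control "no-cache";
    }
}
```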

While setting a fixed expiration time for cached files is a good start, it’s important to fine-tune our cache expiration strategies based on file update frequency. For static files that rarely change, we can set longer expiration times. However, for dynamic files that are updated frequently, we should use techniques like cache busting or versioning to ensure users always receive the latest versions.
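
Cache busting itself usually happens at build time rather than in the web server. Here is a small, hypothetical Python sketch of the idea: fingerprint each asset with a content hash so it can be cached for a long time, and a new filename is issued whenever the content changes (the file names are illustrative only):

```python
# Build-step sketch of cache busting by content hash.
import hashlib
import shutil
from pathlib import Path

def fingerprint(asset: Path, out_dir: Path) -> Path:
    """Copy `asset` to `out_dir` as name.<hash8>.ext and return the new path."""
    digest = hashlib.sha256(asset.read_bytes()).hexdigest()[:8]
    out_dir.mkdir(parents=True, exist_ok=True)
    busted = out_dir / f"{asset.stem}.{digest}{asset.suffix}"
    shutil.copyfile(asset, busted)
    return busted

if __name__ == "__main__":
    # e.g. app.css -> dist/app.3f2a9c1d.css; templates then reference the hashed name,
    # so browsers only fetch a new copy when the content actually changes.
    print(fingerprint(Path("app.css"), Path("dist")))
```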

Implementing Remote Browser Isolation:

– Implementing remote browser isolation typically involves deploying a virtualized browsing environment that runs on a server or in the cloud. When a user initiates a web session, the web content is rendered within this isolated environment and securely transmitted to the user’s device as a visual stream, ensuring no potentially harmful code reaches the endpoint.

– Various approaches to implementing remote browser isolation exist, ranging from on-premises solutions to cloud-based services. Organizations can choose the option that best suits their requirements, considering scalability, ease of management, and integration with existing security infrastructure.

The Rise of Threats:

– The majority of attacks originate externally. Why? The Internet can be a dirty place because we can’t control what we don’t know. Browsing the Internet and clicking on uniform resource identifier (URL) links exposes the enterprise to the risk of compromise.

– These concerns matter to anyone who needs to use the internet regularly and wants a safe online browsing experience. Cybersecurity is an increasingly vital consideration when using the internet, with rising cyber-attacks driving the need for Remote Browser Isolation (RBI).

Before you proceed, you may find the following posts helpful:

  1. Cisco Umbrella CASB
  2. Ericom Browser Isolation
  3. DDoS Attacks
  4. IPv6 Attacks

The Challenging Landscape

It is estimated that, when the exploits used in cyber attacks are broken down by the type of application attacked, over 40% relate to browser attacks, with Android next in line at 27% of the attack surface. As a result, we need to provide more security around Internet browsing.

Most compromises involve web-based attacks and standard plugins supported in the browser, such as Adobe’s. Attacks will always happen, but your ability to deal with them is key. If you use the Internet daily, check the security of your proxy server.

Browser Attacks:

Attacking through the browser is too easy, and the targets are too rich. Once an attacker has penetrated the web browser, they can move laterally throughout the network, targeting high-value assets such as a database server. Data exfiltration is effortless these days.

Attackers use social media accounts such as Twitter, and even the domain name system (DNS), which firewalls commonly do not inspect, as file transfer mechanisms. We need to apply the zero trust network design’s default-deny posture to web browsing. This is known as Remote Browser Isolation.

Remote Browser Isolation: Zero Trust

Neil MacDonald, an analyst from Gartner, is driving the evolution of Remote Browser Isolation, a capability needed to complete the zero-trust model. The zero-trust model already consists of micro-segmentation vendors that can be SDN-based, network-based appliances (physical or virtual), microservices-based, host-based, container-centric, IaaS built-in segmentation, or API-based. There are also a variety of software-defined perimeter vendors in the zero-trust movement.

So, what is Remote Browser Isolation (RBI)? Remote Browser Isolation starts with a default-deny posture, contains the impact of a compromise, and reduces the attack surface; because sessions are restored to a known good state after each use, it is like having a dynamic segment of one for surfing the Internet. Remote browser offerings are a subset of browser isolation technologies that remove the browser process from the end user’s desktop.

You can host a browser on a terminal server and use the on-device browser to connect to that remote browser, increasing the security posture. With HTML5-based connectivity, the rendering is done in the remote browser.

RBI – Sample Solution

Some vendors are coming out with a Linux-based, proxy-based solution. The proxy acts as an internet gateway, a middleman for internet interactions. When you browse to a site that is not whitelisted, but has not been blacklisted either, you are routed to the remote system.

Real-time Rendering

In the proxy-based system, you could have a small Linux-based solution in the demilitarized zone (DMZ) or in the cloud. A container, hardened with Docker container security best practices, does the browsing for you. It renders the information in real time and sends it back to the user as images over HTML5. For example, if you go to a whitelisted customer relationship management (CRM) system right now, you will go directly to that system.

Best Browsing Experience

But when you go to a website that hasn’t been defined, the system opens a small, dedicated container that delivers the browsing experience, and you won’t know the difference. As a result, you get a near-perfect browsing experience without any active code running on your desktop while browsing.
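
A hedged sketch of that routing decision, in Python, is shown below. The whitelist, the “isolated-browser” image name, and the use of the Docker SDK are all illustrative assumptions; real proxy-based RBI products implement this logic inside their own gateway:

```python
# Sketch only: whitelisted sites go direct, unclassified sites are browsed
# inside a short-lived, disposable container.
from urllib.parse import urlparse

import docker  # pip install docker; used only to illustrate the disposable-container idea

WHITELIST = {"crm.example.com", "mail.example.com"}  # trusted sites reached directly


def route(url: str) -> str:
    """Whitelisted hosts go direct; anything unclassified is browsed remotely."""
    host = urlparse(url).hostname or ""
    return "DIRECT" if host in WHITELIST else "ISOLATE"


def browse_isolated(url: str) -> None:
    """Launch a throwaway browsing container (hypothetical 'isolated-browser' image).
    auto_remove ensures nothing from the session is left behind once it ends."""
    client = docker.from_env()
    client.containers.run(
        "isolated-browser",               # hypothetical RBI browser image
        environment={"TARGET_URL": url},  # the remote browser renders this URL
        auto_remove=True,
        detach=True,
    )


if __name__ == "__main__":
    for url in ("https://crm.example.com/login", "https://unknown-site.test/article"):
        decision = route(url)
        print(decision, url)
        if decision == "ISOLATE":
            browse_isolated(url)
```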

Separating Browsing Activities

Remote browser isolation has emerged as a powerful solution in the fight against web-based cyber threats. By separating browsing activities from the user’s local device, it provides enhanced security, protects against phishing attacks, mitigates web-based threats, and offers compatibility across different platforms and devices.

As the digital landscape continues to evolve, remote browser isolation is set to play a crucial role in safeguarding individuals and organizations from the ever-present dangers of the web.

Summary: What is Remote Browser Isolation

In today’s digital landscape, where online threats are becoming increasingly sophisticated, ensuring secure browsing experiences is paramount. Remote Browser Isolation (RBI) emerges as an innovative solution to tackle these challenges head-on. In this blog post, we delved into the world of RBI, its key benefits, implementation, and its role in enhancing cybersecurity.

Understanding Remote Browser Isolation

Remote Browser Isolation, also known as Web Isolation, is an advanced security technique that separates web browsing activities from the local device and moves them to a remote environment. By executing web code and rendering web content outside the user’s device, RBI effectively prevents malicious code and potential threats from reaching the user’s endpoint.

The Benefits of Remote Browser Isolation

Enhanced Security: RBI is a robust defense mechanism against web-based attacks such as malware, ransomware, and phishing. By isolating potentially harmful content away from the local device, it minimizes the risk of compromise and protects sensitive data.

Improved Productivity: With RBI, employees can access and interact with web content without worrying about inadvertently clicking on malicious links or compromising their devices. This freedom increases productivity and empowers users to navigate the web without fear.

Compatibility and User Experience: One of the notable advantages of RBI is its seamless compatibility with various devices and operating systems. Regardless of the user’s device specifications, RBI ensures a consistent and secure browsing experience without additional software installations or updates.

Implementing Remote Browser Isolation

Cloud-Based RBI Solutions: Many organizations opt for cloud-based RBI solutions, where web browsing activities are redirected to remote virtual machines. This approach offers scalability, ease of management, and reduced hardware dependencies.

On-Premises RBI Solutions: Some organizations prefer deploying RBI on their own infrastructure, which provides them with greater control over security policies and data governance. On-premises RBI solutions offer enhanced customization options and tighter integration with existing security systems.

Remote Browser Isolation in Action

Secure Web Access: RBI enables users to access potentially risky websites and applications in a safe and controlled environment. This proves particularly useful for industries like finance, healthcare, and government, where sensitive data protection is paramount.

Phishing Prevention: By isolating web content, RBI effectively neutralizes phishing attempts. The isolation prevents potential damage or data loss, even if a user unintentionally interacts with a fraudulent website or email link.

Conclusion:

Remote Browser Isolation stands at the forefront of modern cybersecurity strategies, offering a proactive and practical approach to protect users and organizations from web-based threats. RBI provides enhanced security, improved productivity, and seamless compatibility by isolating web browsing activities. Whether deployed through cloud-based solutions or on-premises implementations, RBI is a powerful tool for safeguarding digital experiences in an ever-evolving threat landscape.

Matt Conran | Network World

Hello, I have created a Network World column and will be releasing a few blogs per month. Kindly visit the following link to view my full profile and recent blogs – Matt Conran Network World.

The list of individual blogs can be found here:

“In this day and age, demands on networks are coming from a variety of sources, internal end-users, external customers and via changes in the application architecture. Such demands put pressure on traditional architectures.

To deal effectively with these demands requires the network domain to become more dynamic. For this, we must embrace digital transformation. However, current methods are delaying this much-needed transition. One major pain point that networks suffer from is the reliance on manual working and the lack of fabric-wide automation. This must be addressed if organizations are to implement new products and services ahead of the competition.

So, to evolve, to be in line with the current times and use technology as an effective tool, one must drive the entire organization to become a digital enterprise. The network components do play a key role, but the digital transformation process is an enterprise-wide initiative.”

“There’s a buzz in the industry about a new type of product that promises to change the way we secure and network our organizations. It is called the Secure Access Service Edge (SASE). It was first mentioned by Gartner, Inc. in its hype cycle for networking. Since then Barracuda highlighted SASE in a recent PR update and Zscaler also discussed it in their earnings call. Most recently, Cato Networks announced that it was mentioned by Gartner as a “sample vendor” in the hype cycle.

Today, enterprises have upgraded their portfolios and, as a consequence, the network also needs to be enhanced. What we are witnessing is cloud, mobility, and edge, which has resulted in increased pressure on the legacy network and security architecture. Enterprises are transitioning all users, applications, and data located on-premise, to a heavy reliance on the cloud, edge applications, and a dispersed mobile workforce.”

“Microsoft has introduced a new virtual WAN as a competitive differentiator and is gaining enough traction that AWS and Google may follow. At present, Microsoft is the only company to offer a virtual WAN of this kind. This made me curious to discover the highs and lows of this technology. So I sat down with Sorell Slaymaker, Principal Consulting Analyst at TechVision Research to discuss. The following is a summary of our discussion.

But before we proceed, let’s gain some understanding of the cloud connectivity.

Cloud connectivity has evolved over time. When the cloud was introduced about a decade ago, let’s say, if you were an enterprise, you would connect to what’s known as a cloud service provider (CSP). However, over the last 10 years, many providers like Equinix have started to offer carrier-neutral colocations. Now, there is the opportunity to meet a variety of cloud companies in a carrier-neutral colocation. On the other hand, there are certain limitations to cloud connectivity as well.”

“Actions speak louder than words. Reliable actions build lasting trust in contrast to unreliable words. Imagine that you had a house with a guarded wall. You would feel safe in the house, correct? Now, what if that wall is dismantled? You might start to feel your security is under threat. Anyone could have easy access to your house.

In the same way, with traditional security products: it is as if anyone is allowed to leave their house, knock at your door and pick your locks. Wouldn’t it be more secure if only certain individuals whom you fully trust can even see your house? This is the essence of zero-trust networking and is a core concept discussed in my recent course on SDP (software-defined perimeter).

Within a zero-trust environment, there is no implicit trust. Thus, trust must be sourced from somewhere else in order to gain access to protected resources. It is only after sufficient trust has been established and the necessary controls are passed, that the access is granted, providing a path to the requested resource. The access path to the resource is designed differently, depending on whether it’s a client or service-initiated software-defined perimeter solution.”

“Networking has gone through various transformations over the last decade. In essence, the network has become complex and hard to manage using traditional mechanisms. Now there is a significant need to design and integrate devices from multiple vendors and employ new technologies, such as virtualization and cloud services.

Therefore, every network is a unique snowflake. You will never come across two identical networks. The products offered by the vendors act as the building blocks for engineers to design solutions that work for them. If we all had a simple and predictable network, this would not be a problem. But there are no global references to follow and designs vary from organization to organization. These lead to network variation even while offering similar services.

It is estimated that over 60% of users consider their I.T environment to be more complex than it was 2 years ago. We can only assume that network complexity is going to increase in the future.”

“We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from the management’s perspective, is now redundant. Today, the users and data are omnipresent.

The customers’ expectations have surged because of this evolution. There is now an increased expectation of high-quality service and a decrease in customers’ patience. In the past, one could patiently wait 10 hours to download the content. But this is certainly not the scenario at the present time. Nowadays we have high expectations and high-performance requirements but on the other hand, there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, buffer bloat and a list of other performance-related problems that I wrote about on Network Insight. [Disclaimer: the author is employed by Network Insight.]

Also, the internet is growing at an accelerated rate. By the year 2020, the internet is expected to reach 1.5 Gigabytes of traffic per day per person. In the coming times, the world of the Internet of Things (IoT) driven by objects will far supersede these data figures as well. For example, a connected airplane will generate around 5 Terabytes of data per day. This spiraling level of volume requires a new approach to data management and forces us to re-think how we deliver applications.”

“Deploying zero trust software-defined perimeter (SDP) architecture is not about completely replacing virtual private network (VPN) technologies and firewalls. By and large, the firewall demarcation points that mark the inside and outside are not going away anytime soon. The VPN concentrator will also have its position for the foreseeable future.

A rip and replace is a very aggressive deployment approach regardless of the age of technology. And while SDP is new, it should be approached with care when choosing the correct vendor. An SDP adoption should be a slow migration process as opposed to a one-off rip and replace.

As I wrote about on Network Insight [Disclaimer: the author is employed by Network Insight], while SDP is a disruptive technology, after discussing with numerous SDP vendors, I have discovered that the current SDP landscape tends to be based on specific use cases and projects, as opposed to a technology that has to be implemented globally. To start with, you should be able to implement SDP for only certain user segments.”

“Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this is still the foundation for most networking professionals even though a lot has changed since the inception of the design.

More often than not the fixed perimeter consists of a number of network and security appliances, thereby creating a service chained stack, resulting in appliance sprawl. Typically, the appliances that a user may need to pass to get to the internal LAN may vary. But generally, the stack would consist of global load balancers, external firewall, DDoS appliance, VPN concentrator, internal firewall and eventually LAN segments.

The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted to passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it’s just a matter of time. Someone with enough skill will eventually get through.”

“In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud-presence across on-premise data centers and remote site locations.

The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.

A WAN transformation project requires this diversity to be taken into consideration when choosing the best SD-WAN vendor to satisfy both; networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity, there are many more related aspects.”

“As the cloud service providers and search engines started with the structuring process of their business, they quickly ran into the problems of managing the networking equipment. Ultimately, after a few rounds of getting the network vendors to understand their problems, these hyperscale network operators revolted.

Primarily, what the operators were looking for was a level of control in managing their network which the network vendors couldn’t offer. The revolution burned the path that introduced open networking, and network disaggregation to the work of networking. Let us first learn about disaggregation followed by open networking.”

“I recently shared my thoughts about the role of open source in networking. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.

The first wave signifies that networking is moving to software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially evident in the SD-WAN market rush.

Seemingly, we are beginning to see less investment in hardware unless there is a specific segment that needs to be resolved. But generally, software-based platforms are preferred as they bring many advantages. It is evident that there has been a technology shift. We have moved networking from hardware to software and this shift has positive effects for users, enterprises and service providers.”

“BGP (Border Gateway Protocol) is considered the glue of the internet. If we view through the lens of farsightedness, however, there’s a question that still remains unanswered for the future. Will BGP have the ability to route on the best path versus the shortest path?

There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as, sending out pings to monitor the network and then modifying the BGP attributes, such as the AS prepending to make BGP do the performance-based routing (PBR). However, this falls short in a number of ways.

The problem with BGP is that it’s not capacity or performance aware and therefore its decisions can sink the application’s performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the network’s performance.”

“The transformation to the digital age has introduced significant changes to the cloud and data center environments. This has compelled the organizations to innovate more quickly than ever before. This, however, brings with it both – the advantages and disadvantages.

The network and security need to keep up with this rapid pace of change. If you cannot match the speed of the digital age, then ultimately bad actors will become a hazard. Therefore, the organizations must move to a zero-trust environment: default deny, with least privilege access. In today’s evolving digital world this is the primary key to success.

Ideally, a comprehensive solution must provide protection across all platforms including legacy servers, VMs, services in public clouds, on-premise, off-premise, hosted, managed or self-managed. We are going to stay hybrid for a long time, therefore we need to equip our architecture with zero-trust.”

“With the introduction of cloud, BYOD, IoT, and virtual offices scattered around the globe, the traditional architectures not only hold us back in terms of productivity but also create security flaws that leave gaps for compromise.

The network and security architectures that are commonly deployed today are not fit for today’s digital world. They were designed for another time, a time of the past. This could sound daunting…and it indeed is.”

“The Internet was designed to connect things easily, but a lot has changed since its inception. Users now expect the internet to find the “what” (i.e., the content), but the current communication model is still focused on the “where.”

The Internet has evolved to be dominated by content distribution and retrieval. As a matter of fact, networking protocols still focus on the connection between hosts that surfaces many challenges.

The most obvious solution is to replace the “where” with the “what” and this is what Named Data Networking (NDN) proposes. NDN uses named content as opposed to host identifiers as its abstraction.”

“Today, connectivity to the Internet is easy; you simply get an Ethernet driver and hook up the TCP/IP protocol stack. Then dissimilar network types in remote locations can communicate with each other. However, before the introduction of the TCP/IP model, networks were manually connected but with the TCP/IP stack, the networks can connect themselves up, nice and easy. This eventually caused the Internet to explode, followed by the World Wide Web.

So far, TCP/IP has been a great success. It’s good at moving data and is both robust and scalable. It enables any node to talk to any other node by using a point-to-point communication channel with IP addresses as identifiers for the source and destination. Ideally, a network ships the data bits. You can either name the locations to ship the bits to or name the bits themselves. Today’s TCP/IP protocol architecture picked the first option. Let’s discuss the second option later in the article.

It essentially follows the communication model used by the circuit-switched telephone networks. We migrated from phone numbers to IP addresses and replaced circuit-switching with packet-switching and datagram delivery. But the point-to-point, location-based model stayed the same. This made sense during the old times, but not in today’s times as the view of the world has changed considerably. Computing and communication technologies have advanced rapidly.”

“Technology is always evolving. However, in recent times, two significant changes have emerged in the world of networking. Firstly, networking is moving to software that can run on commodity off-the-shelf hardware. Secondly, we are witnessing the introduction and use of many open source technologies, removing the barrier of entry for new product innovation and rapid market access.

Networking is the last bastion within IT to adopt open source. Consequently, this has badly hit the networking industry in terms of the slow speed of innovation and high costs. Every other element of IT has seen radical technology and cost model changes over the past 10 years. However, IP networking has not changed much since the mid-’90s.

When I became aware of these trends, I decided to sit with Sorell Slaymaker to analyze the evolution and determine how it will inspire the market in the coming years.”

“Ideally, meeting the business objectives of speed, agility, and cost containment boils down to two architectural approaches: the legacy telco versus the cloud-based provider.

Today, the wide area network (WAN) is a vital enterprise resource. Its uptime, often targeting availability of 99.999%, is essential to maintain the productivity of employees and partners and also for maintaining the business’s competitive edge.

Historically, enterprises had two options for WAN management models — do it yourself (DIY) and a managed network service (MNS). Under the DIY model, the IT networking and security teams build the WAN by integrating multiple components including MPLS service providers, internet service providers (ISPs), edge routers, WAN optimizer, and firewalls.

The teams are responsible for keeping that infrastructure current and optimized. They configure and adjust the network for changes, troubleshoot outages and ensure that the network is secure. Since this is not a trivial task, many organizations have switched to an MNS. The enterprises outsource the buildout, configuration and on-going management often to a regional telco.”

“To undergo the transition from legacy to cloud-native application environments you need to employ zero trust.

Enterprises operating in the traditional monolithic environment may have strict organizational structures. As a result, the requirement for security may restrain them from transitioning to a hybrid or cloud-native application deployment model.

In spite of the obvious difficulties, the majority of enterprises want to take advantage of cloud-native capabilities. Today, most entities are considering or evaluating cloud-native to enhance their customer’s experience. In some cases, it is the ability to draw richer customer market analytics or to provide operational excellence.

Cloud-native is a key strategic agenda that allows customers to take advantage of many new capabilities and frameworks. It enables organizations to build and evolve going forward to gain an edge over their competitors.”

“Domain name system (DNS) over transport layer security (TLS) adds an extra layer of encryption, but in what way does it impact your IP network traffic? The additional layer of encryption indicates controlling what’s happening over the network is likely to become challenging.

Most noticeably it will prevent ISPs and enterprises from monitoring the user’s site activity and will also have negative implications for both; the wide area network (WAN) optimization and SD-WAN vendors.

During a recent call with Sorell Slaymaker, we rolled back in time and discussed how we got here, to a world that will soon be fully encrypted. We started with SSL1.0, which was the original version of HTTPS as opposed to the non-secure HTTP. As an aftermath of evolution, it had many security vulnerabilities. Consequently, we then evolved from SSL 1.1 to TLS 1.2.”

“Delivering global SD-WAN is very different from delivering local networks. Local networks offer complete control to the end-to-end design, enabling low-latency and predictable connections. There might still be blackouts and brownouts but you’re in control and can troubleshoot accordingly with appropriate visibility.

With global SD-WANs, though, managing the middle-mile/backbone performance and managing the last-mile are, well shall we say, more challenging. Most SD-WAN vendors don’t have control over these two segments, which affects application performance and service agility.

In particular, an issue that SD-WAN appliance vendors often overlook is the management of the last-mile. With multiprotocol label switching (MPLS), the provider assumes the responsibility, but this is no longer the case with SD-WAN. Getting the last-mile right is challenging for many global SD-WANs.”

“Today’s threat landscape consists of skilled, organized and well-funded bad actors. They have many goals including exfiltrating sensitive data for political or economic motives. To combat these multiple threats, the cybersecurity market is required to expand at an even greater rate.

The IT leaders must evolve their security framework if they want to stay ahead of the cyber threats. The evolution in security we are witnessing has a tilt towards the Zero-Trust model and the software-defined perimeter (SDP), also called a “Black Cloud”. The principle of its design is based on the need-to-know model.

The Zero-Trust model says that anyone attempting to access a resource must be authenticated and be authorized first. Users cannot connect to anything since unauthorized resources are invisible, left in the dark. For additional protection, the Zero-Trust model can be combined with machine learning (ML) to discover the risky user behavior. Besides, it can be applied for conditional access.”

“There are three types of applications; applications that manage the business, applications that run the business and miscellaneous apps.

A security breach or performance related issue for an application that runs the business would undoubtedly impact the top-line revenue. For example, an issue in a hotel booking system would directly affect the top-line revenue as opposed to an outage in Office 365.

It is a general assumption that cloud deployments would suffer from business-impacting performance issues due to the network. The objective is to have applications within 25ms (one-way) of the users who use them. However, too many network architectures backhaul the traffic to traverse from a private to the public internetwork.”

“Back in the early 2000s, I was the sole network engineer at a startup. In the mornings, my role included managing four floors and 22 European locations packed with different vendors and servers between three companies. In the evenings, I administered the largest enterprise streaming network in Europe with a group of highly skilled staff.

Since we were an early startup, combined roles were the norm. I’m sure that most of you who joined as young engineers in such situations could understand how I felt back then. However, it was a good experience, so I battled through it. To keep my evenings stress-free and without any IT calls, I had to design in as much high-availability (HA) as I possibly could. After all, all the interesting technological learning was in the second part of my day working with content delivery mechanisms and complex routing. All of which came back to me when I read a recent post on Cato Networks’ self-healing SD-WAN for global enterprise networks.

Cato is enriching the self-healing capabilities of Cato Cloud. Rather than the enterprise having the skill and knowledge to think about every type of failure in an HA design, the Cato Cloud now heals itself end-to-end, ensuring service continuity.”

While computing, storage, and programming have dramatically changed and become simpler and cheaper over the last 20 years, IP networking has not. IP networking is still stuck in the era of the mid-1990s.

Realistically, when I look at ways to upgrade or improve a network, the approach falls into two separate buckets. One is the tactical move and the other is strategic. For example, when I look at IPv6, I see this as a tactical move. There aren’t many business value-adds.

In fact, there are opposites such as additional overheads and minimal internetworking QoS between IPv4 & v6 with zero application awareness and still a lack of security. Here, I do not intend to say that one should not upgrade to IPv6; it does give you more IP addresses (if you need them) and better multicast capabilities, but it’s a tactical move.

It was about 20 years ago when I plugged my first Ethernet cable into a switch. It was for our new chief executive officer. Little did she know that she was about to share her traffic with most others on the first floor. At that time, as a network engineer, I had five floors to look after.

Having a few virtual LANs (VLANs) per floor was a common design practice in those traditional days. Essentially, a couple of broadcast domains per floor were deemed OK. With the VLAN-based approach, we used to give access to different people on the same subnet. Even though people worked at different levels, if they were in the same subnet, they were all treated the same.

The web application firewall (WAF) issue didn’t seem to me like a big deal until I actually started to dig deeper into the ongoing discussion in this field. It generally seems that vendors are trying to convince customers and themselves that everything is going smoothly and that there is not a problem. In reality, however, customers don’t buy it anymore, and the WAF industry is under major pressure as it constantly fails from the customer quality perspective.

There have also been red flags raised from the use of the runtime application self-protection (RASP) technology. There is now a trend to embed the mitigation/defense side into the application and compile it within the code. It is considered that runtime application self-protection is a shortcut to securing software that is also compounded by performance problems. It seems to be a desperate solution to replace the WAFs, as no one really likes to mix a “security appliance” into the application code, which is exactly what the RASP vendors are currently offering to their customers. However, some vendors are adopting the RASP technology.

“John Kindervag, a former analyst from Forrester Research, was the first to introduce the Zero-Trust model back in 2010. The focus then was more on the application layer. However, once I heard that Sorell Slaymaker from Techvision Research was pushing the topic at the network level, I couldn’t resist giving him a call to discuss the generals on Zero Trust Networking (ZTN). During the conversation, he shone a light on numerous known and unknown facts about Zero Trust Networking that could prove useful to anyone.

The traditional world of networking started with static domains. The classical network model divided clients and users into two groups – trusted and untrusted. The trusted are those inside the internal network, the untrusted are external to the network, which could be either mobile users or partner networks. To recast the untrusted to become trusted, one would typically use a virtual private network (VPN) to access the internal network.”

“Over the last few years, I have been sprawled in so many technologies that I have forgotten where my roots began in the world of the data center. Therefore, I decided to delve deeper into what’s prevalent and headed straight to Ivan Pepelnjak’s Ethernet VPN (EVPN) webinar hosted by Dinesh Dutt. I knew of the distinguished Dinesh since he was the chief scientist at Cumulus Networks, and for me, he is a leader in this field. Before reading his book on EVPN, I decided to give Dinesh a call to exchange our views about the beginning of EVPN. We talked about the practicalities and limitations of the data center. Here is an excerpt from our discussion.”

“If you still live in a world of the script-driven approach for both service provider and enterprise networks, you are going to reach limits. There is only so far you can go alone. It creates a gap that lacks modeling and database at a higher layer. Production-grade service provider and enterprise networks require a production-grade automation framework.

In today’s environment, the network infrastructure acts as the core centerpiece, providing critical connection points. Over time, the role of infrastructure has expanded substantially. In the present day, it largely influences the critical business functions for both the service provider and enterprise environments”

“At the present time, there is a remarkable trend for application modularization that splits the large hard-to-change monolith into a focused microservices cloud-native architecture. The monolith keeps much of the state in memory and replicates between the instances, which makes them hard to split and scale. Scaling up can be expensive and scaling out requires replicating the state and the entire application, rather than the parts that need to be replicated.

Microservices, in comparison, separate the logic from the state; this separation enables the application to be broken apart into a number of smaller, more manageable units, making them easier to scale. Therefore, a microservices environment consists of multiple services communicating with each other. All the communication between services is initiated and carried out with network calls, and services are exposed via application programming interfaces (APIs). Each service comes with its own purpose that serves a unique business value.”

“When I stepped into the field of networking, everything was static and security was based on perimeter-level firewalling. It was common to have two perimeter-based firewalls: internal and external to the wide area network (WAN). Such a layout was good enough in those days.

I remember the time when connected devices were corporate-owned. Everything was hard-wired and I used to define the access control policies on a port-by-port and VLAN-by-VLAN basis. There were numerous manual end-to-end policy configurations, which were not only time consuming but also error-prone.

There was a complete lack of visibility and global policy throughout the network, and every morning, I relied on the Multi Router Traffic Grapher (MRTG) to manually inspect the traffic spikes indicating variations from baselines. Once something was plugged in, it was “there for life”. Have you ever heard of the 20-year-old PC that no one knows where it is but it still replies to ping? In contrast, we now live in an entirely different world. The perimeter has dissolved, resulting in perimeter-level firewalling alone being insufficient.”

“Recently, I was reading a blog post by Ivan Pepelnjak on intent-based networking. He discusses that the definition of intent is “a usually clearly formulated or planned intention” and the word “intention” is defined as “what one intends to do or bring about.” I started to ponder over his submission that the definition is confusing as there are many variations.

To guide my understanding, I decided to delve deeper into the building blocks of intent-based networking, which led me to a variety of closed-loop automation solutions. After extensive research, my view is that closed-loop automation is a prerequisite for intent-based networking. Keeping in mind the current requirements, it’s a solution that the businesses can deploy.

Now that I have examined different vendors, I would recommend gazing from a bird’s eye view, to make sure the solution overcomes today’s business and technical challenges. The outputs should drive a future-proof solution.”

“What keeps me awake at night is the thought of artificial intelligence lying in wait in the hands of bad actors. Artificial intelligence combined with the powers of IoT-based attacks will create an environment tapped for mayhem. It is easy to write about, but it is hard for security professionals to combat. AI has more force, severity, and fatality which can change the face of a network and application in seconds.

When I think of the capabilities artificial intelligence has in the world of cybersecurity I know that unless we prepare well we will be like Bambi walking in the woods. The time is now to prepare for the unknown. Security professionals must examine the classical defense mechanisms in place to determine if they can withstand an attack based on artificial intelligence.”

“When I began my journey in 2015 with SD-WAN, the implementation requirements were different to what they are today. Initially, I deployed pilot sites for internal reachability. This was not a design flaw, but a solution requirement set by the options available to SD-WAN at that time. The initial requirement when designing SD-WAN was to replace multiprotocol label switching (MPLS) and connect the internal resources together.

Our projects gained the benefits of SD-WAN deployments. It certainly added value, but there were compelling constraints. In particular, we were limited to internal resources and users, yet our architecture consisted of remote partners and mobile workers. The real challenge for SD-WAN vendors is not solely to satisfy internal reachability. The wide area network (WAN) must support a range of different entities that require network access from multiple locations.”

“Applications have become a key driver of revenue, rather than their previous role as merely a tool to support the business process. What acts as the heart for all applications is the network providing the connection points. Due to the new, critical importance of the application layer, IT professionals are looking for ways to improve the architecture of their network.

A new era of campus network design is required, one that enforces policy-based automation from the edge of the network to public and private clouds using an intent-based paradigm.

SD-Access is an example of an intent-based network within the campus. It is broken down into three major elements:

  1. Control-Plane based on Locator/ID separation protocol (LISP),
  2. Data-Plane based on Virtual Extensible LAN (VXLAN) and
  3. Policy-Plane based on Cisco TrustSec.”

“When it comes to technology, nothing is static, everything is evolving. Either we keep inventing mechanisms that dig out new security holes, or we are forced to implement existing kludges to cover up the inadequacies in security on which our web applications depend.

The assault on the changing digital landscape with all its new requirements has created a black hole that needs attention. The shift in technology, while creating opportunities, has a bias to create security threats. Unfortunately, with the passage of time, these trends will continue to escalate, putting web application security at center stage.

Business relies on web applications. Loss of service to business-focused web applications not only affects the brand but also results in financial loss. The web application acts as the front door to valuable assets. If you don’t efficiently lock the door or at least know when it has been opened, valuable revenue-generating web applications are left compromised.”

“When I started my journey in the technology sector back in the early 2000s, the world of networking comprised simple structures. I remember configuring several standard branch sites that would connect to a central headquarters. There was only a handful of remote warriors assigned, usually just a few high-ranking officials.

As the dependence on networking increased, so did the complexity of network designs. The standard single site became dual-based with redundant connectivity to different providers, advanced failover techniques, and high-availability designs became the norm. The number of remote workers increased, and eventually, security holes began to open in my network design.

Unfortunately, the advances in network connectivity were not in conjunction with appropriate advances in security, forcing everyone back to the drawing board. Without adequate security, the network connectivity that is left to defaults is completely insecure and is unable to validate the source or secure individual packets. If you can’t trust the network, you have to somehow secure it. We secured connections over unsecured mediums, which led to the implementation of IPSec-based VPNs along with all their complex baggage.”

“Over the years, we have embraced new technologies to find improved ways to build systems.  As a result, today’s infrastructures have undergone significant evolution. To keep pace with the arrival of new technologies, legacy is often combined with the new, but they do not always mesh well. Such a fusion between ultra-modern and conventional has created drag in the overall solution, thereby, spawning tension between past and future in how things are secured.

The multi-tenant shared infrastructure of the cloud, container technologies like Docker and Kubernetes, and new architectures like microservices and serverless, while technically remarkable, increase complexity. Complexity is the number one enemy of security. Therefore, to be effectively aligned with the adoption of these technologies, a new approach to security is required that does not depend on shifting infrastructure as the control point.”

“Throughout my early years as a consultant, when asynchronous transfer mode (ATM) was the rage and multiprotocol label switching (MPLS) was still at the outset, I handled numerous roles as a network architect alongside various carriers. During that period, I experienced first-hand problems that the new technologies posed to them.

The lack of true end-to-end automation made our daily tasks run into the night. Bespoke network designs, due to the shortfall of appropriate documentation, resulted in a situation where one person knew it all. The provisioning teams never fully understood the design. The copy-and-paste implementation approach was error-prone, leaving teams blindfolded when something went wrong.

Designs were stitched together with so much variation that troubleshooting was limited to a personalized approach. That previous experience surfaced in my mind when I heard about carriers delivering SD-WAN services. I started to question whether they could have made the changes needed to provide such an agile service.”