Zero Trust Network Design
In today’s interconnected world, where data breaches and cyber threats have become commonplace, traditional perimeter defenses are no longer enough to protect sensitive information. Enter Zero Trust Network Design: a security approach that prioritizes data protection by assuming that every user and device, inside or outside the network, is a potential threat. In this blog post, we will explore the Zero Trust Network Design concept, its principles, and its benefits in securing the modern digital landscape.

Zero trust network design is a security concept that focuses on reducing the attack surface of an organization’s network. It is based on the assumption that users and systems inside a network are untrusted; therefore, all traffic is considered untrusted and must be verified before access is granted. This contrasts with traditional networks, which often rely on perimeter-based security to protect against external threats.

 

Highlights: Zero Trust Network Design

  • Never Trust, Always Verify

The core concept of zero trust network design and zero trust network segmentation is never to trust, always verify. This means that all traffic, regardless of its origin, must be verified before access is granted. This is achieved through layered security controls, including authentication, authorization, encryption, and monitoring.

Authentication is used to verify the identity of users and devices before allowing access to resources. Authorization is used to determine what resources a user or device is allowed to access. Encryption is used to protect data in transit and at rest. Monitoring is used to detect threats and suspicious activity.

  • Zero Trust Network Segmentation

Zero trust network design, including zero trust network segmentation, is becoming increasingly popular as organizations move away from perimeter-based security. By verifying all traffic rather than relying on perimeter-based security, organizations can reduce their attack surface and improve their overall security posture. With a zero-trust network segmentation approach, networks are segmented into smaller islands with specific workloads. In addition, each segment has its own ingress and egress controls to minimize the “blast radius” of unauthorized access to data.

 

For background information, you may find the following helpful:

  1. DNS Security Designs
  2. Zero Trust Access
  3. SD WAN Segmentation

 



Zero Trust Architecture

Key Zero Trust Network Design Discussion Points:


  • Zero Trust principles.

  • TCP’s weak connectivity model.

  • Develop a Zero Trust architecture.

  • Issues of the traditional perimeter.

  • The use of micro perimeters.

 

Back to basics with Zero Trust Network Design

Challenging Landscape

The drive for zero trust networking and a software-defined perimeter is again gaining momentum. Bad actors are getting increasingly sophisticated, resulting in a pervasive sense of unease with traditional networking and security methods. So why are our network infrastructure and applications open to such severe security risks? This Zero Trust tutorial will recap some of the technological weaknesses driving the path to Zero Trust network design and Zero Trust SASE.

We give devices IP addresses to connect to the Internet; the diagram below signposts the three standard pathways. None of these techniques ensures attacks will not happen; they are like preventive medicine. With today’s level of bad actor sophistication, however, we need something closer to total immunization, ensuring attacks cannot even touch your infrastructure, by implementing a zero trust security strategy and software-defined perimeter solutions.

 

Understanding Zero Trust Network Design:

Zero Trust Network Design is a security framework that aims to prevent and mitigate cyber-attacks by continuously verifying and validating every access request. Unlike the traditional perimeter-based security model, Zero Trust Network Design leverages several core principles to achieve a higher level of security:

1. Least Privilege: Users and devices are granted only the minimum level of access required to perform their specific tasks. This principle ensures that the potential damage is limited even if a user’s credentials are compromised.

2. Micro-Segmentation: Networks are divided into smaller, isolated segments, making it more challenging for an attacker to move laterally and gain unauthorized access to critical systems or data.

3. Continuous Authentication: Zero Trust Network Design emphasizes multi-factor authentication and continuous verification of user identity and device health rather than relying solely on static credentials like usernames and passwords.

4. Network Visibility: Comprehensive monitoring and logging are crucial components of Zero Trust Network Design. Organizations can detect anomalies and potential security breaches in real time by closely monitoring network traffic and inspecting all data packets.
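
To make these principles concrete, here is a minimal sketch, in Python, of a policy decision function that combines least privilege, device posture, and continuous authentication (logging for visibility is omitted). The user names, resource names, and posture fields are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_healthy: bool   # e.g., disk encryption on, antivirus running
    mfa_verified: bool
    resource: str

# Least privilege: each user is mapped only to the resources their role needs.
ROLE_GRANTS = {
    "alice": {"payroll-app"},
    "bob": {"build-server"},
}

def decide(req: AccessRequest) -> bool:
    """Policy Decision Point: every request is evaluated; none is implicitly trusted."""
    if not req.mfa_verified:      # continuous authentication, not a one-time login
        return False
    if not req.device_healthy:    # device posture is part of the decision
        return False
    return req.resource in ROLE_GRANTS.get(req.user, set())  # least privilege

print(decide(AccessRequest("alice", True, True, "payroll-app")))   # True
print(decide(AccessRequest("alice", True, True, "build-server")))  # False: not granted
```
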

Benefits of Zero Trust Network Design:

Implementing Zero Trust Network Design offers numerous benefits for organizations seeking to protect their sensitive data and mitigate cyber risks:

1. Enhanced Security: By assuming that all users and devices are untrusted, Zero Trust Network Design provides a higher level of security against both internal and external threats. It minimizes the risk of unauthorized access and helps organizations detect and respond to potential breaches more effectively.

2. Improved Compliance: Many industries are subject to strict regulatory requirements regarding protecting sensitive data. Zero Trust Network Design addresses these compliance challenges by providing granular control over access permissions and ensuring that only authorized individuals can access critical information.

3. Reduced Attack Surface: Zero Trust Network Design reduces the attack surface for potential attackers by segmenting networks and implementing strict access controls. This proactive approach makes it significantly harder for cybercriminals to move laterally within the network and gain access to sensitive data.

4. Simplified User Experience: Contrary to common misconceptions, implementing Zero Trust Network Design does not have to come at the expense of user experience. With modern identity and access management solutions, users can enjoy a seamless and secure authentication process, regardless of location or device.

 

Highlighting zero trust network segmentation

Zero trust network segmentation is a process in which a network is divided into smaller, more secure parts. This can be done using software firewalls, virtual LANs (VLANs), or other network security protocols. The purpose of zero trust network segmentation, also known as microsegmentation, is to decrease the attack surface of a network and reduce the potential damage caused by a network breach. It also allows for more granular control over user access, which can help prevent unauthorized access to sensitive data.

Microsegmentation also allows for more efficient deployment of applications and more detailed monitoring and logging of network activity. By leveraging the advantages of microsegmentation, organizations can increase their network’s security and efficiency while protecting their data and resources.
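
As a rough illustration of the idea, the sketch below models segments as small islands with explicit ingress and egress rules. The segment names and ports are hypothetical, and a real implementation would live in firewall or SDN policy, not application code:

```python
# Each segment is a small island with explicit ingress/egress rules;
# anything not listed is denied, limiting the blast radius of a breach.
SEGMENTS = {
    "web": {"ingress": {"internet:443"}, "egress": {"app:8443"}},
    "app": {"ingress": {"web:8443"},     "egress": {"db:5432"}},
    "db":  {"ingress": {"app:5432"},     "egress": set()},
}

def allowed(src: str, dst: str, port: int) -> bool:
    """Deny unless the flow appears in both the source egress and destination ingress rules."""
    egress_ok = f"{dst}:{port}" in SEGMENTS.get(src, {}).get("egress", set())
    ingress_ok = f"{src}:{port}" in SEGMENTS.get(dst, {}).get("ingress", set())
    return egress_ok and ingress_ok

print(allowed("web", "app", 8443))  # True: explicitly permitted
print(allowed("web", "db", 5432))   # False: web may not reach the database directly
```
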

 

Zero Trust: Changing the Approach to Security

Zero Trust is about fundamentally transforming the underlying philosophy and approach to enterprise security—shifting from outdated and demonstrably ineffective perimeter-centric methods to a dynamic, identity-centric, and policy-based system. Policies are at the heart of Zero Trust—after all, its primary architectural components are Policy Decision Points and Policy Enforcement Points. In our Zero Trust world, policies are the structures organizations create to define which identities are permitted access to resources under which circumstances.

 

 

Diagram: Define Zero Trust: the standard three pathways.

 

Introduction to Zero Trust Network Design

The Zero Trust model and the software-defined perimeter (SDP) describe a connection-based security architecture designed to stop attacks. It doesn’t expose the infrastructure and its applications. Instead, it ensures you know the authorized users by authenticating, authorizing, and validating the devices they are on before connecting them to protected resources.

A Zero Trust architecture allows you to operate while vulnerabilities, patches, and configurations are in progress. Essentially, it cloaks applications or groups of applications so they are invisible to attack.

Diagram: Zero Trust Network Design principles. Source: Cimcor.

 

Zero Trust principles

Zero Trust Networking (ZTN) and SDP are a security philosophy and a set of Zero Trust principles that, taken together, represent a significant shift in how security should be approached. The foundational security elements used before Zero Trust often achieved only coarse-grained separation of users, networks, and applications.

Zero Trust, on the other hand, enhances this, effectively requiring that all identities and resources be segmented from one another. Zero Trust enables fine-grained, identity- and context-sensitive access controls driven by an automated platform. Although Zero Trust started as a narrowly focused approach of not trusting any network identity until authenticated and authorized, it has since grown into a much broader security model.

 

  • A key point: Traditional security boundaries

Traditionally, security boundaries were placed at the edge of the enterprise network in a classic “castle wall and moat” approach. However, a significant issue with this design was how we connected: traditional non-Zero Trust security solutions have been unable to bridge the disconnect between network-level and application-level security. Users (and their devices) obtained broad access to networks, while applications relied upon authentication-only access control.

 

Issue 1 – We Connect First and Then Authenticate

Connect first, authenticate second.

TCP/IP is a fundamentally open network protocol facilitating easy connectivity and reliable communications between distributed computing nodes. It has served us well in terms of enabling our hyper-connected world but—for various reasons—doesn’t include security as part of its core capabilities.

 

TCP has a weak security foundation

Transmission Control Protocol (TCP) has been around for decades and has a weak security foundation. When it was created, security was out of scope. TCP can detect errors and retransmit packets, but by default, communication packets are not encrypted, which poses security risks. In addition, TCP operates with a Connect First, Authenticate Second model, which is inherently insecure and leaves the two connecting parties wide open to attack. When clients want to communicate and access an application, they first set up a connection.

Only once the connect stage has completed successfully can the authentication stage occur, and only after authentication can data begin to flow.
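
The sketch below illustrates this ordering with plain Python sockets: the TCP handshake completes, and a network path exists, before a single credential has been exchanged. The hostname is a placeholder:

```python
import socket

# Stage 1: connect. The TCP handshake completes with no identity involved;
# anyone who can reach the port gets a connection before any authentication.
sock = socket.create_connection(("example.com", 80), timeout=5)
print("Connected:", sock.getpeername())

# Stage 2: only now, at the application layer, would credentials be exchanged,
# e.g., an HTTP request carrying an authentication header.
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
print(sock.recv(120))
sock.close()
```
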

Diagram: Zero Trust security. The TCP model of connectivity.

 

From a security perspective, the most important thing to understand is that this connection occurs purely at a network layer with no identity, authentication, or authorization. The beauty of this model is that it enables anyone with a browser to easily connect to any public web server without requiring any upfront registration or permission. This is a perfect approach for a public web server but a lousy approach for a private application.

 

The potential for malicious activity

With this process of Connect First and Authenticate Second, we are essentially opening up the door of the network and the application without knowing who is on the other side. Unfortunately, with this model, we have no idea who the client is until they have carried out the connect phase, and once they have connected, they are already in the network. Maybe the requesting client is not trustworthy and has bad intentions. If so, once they connect, they can carry out malicious activity and potentially perform data exfiltration. 

 

Developing a Zero Trust Architecture

A zero-trust architecture requires endpoints to authenticate and be authorized before obtaining network access to protected servers. Then, real-time encrypted connections are created between requesting systems and application infrastructure. With a zero-trust architecture, we must establish trust between the client and the application before the client can set up the connection. Zero Trust is all about trust – never trust, always verify.

Trust is bi-directional: between the client and the Zero Trust architecture (which can take several forms), and between the application and the Zero Trust architecture. It’s not a one-time check; it’s a continuous mode of operation. Once sufficient trust has been established, we move into the next stage, authentication. Only after authentication succeeds do we connect the user to the application. Zero Trust access events flip the entire security model and make it more robust.

  • We have gone from connecting first, authenticating second to authenticate first, connect second.
Diagram: The Zero Trust model of connectivity.
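
Below is a minimal sketch of that inversion, assuming a hypothetical controller that issues a session token on the control plane before any data-plane connection is brokered. All names and credentials here are invented for illustration:

```python
import secrets
from typing import Optional

USER_DB = {"alice": "correct-horse"}  # hypothetical credential store
SESSIONS = {}  # token -> user, held by the Zero Trust controller

def control_plane_authenticate(user: str, password: str, device_ok: bool) -> Optional[str]:
    """Authenticate first: no network path exists until this succeeds."""
    if USER_DB.get(user) == password and device_ok:
        token = secrets.token_urlsafe(16)
        SESSIONS[token] = user
        return token
    return None

def data_plane_connect(token: Optional[str], application: str) -> str:
    """Connect second: the gateway only builds a path for a valid session token."""
    if token not in SESSIONS:
        raise PermissionError("no session: connection request dropped")
    return f"encrypted tunnel: {SESSIONS[token]} -> {application}"

token = control_plane_authenticate("alice", "correct-horse", device_ok=True)
print(data_plane_connect(token, "payroll-app"))
```
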

 

Example of a zero-trust network access

Single Packet Authorization (SPA)

The user cannot see or know where the applications are located. SDP hides the application and creates a “dark” network by using Single Packet Authorization (SPA) for the authorization.

SPA’s goal, also known as Single Packet Authentication, is to overcome the open and insecure nature of TCP/IP, which follows a “connect then authenticate” model. SPA is a lightweight security protocol that validates a device or user’s identity before permitting network access to the SDP. The purpose of SPA is to allow a service to be darkened behind a default-deny firewall.

The systems use a One-Time Password (OTP) generated by an algorithm and embed the current password in the initial network packet sent from the client to the server. The SDP specification mentions using the SPA packet after establishing a TCP connection. In contrast, the open-source implementation from the creators of SPA uses a UDP packet before the TCP connection.
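
To make the mechanism more tangible, here is a simplified, hypothetical SPA-style exchange: the client builds a single packet carrying an HMAC-based one-time password, and the gateway verifies it before opening anything. This is a sketch of the general idea, not the fwknop or SDP wire format:

```python
import hmac, hashlib, struct, time

SHARED_SECRET = b"client-device-secret"  # provisioned out of band (assumption)

def spa_packet(user_id: str) -> bytes:
    """Build a single authorization packet: user id, timestamp, HMAC-based OTP."""
    ts = int(time.time())
    msg = user_id.encode() + struct.pack("!Q", ts)
    otp = hmac.new(SHARED_SECRET, msg, hashlib.sha256).digest()
    return msg + otp

def verify(packet: bytes, window: int = 30) -> bool:
    """Gateway side: the default-deny firewall only opens after a valid SPA packet."""
    msg, otp = packet[:-32], packet[-32:]
    ts = struct.unpack("!Q", msg[-8:])[0]
    fresh = abs(time.time() - ts) <= window  # reject stale/replayed packets
    valid = hmac.compare_digest(
        hmac.new(SHARED_SECRET, msg, hashlib.sha256).digest(), otp)
    return fresh and valid

pkt = spa_packet("alice")
print(verify(pkt))  # True: the gateway may now open the port for this client only
# In practice, pkt would be sent over UDP before any TCP connection is attempted.
```
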


 

Issue 2 – Fixed perimeter approach to networking and security

Traditionally, security boundaries were placed at the edge of the enterprise network in a classic “castle wall and moat” approach. However, as technology evolved, remote workers and workloads became more common. As a result, security boundaries necessarily followed and expanded from just the corporate perimeter.

 

The traditional world of static domains

The traditional world of networking started with static domains. Networks were initially designed to create internal segments separated from the external world by a fixed perimeter. The classical network model divided clients and users into trusted and untrusted groups. The internal network was deemed trustworthy, whereas the external was considered hostile.

The perimeter approach to network and security has several zones: for example, the Internet, the DMZ, Trusted, and Privileged. In addition, public and private address spaces separate network access. Private addresses were deemed more secure than public ones because they were unreachable from the Internet. However, the assumption that everything with a private address is safe is where our problems started.
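
The flawed heuristic is easy to state in code. Here is a minimal sketch of the legacy trust decision, using Python’s standard ipaddress module:

```python
import ipaddress

def legacy_trust(ip: str) -> str:
    """The classic perimeter heuristic: private address => trusted. This is the flaw."""
    return "trusted" if ipaddress.ip_address(ip).is_private else "untrusted"

print(legacy_trust("10.20.30.40"))  # trusted   - yet it could be a compromised laptop
print(legacy_trust("203.0.113.7"))  # untrusted - yet it could be a legitimate remote user
```
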

Diagram: Zero Trust security meaning. The issues with traditional networks and security.

 

The fixed perimeter 

The digital threat landscape is concerning. Your applications and networks are getting hit by external threats from all over the world. Threats also come from within: insider threats within a user group, and insider threats across user group boundaries. These types of threats need to be addressed one by one.

One issue with the fixed perimeter approach is that it assumes trusted internal and hostile external networks. However, we must assume that the internal network is as hostile as the external one.

Over 80% of threats are from internal malware or malicious employees. The fixed perimeter approach to networking and security is still the foundation for most network and security professionals, even though a lot has changed since the inception of the design. 

Diagram: Traditional vs. zero trust network. Source: The SSL Store.

 

 

We get hacked daily!

We are now at a stage where, according to the 2022 Thales Data Threat Report, almost half (45%) of US companies suffered a data breach in the past year. The true figure could be even higher, given the potential for undetected breaches.

We are getting hacked daily, and major networks with skilled staff are crashing. Unfortunately, the perimeter approach to networking has failed to provide adequate security in today’s digital world. It works to an extent by delaying an attack. However, a bad actor will eventually penetrate your guarded walls with enough patience and skill.

If a large gate and walls guard your house, you would feel safe and think you are fully protected while inside. However large and thick the perimeter protecting your home may be, there is still a chance that someone can climb the walls, reach your front door, and enter your property. By contrast, if a bad actor cannot even see your house, they cannot take the next step and try to breach your security.

 

Issue 3 – Dissolved perimeter caused by the changing environment

The environment has changed with the introduction of the cloud, advanced BYOD, machine-to-machine connections, the rise in remote access, and phishing attacks. We have many internal devices and a variety of users, such as on-site contractors, that need to access network resources.

There is also a trend for corporate devices to move to the cloud, collocated facilities, and off-site to customer and partner locations. In addition, it is becoming more diversified with hybrid architectures.

Diagram: The Zero Trust concept.

 

These changes are causing major security problems for the fixed perimeter approach to networking and security. With the cloud, for example, the internal perimeter is stretched to the cloud, yet traditional security mechanisms are still being used for what is an entirely new paradigm. The same goes for remote workers: an abundance of remote workers now work from various devices and places.

Again, traditional security mechanisms are still being used. As our environment evolves, security tools and architectures must evolve with it. Let’s face it: the network perimeter has dissolved, as your remote users, things, services, applications, and data are everywhere. As the world moves to the cloud, mobile, and the IoT, the ability to control and secure everything in the network is no longer available.

 

Phishing attacks are on the rise.

We have witnessed an increase in phishing attacks that can result in a bad actor landing on your local area network (LAN). Phishing is a type of social engineering in which an attacker sends a fraudulent message designed to trick a person into revealing sensitive information or to deploy malicious software, such as ransomware, on the victim’s infrastructure. The term “phishing” was first used in 1994, when a group of teens worked to manually obtain credit card numbers from unsuspecting users on AOL.

Diagram: Phishing attacks. Source: Help Net Security.

 

Hackers are inventing new ways.

By 1995, they had created a program called AOHell to automate their work. Since then, hackers have continued to invent new ways to gather details from anyone connected to the internet. These actors have created several programs and types of malicious software still in use today.

Recently, I was the target of a phishing email. If you are not educated about phishing attacks, clicking and downloading the attachment is very easy. In my case, the particular file was a .wav file. It looked safe, but it was not.

 

Issue 4 – Broad-level access

So, you may have heard of broad-level access and lateral movements. With traditional network and security mechanisms, when a bad actor lands on a particular segment, i.e., a VLAN (known as zone-based networking), they can see everything on that segment. This gives them broad-level access. Generally speaking, when you are on a VLAN, you can see everything in that VLAN, and VLAN-to-VLAN communication is not the hardest thing to achieve, resulting in lateral movements.

 

The issue of lateral movements

Lateral movement is the technique attackers use to progress through the organizational network after gaining initial access. Adversaries use lateral movement to identify target assets and sensitive data for their attack. Lateral movement is a tactic in the MITRE ATT&CK framework: the set of techniques attackers use to move through the network while gaining access to credentials without being detected.

 

No intra-VLAN filtering

This is possible because, traditionally, security devices do not filter this low down in the network, i.e., inside the VLAN (intra-VLAN filtering). A phishing email can easily lead a bad actor to the LAN with broad-level access and the capability to move laterally throughout the network.

For example, a bad actor can initially access an unpatched central file-sharing server; they move laterally between segments to the web developers’ machines and use a keylogger to get the credentials to access critical information on the all-important database servers.

They can then carry out data exfiltration over DNS or even via a social media account like Twitter. Firewalls generally do not inspect DNS as a file transfer mechanism, so data exfiltration using DNS will often go unnoticed.
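
Defenders sometimes approximate DNS-exfiltration detection with simple heuristics on query names, such as label length and character entropy. A toy sketch, with made-up thresholds and domains:

```python
import math
from collections import Counter

def entropy(label: str) -> float:
    """Shannon entropy of a DNS label, in bits per character."""
    counts = Counter(label)
    return -sum(c / len(label) * math.log2(c / len(label)) for c in counts.values())

def looks_like_exfil(qname: str) -> bool:
    """Flag names with long, high-entropy first labels, typical of encoded payloads."""
    first = qname.split(".")[0]
    return len(first) > 40 or entropy(first) > 4.0

print(looks_like_exfil("www.example.com"))  # False: normal-looking query
print(looks_like_exfil("NGY2ZTAxYzQ5ZmE3MmJlZDk4NzY1NDMyMTBhYmNkZWY.exfil.example"))  # True
```
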

Diagram: Zero trust application access. One of the many security threats is lateral movement.

 

Issue 5 – The challenges with traditional firewalls

The limited world of 5-tuple

Traditional firewalls typically control access to network resources based on source IP addresses. This creates a fundamental challenge: we need to solve the user access problem, but we only have tools that control access based on IP addresses.

As a result, you have to group users, some of whom may work in different departments and roles, to access the same service with the same IP addresses. The firewall rules are also static and do not change dynamically based on levels of trust in a given device. They provide only network information.

Maybe the user moves to a riskier location, such as an Internet cafe, or their local firewall or antivirus software has been turned off by malware or even by accident. Unfortunately, a traditional firewall cannot detect this; it lives in the little world of the 5-tuple. Traditional firewalls can only express static rule sets and cannot communicate or enforce rules based on identity information.
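
The contrast can be sketched directly: a 5-tuple rule knows only addresses and ports, while an identity-centric rule is keyed on user, device posture, and context. The field names and values below are hypothetical:

```python
from dataclasses import dataclass

# A traditional rule: pure 5-tuple, no idea *who* is connecting or *why*.
five_tuple_rule = {"src": "192.0.2.10", "dst": "10.0.0.5",
                   "proto": "tcp", "sport": "any", "dport": 443}

@dataclass
class ContextRule:
    """An identity-centric rule: the subject is a user on a device, not an address."""
    user: str
    device_posture: str   # e.g., "av_running"
    location_risk: str    # e.g., "low"
    application: str

def permits(rule: ContextRule, user: str, posture: str, risk: str, app: str) -> bool:
    requested = (user, posture, risk, app)
    return (rule.user, rule.device_posture, rule.location_risk, rule.application) == requested

r = ContextRule("alice", "av_running", "low", "payroll-app")
print(permits(r, "alice", "av_running", "low", "payroll-app"))   # True
print(permits(r, "alice", "av_disabled", "low", "payroll-app"))  # False: posture changed
```
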

Diagram: The TCP 5-tuple. Source: packet-foo.

 

 

Issue 6 – A Cloud-focused environment

When examining the cloud, consider the analogy of a public parking space. Using a public cloud is like parking your car in a public lot rather than in your own garage. In a public lot, multiple tenants occupy the spaces around yours, and you don’t know what they might do to your car.

Today, we are very cloud-focused, but when moving applications to the cloud, we need to be equally security-focused. However, the cloud environment is less mature in providing the traditional security controls we use in our legacy environments.

So, when putting applications in the cloud, you shouldn’t leave security at its defaults. Why? Firstly, we operate in a shared model where a tenant on the same infrastructure could steal your encryption keys or data, and there have been plenty of cloud breaches. Cloud protection also suffers from firewalls with static rulesets, along with authentication and key-management issues.

 

Control point change

One of the biggest problems is that the perimeter has moved when you move to a cloud-based application. Servers are no longer under your control. Mobiles and tablets exacerbate the problem, as they can be located anywhere. So, trying to control the perimeter is very difficult. More importantly, firewalls only see and control network-level information, when far more context is needed.

Defining this new perimeter is precisely what ZTNA architecture and the software-defined perimeter do. When applications move to the cloud, it is the cloud consumers, not the I.T. teams within the cloud providers, who manage the firewalls.

So when moving applications to the cloud, even though cloud providers provide security tools, the cloud consumer has to integrate security to have more visibility than they have today.

Diagram: ZTNA. Zero Trust cloud security.

 

Before, we had clear network demarcation points set by a central physical firewall creating inside and outside trust zones. Anything outside was considered hostile, and anything on the inside was deemed trusted.

 

1. Connection-centric model

The Zero Trust model flips this around and considers everything untrusted. To do this, there are no longer pre-defined fixed network demarcation points. Instead, the network perimeter initially set in stone is now fluid and software-based.

Zero Trust is connection-centric, not network-centric. Each user on a specific device connected to the network gets an individualized connection to a particular service hidden by the perimeter.

Instead of having one perimeter that every user shares, SDP creates many small perimeters purposely built for users and applications, known as micro perimeters. Clients are cryptographically signed into these micro perimeters.

Diagram: Security micro perimeters.

 

2. Micro perimeters: Zero trust network segmentation

The micro perimeter is based on user and device context and can dynamically adjust to environmental changes. So, as a user moves to different locations or devices, the Zero Trust architecture can detect this and set the appropriate security controls based on the new context.

The data center is no longer the center of the universe. Instead, the user on specific devices, along with their service requests, is the new center of the universe.

Zero Trust does this by decoupling the user and device from the network, separating the control plane from the data plane. Authentication happens first, on the control plane.

Then the data plane, the client-to-application connection, transfers the data. Users therefore don’t need to be on the network to gain application access. As a result, they have least privilege and no broad-level access.

 

  • Concept: Zero trust network segmentation

The concept of zero-trust network segmentation is gaining traction in cybersecurity due to its ability to provide increased protection to an organization’s network. This method of securing networks is based on the concept of “never trust, always verify,” meaning that all traffic must be authenticated and authorized before it can access the network.

This is accomplished by segmenting the network into multiple isolated zones accessible only through specific access points, which are carefully monitored and controlled.

Network segmentation is a critical component of a zero-trust network design. By dividing the network into smaller, isolated units, it is easier to monitor and control access to the network. Additionally, segmentation makes it harder for attackers to move laterally across the network, reducing the chance of a successful attack.

Zero-trust network design segmentation is essential to any organization’s cybersecurity strategy. By utilizing segmentation, authentication, and monitoring systems, organizations can ensure their networks are secure and their data is protected.

 

Issue 7 – The IP address conundrum

Everything today relies on IP addresses for trust, but there is a problem: IP addresses lack the user knowledge needed to assign and validate a device’s trust. IP addresses provide connectivity, but they play no part in validating the trust of the endpoint or the user.

Nor should IP addresses be used as an anchor for network location, as they are today, because when a user moves from one place to another, the IP address changes.

 

Diagram: Three main network security flaws.

 

You can’t tie security to an IP address.

But what about the security policy assigned to the old IP addresses? What happens when your IPs change? Anything tied to an IP address is fragile, as the address gives us no good hook on which to hang security policy enforcement. When you examine policy, there are several facets: the user access policy touches on authorization, the network access policy touches on what to connect to, and user account policies touch on authentication.

With any of these, IP addresses provide no policy visibility. This is also a significant problem for traditional firewalling, which relies on static configurations; for example, a static rule may state that this particular source can reach this destination using this port number.

 

Security issues related to IP addresses

  1. The rule itself has no meaning. There is no indication of why it exists or under what conditions a packet should be allowed from one source to another.
  2. There is no contextual information taken into consideration. When creating a robust security posture, we must look at more than ports and IP addresses.

For a robust security posture, you need complete visibility into the network to see who, what, when, and how users connect with their devices. Unfortunately, today’s firewall is static and contains only network information.

On the other hand, Zero Trust enables a dynamic firewall, using the user and device context to open the firewall for a single secure connection. The firewall remains closed at all other times, creating a ‘black cloud’ stance regardless of whether the connections are made to the cloud or on-premises.
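
Here is a minimal sketch of that dynamic, default-closed behavior: a pinhole is opened only for an authenticated user-to-application pair, and only briefly. The names and the TTL are illustrative:

```python
import time

open_flows = {}  # (user, app) -> expiry time; the firewall is otherwise closed

def authorize_flow(user: str, app: str, authenticated: bool, ttl: int = 60) -> None:
    """Open a pinhole for one verified user/application pair, and only briefly."""
    if authenticated:
        open_flows[(user, app)] = time.time() + ttl

def packet_allowed(user: str, app: str) -> bool:
    """Default deny: traffic passes only while a matching pinhole is still open."""
    expiry = open_flows.get((user, app))
    return expiry is not None and time.time() < expiry

authorize_flow("alice", "payroll-app", authenticated=True)
print(packet_allowed("alice", "payroll-app"))    # True, for this session only
print(packet_allowed("mallory", "payroll-app"))  # False: the firewall stays closed
```
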

 

The rise of the next-generation firewall?

Next-generation firewalls are more advanced than traditional firewalls, and they use the information in layers 5 through 7 (session layer, presentation layer, and application layer) to perform additional functions. They can provide advanced features such as intrusion detection, prevention, and virtual private networks.

Today, most enterprise firewalls are “next generation” and typically include IDS/IPS, traffic analysis and malware detection for threat detection, URL filtering, and some degree of application awareness/control.

Like the NAC market segment, vendors in this area began a journey toward identity-centric security around the same time Zero Trust ideas began percolating through the industry. Today, many NGFW vendors offer Zero Trust capabilities, but many still operate within the perimeter security model.

 

Still IP-based security systems

They are still IP-based systems offering limited identity- and application-centric capabilities. In addition, NGFWs are still static firewalls: most do not employ zero trust segmentation, often mandate traditional perimeter-centric network architectures with site-to-site connections, and don’t offer flexible network segmentation capabilities. Similar to conventional firewalls, their access policy models are typically coarse-grained, providing users with broader network access than is strictly necessary.

Conclusion:

Zero Trust Network Design represents a paradigm shift in network security, recognizing that traditional perimeter defenses are no longer sufficient to protect against the evolving threat landscape. By implementing this approach, organizations can significantly enhance their security posture, minimize the risk of data breaches, and ensure compliance with regulatory requirements. As the digital landscape evolves, Zero Trust Network Design offers a robust framework for safeguarding sensitive information in an increasingly interconnected world.

 


Safe-T SDP: Why Rip and Replace Your VPN?

 

 


Although organizations realize the need to upgrade their approach to user access control, deploying existing technologies is holding back the introduction of Software Defined Perimeter (SDP). A recent Cloud Security Alliance (CSA) report on the “State of Software-Defined Perimeter” states that existing in-place security technologies are the main barrier to adopting SDP. One can understand the reluctance to leap. After all, VPNs have been a cornerstone of secure networking for over two decades.

They do provide what they promise: secure remote access. However, they have not evolved to properly secure our developing environment. The digital environment has changed considerably in recent times. The big push for cloud, BYOD, and remote workers puts pressure on existing VPN architectures. As our environment evolves, existing security tools and architectures must evolve with it, into an era of SDP VPN, and take in other zero trust features such as Remote Browser Isolation.

Undoubtedly, there is a common understanding of the benefits that the zero-trust principles of the software-defined perimeter provide over traditional VPNs. But one truth cannot be ignored: organizations want safer, less disruptive, and less costly deployment models. VPNs aren’t a solution that works for every situation, yet it is not enough to offer solutions that involve ripping out existing architectures completely, or that confine the software-defined perimeter to certain use cases. Overcoming the barrier to adopting a software-defined perimeter means finding a middle ground.

 

Before you proceed, you may find the following posts helpful:

  1. Zero Trust SASE
  2. SDP Network
  3. Safe-T approach to SDP
  4. Brownfield Network Automation

 



SDP VPN

Key Safe-T SDP Solution points:


  • The need for zero trust and software defined perimeter.

  • The different software defined perimeter solutions.

  • The challenges of the legacy VPN.

  • SDP vs VPN.

  • Safe-T SDP deployment models.

 

SDP VPN: Safe-T, providing the middle ground.

Safe-T is aware of this need for a middle ground. Therefore, in addition to the standard software-defined perimeter offering, Safe-T offers this middle-ground to help the customer on the “journey from VPN to SDP,” resulting in a safe path to SDP VPN.

Now organizations do not need to rip and replace the VPN. The software-defined perimeter and VPNs (SDP VPN) can work together, yielding a more robust security infrastructure. Network security that can bounce you between IP address locations can make it very difficult for hackers to break in. Besides, if you already have a VPN solution you are comfortable with, you can continue using it and pair it with Safe-T’s innovative software-defined perimeter approach. By adopting this new technology, you are equipped with a middle ground that improves your security posture while maintaining the advantages of your existing VPN.

Recently, Safe-T has released a new SDP solution called ZoneZero that enhances VPN security by adding SDP capabilities. Adding SDP capabilities allows exposure and access to applications and services. The access is granted only after assessing the trust, based on policies for an authorized user, location, and application. In addition, access is granted to the specific application or service rather than the network, as you would provide with a VPN.

Deploying SDP and single packet authorization on the existing VPN offers a customized and scalable zero-trust solution. It provides all the benefits of SDP while lowering the risks involved in adopting the new technology. Currently, Safe-T’s ZoneZero is the only SDP VPN solution in the market, focusing on enhancing VPN security by adding zero trust capabilities rather than replacing it.

 

The challenges of just using a traditional VPN


While VPNs have stood the test of time, today, we know that the proper security architecture is based on zero trust access. VPNs operating by themselves are unable to offer optimum security. Now, let’s examine some of the expected shortfalls.

VPNs fall short because they cannot grant access on a granular, case-by-case level. This is a significant problem that SDP addresses. Under the traditional security setup, you had to connect a user to a network to give them access to an application. For users not on the network, for example, remote workers, we needed to create a virtual network to place the user on the same network as the application.

To enable external access, organizations started to implement remote access solutions (RAS) to restrict user access and create secure connectivity. An inbound port is exposed to the public internet to provide application access. However, this open port is visible to anyone online, not just remote workers.

From a security standpoint, the idea of requiring network connectivity to access an application brings many challenges. We then moved to an initial layer of zero trust, isolating different layers of security within the network. This provided a way to quarantine the applications not meant to be seen, keeping them dark. But it also led to a sprawl of network and security devices.

For example, you could use inspection path control with a hardware stack. This let users access only what they were permitted to, based on a blacklist security approach. Security policies provided broad-level and overly permissive access, and the attack surface was too wide. The VPN also displays static configurations that carry no meaning; for example, a configuration may state that this particular source can reach this destination using this port number and policy.

However, such a configuration takes no context into account. There are just ports and IP addresses; it offers no visibility into who, what, when, and how devices connect.

More often than not, access policy models are coarse-grained, providing users with more access than required. This model does not follow the least-privilege principle. The VPN device provides only network information, and the static policy does not dynamically change based on levels of trust.

For example, the user’s antivirus software may be turned off accidentally or by malware. Or maybe you want to re-authenticate when certain user actions are performed. A static policy cannot detect such changes and reconfigure on the fly. Policies should be expressed and enforced based on identity, which considers both the user and the device.

 

SDP VPN

The new technology adoption rate can be slow initially. The primary reason could be the lack of understanding that what you have in place today is not the best for your organization in the future. Maybe now is the time to stand back and ask if this is the future that we want.

All the money and time you have spent on existing technologies cannot keep them evolving at the pace of today’s digital environment, which indicates that new capabilities need to be added. What that means differs based on an organization’s CIO and CTO roles. CTOs are passionate about embracing new technologies and investing in the future; they are always looking to take advantage of new and exciting technological opportunities. The CIO looks at things differently. Usually, the CIO wants to stay with the known and is reluctant to change, even in the face of a loss of service. Their sole aim is to keep the lights on.

This shines a torch on the need to find the middle ground, and that middle ground is to adopt a new technology with clear benefits for your organization. The technology should satisfy the CTO group while also taking every precaution and not disrupting day-to-day operations.

 

  • The push by the marketers

There is a clash between what is needed and what the market is pushing. The SDP industry encourages customers to rip and replace their VPNs to deploy Software Defined Perimeter solutions. But customers have invested in comprehensive VPNs and are reluctant to replace them.

The SDP market initially pushed for a rip-and-replace model, which would eliminate the use of traditional security tools and technologies. This should not be the default recommendation, since SDP functionality can overlap with VPNs. Although existing VPN solutions have their drawbacks, there should be an option to run SDP in parallel, offering the best of both worlds.

 

Software-defined perimeter: How does Safe-T address this?

Safe-T understands there is a need to go down the SDP VPN path, but you may be reluctant to do a full or partial VPN replacement. So let’s take your existing VPN architecture and add the SDP capability.

The solution is placed after your VPN. The existing VPN communicates with Safe-T ZoneZero, which performs the SDP functions behind your VPN device. From an end user’s perspective, they continue to use their existing VPN client. Users operate as usual: there are no behavior changes, and they can keep using their VPN client.

For example, they authenticate with the existing VPN as before. But the VPN communicates with SDP for the actual authentication process instead of communicating with, for example, the Active Directory (AD).

What do you get from this? From an end user’s perspective, their day-to-day process does not change. Also, instead of placing the users on your network as you would with a VPN, they are switched to application-based access. Even though they use a traditional VPN to connect, they are still getting the full benefits of SDP.

This is a perfect stepping stone on the path toward SDP. Significantly, it provides a solid bridge to an SDP deployment. It will lower the risk and cost of the new technology adoption with minimal infrastructure changes. It removes the pain caused by deployment.

 

The ZoneZero™ deployment models

Safe-T offers two deployment models: ZoneZero Single-Node and Dual-Node.

With the single-node deployment, a ZoneZero virtual machine is located between the external firewall/VPN and the internal firewall. All VPN traffic is routed to the ZoneZero virtual machine, which controls which traffic continues to flow into the organization.

In the dual-node deployment model, the ZoneZero virtual machine again sits between the external firewall/VPN and the internal firewall, and an access controller sits in one of the LAN segments behind the internal firewall.

In both cases, the user opens the IPSEC or SSL VPN client and enters the credentials. The credentials are then retrieved by the existing VPN device and passed over RADIUS or API to ZoneZero for authentication.
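
As a rough sketch of what the RADIUS leg of that handoff could look like from the VPN side, here is an example using the third-party pyrad library. The server address, shared secret, and credentials are placeholders, and ZoneZero’s actual integration is Safe-T’s implementation, not this code:

```python
# pip install pyrad  (a third-party RADIUS client library)
from pyrad.client import Client
from pyrad.dictionary import Dictionary
import pyrad.packet

# Placeholder values: in this model, the "RADIUS server" is the SDP component
# making the real authentication decision, not the VPN itself.
srv = Client(server="198.51.100.20", secret=b"shared-secret",
             dict=Dictionary("dictionary"))  # path to a standard RADIUS dictionary file

req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest, User_Name="alice")
req["User-Password"] = req.PwCrypt("users-vpn-password")

reply = srv.SendPacket(req)
if reply.code == pyrad.packet.AccessAccept:
    print("SDP controller accepted: grant application-level access")
else:
    print("Rejected: no network or application path is opened")
```
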

SDP is charting the course to a new network and security architecture, but a middle ground can now reduce the risks associated with the deployment. The most viable option is running the existing VPN architecture in parallel with SDP. This way, you get all the benefits of SDP with minimal disruption.

 


 

Zero Trust Access

Safe-T: A Progressive Approach to Zero Trust Access

 

 


The foundations that support our systems are built with connectivity, not security, as an essential feature. TCP connects before it authenticates. Security policy and user access based on IP lack context and allow architectures with overly permissive access. The likely result is a brittle security posture, driving the need for Zero Trust SDP and SDP VPN.

Our environment has changed considerably, leaving traditional network and security architectures vulnerable to attack. The threat landscape is unpredictable. We are getting hit by external threats from all over the world. However, the environment is not just limited to external threats. There are insider threats within a user group and insider threats across user group boundaries.

Therefore, we must find ways to decouple security from the physical network and decouple application access from the network. To do this, we must change our mindset and invert the security model toward a Zero Trust Security Strategy. Software Defined Perimeter (SDP) is an extension of Zero Trust Networking (ZTN) and represents a revolutionary development, providing an updated approach that current security architectures fail to address.

SDP is often referred to as Zero Trust Access (ZTA). Safe-T’s access control software package is called Safe-T Zero+. Safe-T offers a phased deployment model, enabling you to migrate progressively to a zero-trust network architecture while lowering the risk of technology adoption. Safe-T’s Zero+ model is flexible enough to meet today’s diverse hybrid IT requirements and satisfies the zero-trust principles used to combat today’s network security challenges.

 

Before you proceed, you may find the following posts helpful:

  1. Zero Trust SASE
  2. Zero Trust Networking
  3. SDP Network

 



Zero Trust SDP

Key Safe-T Zero Trust Strategy Discussion points:


  • Network challenges.

  • The issues with legacy VPN.

  • Introduction to Zero Trust Access.

  • Safe-T SDP solution.

  • Safe-T SDP and Zero Trust capabilities.

 

Network Challenges

  • Connect First and Then Authenticate

TCP has a weak security foundation. When clients want to communicate with and access an application, they first set up a connection. Only after the connect stage has been carried out can the authentication stage be accomplished. Unfortunately, with this model, we have no idea who the client is until they have completed the connect phase, and the requesting client may not be trustworthy.

 

  • The Network Perimeter

We began with static domains, whereby a fixed perimeter separates internal and external segments. Public IP addresses are assigned to external hosts and private addresses to internal ones. A host assigned a private IP is assumed to be more trustworthy than one with a public IP address. Therefore, trusted hosts operate internally, while untrusted ones operate externally to the perimeter. The significant factor to consider is that IP addresses lack the user knowledge needed to assign and validate trust.

Today, IT has become more diverse since it supports hybrid architectures with various user types, humans, applications, and the proliferation of connected devices. Cloud adoption has become the norm since many remote workers access the corporate network from various devices and places.

The perimeter approach no longer accurately reflects the typical topology of users and servers. It was built for a different era, when everything was inside the organization’s walls. Today, however, organizations are increasingly deploying applications in public clouds in distant geographic locations, remote from an organization’s trusted firewalls and perimeter network. This certainly stretches the network perimeter.

We have a fluid network perimeter where data and users are located everywhere. Hence, now we operate in a completely new environment. But the security policy controlling user access is built for static corporate-owned devices within the supposed trusted LAN.

 

  • Lateral Movements

A significant concern with the perimeter approach is that it assumes a trusted internal network. However, 80% of threats come from internal malware or malicious employees, and these often go undetected.

Besides, with the rise of phishing emails, an unintentional click will give a bad actor broad-level access. And once on the LAN, the bad actors can move laterally from one segment to another. They are likely to navigate undetected between or within the segments.

Eventually, the bad actor can steal the credentials and use them to capture and exfiltrate valuable assets. Even social media accounts can be targeted for data exfiltration since the firewall does not often inspect them as a file transfer mechanism.

 

  • Issues with the Virtual Private Network (VPN)

What happens with traditional VPN access is that the tunnel creates an extension between the client’s device and the application’s location. The VPN rules are static and do not dynamically change with the changing levels of trust on a given device. They provide only network information, which is a significant limitation.

Therefore, from a security standpoint, the traditional method of VPN access gives clients broad network-level access. This makes the network susceptible to undetected lateral movements. Remote users are authenticated and authorized, but once permitted onto the LAN, they have coarse-grained access. This creates a high level of risk, as undetected malware on a user’s device can spread to the internal network.

Another significant challenge is that VPNs generate administrative complexity and cannot easily handle cloud or multi-network environments. They require the installation of end-user VPN software clients, and users must know where the applications they are accessing are located. Users would have to change their VPN client software to access applications at different locations. In a nutshell, traditional VPNs are complex for administrators to manage and for users to operate.


Also, poor user experience will likely occur as you need to backhaul the user traffic to a regional data center. This adds latency and bandwidth costs.


 

Can Zero Trust Access be the Solution?

The main principle that Zero Trust Network Design follows is that nothing should be trusted, regardless of whether the connection originates inside or outside the network perimeter. Today, we have no reason to implicitly trust any user, device, or application. You cannot protect what you cannot see; equally, an attacker cannot attack what they cannot see. ZTA makes the application and the infrastructure utterly undetectable to unauthorized clients, creating an invisible network.

Preferably, application access should be based on contextual parameters, such as who the user is and where they are located, plus a judgment of the security stance of the device, followed by continuous assessment of the session. This moves us from network-centric to user-centric security, providing a connection-based approach. Security enforcement should be based on user context and include policies that matter to the business, unlike a policy based on subnets, which carries no such meaning. The authentication workflows should include context-aware data, such as device ID, geographic location, and the time and day the user requests access.

It’s not good enough to provide network access; we must provide granular application access with a dynamic segment of 1, where an application microsegment gets created for every request that comes in. Micro-segmentation creates the ability to control access by subdividing the larger network into small, secure application micro perimeters internal to the network. This abstraction layer puts a lockdown on lateral movements. In addition, zero trust access implements a policy of least privilege by enforcing controls that give users access only to the resources they need to perform their tasks.
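
Here is a compact sketch of a context-aware check gating a per-request “segment of 1”; the device IDs, geographies, and working-hours rule are all invented for illustration:

```python
from datetime import datetime

def context_score(device_id: str, geo: str, hour: int,
                  known_devices: set, allowed_geos: set) -> bool:
    """Context-aware check: device identity, location, and time all matter."""
    return device_id in known_devices and geo in allowed_geos and 8 <= hour <= 18

def open_microsegment(user: str, app: str) -> str:
    """A 'segment of 1': an ephemeral, per-request path to a single application."""
    return f"microsegment({user} -> {app})"

if context_score("laptop-42", "DE", datetime.now().hour,
                 known_devices={"laptop-42"}, allowed_geos={"DE", "NL"}):
    print(open_microsegment("alice", "crm-app"))
else:
    print("request denied: insufficient context")
```
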

 

Characteristics of Safe-T

Safe-T has three main pillars to provide a secure application and file access solution:

1) An architecture that implements zero trust access,

2) A proprietary secure channel that enables users to access/share sensitive files remotely and

3) User behavior analytics.

Safe-T’s SDP architecture is designed to substantially implement the essential capabilities delineated by the Cloud Security Alliance (CSA) architecture. Safe-T’s Zero+ is built using these main components:

The Safe-T Access Controller is the centralized control and policy enforcement engine that enforces end-user authentication and access. It acts as the control layer, governing the flow between end-users and backend services.

Secondly, the Access Gateway acts as a front-end for all backend services published to an untrusted network. The Authentication Gateway presents a pre-configured authentication workflow, supplied by the Access Controller, to the end user in a clientless web browser. The authentication workflow is a customizable set of authentication steps, supporting third-party IDPs (Okta, Microsoft, DUO Security, etc.) as well as built-in options such as captcha, username/password, No-Post, and OTP.

 

Safe-T Zero+ Capabilities

The Safe-T Zero+ capabilities are in line with zero trust principles. With Safe-T Zero+, clients requesting access must go through authentication and authorization stages before accessing the resource. Any network resource that has not passed these steps is blackened. Here, URL rewriting is used to hide the backend services.

This reduces the attack surface to an absolute minimum and follows Safe-T’s axiom: If you can’t be seen, you can’t be hacked. In a normal operating environment, for the users to access services behind a firewall, they have to open ports on the firewall. This presents security risks as a bad actor could directly access that service via the open port and exploit any vulnerabilities.

Another paramount capability of Safe-T Zero+ is a patented technology called reverse access, which eliminates the need to open incoming ports in the internal firewall. This also eliminates the need to store sensitive data in the demilitarized zone (DMZ). It extends to on-premises, public, and hybrid clouds, supporting the most diverse hybrid IT requirements. Zero+ can be deployed on-premises, as part of Safe-T’s SDP services, or on AWS, Azure, and other cloud infrastructures, thereby protecting both cloud and on-premises resources.

Zero+ provides user behavior analytics that monitor the actions of protected web applications, allowing the administrator to inspect the details of anomalous behavior. Forensic assessment is also easier thanks to a single source of logging.

Finally, Zero+ provides a unique, native HTTPS-based file access solution for the NTFS file system, replacing the vulnerable SMB protocol. Besides, users can create a standard mapped network drive in their Windows explorer. This provides a secure, encrypted, access-controlled channel to shared backend resources.

 

Zero Trust Access: Deployment Strategy

Safe-T customers can select whichever architecture meets their on-premises or cloud-based requirements.

 

There are three options:

i) The customer deploys three VMs: 1) Access Controller, 2) Access Gateway, and 3) Authentication Gateway. The VMs can be deployed on-premises in an organization’s LAN, on Amazon Web Services (AWS) public cloud, or on Microsoft’s Azure public cloud.

ii) The customer deploys the 1) Access Controller VM and 2) Access Gateway VM on-premises in their LAN. The customer deploys the Authentication Gateway VM on a public cloud like AWS or Azure.

iii) The customer deploys the Access Controller VM on-premise in the LAN, and Safe-T deploys and maintains two VMs, 1) Access Gateway and 2) Authentication Gateway, both hosted on Safe-T’s global SDP cloud service.

 

ZTA Migration Path

Today, organizations recognize the need to move to zero-trust architecture. However, there is a difference between recognition and deployment. Also, new technology brings with it considerable risks. Chiefly, traditional Network Access Control (NAC) and VPN solutions fall short in many ways, but a rip-and-replace model is a very aggressive approach.

To transition from legacy to ZTA and single packet authorization, look for a migration path you feel comfortable with. You may want to run a traditional VPN in parallel or in conjunction with your SDP solution, and only for a group of users for a set period. A sensible example: choose a server used primarily by experienced users, such as DevOps or QA personnel. This ensures minimal risk if any problem occurs during your organization’s phased deployment of SDP access.

A recent survey by the CSA indicates that SDP awareness and adoption are still at an early stage. However, when you go down the path of ZTA, selecting a vendor whose architecture matches your requirements is the key to successful adoption. For example, look for SDP vendors who allow you to continue using your existing VPN deployment while adding SDP/ZTA capabilities on top of it. This sidesteps the risks of switching to an entirely new technology in one step.

 

 



 

 

Remote Browser Isolation

In today’s digital landscape, where cyber threats continue to evolve at an alarming rate, businesses and individuals alike are constantly seeking innovative solutions to safeguard their sensitive information. One such solution that has gained significant attention is Remote Browser Isolation (RBI). In this blog post, we will explore what RBI is, how it works, and its role in enhancing security in the digital era.

Remote Browser Isolation, as the name suggests, is a technology that isolates web browsing activity from the user’s local device. Instead of directly accessing websites and executing code on the user’s computer or mobile device, RBI redirects browsing activity to a remote server, where the web page is rendered and interactions are processed. This isolation prevents any malicious code or potential threats from reaching the user’s device, effectively minimizing the risk of a cyberattack.

 

Highlights: Remote Browser Isolation

  • Challenging Landscape

Our digital environment has been transformed significantly. Unlike earlier times, we now have different devices, access methods, and types of users accessing applications from various locations. This makes it more challenging to know which communications can be trusted. The perimeter-based approach to security can no longer be limited to just the physical location of the enterprise.

  • A Fluid Perimeter

In this modern world, the perimeter is becoming increasingly difficult to enforce as organizations adopt mobile and cloud technologies. Hence, Remote Browser Isolation (RBI) has become integral to the SASE definition. For example, Cisco Umbrella combines several Zero Trust SASE components, such as CASB tools, and RBI is now integrated into the same solution.

  • It’s just a matter of time

Under these circumstances, the perimeter is more likely to be breached; it’s just a matter of time. A bad actor would then be relatively free to move laterally, potentially accessing the privileged intranet and corporate data on-premises and in the cloud. Therefore, we must assume that users and resources on internal networks are as untrustworthy as those on the public internet and design enterprise application security with this in mind. 

 

Before you proceed, you may find the following posts helpful for pre-information:

  1. Cisco Umbrella CASB
  2. Ericom Shield
  3. SDP Network
  4. Zero Trust Access

 



Remote Browser Isolation (RBI)

Key Remote Browser Isolation Discussion points:


  • The perimeter approach can no longer be trusted. 

  • Software Defined Perimeter and security checking.

  • Micro segmentation.

  • Introducing RBI and its capabilities.

  • The RBI components.

 

Back to basics with remote browser isolation

Remote browser isolation (RBI), also known as web isolation or browser isolation, is a web security solution developed to protect users from Internet-borne threats. So we have on-premise isolation and remote browser isolation.

On-premise browser isolation functions similarly to remote browser isolation. But instead of taking place on a remote server, which could be in the cloud, the browsing occurs on a server inside the organization’s private network, which could be at the DMZ. So why would you choose on-premise isolation as opposed to remote browser isolation?

The main reason is performance: on-premise isolation can reduce latency compared with types of remote browser isolation that run in a distant location.

 

The Concept of RBI

The concept of RBI is based on the principle of “trust nothing, verify everything.” By isolating web browsing activity, RBI ensures that any potentially harmful elements, such as malicious scripts, malware, or phishing attempts, are unable to reach the user’s device. This approach significantly reduces the attack surface and provides an added layer of protection against threats that may exploit vulnerabilities in the user’s local environment.

So, how does Remote Browser Isolation work in practice? When a user initiates a web browsing session, instead of directly accessing the website, the RBI solution establishes a secure connection to a remote server. The remote server acts as a virtual browser, rendering the web page, executing potentially dangerous code, and processing user interactions.

Only the harmless visual representation of the webpage is transmitted back to the user’s device, ensuring that any potential threats are confined to the isolated environment.
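
As an illustration of this flow, here is a minimal sketch of the pixel-streaming model, assuming a headless browser (the third-party Playwright library) runs on the isolation server; the loop, frame count, and frame rate are illustrative assumptions, not any vendor’s implementation.

```python
# Illustrative only: a toy pixel-streaming isolation loop.
# Assumes the third-party Playwright package (pip install playwright).
import time
from playwright.sync_api import sync_playwright

def isolated_browse(url: str, frames: int = 5) -> list[bytes]:
    """Render a page remotely and return only harmless PNG frames."""
    with sync_playwright() as p:
        browser = p.chromium.launch()         # runs on the isolation server
        page = browser.new_page()
        page.goto(url)                        # untrusted code executes here only
        stream = []
        for _ in range(frames):
            stream.append(page.screenshot())  # pixels only; no HTML/JS leaves
            time.sleep(0.2)                   # toy frame rate
        browser.close()
        return stream                         # what the endpoint receives

# The endpoint merely displays these frames; no active content reaches it.
```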

 

Key Advantages

One of the key advantages of RBI is its ability to protect against both known and unknown threats. Since the browsing activity is isolated from the user’s device, even if a website contains an undiscovered vulnerability or a zero-day exploit, the user’s device remains protected. This is particularly valuable in today’s dynamic threat landscape, where new vulnerabilities and exploits are constantly being discovered.

Furthermore, RBI offers a seamless user experience, as it allows users to interact with web pages just as they would with a traditional browser. Whether it’s submitting forms, watching videos, or accessing web applications, users can perform their desired actions without compromising security. From an IT perspective, RBI also simplifies security management, as it enables centralized control and monitoring of browsing activity, making it easier to identify and address potential threats.

As organizations increasingly adopt cloud-based infrastructure and embrace remote work, Remote Browser Isolation has emerged as a critical security solution. By isolating web browsing activity, businesses can effectively protect their sensitive data, intellectual property, and customer information from cyber threats. RBI significantly reduces the risk of successful attacks, enhances overall security posture, and provides peace of mind to both organizations and individuals.

 

What within the perimeter makes us assume it can no longer be trusted?

Security becomes less and less tenable once there are many categories of users, device types, and locations. Users are diverse, so it is impossible, for example, to slot all vendors into one user segment with uniform permissions.

As a result, access to applications should be based on contextual parameters such as who and where the user is. And sessions should be continuously assessed to ensure they’re legit. 

We need to find ways to decouple security from the physical network and, more importantly, to decouple application access from the network. In short, we need a new approach to providing access to the cloud, network, and device-agnostic applications. This is where Software Defined Perimeter (SDP) comes into the picture.

 

What is a Software-Defined Perimeter (SDP)?

SDP VPN complements zero trust, which treats internal and external networks and actors as equally untrusted. Trust is divorced from network topology; there is no concept of inside or outside the network.

As a result, users are not automatically granted broad access to resources simply because they are inside the perimeter. Security pros must instead focus on solutions where they can set and enforce discrete access policies and protections for anyone requesting to use an application.

SDP lays the foundation and secures the access architecture, which enables an authenticated and trusted connection between the entity and the application. Unlike security based solely on IP, SDP does not grant access to network resources based on a user’s location.

Access policies are based on device, location, state, associated user information, and other contextual elements. The applications are considered in the abstract, so whether they run on-premise or in the cloud is irrelevant to the security policy.

 

  • Periodic Security Checking

Clients and their interactions are periodically checked to ensure they comply with the security policy. Periodic security checking protects against additional actions or requests that are not allowed while the connection is open. Let’s say you have a connection open to a financial application, and the user starts screen-recording software to record the session.

In this case, the SDP management platform can check whether the software has been started. If so, it employs protective mechanisms to ensure smooth and secure operation.
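
As a rough sketch of what such periodic checking could look like, the loop below re-evaluates policy while a session stays open; the session object and both check functions are hypothetical stubs standing in for an endpoint agent and a device-posture service.

```python
# A minimal sketch of periodic posture checking for an open session.
import time

def recorder_running(session) -> bool:   # stub: would query an endpoint agent
    return False

def posture_compliant(session) -> bool:  # stub: would query a posture service
    return True

def enforce(session, interval: int = 30) -> None:
    """Re-check policy for as long as the connection stays open."""
    while session.is_open():             # hypothetical session object
        if recorder_running(session):
            session.terminate(reason="recording software detected")
            return
        if not posture_compliant(session):
            session.terminate(reason="device posture drifted")
            return
        time.sleep(interval)             # periodic, not one-time, verification
```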

 

Microsegmentation

Front-end authentication and periodic checking are one part of the picture. However, we need to go a layer deeper to secure the front door to the application and the numerous doors within, which can potentially create additional access paths. Primarily, this is the job of microsegmentation.

It’s not sufficient to provide network access. We must enable granular application access for dynamic segments of 1. In this scenario, a microsegment is created for every request. Microsegmentation creates the minimal accessible network required to complete specific tasks smoothly and securely. This is accomplished by subdividing larger networks into small, secure, and flexible micro-perimeters.

 

Introducing Remote Browser Isolation (RBI)

SDP provides mechanisms to prevent lateral movement once users are inside the network. However, we must also address how external resources on the internet and public clouds can be accessed while protecting end-users, their devices, and the networks they connect. This is where remote browser isolation (RBI) and technologies such as Single Packet Authorization come into the picture.

What is Remote Browser Isolation? Initially, we started with browser isolation, which protects the user from external sessions by isolating the interaction. Essentially, it generates complete browsers within a virtual machine on the endpoint, providing a proactive approach to isolate users’ sessions from, for example, malicious websites, emails, and links. But these solutions do not reliably isolate the web content from the end-user’s device on the network.

Remote browser isolation takes local browser isolation to the next level by enabling the rendering process to occur remotely from the user’s device in the cloud. Because only a clean data stream touches the endpoint, users can securely access untrusted websites from within the perimeter of the protected area.

 

Remote Browser Isolation
Diagram: Remote Browser Isolation.

 

SDP, along with Remote Browser Isolation (RBI)

In many important ways, remote browser isolation complements the SDP approach. When you access a corporate asset, you operate within the SDP. But when you need to access external assets, RBI is needed to keep you safe.

Zero trust and SDP are about authentication, authorization, and accounting (AAA) for internal resources, but there must be secure ways to access external resources. For this, RBI secures browsing elsewhere on your behalf.

No SDP solution can be complete without including rules to secure external connectivity. RBI takes zero trust to the next level by securing the internet browsing perspective. If access is to an internal corporate asset, we create a dynamic tunnel of one individualized connection. For external access, RBI transfers information without full, risky connectivity.

This is particularly crucial when it comes to email attacks like phishing. Malicious actors use social engineering tactics to convince recipients to trust them enough to click on embedded links.

Quality RBI solutions protect users by “knowing” when to allow user access while preventing malware from entering endpoints, entirely blocking malicious sites, or protecting users from entering confidential credentials by enabling read-only access.

 

The RBI Components

To understand how RBI works, let’s look under the hood of Ericom Shield. With RBI, for every tab a user opens on their device, the solution spins up a virtual browser in its own dedicated Linux container in a remote cloud location. For additional information on containers, see Docker Container Security.

For example, if the user is actively browsing 19 open tabs in his Chrome browser, each will have a corresponding browser in its own remote container. This sounds like it takes a lot of computing power, but enterprise-class RBI solutions perform many optimizations to keep resource consumption in check.

If a tab is unused for some time, the associated container is automatically terminated and destroyed. This frees up computing resources and also eliminates the possibility of persistence.

As a result, whatever malware may have resided on the external site being browsed is destroyed and cannot accidentally infect the endpoint, server, or cloud location. When the user shifts back to the tab, he is reconnected in a fraction of a second to the exact location but with a new container, creating a secure enclave for internet browsing. 
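
To illustrate the per-tab container lifecycle, here is a hedged sketch using the Docker SDK for Python; the image name, labels, and idle limit are assumptions for illustration, not Ericom’s actual mechanism.

```python
# Sketch of a per-tab container lifecycle with the Docker SDK for Python
# (pip install docker). Image name and idle limit are illustrative.
import time
import docker

client = docker.from_env()
IDLE_LIMIT = 300  # seconds a tab may sit unused before teardown

def open_tab(tab_id: str):
    """One dedicated, disposable browser container per browser tab."""
    return client.containers.run(
        "example/remote-browser:latest",  # hypothetical virtual-browser image
        detach=True,
        labels={"tab": tab_id},
    )

def reap_idle(containers: dict, last_used: dict) -> None:
    """Destroy idle containers: frees resources and eliminates persistence."""
    now = time.time()
    for tab_id, container in list(containers.items()):
        if now - last_used[tab_id] > IDLE_LIMIT:
            container.remove(force=True)  # malware dies with the container
            del containers[tab_id]
```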

 

A key point: Website rendering

Website rendering is carried out in real time by the remote browser. The web page is translated into a media stream, which is streamed back to the end user via the HTML5 protocol. In reality, the browsing experience is made up of images. When you look at the source code on the endpoint browser, you will find that the HTML consists solely of a block of Ericom-generated code, which manages the sending and receiving of images via the media stream.

Whether the user is accessing the Wall Street Journal or YouTube, they will always get the same source code from Ericom Shield. This is ample proof that no local download, drive-by download, or other content that tries to hook into your endpoint will ever get there: it never comes into contact with the endpoint and runs only remotely, in a container outside the local LAN. The browser farm does all the heavy (and dangerous) lifting via container-bound browsers that read and execute the user’s uniform resource locator (URL) requests.

 

Summary

SDP vendors have figured out device user authentication and how to secure sessions continuously. However, vendors are now looking for a way to secure the tunnel through to external resource access. 

The session can be hacked or compromised if you use your desktop to access a cloud application. But with RBI, you can maintain one-to-one secure tunneling. With a dedicated container for each specific app, you are assured of an end-to-end zero-trust environment. 

RBI, based on hardened containers and with a rigorous process to eliminate malware through limited persistence, forms a critical component of the SDP story. The power of RBI is that it stops known and unknown threats, making it a natural evolution from the zero-trust perspective.

In conclusion, Remote Browser Isolation plays a crucial role in enhancing security in the digital era. By isolating web browsing activity from the user’s device, RBI provides an effective defense against a wide range of cyber threats. With its ability to protect against both known and unknown threats, RBI offers a proactive approach to cybersecurity, ensuring that organizations and individuals can safely navigate the digital landscape. As the threat landscape continues to evolve, Remote Browser Isolation will undoubtedly remain a key component of a comprehensive security strategy.

 


Software Defined Perimeter (SDP): A Disruptive Technology

 


 

Software Defined Perimeter

In today’s digital landscape, where the security of sensitive data is paramount, traditional security measures are no longer sufficient. The ever-evolving threat landscape demands a more proactive and robust approach to protecting valuable assets. Enter the Software Defined Perimeter (SDP), a revolutionary concept changing how organizations secure their networks. In this blog post, we will delve into the world of SDP and explore its benefits, implementation, and prospects.

Software Defined Perimeter, also known as a Zero Trust Network, is a security framework that provides secure access to applications and resources, regardless of the user’s location or network. Unlike traditional perimeter-based security models, which rely on firewalls and VPNs, SDP takes a more dynamic and adaptive approach.

 

Highlights: Software Defined Perimeter

  • A Disruptive Technology

There has been tremendous growth in the adoption of software defined perimeter solutions and zero trust network design over the last few years. This has resulted in SDP VPN becoming a disruptive technology, especially when replacing or augmenting the existing virtual private network. Why? Because the steps the software-defined perimeter proposes are sorely needed.

  • Challenge With Today's Security

Today’s network security architectures, tools, and platforms are lacking in many ways when trying to combat current security threats. From a bird’s eye view, the stages of zero trust software defined perimeter are relatively simple. SDP requires that endpoints, both internal and external to an organization, must authenticate and then be authorized before being granted network access. Once these steps occur, two-way encrypted connections between the requesting entity and the intended protected resource are created.

 

For pre-information, you may find the following post helpful:

  1. SDP Network
  2. Software Defined Internet Exchange
  3. SDP VPN

 



Software-Defined Perimeter.

Key Software Defined Perimeter Discussion points:


  • The issues with traditional security and networking constructs.

  • Identity-driven access.

  • Discussing Cloud Security Alliance (CSA).

  • Highlighting Software Defined Perimeter capabilities.

  • Dynamic Tunnelling. 

 

Back to basics with Software Defined Perimeter

A software-defined perimeter constructs a virtual boundary around company assets. This separates it from access-based controls that restrict user privileges yet allow broad network access. The fundamental pillars on which a software-defined perimeter is built are:

  • Zero trust: It leverages micro-segmentation to apply the principle of least privilege to the network, ultimately reducing the attack surface.

  • Identity-centric: It is designed around user identity and additional contextual parameters, not the IP address.

 

Benefits of Software-Defined Perimeter:

1. Enhanced Security: SDP employs a Zero Trust approach, ensuring that only authorized users and devices can access the network. This eliminates the risk of unauthorized access and reduces the attack surface.

2. Scalability: SDP allows organizations to scale their networks without compromising security. It seamlessly accommodates new users, devices, and applications, making it ideal for expanding businesses.

3. Simplified Management: With SDP, managing access controls becomes more straightforward. IT administrators can easily assign and revoke permissions, reducing the administrative burden.

4. Improved Performance: By eliminating the need for backhauling traffic through a central gateway, SDP reduces latency and improves network performance, enhancing the overall user experience.

 

Implementing Software-Defined Perimeter:

Implementing SDP requires a systematic approach and careful consideration of various factors. Here are the key steps involved in deploying SDP:

1. Identify Critical Assets: Determine the applications and resources that require enhanced security measures. This could include sensitive data, intellectual property, or customer information.

2. Define Access Policies: Establish granular access policies based on user roles, device types, and locations. This ensures that only authorized individuals can access specific resources (a minimal policy-check sketch follows this list).

3. Implement Authentication Mechanisms: Incorporate strong authentication measures such as multi-factor authentication (MFA) or biometric authentication to verify user identities.

4. Implement Encryption: Encrypt all data in transit to prevent eavesdropping or unauthorized interception.

5. Continuous Monitoring: Regularly monitor network activity and analyze logs to identify suspicious behavior or anomalies.
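
Here is the minimal policy-check sketch promised in step 2, covering steps 2 and 3 above; the request attributes, rule table, and resource names are illustrative assumptions, but the deny-by-default shape is the point.

```python
# Illustrative, deny-by-default contextual access check.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str
    device_type: str
    location: str
    mfa_passed: bool
    resource: str

# Hypothetical rule table: resource -> required context.
RULES = {
    "crm-app": {"roles": {"sales", "admin"}, "devices": {"managed-laptop"},
                "locations": {"office", "vpn"}},
    "payroll": {"roles": {"hr", "admin"}, "devices": {"managed-laptop"},
                "locations": {"office"}},
}

def authorize(req: Request) -> bool:
    """Deny by default; allow only when every contextual condition holds."""
    rule = RULES.get(req.resource)
    if rule is None or not req.mfa_passed:
        return False
    return (req.user_role in rule["roles"]
            and req.device_type in rule["devices"]
            and req.location in rule["locations"])

# A salesperson on a managed laptop over VPN reaches the CRM, not payroll.
print(authorize(Request("sales", "managed-laptop", "vpn", True, "crm-app")))  # True
print(authorize(Request("sales", "managed-laptop", "vpn", True, "payroll")))  # False
```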

 

The Software-Defined Perimeter Proposition

Security policy flexibility is offered with fine-grained access control that dynamically creates and removes inbound and outbound access rules. Therefore, a software-defined perimeter minimizes the attack surface available to bad actors. A small attack surface results in a small blast radius, so less damage can occur.

A VLAN has a relatively large attack surface, mainly because the VLAN contains different services. SDP eliminates the broad network access that VLANs exhibit. SDP has a separate data and control plane. A control plane sets up the controls necessary for data to pass from one endpoint to another. Separating the control from the data plane renders protected assets “black,” thereby blocking network-based attacks. You cannot attack what you cannot see.

 

The IP Address Is Not a Valid Hook

IP addresses have lost their meaning as security anchors in today’s hybrid environment. SDP provides a connection-based security architecture instead of an IP-based one. This allows for many things. For one, security policies follow the user regardless of location.

Say you are doing forensics on an event from 12 months ago involving a specific IP address, but that address now belongs to a test DevOps environment. Is the finding still meaningful? Tying anything to an IP is a mistake, as the address is not the right hook to hang security policy enforcement on.

 

Software-defined perimeter: Identity-driven access

Identity-driven network access control is more precise in measuring the actual security posture of the endpoint. Access policies tied to IP addresses cannot offer identity-focused security. SDP enables the control of all connections based on pre-vetting who can connect and to what services.

If you do not meet this level of trust, you can’t, for example, access the database server, but you can access public-facing documents. Users are granted access only to authorized assets, preventing the lateral movement that would probably go unnoticed if only traditional security mechanisms were in place.

 

 

Information and infrastructure hiding

SDP does a great job of information and infrastructure hiding. The SDP architectural components (the SDP controller and gateways) are "dark," providing resilience against high- and low-volume DDoS attacks. A low-bandwidth DDoS attack may often bypass traditional DDoS security controls. However, the SDP components do not respond to connections until the requesting clients are authenticated and authorized, allowing only good packets through.

A suitable security protocol that can be used here is single packet authorization (SPA). Single Packet Authorization, or Single Packet Authentication, gives the SDP components a default “deny-all” security posture.

The “default deny” can be achieved because if an accepting host receives any packet other than a valid SPA packet, it assumes it is malicious. The packet will get dropped, and a notification will not get sent back to the requesting host. This stops reconnaissance at the door by silently detecting and dropping bad packets.

 

Sniffing a SPA packet

However, SPA can be subject to Man-In-The-Middle (MITM) attacks. If a bad actor can sniff a SPA packet, they can establish the TCP connection to the controller or AH client. But there is another level of defense in that the bad actor cannot complete the mutually encrypted connection (mTLS) without the client’s certificate.

SDP brings in the concept of mutually encrypted connections, also known as two-way encryption. The usual configuration for TLS is that the client authenticates the server; mutual TLS ensures that both parties are authenticated. Only validated devices and users can become authorized members of the SDP architecture.

We should also remember that SPA is not a security feature that protects everything on its own. It has its benefits but does not replace existing security technologies; SPA should work alongside them. The main reason for its introduction to the SDP world is to overcome the problems with TCP: TCP connects and then authenticates, while with SPA, you authenticate first and only then connect.

 

SPA Use Case
Diagram: SPA Use Case. Source mrash GitHub.

 

The World of TCP & SDP

When clients want to access an application over TCP, they must first set up a connection; there needs to be direct connectivity between the client and the application. This requires the application to be reachable, which is carried out with IP addresses on each end. Once the connect stage is done, there is an authentication phase.

Once the authentication stage is done, we can pass data. Therefore, we have connect, then authenticate, then data-transfer stages. SDP reverses this.

zero trust security
Diagram: Zero trust security. The opposite of TCP: connect first and then authenticate.

 

 

The center of the software-defined perimeter is trust.

In Software-Defined Perimeter, we must establish trust between the client and the application before the client can set up the connection. The trust is bi-directional between the client and the SDP service and the application to the SDP service. Once trust has been established, we move into the next stage, authentication.

Once this has been established, we can connect the user to the application. This flips the entire security model and makes it more robust. The user has no idea of where the applications are located. The protected assets are hidden behind the SDP service, which in most cases is the SDP gateway, or some call this a connector.

 

  • Cloud Security Alliance (CSA) SDP

With the Cloud Security Alliance SDP architecture, we have several components:

Firstly, the IH & AH: the client initiating hosts (IH) and the service accepting hosts (AH). The IH devices can be any endpoint device that can run the SDP software, including user-facing laptops and smartphones. Many SDP vendors also have remote browser isolation-based solutions that need no SDP client software. The IH, as you might expect, initiates the connections.

With an SDP browser-based solution, the user uses a web browser to access the applications and only works with applications that can speak across a browser. So it doesn’t give you the full range of TCP and UDP ports, but you can do many things that speak natively across HTML5.

Most browser-based solutions don’t give you the additional security posture checks on the end-user device that an endpoint with the client installed provides.

 

Software-Defined Perimeter: Browser-based solution

The AHs accept connections from the IHs and provide a set of services protected securely by the SDP service. The AHs are under the administrative control of the enterprise domain. They do not acknowledge communication from any other host and will not respond to non-provisioned requests. This architecture enables the control plane to remain separate from the data plane, achieving a scalable security system.

The IH and AH devices connect to an SDP controller that secures access to isolated assets by ensuring that the users and their devices are authenticated and authorized before granting network access. After authenticating an IH, the SDP controller determines the list of AHs to which the IH is authorized to communicate. The AHs are then sent a list of IHs that should accept connections.

Aside from the hosts and the controller, we have the SDP gateway component that provides authorized users and devices access to protected processes and services. The protected assets are located behind the gateway that can be architecturally positioned in multiple locations such as the cloud or on-premise. The gateways can exist in multiple locations in parallel.

 

Dynamic Tunnelling

A user with multiple tunnels to multiple gateways will be expected in the real world. It’s not a static path or a one-to-one relationship but a user-to-application relationship. The applications can exist everywhere, whereby the tunnel is dynamic and ephemeral.

For a client to connect to the gateway, latency or SYN/SYN-ACK RTT testing should be performed to determine the performance of the Internet links. This ensures that the application access path always uses the best gateway, improving application performance.
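
A simple sketch of such gateway selection, assuming hypothetical gateway hostnames: time the TCP (SYN/SYN-ACK) handshake to each candidate and route through the fastest.

```python
# Sketch: pick the best SDP gateway by timing the TCP handshake.
import socket
import time

GATEWAYS = ["gw-eu.example.net", "gw-us.example.net", "gw-ap.example.net"]

def handshake_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the TCP connect time in seconds (inf if unreachable)."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float("inf")

def best_gateway() -> str:
    """Route the application path through the lowest-latency gateway."""
    return min(GATEWAYS, key=handshake_rtt)
```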

Remember that the gateway only connects outbound on TCP port 443 (mTLS), and as it acts on behalf of the internal applications, it needs access to the internal apps. As a result, depending on where you position the gateway, either internal to the LAN, private virtual private cloud (VPC) or in the DMZ protected by local firewalls, ports may need to be opened on the existing firewall.

 

Future of Software-Defined Perimeter:

As the digital landscape evolves, secure network access becomes even more crucial. The future of SDP looks promising, with advancements in technologies like Artificial Intelligence and Machine Learning enabling more intelligent threat detection and mitigation.

In an era where data breaches are a constant threat, organizations must stay ahead of cybercriminals by adopting advanced security measures. Software Defined Perimeter offers a robust, scalable, and dynamic security framework that ensures secure access to critical resources.

By embracing SDP, organizations can significantly reduce their attack surface, enhance network performance, and protect sensitive data from unauthorized access. The time to leverage the power of Software Defined Perimeter is now.

 


Zero Trust: Single Packet Authorization | Passive authorization

Single Packet Authorization

In today's fast-paced world, where digital security is paramount, traditional authentication methods are often susceptible to malicious attacks. Single Packet Authorization (SPA) emerges as a powerful solution to enhance the security of networked systems. In this blog post, we will delve into the concept of SPA, its benefits, and how it revolutionizes network security.

Single Packet Authorization is a security technique that allows access to a networked system or service by sending a single, unique packet to a server. Unlike traditional authentication methods that rely on usernames and passwords, SPA utilizes cryptographic techniques to verify the legitimacy of the packet.


Highlights: Single Packet Authorization

The Role of Authorization

Authorization is arguably the most critical process in a zero-trust network, so an authorization decision should not be taken lightly. Ultimately, every flow and request will require a decision.

For the authorization decision to be effective, enforcement must be in place. In most cases, it takes the form of a load balancer, a proxy, or a firewall. The enforcement component interacts with the policy engine, which makes the decision. The enforcement component ensures that clients are authenticated and passes the context for each flow/request to the policy engine. By comparing the request and its context with policy, the policy engine informs the enforcer whether the request is permitted. As many enforcement components as possible should exist throughout the system, and they should sit close to the workload.

Reverse Security 

Even though we are looking at disruptive technology to replace the virtual private network and offer secure segmentation, keep in mind that zero trust network design and the software defined perimeter (SDP) are not based on entirely new protocols; SPA single packet authorization and single packet authentication repurpose existing ones. What has been reversed is the idea of how TCP connects.

TCP connects first and authenticates second; the zero trust approach starts with authentication and only then connects. Traditional networking and protocols still play a large part. For example, we still use encryption to ensure only the receiver can read the data we send. We can, however, use encryption without authentication, the process that validates the sender.

The importance of authenticity

However, the two should go together to stand any chance in today’s world. Attackers can circumvent many firewalls and secure infrastructure. As a result, message authenticity is a must for zero trust; without an authentication process, a bad actor could alter, for example, the ciphertext without the receiver ever knowing.

Encryption and authentication

Even though encryption and authenticity are often intertwined, their purposes are distinct. By encrypting your data, you ensure confidentiality: the promise that only the receiver can read it. The purpose of authentication is to verify that the message was sent by the party it claims to come from.

It is also interesting to note that authentication has another property. Message authentication requires integrity, which is essential to validate the sender and ensure that the message is unaltered.

Encryption is possible without authentication, though this is considered a poor security practice.
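
The point is easy to demonstrate with an AEAD cipher such as AES-GCM, which encrypts and authenticates in one operation. The sketch below, which assumes the third-party cryptography package, shows a one-bit tamper being rejected at decryption.

```python
# With AES-GCM (an AEAD cipher), tampering with the ciphertext is
# detected at decryption time. Requires: pip install cryptography
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"wire $1,000 to account 42", None)

# A bad actor flips one bit in transit...
tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]

try:
    aesgcm.decrypt(nonce, tampered, None)
except InvalidTag:
    # ...and the receiver knows the message is not authentic.
    print("tampering detected: message rejected")
```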



Single Packet Authentication.

Key Single Packet Authorization Discussion points:


  • The issues with traditional security and networking constructs. TCP connectivity model.

  • Introducing Zero Trust Networking and MTLS.

  • Discussing SPA and its operations.

  • What can SPA offer?

  • Security benefits of introducing SPA.

Related: Before you proceed, you may find the following post helpful:

  1. Identity Security
  2. Zero Trust Access

Back to Basics: Single Packet Authorization (SPA)

SPA: A Security Protocol

Single Packet Authorization (SPA) is a security protocol allowing users to access a secure network without entering a password or other credentials. Instead, it is an authentication protocol that uses a single packet—an encrypted packet of data—to convey a user’s identity and request access. This packet can be sent over any network protocol, such as TCP, UDP, or SCTP, and is typically sent as an additional layer of authentication beyond the network and application layers.

SPA works by having the user’s system send a single packet of encrypted data to the authentication server. The authentication server then uses a unique algorithm to decode the packet containing the user’s identity and request for access. If the authentication is successful, the server will send a response packet that grants access to the user.

SPA is a secure and efficient way to authenticate and authorize users. It eliminates the need for multiple authentication methods and sensitive data storage. SPA is also more secure than traditional authentication methods, as the encryption used in SPA is often more secure than passwords or other credentials.

Additionally, since the packet is encrypted, it cannot be trivially intercepted and decoded, making it an even more secure form of authentication.


The Mechanics of SPA:

SPA operates by employing a shared secret between the client and server. When a client wishes to access a service, it generates a packet containing a specific data sequence, including a timestamp, payload, and cryptographic hash. The server, equipped with the shared secret, checks the received packet against its calculations. If the packet is authentic, the server grants access to the requested service.
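
Below is a toy model of these mechanics, with an illustrative packet layout and replay window; real implementations such as fwknop differ in detail, but the shared-secret HMAC plus timestamp shape is the same.

```python
# Toy SPA packet: timestamp + payload + HMAC over a shared secret.
import hmac
import hashlib
import json
import time

SHARED_SECRET = b"out-of-band-provisioned-secret"  # illustrative
REPLAY_WINDOW = 30  # seconds; illustrative

def build_spa_packet(requested_service: str) -> bytes:
    body = json.dumps({"ts": int(time.time()), "svc": requested_service}).encode()
    mac = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return body + b"|" + mac.encode()

def verify_spa_packet(packet: bytes) -> bool:
    """Server side: recompute the HMAC and reject stale or forged packets."""
    try:
        body, mac = packet.rsplit(b"|", 1)
    except ValueError:
        return False
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        return False  # not generated with the shared secret
    ts = json.loads(body)["ts"]
    return abs(time.time() - ts) <= REPLAY_WINDOW  # crude replay defense

packet = build_spa_packet("tcp/22")
print(verify_spa_packet(packet))  # True -> the firewall may now open the port
```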

Benefits of SPA:

1. Enhanced Security: SPA drastically reduces the attack surface by eliminating the need for open ports or exposed services. Since SPA relies on a single packet, it significantly reduces the risk of unauthorized access.

2. Protection against Network Scans: Traditional authentication methods are often vulnerable to network scans that attempt to identify open ports for potential attacks. SPA mitigates this risk by rendering the network invisible to scanning tools.

3. Flexibility and Convenience: SPA allows users to access services from any location, even through firewalls or network address translation (NAT). This flexibility eliminates the need for complex VPN setups or port forwarding configurations.

4. DDoS Mitigation: SPA can effectively mitigate Distributed Denial of Service (DDoS) attacks by rejecting packets that do not adhere to the predefined authentication criteria. This helps safeguard the availability of network services.

Implementing SPA:

Implementing SPA requires deploying specialized software or hardware components that support the single packet authorization protocol. Several open-source and commercial solutions are available, making it feasible for organizations of all sizes to adopt this innovative security technique.

 

Back to Basics: Zero Trust

Five fundamental assertions make up a zero-trust network:

  • Networks are always assumed to be hostile.

  • The network is always at risk from external and internal threats.

  • To determine trust in a network, locality alone is not sufficient.

  • A network flow, device, or user must be authenticated and authorized.

  • Policies must be dynamic and derived from as many data sources as possible to be effective.

In a traditional network security architecture, networks are divided into firewall-protected zones, and each zone is permitted access to network resources based on its level of trust. This model provides a solid defense in depth: traffic to resources deemed riskier, such as those facing the public internet, can be tightly monitored and controlled in DMZs.

Perimeter Defense

Perimeter defenses protecting your network are less secure than you might think. Hosts behind the firewall have no protection, so when a host in the “trusted” zone is breached, which is just a matter of time, access to your data center can be breached. The zero trust movement strives to solve the inherent problems in placing our faith in the network.

Instead, it is possible to secure network communication and access so effectively that the physical security of the transport layer can be reasonably disregarded.

Typically, we examine the IP address of the remote system and ask for a password. Unfortunately, these strategies alone are insufficient for a zero-trust network, where attackers can communicate from any IP and insert themselves between you and a trusted remote host. Therefore, utilizing strong authentication on every flow in a zero-trust network is vital. The most widely accepted method is a standard named X.509.

zero trust security
Diagram: Zero trust security. Authenticate first and then connect.

A key aspect of a zero trust network (ZTN) and zero trust principles is to authenticate and authorize network traffic, i.e., the flows between the requesting resource and the intended service. Simply securing communications between two endpoints is not enough. Security pros must ensure that each flow is authorized.

This can be done by implementing a combination of security technologies such as Single Packet Authorization (SPA), Mutual Transport Layer Security (MTLS), Internet Key Exchange (IKE), and IP security (IPsec).

IPsec can use a unique security association (SA) per application, and security policies can be constructed so that only authorized flows establish one. While IPsec is considered to operate at Layer 3 or 4 in the open systems interconnection (OSI) model, application-level authorization can be carried out with X.509 or an access token.

 

Mutually authenticated TLS (MTLS)

Mutually authenticated TLS (Transport Layer Security) is a system of cryptographic protocols used to establish secure communications over the Internet. It guarantees that the client and the server are who they claim to be, ensuring secure communications between them. This authentication is accomplished through digital certificates and public-private key pairs.

Mutually authenticated TLS is also essential for preventing man-in-the-middle attacks, where a malicious actor can intercept and modify traffic between the client and server. Without mutually authenticated TLS, an attacker could masquerade as the server and thus gain access to sensitive data.

To set up mutually authenticated TLS, the client and server must each have a digital certificate. The server certificate authenticates the server to the client, while the client certificate authenticates the client to the server. Both certificates are signed by a Certificate Authority (CA) and stored in the server’s and client’s certificate stores. The client and server then exchange certificates to authenticate each other.

The client and server can securely communicate once the certificates have been exchanged and verified. Mutually authenticated TLS also provides encryption and integrity checks, ensuring the data is not tampered with in transit.

This enhanced version of TLS, known as mutually authenticated TLS (MTLS), validates both ends of the connection. The most common TLS configuration validates only the server, ensuring the client is connected to a trusted entity; the authentication doesn’t happen the other way around, so the server has no proof that it is communicating with a trusted client. Mutual TLS goes one step further and authenticates the client as well.
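
A minimal server-side sketch with Python’s standard ssl module shows how little separates TLS from mutual TLS; the certificate file names are placeholders.

```python
# Minimal mTLS server sketch with the standard library. The server
# presents its certificate AND requires a client certificate signed
# by a CA it trusts. File names are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="clients-ca.crt")
context.verify_mode = ssl.CERT_REQUIRED  # this one line turns TLS into mTLS

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with context.wrap_socket(sock, server_side=True) as tls:
        # The handshake fails unless the client proves its identity.
        conn, addr = tls.accept()
        print("authenticated client:", conn.getpeercert().get("subject"))
        conn.close()
```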

The pre-authentication stage

You can’t attack what you cannot see. The mode that allows pre-authentication is Single Packet Authorization. UDP is the preferred base for pre-authentication because UDP packets, by default, do not receive a response. However, TCP and even ICMP can be used with SPA. Single Packet Authorization is a next-generation passive authentication technology that goes beyond what we previously had with port knocking, which uses closed ports to identify trusted users. SPA is a step up from port knocking.

The typical port-knocking scenario involves a port-knocking server configuring a packet filter to block all access to a service, such as the SSH service, until a port-knocking client sends a specific port-knocking sequence. For instance, the server could require the client to send TCP SYN packets to the following ports in order: 23400 1001 2003 65501.

If the server monitors this knock sequence, the packet filter reconfigures to allow a connection from the originating IP address. However, port knocking has its limitations, which SPA addresses; SPA retains all of the benefits of port knocking while fixing its flaws.
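
For illustration, here is a toy knock client for the sequence above; it simply fires a TCP SYN, via a connect attempt that is expected to time out, at each port in order. The hostname is a placeholder.

```python
# Toy port-knocking client. The knock server only watches for the SYNs,
# so the connect attempts are expected to fail or time out.
import socket

KNOCK_SEQUENCE = [23400, 1001, 2003, 65501]  # the example sequence above

def knock(host: str) -> None:
    for port in KNOCK_SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.3)            # do not wait for answers that never come
        try:
            s.connect((host, port))  # emits the SYN the server is watching for
        except OSError:
            pass                     # expected: the port is closed/filtered
        finally:
            s.close()

knock("ssh.example.net")  # afterwards, the packet filter briefly allows SSH
```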

As a next-generation Port Knocking (PK), SPA overcomes many of the limitations PK exhibits while retaining its core benefits. PK has several limitations: difficulty protecting against replay attacks, inability to reliably support asymmetric ciphers and HMAC schemes, and the fact that it is trivially easy to mount a DoS attack by spoofing an additional packet into a PK sequence while it traverses the network (thereby convincing the PK server that the client does not know the proper sequence).

SPA solves all of these shortcomings. As part of SPA, services are hidden behind a default-drop firewall policy, SPA data is passively acquired (usually via libpcap), and standard cryptographic operations are implemented for SPA packet authentication and encryption/decryption.

 

Firewall Knock Operator

Fwknop (short for the “Firewall Knock Operator”) is a single-packet authorization system designed to be a secure and straightforward way to open up services on a host running an iptables- or ipfw-based firewall. It is a free, open-source application that uses the Single Packet Authorization (SPA) protocol to provide secure access to a network.

Fwknop sends a single SPA packet to the firewall containing an encrypted message with authorization information. The message is then decrypted and compared against a set of rules on the firewall. If the message matches the rules, the firewall will open access to the service specified in the packet.

Fwknop is an ideal solution for users who need to access services on a remote host without having to manually configure the firewall each time. It is also a great way to add an extra layer of security to already open services.

To achieve strong concealment, fwknop implements the SPA authorization scheme. SPA requires only a single packet, encrypted, non-replayable, and authenticated via an HMAC, to communicate desired access to a service hidden behind a firewall in a default-drop filtering stance. The main application of SPA is to use a firewall to drop all attempts to connect to services such as SSH, making the exploitation of vulnerabilities (both 0-day and unpatched code) more difficult. Because there are no open ports, any service SPA hides cannot be scanned with, for example, NMAP.

Supported Firewalls

The fwknop project supports four firewalls: We have support for iptables, firewalld, PF, and ipfw across Linux, OpenBSD, FreeBSD, and Mac OS X. There is also support for custom scripts so that fwknop can be made to support other infrastructure such as ipset or nftables.

fwknop client user interface
Diagram: fwknop client user interface. Source mrash GitHub.

Example use case: SSHD protection

Users of Single Packet Authorization (SPA) or its less secure cousin, Port Knocking (PK), usually access SSHD running on the same system as the SPA/PK software. A SPA daemon temporarily permits access to a passively authenticated SPA client through a firewall configured to drop all incoming SSH connections. This is considered the primary SPA usage.

In addition to this primary use, fwknop also makes robust use of NAT (for iptables/firewalld firewalls). A firewall is usually deployed on a single host and acts as a gateway between networks. Firewalls that use NAT (at least for IPv4 communications) commonly provide Internet access to internal networks on RFC 1918 address space and access to internal services by external hosts.

Since fwknop integrates with NAT, users on the external Internet can access internal services through the firewall using SPA. Additionally, it allows fwknop to support cloud computing environments such as Amazon’s AWS, although it has many applications on traditional networks.

SPA Use Case
Diagram: SPA Use Case. Source mrash GitHub.

Single Packet Authorization and Single Packet Authentication

Single Packet Authorization (SPA) uses proven cryptographic techniques to make internet-facing servers invisible to unauthorized users. Only devices seeded with the cryptographic secret can generate a valid SPA packet and establish a network connection. This is how it reduces the attack surface and becomes invisible to hostile reconnaissance.

SPA Single Packet Authorization was invented over ten years ago and was commonly used for superuser SSH access to servers where it mitigates attacks by unauthorized users. The SPA process happens before the TLS connection, mitigating attacks targeted at the TLS ports.

As mentioned, SDP didn’t invent new protocols; it was more binding existing protocols. SPA used in SDP was based on RFC 4226 HMAC-based One-Time Password “HOTP.” It is another layer of security and is not a replacement for the security technologies mentioned at the start of the post.
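
For reference, RFC 4226’s HOTP algorithm is short enough to show in full; this is the standard HMAC-SHA-1 plus dynamic-truncation construction, verified here against the RFC’s first test vector.

```python
# RFC 4226 HOTP: the primitive the SDP spec bases its SPA on.
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Client and server share the secret and a counter; each one-time value
# can seed a fresh, non-replayable SPA packet.
print(hotp(b"12345678901234567890", 0))  # "755224", the RFC 4226 test vector
```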

 

Reconnaissance: The first step

The first step in an attack is reconnaissance, whereby an attacker is on the prowl to locate a target. This stage is easy and can be automated with tools such as NMAP. However, SPA (and port knocking) employs a default-drop stance that provides service only to those IP addresses that can prove their identity via a passive mechanism.

No TCP/IP stack access is required to authenticate remote IP addresses. Therefore, NMAP cannot tell that a server is running when protected with SPA, and whether the attacker has a zero-day exploit is irrelevant.

 

zero trust security model
Diagram: Zero trust security model.

The idea behind SPA and Single Packet Authentication is that a single packet is sent, and based on that packet, an authentication process is carried out. The critical point is that nothing is listening on the service, so there are no open ports; nothing explicitly listens for the SPA service to operate.

When the client sends an SPA packet, the firewall drops it, but a second service passively observes it in the IP stack and authenticates it. If the SPA packet is successfully authenticated, the server opens a port in the firewall, which could be based on Linux iptables, so that the client can establish a secure, encrypted connection with the intended service.

A simple Single Packet Authentication process flow

The SDP network gateway protects assets, and this component could be containerized to listen for SPA packets. In the case of an open-source version of SDP, this could be fwknop, a widespread open-source SPA implementation. When a client wants to connect to a web server, it sends a SPA packet. When the requested service receives the SPA packet, it will open the door once the credentials are verified. However, the service still has not responded to the request.

When the fwknop service receives a valid SPA packet, the contents are decrypted for further inspection. The inspection reveals the protocol and port numbers to which the sender requests access. Next, the SDP gateway adds a firewall rule that permits the sender to establish a mutual TLS connection to the intended service. Once this mutual TLS connection is established, the SDP gateway removes the firewall rules, making the service invisible to the outside world.

single packet authorization
Diagram: Single Packet Authorization: The process flow.

Fwknop uses this information to open firewall rules, allowing the sender to communicate with that service on those ports. The firewall will only be opened for some time and can be configured by the administrator. Any attempts to connect to the service must know the SPA packet, and even if the packet can be recreated, the packet’s sequence number needs to be established before the connection. This is next to impossible, considering the sequence numbers are randomly generated.

Once the firewall rules are removed, let’s say after 1 minute, the initial MTLS session will not be affected as it is already established. However, other sessions requesting access to the service on those ports will be blocked. This permits only the sender of the IP address to be tightly coupled with the requested destination ports. It’s also possible for the sender to include a source port, enhancing security even further.
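
Gateway-side, this open-briefly-then-revoke behavior can be sketched as follows; shelling out to iptables mirrors what fwknop does on Linux, though the rule details and timing here are illustrative.

```python
# Sketch: after a packet passes verify_spa_packet() (see the earlier SPA
# sketch), open a narrow iptables rule for the sender, then revoke it.
import subprocess
import threading

OPEN_SECONDS = 60  # illustrative; set by the administrator

def allow_then_revoke(src_ip: str, port: int) -> None:
    rule = ["-s", src_ip, "-p", "tcp", "--dport", str(port), "-j", "ACCEPT"]
    subprocess.run(["iptables", "-I", "INPUT", *rule], check=True)  # open the door

    def revoke() -> None:
        # Established mTLS sessions survive; new connections are blocked again.
        subprocess.run(["iptables", "-D", "INPUT", *rule], check=True)

    threading.Timer(OPEN_SECONDS, revoke).start()
```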

What can Single Packet Authorization offer

Let’s face it: robust security is hard to achieve. We all know that you can never be 100% secure. Just have a look at OpenSSH. Some of the most security-conscious developers developed OpenSSH, yet it occasionally contains exploitable vulnerabilities.

Even when you look at some attacks on TLS, we have already discussed the DigiNotar forgery in a previous post on zero-trust networking. Still, one that caused a significant issue was the THC-SSL-DOS attack, where a single host could take down a server by exploiting the asymmetric performance cost of the TLS handshake.

Single Packet Authorization (SPA) overcomes many existing attacks and, combined with the enhancements of MTLS with pinned certificates, creates a robust security model. In addition, SPA defeats many DDoS attacks, as only a limited amount of server performance is required to operate.

SPA provides the following security benefits to the SPA-protected asset:

    • SPA blackens the gateway and protects the assets that sit behind it. The gateway does not respond to connection attempts until the client provides an authentic SPA packet. Essentially, all network resources are dark until the security controls are passed.
    • SPA also mitigates DDoS attacks on TLS. More than likely, TLS is publicly reachable online, and running the HTTPS protocol is highly susceptible to DDoS. SPA mitigates these attacks by allowing the SDP gateway to discard the TLS DoS attempt before entering the TLS handshake. As a result, there will be no exhaustion from targeting the TLS port.
    • SPA assists with attack detection. The first packet to an SDP gateway must be a SPA packet. If a gateway receives any other type of packet, it should be viewed and treated as an attack. Therefore, the SPA enables the SDP to identify an attack based on a malicious packet.

Summary: Single Packet Authorization

In this blog post, we explored the concept of SPA, its key features, benefits, and potential impact on enhancing network security.

Section 1: Understanding Single Packet Authorization

At its core, SPA is a security technique that adds an additional layer of protection to network systems. Unlike traditional methods that rely on usernames and passwords, SPA utilizes a single packet sent to the server to grant access. This packet contains encrypted data and specific authorization codes, ensuring that only authorized users can gain entry.

Section 2: The Key Features of SPA

One of the standout features of SPA is its simplicity. Using a single packet simplifies the process and minimizes the potential attack surface. SPA also offers enhanced security through its encryption and strict authorization codes, making it difficult for unauthorized individuals to gain access. Furthermore, SPA is highly customizable, allowing organizations to tailor the authorization process to their needs.

Section 3: Benefits of Single Packet Authorization

Implementing SPA brings several notable benefits to the table. Firstly, SPA effectively mitigates the risk of brute-force attacks by eliminating the need for traditional login credentials. Additionally, SPA enhances security without sacrificing usability, as users only need to send a single packet to gain access. This streamlined approach saves time and reduces the likelihood of human error. Lastly, SPA provides detailed audit logs, allowing organizations to monitor and track authorized access more effectively.

Section 4: Potential Impact on Network Security

The adoption of SPA has the potential to revolutionize network security. By leveraging this technique, organizations can significantly reduce the risk of unauthorized access, data breaches, and other cybersecurity threats. SPA’s unique approach challenges traditional authentication methods and offers a more robust and efficient alternative.

Conclusion:

Single Packet Authorization (SPA) is a powerful security technique with immense potential to bolster network security. With its simplicity, enhanced protection, and numerous benefits, SPA offers a promising solution for organizations seeking to safeguard their digital assets. By embracing SPA, they can take a proactive stance against cyber threats and build a more secure digital landscape.



 

 

SDP Network

In today’s interconnected world, where threats to data security are becoming increasingly sophisticated, businesses are constantly searching for ways to protect their sensitive information. Traditional network security measures are no longer sufficient in safeguarding against cyber attacks. Enter the Software Defined Perimeter (SDP), a revolutionary approach to network security that offers enhanced protection and control. In this blog post, we will delve into the concept of SDP and explore its various benefits.

Software Defined Perimeter, also known as a Zero Trust Network, is an architecture that focuses on secure access to resources based on the user’s identity and device. It moves away from the traditional method of relying on a static perimeter defense and instead adopts a dynamic approach that adapts to the ever-changing threat landscape.

 

Highlights: SDP Network

  • Creating a Zero Trust Environment

Software-Defined Perimeter is a security framework that shifts the focus from traditional perimeter-based network security to a more dynamic and user-centric approach. Instead of relying on a fixed network boundary, SDP creates a “Zero Trust” environment, where users and devices are authenticated and authorized individually before accessing network resources. This approach ensures that only trusted entities gain access to sensitive data, regardless of their location or network connection.

  • Zero trust framework

The zero-trust framework for networking and security is here for a good reason. There are various bad actors, ranging from the opportunistic and targeted to state-level, all well prepared to find ways to penetrate a hybrid network. As a result, there is now a compelling reason to implement the zero-trust model for networking and security.

An SDP network brings SDP security. The software defined perimeter is heavily promoted as a replacement for the virtual private network (VPN), and in some cases firewalls, on the grounds of ease of use and end-user experience.

  • Dynamic tunnel of 1

It also provides a solid SDP security framework, utilizing a dynamic tunnel of one per application, per user. This offers segmentation at a micro level, providing a secure enclave for entities requesting network resources. These micro-perimeters and zero trust networks can be hardened with technologies such as SSL security and single packet authorization.

 

For pre-information, you may find the following useful:

  1. Remote Browser Isolation
  2. Zero Trust Network

 



SDP Network.

Key SDP Network Discussion points:


  • The role of SDP security. Authentication and Authorization.

  • SDP and the use of Certificates.

  • SDP and Private Key storage.

  • Public Key Infrastructure (PKI).

  • A final note on Certificates.

 

Back to basics with an SDP network

A software-defined perimeter is a security approach that controls resource access and forms a virtual boundary around networked resources. Think of an SDP network as a 1-to-1 mapping, unlike a VLAN, which can contain many hosts, each potentially at a different security level.

Also, with an SDP network, we create the security perimeter in software rather than hardware; an SDP can hide an organization’s infrastructure from outsiders, regardless of location. Now we have a security architecture that is location agnostic. As a result, employing SDP architectures decreases the attack surface and mitigates both internal and external bad actors. The SDP framework is based on the U.S. Department of Defense’s Defense Information Systems Agency’s (DISA) need-to-know model from 2007.

 

Benefits of Software-Defined Perimeter:

1. Enhanced Security: SDP provides an additional layer of security by ensuring that only authenticated and authorized users can access the network. By implementing granular access controls, SDP reduces the attack surface and minimizes the risk of unauthorized access, making it significantly harder for cybercriminals to breach the system.

2. Improved Flexibility: Traditional network architectures often struggle to accommodate the increasing number of devices and the demand for remote access. SDP enables businesses to scale their network infrastructure effortlessly, allowing seamless connectivity for employees, partners, and customers, regardless of location. This flexibility is particularly valuable in today’s remote work environment.

3. Simplified Network Management: SDP simplifies network management by centralizing access control policies. This centralized approach reduces complexity and streamlines granting and revoking access privileges. Additionally, SDP eliminates the need for VPNs and complex firewall rules, making network management more efficient and cost-effective.

4. Mitigated DDoS Attacks: Distributed Denial of Service (DDoS) attacks can cripple an organization’s network infrastructure, leading to significant downtime and financial losses. SDP mitigates the impact of DDoS attacks by dynamically rerouting traffic and preventing the attack from overwhelming the network. This proactive defense mechanism ensures that network resources remain available and accessible to legitimate users.

5. Compliance and Regulatory Requirements: Many industries are bound by strict regulatory requirements, such as healthcare (HIPAA) or finance (PCI-DSS). SDP helps organizations meet these requirements by providing a secure framework that ensures data privacy and protection. Implementing SDP can significantly simplify the compliance process and reduce the risk of non-compliance penalties.

 

Feature 1: Dynamic Access Control

One of the primary features of SDP is its ability to dynamically control access to network resources. Unlike traditional perimeter-based security models, which grant access based on static rules or IP addresses, SDP employs a more granular approach. It leverages context-awareness and user identity to dynamically allocate access rights, ensuring that only authorized users can access specific resources. This feature eliminates the risk of unauthorized access, making SDP an ideal solution for securing sensitive data and critical infrastructure.
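
As an illustration, here is a minimal sketch of such a context-aware decision in Python. The attributes, policy table, and resource names are invented for this example; a real SDP controller would evaluate far richer signals and pull them from live telemetry.

```python
# Minimal sketch of a context-aware access decision (illustrative policy only).
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str
    device_compliant: bool   # e.g., disk encryption and patch level verified
    network: str             # "corporate", "home", "public"
    resource: str

# Policy: which roles may reach which resources, and under what context.
POLICY = {
    "payroll-db": {"roles": {"finance"}, "require_compliant": True,
                   "blocked_networks": {"public"}},
    "wiki":       {"roles": {"finance", "engineering"},
                   "require_compliant": False, "blocked_networks": set()},
}

def authorize(req: AccessRequest) -> bool:
    rule = POLICY.get(req.resource)
    if rule is None:
        return False                       # default deny: unknown resource
    if req.role not in rule["roles"]:
        return False
    if rule["require_compliant"] and not req.device_compliant:
        return False
    return req.network not in rule["blocked_networks"]

print(authorize(AccessRequest("alice", "finance", True, "home", "payroll-db")))   # True
print(authorize(AccessRequest("bob", "engineering", True, "home", "payroll-db"))) # False
```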

Feature 2: Zero Trust Architecture

SDP embraces the concept of Zero Trust, a security paradigm that assumes no user or device can be trusted by default, regardless of their location within the network. With SDP, every request to access network resources is subject to authentication and authorization, regardless of whether the user is inside or outside the corporate network. By adopting a Zero Trust architecture, SDP eliminates the concept of a network perimeter and provides a more robust defense against both internal and external threats.

Feature 3: Application Layer Protection

Traditional security solutions often focus on securing the network perimeter, leaving application layers vulnerable to targeted attacks. SDP addresses this limitation by incorporating application layer protection as a core feature. By creating micro-segmented access controls at the application level, SDP ensures that only authenticated and authorized users can interact with specific applications or services. This approach significantly reduces the attack surface and enhances the overall security posture.

Feature 4: Scalability and Flexibility

SDP offers scalability and flexibility to accommodate the dynamic nature of modern business environments. Whether an organization needs to provide secure access to a handful of users or thousands of employees, SDP can scale accordingly. Additionally, SDP seamlessly integrates with existing infrastructure, allowing businesses to leverage their current investments without the need for a complete overhaul. This adaptability makes SDP a cost-effective solution with a low barrier to entry.

 

SDP Security

Authentication and Authorization

So when it comes to creating an SDP network and SDP security, what are the ways to authenticate and authorize?

Well, firstly, trust is the main element within an SDP network. Therefore, mechanisms that tie authentication and authorization to trust at the device, user, or application level are necessary for zero-trust environments.

When something presents itself to a zero-trust network, it must pass through several SDP security stages before access is granted. Essentially, the entire network is dark: resources drop all incoming traffic by default, providing an extremely secure posture. On this simple premise, a more secure, robust, and dynamic network of geographically dispersed services and clients can be built.

 

  • A key point: The difference between Authentication and Authorization.

Before we go any further, it’s essential to understand the difference between authentication and authorization. When we examine an end host in the zero-trust world, we have a device and a user that together form an agent. Device and user authentication are carried out before the agent is formed.

The device is authenticated first, then the user. Only after these steps is authorization performed against the agent. Authentication means confirming your identity, while authorization means granting access to the system.
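
To make the ordering explicit, here is a small Python sketch of the flow: device first, then user, and only then authorization of the combined agent. The checks are deliberately simplistic placeholders; a real deployment would validate an X.509 chain and an identity-provider token.

```python
# Sketch of the order of operations: device auth, user auth, then agent
# authorization. All checks are placeholders for illustration only.
def authenticate_device(device_cert: str) -> bool:
    return device_cert.startswith("-----BEGIN CERTIFICATE-----")  # stand-in check

def authenticate_user(user_token: str) -> bool:
    return user_token == "valid-idp-token"                        # stand-in check

def authorize_agent(device_ok: bool, user_ok: bool, resource: str) -> bool:
    if not (device_ok and user_ok):
        return False          # the agent only exists after BOTH authentications
    entitlements = {"crm", "wiki"}               # illustrative per-agent rights
    return resource in entitlements

device_ok = authenticate_device("-----BEGIN CERTIFICATE-----\n...")
user_ok = authenticate_user("valid-idp-token")
print(authorize_agent(device_ok, user_ok, "crm"))   # True only if both passed
```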

 

The consensus among SDP network vendors

Generally, with most zero-trust and SDP network vendors, the agent is only formed once valid device and user authentication has been carried out. The authentication methods used to validate the device and the user can be separate. A device that needs to identify itself to the network can be authenticated with X.509 certificates.

A user can be authenticated by other means, such as a lookup against an LDAP server, if the zero trust solution has that as an integration point. The authentication methods for the device and the user don’t have to be tightly coupled, which provides flexibility.

zero trust networks
Diagram: Zero trust networks, showing some of the components involved.

 

SDP Security with SDP Network: X.509 certificates

IP addresses are used for connectivity, not authentication, and have no fields with which to implement authentication. Authentication must be handled higher up the stack, so we need something else to define identity: certificates. X.509 is a digital certificate standard that allows identity to be verified through a chain of trust and is commonly used for device authentication. X.509 certificates can carry a wealth of information within their standard fields, which can fulfill the requirement to carry particular metadata.

To provide identity and bootstrap encrypted communications, X.509 certificates rely on a pair of mathematically related cryptographic keys: a public key and a private key. The most common are RSA (Rivest–Shamir–Adleman) key pairs.

The private key is secret and held by the certificate’s owner; the public key, as the name suggests, is not secret and is freely distributed. Data encrypted with the public key can be decrypted with the private key, and vice versa. Without the correct private key, it is impossible to decrypt data that was encrypted with the public key.
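
The relationship is easy to demonstrate. The sketch below uses the third-party Python `cryptography` package (pip install cryptography) to generate an RSA key pair, encrypt with the public key, and decrypt with the private key; the message itself is illustrative.

```python
# Demonstrating the public/private key relationship with RSA-OAEP.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(
    b"device bootstrap secret",
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ...but only the private key holder can decrypt.
plaintext = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert plaintext == b"device bootstrap secret"
```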

 

SDP Security with SDP Network: Private key storage

Before we start discussing the public key, let us examine how we secure the private key. If bad actors get their hands on the private key, it’s lights out for device authentication.

Since the device presents a signed certificate, one way to secure the private key would be to configure access rights to the key. However, if a compromise occurs and access is elevated, the unprotected key is exposed.

The best way to secure and store private device keys is to use crypto processors such as a trusted platform module (TPM). The cryptoprocessor is essentially a chip that is embedded in the device.

The private keys are bound to the hardware without being exposed to the system’s operating system, which is far more vulnerable to compromise than the actual hardware. The TPM binds the private key to the hardware, creating very robust device authentication.

 

SDP Security with SDP Network: Public Key Infrastructure (PKI)

How do we ensure that we have the correct public key? This is the role of the public key infrastructure (PKI). There are many types of PKI, with certificate authorities (CA) being the most popular. In cryptography, a certificate authority is an entity that issues digital certificates.

A certificate is a pointless piece of paper unless it is somehow trusted. Trust is established by digitally signing the certificate to endorse its validity. It is the responsibility of the certificate authority to ensure all details of the certificate are correct before signing it. PKI is a framework that defines a set of roles and responsibilities used to securely distribute and validate public keys in an untrusted network.

For this, a PKI leverages a registration authority (RA). You may wonder what the difference between an RA and a CA is. The RA interacts with the subscribers to provide CA services. The RA is subsumed by the CA, which takes total responsibility for all actions of the RA.

The registration authority accepts requests for digital certificates and authenticates the entity making the request. This binds the identity to the public key embedded in the certificate, cryptographically signed by the trusted 3rd party.
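
The subscriber’s side of that flow can be sketched as follows: generate a key pair and a certificate signing request (CSR) that binds an identity to the public key, which the RA would authenticate before the CA signs it. This again uses the Python `cryptography` package; the subject values are invented for the example.

```python
# Sketch of the subscriber side of the RA/CA flow: key pair plus CSR.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "device-1234.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .sign(key, hashes.SHA256())   # signing proves possession of the private key
)

# The RA authenticates the requester before the CA signs this CSR.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```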

 

Not all certificate authorities are secure!

However, certificate authorities are not bulletproof against attack. Back in 2011, DigiNotar suffered a security breach. The bad actor took complete control of all eight certificate-issuing servers and used them to issue rogue certificates that had not yet been identified. It is estimated that over 300,000 users had their private data exposed by rogue certificates.

Browsers immediately blacklisted DigiNotar’s certificates, but the incident highlights the risk of relying on a third party. While public key infrastructure backing X.509 certificates is used at large on the public internet, it’s not recommended for zero trust SDP: at the end of the day, you are still trusting a third party with a critically important task. For a zero-trust approach to networking and security, you should look to implement a private PKI system.

If you are not looking for a fully automated process, you could also implement a time-based one-time password (TOTP). This allows for human control over the signing of the certificates. Remember that much trust must be placed in whoever is responsible for this step.
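
For reference, a TOTP generator is small enough to sketch with the Python standard library alone. The code below follows the standard RFC 6238 construction; the shared secret is an illustrative placeholder.

```python
# Minimal RFC 6238 time-based one-time password (TOTP) sketch.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval          # time step since the epoch
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp(b"example-shared-secret"))               # e.g. "492039"
```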

 

Conclusion:

As businesses continue to face increasingly sophisticated cyber threats, the importance of implementing robust network security measures cannot be overstated. Software Defined Perimeter offers a comprehensive solution that addresses the limitations of traditional network architectures.

By adopting SDP, organizations can enhance their security posture, improve network flexibility, simplify management, mitigate DDoS attacks, and meet regulatory requirements. Embracing this innovative approach to network security can safeguard sensitive data and provide peace of mind in an ever-evolving digital landscape.

Organizations must adopt innovative security solutions as cyber threats evolve to protect their valuable assets. Software-Defined Perimeter offers a dynamic and user-centric approach to network security, providing enhanced protection against unauthorized access and data breaches.

With enhanced security, granular access control, simplified network architecture, scalability, and regulatory compliance, SDP is gaining traction as a trusted security framework in today’s complex cybersecurity landscape. Embracing SDP can help organizations stay one step ahead of the ever-evolving threat landscape and safeguard their critical data and resources.

 

Software Defined Perimeter Solutions

The following post discusses software defined perimeter solutions, the need for a new software perimeter solution, and how it can be integrated. Many companies use SD-WAN and multiprotocol label switching (MPLS) based architectures to attack the same problem. However, the application is changing, which requires a shift in these architectures. Application developers and software service providers want to take more control of their sessions, and we are seeing this with the rise of open networking. The next generation of applications is mobile, cloud, software as a service (SaaS), and internet of things (IoT) based. It is everywhere and does not stay in the walled garden of the enterprise.

You could say it’s wholly distributed but not dark and hidden from the Internet. Solutions encompassing a zero trust network design are needed to connect the application endpoints wherever and whenever they are allowed. We are seeing a new type of perimeter, which some call software defined perimeter solutions: a perimeter that is built from scratch by the application and employs a zero-trust model, much of which is derived from the Cloud Security Alliance’s software defined perimeter.

 

Before you proceed, you may find the following posts helpful:

  1. Distributed Firewalls
  2. Software Defined Internet Exchange

 



Software Defined Perimeter Solutions.

Key Software Defined Perimeter Solutions Discussion points:


  • Discussion on Software Defined Perimeter ( SDP ).

  • The challenges with traditional perimeters.

  • Changes in the environment are causing a need for SDP.

  • SDP and end to end security.

  • Ways to integrate the application.

 

 

 

 

software defined perimeter solutions
Diagram: Software-defined perimeter solutions: The changing landscape.

 

Software-defined Perimeter solutions: Right at the Application

The perimeter should now be at the application. With network traffic engineering, this is already common in microservices environments, where the application programming interface (API) is the perimeter. Still, we should now introduce this to non-containerized environments, especially IoT endpoints. Fundamentally, the only way to secure the new wave of applications is to a) have reliable end-to-end reachability and b) assume a zero-trust model. You have to assume that the wires are not secure.

For this, you need the ability to integrate the security solution more deeply within the application. A collaborative model that tightly integrates with the application provides the required security and network path control. Previously, the traditional model separated the application from the network: the network does what it wants, and the application does what it wants. The application would throw its bits over the wire and hope that the wires were secure and the packets would eventually reach their destination.

software defined perimeter solutions
Diagram: Software-defined perimeter solutions: The TCP/IP model issues.

 

End-to-end security 

It would be best to have a solution that works more closely with the application to ensure end-to-end security. This can be done by installing client software on the mobile app or by using a software developer’s kit (SDK) or API. SD-WAN vendors have done a great job of backhauling security to the cloud and have responded to the market very well. However, security must now become a first-class citizen within the application. You can’t rely on tunnels over the internet anymore. From the application, you need to take the packet, readdress it, encrypt it, and send it to a globally private network. This private network can provide all the relevant services using standard PoPs with physical or virtual equipment, or containerized packet forwarders that can be spun up on demand. Containerized packet forwarders are the more interesting option.

Each session can then be distributed to find the best route, with several forwarders spun up all over the globe. Packet forwarders run on different networks and autonomous systems (AS) worldwide, enabling the maximum number of diverse paths between point A and point B, i.e., across different backbone providers, different ASes, and different peering and routing agreements.

The application endpoint can then examine all those paths for a given session and direct packets to the best-performing path. If a path shows unacceptable performance, the session can be moved to a different path within seconds. Performance metrics such as jitter, latency, packet loss, and per-application throughput should be analyzed in real time, and a linear optimization run across all those variables. A minimal sketch of this kind of path scoring follows.
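
Here is a minimal Python sketch of that selection step. The metric weights, PoP names, and numbers are illustrative assumptions rather than measurements; a production system would tune the weights per application class and re-score continuously.

```python
# Per-session path selection: score candidate paths on real-time metrics.
from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str
    latency_ms: float
    jitter_ms: float
    loss_pct: float

def score(p: PathMetrics) -> float:
    # Lower is better; the weights here are illustrative, not tuned values.
    return p.latency_ms + 4 * p.jitter_ms + 50 * p.loss_pct

def best_path(paths: list[PathMetrics]) -> PathMetrics:
    return min(paths, key=score)

paths = [
    PathMetrics("pop-frankfurt", latency_ms=42, jitter_ms=2.0, loss_pct=0.1),
    PathMetrics("pop-london",    latency_ms=38, jitter_ms=9.0, loss_pct=0.0),
]
print(best_path(paths).name)   # pop-frankfurt: lower jitter outweighs latency
```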

Essentially, you should do cold potato routing on the private network instead of hot potato routing: cold potato routing keeps packets on your own network for as long as possible, to take advantage of all the available optimizations.

 

Ways to integrate with the application

You can start by integrating with the application’s IAM (identity and access management) structure and then define what the application can talk to, for example, the permitted ports and protocols, so business policies govern the application’s network behavior. A minimal sketch of such a policy is shown below.
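
The application names, protocols, and ports in this Python sketch are invented for the example; the essential point is the default-deny lookup.

```python
# Business policy expressed as the application's allowed network behavior.
ALLOWED_FLOWS = {
    "billing-api":   {("tcp", 443)},               # HTTPS only
    "iot-telemetry": {("udp", 5684)},              # CoAP over DTLS
    "legacy-agent":  {("tcp", 443), ("tcp", 8443)},
}

def flow_permitted(app: str, proto: str, port: int) -> bool:
    """Default deny: anything not explicitly listed is blocked."""
    return (proto, port) in ALLOWED_FLOWS.get(app, set())

print(flow_permitted("billing-api", "tcp", 443))   # True
print(flow_permitted("billing-api", "tcp", 22))    # False
```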

A packet from an internet address towards an IoT endpoint in the home must be authenticated and authorized to fit the business policy. These endpoints could then be enrolled into a distributed ledger, such as a blockchain. With proper machine learning and collaboration, you may then be able to identify some DDoS attacks before they create massive problems.

Sometimes, the policies can be tied to a hardware root of trust, i.e., the silicon. This is often seen in the IoT-connected car use case. The silicon itself creates a unique identifier: not an assigned identity, but an immutable one generated by the unique physical characteristics of the silicon. In this case, you don’t care about the IP addresses anymore. As technology progresses, we are becoming less reliant on IP addresses.

Within the software, you must include a certificate and a public key infrastructure (PKI) system, so that you have a bidirectional, authenticated, and authorized certificate exchange. Again, you don’t care about the IP address, as you are working at an upper layer. A minimal mutual TLS sketch follows.
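
The sketch below shows the shape of that exchange using Python’s standard ssl module: a server that requires and verifies a client certificate, i.e., mutual TLS. The file paths are illustrative assumptions; in practice the keys and certificates would come from your private PKI.

```python
# Mutual TLS sketch: the server rejects any client without a valid certificate
# issued by the private CA. File paths are illustrative placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
context.load_verify_locations(cafile="private-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED   # demand a client certificate

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # handshake verifies both identities
        print("client cert subject:", conn.getpeercert()["subject"])
```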

 


Matt Conran | Network World

Hello, I have created a Network World column and will be releasing a few blogs per month. Kindly visit the following link to view my full profile and recent blogs – Matt Conran Network World.

The list of individual blogs can be found here:

“In this day and age, demands on networks are coming from a variety of sources, internal end-users, external customers and via changes in the application architecture. Such demands put pressure on traditional architectures.

Dealing effectively with these demands requires the network domain to become more dynamic. For this, we must embrace digital transformation. However, current methods are delaying this much-needed transition. One major pain point is that networks still depend on manual working and lack fabric-wide automation. This must be addressed if organizations are to implement new products and services ahead of the competition.

So, to evolve, to be in line with the current times and use technology as an effective tool, one must drive the entire organization to become a digital enterprise. The network components do play a key role, but the digital transformation process is an enterprise-wide initiative.”

“There’s a buzz in the industry about a new type of product that promises to change the way we secure and network our organizations. It is called the Secure Access Service Edge (SASE). It was first mentioned by Gartner, Inc. in its hype cycle for networking. Since then Barracuda highlighted SASE in a recent PR update and Zscaler also discussed it in their earnings call. Most recently, Cato Networks announced that it was mentioned by Gartner as a “sample vendor” in the hype cycle.

Today, enterprises have upgraded their portfolios and, as a consequence, the network also needs to be enhanced. What we are witnessing with cloud, mobility, and edge has resulted in increased pressure on the legacy network and security architecture. Enterprises are transitioning from all users, applications, and data located on-premise to a heavy reliance on the cloud, edge applications, and a dispersed mobile workforce.”

“Microsoft has introduced a new virtual WAN as a competitive differentiator and is getting enough tracking that AWS and Google may follow. At present, Microsoft is the only company to offer a virtual WAN of this kind. This made me curious to discover the highs and lows of this technology. So I sat down with Sorell Slaymaker, Principal Consulting Analyst at TechVision Research to discuss. The following is a summary of our discussion.

But before we proceed, let’s gain some understanding of the cloud connectivity.

Cloud connectivity has evolved over time. When the cloud was introduced about a decade ago, let’s say, if you were an enterprise, you would connect to what’s known as a cloud service provider (CSP). However, over the last 10 years, many providers like Equinix have started to offer carrier-neutral collocations. Now, there is the opportunity to meet a variety of cloud companies in a carrier-neutral colocation. On the other hand, cloud connectivity also has certain limitations.”

“Actions speak louder than words. Reliable actions build lasting trust in contrast to unreliable words. Imagine that you had a house with a guarded wall. You would feel safe in the house, correct? Now, what if that wall is dismantled? You might start to feel your security is under threat. Anyone could have easy access to your house.

In the same way, with traditional security products: it is as if anyone is allowed to leave their house, knock at your door and pick your locks. Wouldn’t it be more secure if only certain individuals whom you fully trust can even see your house? This is the essence of zero-trust networking and is a core concept discussed in my recent course on SDP (software-defined perimeter).

Within a zero-trust environment, there is no implicit trust. Thus, trust must be sourced from somewhere else in order to gain access to protected resources. It is only after sufficient trust has been established and the necessary controls are passed, that the access is granted, providing a path to the requested resource. The access path to the resource is designed differently, depending on whether it’s a client or service-initiated software-defined perimeter solution.”

“Networking has gone through various transformations over the last decade. In essence, the network has become complex and hard to manage using traditional mechanisms. Now there is a significant need to design and integrate devices from multiple vendors and employ new technologies, such as virtualization and cloud services.

Therefore, every network is a unique snowflake. You will never come across two identical networks. The products offered by the vendors act as the building blocks for engineers to design solutions that work for them. If we all had a simple and predictable network, this would not be a problem. But there are no global references to follow and designs vary from organization to organization. These lead to network variation even while offering similar services.

It is estimated that over 60% of users consider their I.T environment to be more complex than it was 2 years ago. We can only assume that network complexity is going to increase in the future.”

“We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from the management’s perspective, is now redundant. Today, the users and data are omnipresent.

The customer’s expectations have up-surged because of this evolution. There is now an increased expectation of high-quality service and a decrease in customer’s patience. In the past, one could patiently wait 10 hours to download the content. But this is certainly not the scenario at the present time. Nowadays we have high expectations and high-performance requirements but on the other hand, there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, buffer bloat and a list of other performance-related problems that I wrote about on Network Insight. [Disclaimer: the author is employed by Network Insight.]

Also, the internet is growing at an accelerated rate. By the year 2020, the internet is expected to reach 1.5 Gigabytes of traffic per day per person. In the coming times, the world of the Internet of Things (IoT), driven by objects, will far supersede these data figures. For example, a connected airplane will generate around 5 Terabytes of data per day. This spiraling level of volume requires a new approach to data management and forces us to re-think how we deliver applications.”

“Deploying zero trust software-defined perimeter (SDP) architecture is not about completely replacing virtual private network (VPN) technologies and firewalls. By and large, the firewall demarcation points that mark the inside and outside are not going away anytime soon. The VPN concentrator will also have its position for the foreseeable future.

A rip and replace is a very aggressive deployment approach regardless of the age of the technology. And while SDP is new, it should be approached with care when choosing the correct vendor. An SDP adoption should be a slow migration process as opposed to a one-off rip and replace.

As I wrote about on Network Insight [Disclaimer: the author is employed by Network Insight], while SDP is a disruptive technology, after discussing with numerous SDP vendors, I have discovered that the current SDP landscape tends to be based on specific use cases and projects, as opposed to a technology that has to be implemented globally. To start with, you should be able to implement SDP for only certain user segments.”

“Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this is still the foundation for most networking professionals even though a lot has changed since the inception of the design.

More often than not the fixed perimeter consists of a number of network and security appliances, thereby creating a service chained stack, resulting in appliance sprawl. Typically, the appliances that a user may need to pass to get to the internal LAN may vary. But generally, the stack would consist of global load balancers, external firewall, DDoS appliance, VPN concentrator, internal firewall and eventually LAN segments.

The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted to passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it’s just a matter of time. Someone with enough skill will eventually get through.”

“In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud-presence across on-premise data centers and remote site locations.

The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.

A WAN transformation project requires this diversity to be taken into consideration when choosing the best SD-WAN vendor to satisfy both; networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity, there are many more related aspects.”

“As the cloud service providers and search engines started with the structuring process of their business, they quickly ran into the problems of managing the networking equipment. Ultimately, after a few rounds of getting the network vendors to understand their problems, these hyperscale network operators revolted.

Primarily, what the operators were looking for was a level of control in managing their network which the network vendors couldn’t offer. The revolution blazed the path that introduced open networking and network disaggregation to the world of networking. Let us first learn about disaggregation, followed by open networking.”

“I recently shared my thoughts about the role of open source in networking. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.

The first wave signifies that networking is moving to the software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially supported in the SD-WAN market rush.

Seemingly, we are beginning to see less investment in hardware unless there is a specific segment that needs to be resolved. But generally, software-based platforms are preferred as they bring many advantages. It is evident that there has been a technology shift. We have moved networking from hardware to software and this shift has positive effects for users, enterprises and service providers.”

“BGP (Border Gateway Protocol) is considered the glue of the internet. If we view through the lens of farsightedness, however, there’s a question that still remains unanswered for the future. Will BGP have the ability to route on the best path versus the shortest path?

There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as, sending out pings to monitor the network and then modifying the BGP attributes, such as the AS prepending to make BGP do the performance-based routing (PBR). However, this falls short in a number of ways.

The problem with BGP is that it’s not capacity or performance aware and therefore its decisions can sink the application’s performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the network’s performance.”

“The transformation to the digital age has introduced significant changes to the cloud and data center environments. This has compelled the organizations to innovate more quickly than ever before. This, however, brings with it both – the advantages and disadvantages.

The network and security need to keep up with this rapid pace of change. If you cannot match the speed of the digital age, then ultimately bad actors will become a hazard. Therefore, the organizations must move to a zero-trust environment: default deny, with least privilege access. In today’s evolving digital world this is the primary key to success.

Ideally, a comprehensive solution must provide protection across all platforms including legacy servers, VMs, services in public clouds, on-premise, off-premise, hosted, managed or self-managed. We are going to stay hybrid for a long time, therefore we need to equip our architecture with zero-trust.”

“With the introduction of cloud, BYOD, IoT, and virtual offices scattered around the globe, the traditional architectures not only hold us back in terms of productivity but also create security flaws that leave gaps for compromise.

The network and security architectures that are commonly deployed today are not fit for today’s digital world. They were designed for another time, a time of the past. This could sound daunting…and it indeed is.”

“The Internet was designed to connect things easily, but a lot has changed since its inception. Users now expect the internet to find the “what” (i.e., the content), but the current communication model is still focused on the “where.”

The Internet has evolved to be dominated by content distribution and retrieval. As a matter of fact, networking protocols still focus on the connection between hosts that surfaces many challenges.

The most obvious solution is to replace the “where” with the “what” and this is what Named Data Networking (NDN) proposes. NDN uses named content as opposed to host identifiers as its abstraction.”

“Today, connectivity to the Internet is easy; you simply get an Ethernet driver and hook up the TCP/IP protocol stack. Then dissimilar network types in remote locations can communicate with each other. However, before the introduction of the TCP/IP model, networks were manually connected but with the TCP/IP stack, the networks can connect themselves up, nice and easy. This eventually caused the Internet to explode, followed by the World Wide Web.

So far, TCP/IP has been a great success. It’s good at moving data and is both robust and scalable. It enables any node to talk to any other node by using a point-to-point communication channel with IP addresses as identifiers for the source and destination. Ideally, a network ships the data bits. You can either name the locations to ship the bits to or name the bits themselves. Today’s TCP/IP protocol architecture picked the first option. Let’s discuss the second option later in the article.

It essentially follows the communication model used by the circuit-switched telephone networks. We migrated from phone numbers to IP addresses and circuit-switching by packet-switching with datagram delivery. But the point-to-point, location-based model stayed the same. This made sense during the old times, but not in today’s times as the view of the world has changed considerably. Computing and communication technologies have advanced rapidly.”

“Technology is always evolving. However, in recent time, two significant changes have emerged in the world of networking. Firstly, the networking is moving to software that can run on commodity off-the-shelf hardware. Secondly, we are witnessing the introduction and use of many open source technologies, removing the barrier of entry for new product innovation and rapid market access.

Networking is the last bastion within IT to adopt the open source. Consequently, this has badly hit the networking industry in terms of the slow speed of innovation and high costs. Every other element of IT has seen radical technology and cost model changes over the past 10 years. However, IP networking has not changed much since the mid-’90s.

When I became aware of these trends, I decided to sit with Sorell Slaymaker to analyze the evolution and determine how it will inspire the market in the coming years.”

“Ideally, meeting the business objectives of speed, agility, and cost containment boils down to two architectural approaches: the legacy telco versus the cloud-based provider.

Today, the wide area network (WAN) is a vital enterprise resource. Its uptime, often targeting availability of 99.999%, is essential to maintain the productivity of employees and partners and also for maintaining the business’s competitive edge.

Historically, enterprises had two options for WAN management models — do it yourself (DIY) and a managed network service (MNS). Under the DIY model, the IT networking and security teams build the WAN by integrating multiple components including MPLS service providers, internet service providers (ISPs), edge routers, WAN optimizer, and firewalls.

The teams are responsible for keeping that infrastructure current and optimized. They configure and adjust the network for changes, troubleshoot outages and ensure that the network is secure. Since this is not a trivial task, many organizations have switched to an MNS. The enterprises outsource the buildout, configuration and on-going management, often to a regional telco.”

“To undergo the transition from legacy to cloud-native application environments you need to employ zero trust.

Enterprises operating in the traditional monolithic environment may have strict organizational structures. As a result, the requirement for security may restrain them from transitioning to a hybrid or cloud-native application deployment model.

In spite of the obvious difficulties, the majority of enterprises want to take advantage of cloud-native capabilities. Today, most entities are considering or evaluating cloud-native to enhance their customer’s experience. In some cases, it is the ability to draw richer customer market analytics or to provide operational excellence.

Cloud-native is a key strategic agenda that allows customers to take advantage of many new capabilities and frameworks. It enables organizations to build and evolve going forward to gain an edge over their competitors.”

“Domain name system (DNS) over transport layer security (TLS) adds an extra layer of encryption, but in what way does it impact your IP network traffic? The additional layer of encryption indicates controlling what’s happening over the network is likely to become challenging.

Most noticeably it will prevent ISPs and enterprises from monitoring the user’s site activity and will also have negative implications for both; the wide area network (WAN) optimization and SD-WAN vendors.

During a recent call with Sorell Slaymaker, we rolled back in time and discussed how we got here, to a world that will soon be fully encrypted. We started with SSL1.0, which was the original version of HTTPS as opposed to the non-secure HTTP. As an aftermath of evolution, it had many security vulnerabilities. Consequently, we then evolved from SSL 1.1 to TLS 1.2.”

“Delivering global SD-WAN is very different from delivering local networks. Local networks offer complete control to the end-to-end design, enabling low-latency and predictable connections. There might still be blackouts and brownouts but you’re in control and can troubleshoot accordingly with appropriate visibility.

With global SD-WANs, though, managing the middle-mile/backbone performance and managing the last-mile are, well shall we say, more challenging. Most SD-WAN vendors don’t have control over these two segments, which affects application performance and service agility.

In particular, an issue that SD-WAN appliance vendors often overlook is the management of the last-mile. With multiprotocol label switching (MPLS), the provider assumes the responsibility, but this is no longer the case with SD-WAN. Getting the last-mile right is challenging for many global SD-WANs.”

“Today’s threat landscape consists of skilled, organized and well-funded bad actors. They have many goals including exfiltrating sensitive data for political or economic motives. To combat these multiple threats, the cybersecurity market is required to expand at an even greater rate.

The IT leaders must evolve their security framework if they want to stay ahead of the cyber threats. The evolution in security we are witnessing has a tilt towards the Zero-Trust model and the software-defined perimeter (SDP), also called a “Black Cloud”. The principle of its design is based on the need-to-know model.

The Zero-Trust model says that anyone attempting to access a resource must be authenticated and be authorized first. Users cannot connect to anything since unauthorized resources are invisible, left in the dark. For additional protection, the Zero-Trust model can be combined with machine learning (ML) to discover the risky user behavior. Besides, it can be applied for conditional access.”

“There are three types of applications: applications that manage the business, applications that run the business, and miscellaneous apps.

A security breach or performance related issue for an application that runs the business would undoubtedly impact the top-line revenue. For example, an issue in a hotel booking system would directly affect the top-line revenue as opposed to an outage in Office 365.

It is a general assumption that cloud deployments would suffer from business-impacting performance issues due to the network. The objective is to have applications within 25ms (one-way) of the users who use them. However, too many network architectures backhaul the traffic to traverse from a private to the public internetwork.”

“Back in the early 2000s, I was the sole network engineer at a startup. By day, my role included managing four floors and 22 European locations packed with different vendors and servers across three companies. In the evenings, I administered the largest enterprise streaming network in Europe with a group of highly skilled staff.

Since we were an early startup, combined roles were the norm. I’m sure that most of you who joined as young engineers in such situations can understand how I felt back then. However, it was a good experience, so I battled through it. To keep my evenings stress-free and without any IT calls, I had to design in as much high-availability (HA) as I possibly could. After all, all the interesting technological learning was in the second part of my day, working with content delivery mechanisms and complex routing. All of which came back to me when I read a recent post on Cato Networks’ self-healing SD-WAN for global enterprise networks.

Cato is enriching the self-healing capabilities of Cato Cloud. Rather than the enterprise having the skill and knowledge to think about every type of failure in an HA design, the Cato Cloud now heals itself end-to-end, ensuring service continuity.”

While computing, storage, and programming have dramatically changed and become simpler and cheaper over the last 20 years, IP networking has not. IP networking is still stuck in the era of the mid-1990s.

Realistically, when I look at ways to upgrade or improve a network, the approach falls into two separate buckets. One is the tactical move and the other is strategic. For example, when I look at IPv6, I see this as a tactical move. There aren’t many business value-adds.

In fact, there are downsides, such as additional overhead and minimal internetworking QoS between IPv4 and IPv6, with zero application awareness and a continued lack of security. I do not intend to say that one should not upgrade to IPv6; it does give you more IP addresses (if you need them) and better multicast capabilities, but it’s a tactical move.

It was about 20 years ago when I plugged my first Ethernet cable into a switch. It was for our new chief executive officer. Little did she know that she was about to share her traffic with most others on the first floor. At that time, as a network engineer, I had five floors to look after.

Having a few virtual LANs (VLANs) per floor was a common design practice in those traditional days. Essentially, a couple of broadcast domains per floor were deemed OK. With the VLAN-based approach, we used to give access to different people on the same subnet. Even though people worked at different levels, if they were in the same subnet, they were all treated the same.

The web application firewall (WAF) issue didn’t seem to me like a big deal until I actually started to dig deeper into the ongoing discussion in this field. It generally seems that vendors are trying to convince customers and themselves that everything is going smoothly and that there is no problem. In reality, however, customers don’t buy it anymore, and the WAF industry is under major pressure as it constantly fails from the customer quality perspective.

There have also been red flags raised over the use of runtime application self-protection (RASP) technology. There is now a trend to embed the mitigation/defense side into the application and compile it within the code. Runtime application self-protection is considered a shortcut to securing software, one that is also compounded by performance problems. It seems to be a desperate solution to replace WAFs, as no one really likes to mix a “security appliance” into the application code, which is exactly what the RASP vendors are currently offering their customers. Nonetheless, some vendors are adopting RASP technology.

“John Kindervag, a former analyst from Forrester Research, was the first to introduce the Zero-Trust model back in 2010. The focus then was more on the application layer. However, once I heard that Sorell Slaymaker from Techvision Research was pushing the topic at the network level, I couldn’t resist giving him a call to discuss the generals on Zero Trust Networking (ZTN). During the conversation, he shone a light on numerous known and unknown facts about Zero Trust Networking that could prove useful to anyone.

The traditional world of networking started with static domains. The classical network model divided clients and users into two groups – trusted and untrusted. The trusted are those inside the internal network, the untrusted are external to the network, which could be either mobile users or partner networks. To recast the untrusted to become trusted, one would typically use a virtual private network (VPN) to access the internal network.”

“Over the last few years, I have been sprawled in so many technologies that I have forgotten where my roots began in the world of the data center. Therefore, I decided to delve deeper into what’s prevalent and headed straight to Ivan Pepelnjak’s Ethernet VPN (EVPN) webinar hosted by Dinesh Dutt. I knew of the distinguished Dinesh since he was the chief scientist at Cumulus Networks, and for me, he is a leader in this field. Before reading his book on EVPN, I decided to give Dinesh a call to exchange our views about the beginning of EVPN. We talked about the practicalities and limitations of the data center. Here is an excerpt from our discussion.”

“If you still live in a world of the script-driven approach for both service provider and enterprise networks, you are going to reach limits. There is only so far you can go alone. It creates a gap that lacks modeling and database at a higher layer. Production-grade service provider and enterprise networks require a production-grade automation framework.

In today’s environment, the network infrastructure acts as the core centerpiece, providing critical connection points. Over time, the role of infrastructure has expanded substantially. In the present day, it largely influences the critical business functions for both the service provider and enterprise environments”

“At the present time, there is a remarkable trend for application modularization that splits the large hard-to-change monolith into a focused microservices cloud-native architecture. The monolith keeps much of the state in memory and replicates between the instances, which makes them hard to split and scale. Scaling up can be expensive and scaling out requires replicating the state and the entire application, rather than the parts that need to be replicated.

Microservices, by contrast, separate the logic from the state. This separation enables the application to be broken apart into a number of smaller, more manageable units, making them easier to scale. Therefore, a microservices environment consists of multiple services communicating with each other. All communication between services is initiated and carried out with network calls, and services are exposed via application programming interfaces (APIs). Each service comes with its own purpose and serves a unique business value.”

“When I stepped into the field of networking, everything was static and security was based on perimeter-level firewalling. It was common to have two perimeter-based firewalls; internal and external to the wide area network (WAN). Such layout was good enough in those days.

I remember the time when connected devices were corporate-owned. Everything was hard-wired and I used to define the access control policies on a port-by-port and VLAN-by-VLAN basis. There were numerous manual end-to-end policy configurations, which were not only time consuming but also error-prone.

There was a complete lack of visibility and global policy throughout the network, and every morning I relied on the Multi Router Traffic Grapher (MRTG) to manually inspect the traffic spikes indicating variations from baselines. Once something was plugged in, it was “there for life”. Have you ever heard of the 20-year-old PC that no one knows where it is, but it still replies to ping? In contrast, we now live in an entirely different world. The perimeter has dissolved, making perimeter-level firewalling alone insufficient.”

“Recently, I was reading a blog post by Ivan Pepelnjak on intent-based networking. He discusses that the definition of intent is “a usually clearly formulated or planned intention” and the word “intention” is defined as “what one intends to do or bring about.” I started to ponder over his submission that the definition is confusing as there are many variations.

To guide my understanding, I decided to delve deeper into the building blocks of intent-based networking, which led me to a variety of closed-loop automation solutions. After extensive research, my view is that closed-loop automation is a prerequisite for intent-based networking. Keeping in mind the current requirements, it’s a solution that the businesses can deploy.

Now that I have examined different vendors, I would recommend gazing from a bird’s eye view, to make sure the solution overcomes today’s business and technical challenges. The outputs should drive a future-proof solution.”

“What keeps me awake at night is the thought of artificial intelligence lying in wait in the hands of bad actors. Artificial intelligence combined with the powers of IoT-based attacks will create an environment tapped for mayhem. It is easy to write about, but it is hard for security professionals to combat. AI has more force, severity, and fatality which can change the face of a network and application in seconds.

When I think of the capabilities artificial intelligence has in the world of cybersecurity I know that unless we prepare well we will be like Bambi walking in the woods. The time is now to prepare for the unknown. Security professionals must examine the classical defense mechanisms in place to determine if they can withstand an attack based on artificial intelligence.”

“When I began my journey in 2015 with SD-WAN, the implementation requirements were different to what they are today. Initially, I deployed pilot sites for internal reachability. This was not a design flaw, but a solution requirement set by the options available to SD-WAN at that time. The initial requirement when designing SD-WAN was to replace multiprotocol label switching (MPLS) and connect the internal resources together.

Our projects gained the benefits of SD-WAN deployments. It certainly added value, but there were compelling constraints. In particular, we were limited to internal resources and users, yet our architecture consisted of remote partners and mobile workers. The real challenge for SD-WAN vendors is not solely to satisfy internal reachability. The wide area network (WAN) must support a range of different entities that require network access from multiple locations.”

“Applications have become a key driver of revenue, rather than their previous role as merely a tool to support the business process. What acts as the heart for all applications is the network providing the connection points. Due to the new, critical importance of the application layer, IT professionals are looking for ways to improve the architecture of their network.

A new era of campus network design is required, one that enforces policy-based automation from the edge of the network to public and private clouds using an intent-based paradigm.

SD-Access is an example of an intent-based network within the campus. It is broken down into three major elements:

  1. Control-Plane based on Locator/ID separation protocol (LISP),
  2. Data-Plane based on Virtual Extensible LAN (VXLAN) and
  3. Policy-Plane based on Cisco TrustSec.”

“When it comes to technology, nothing is static, everything is evolving. Either we keep inventing mechanisms that dig out new security holes, or we are forced to implement existing kludges to cover up the inadequacies in security on which our web applications depend.

The assault on the changing digital landscape with all its new requirements has created a black hole that needs attention. The shift in technology, while creating opportunities, has a bias to create security threats. Unfortunately, with the passage of time, these trends will continue to escalate, putting web application security at center stage.

Business relies on web applications. Loss of service to business-focused web applications not only affects the brand but also results in financial loss. The web application acts as the front door to valuable assets. If you don’t efficiently lock the door or at least know when it has been opened, valuable revenue-generating web applications are left compromised.”

“When I started my journey in the technology sector back in the early 2000s, the world of networking consisted of simple structures. I remember configuring several standard branch sites that would connect to a central headquarters. There were only a handful of remote warriors assigned, usually just a few high-ranking officials.

As the dependence on networking increased, so did the complexity of network designs. The standard single site became dual-based with redundant connectivity to different providers, advanced failover techniques, and high-availability designs became the norm. The number of remote workers increased, and eventually, security holes began to open in my network design.

Unfortunately, the advances in network connectivity were not in conjunction with appropriate advances in security, forcing everyone back to the drawing board. Without adequate security, the network connectivity that is left to defaults is completely insecure and is unable to validate the source or secure individual packets. If you can’t trust the network, you have to somehow secure it. We secured connections over unsecured mediums, which led to the implementation of IPSec-based VPNs along with all their complex baggage.”

“Over the years, we have embraced new technologies to find improved ways to build systems.  As a result, today’s infrastructures have undergone significant evolution. To keep pace with the arrival of new technologies, legacy is often combined with the new, but they do not always mesh well. Such a fusion between ultra-modern and conventional has created drag in the overall solution, thereby, spawning tension between past and future in how things are secured.

The multi-tenant shared infrastructure of the cloud, container technologies like Docker and Kubernetes, and new architectures like microservices and serverless, while technically remarkable, increase complexity. Complexity is the number one enemy of security. Therefore, to be effectively aligned with the adoption of these technologies, a new approach to security is required that does not depend on shifting infrastructure as the control point.”

“Throughout my early years as a consultant, when asynchronous transfer mode (ATM) was the rage and multiprotocol label switching (MPLS) was still at the outset, I handled numerous roles as a network architect alongside various carriers. During that period, I experienced first-hand problems that the new technologies posed to them.

The lack of true end-to-end automation made our daily tasks run into the night. Bespoke network designs, combined with a shortfall of appropriate documentation, resulted in a “one person knows all” situation. The provisioning teams never fully understood the design. The copy-and-paste implementation approach is error-prone, leaving teams blindfolded when something went wrong.

Designs were stitched together with so much variation that troubleshooting was limited to a personalized approach. That previous experience surfaced in my mind when I heard about carriers delivering SD-WAN services. I started to question whether they could have made the adequate changes to provide such an agile service.”