Zero Trust Access

Safe-T: A Progressive Approach to Zero Trust Access

The foundations that support our systems were built with connectivity, not security, as the essential feature. TCP connects before it authenticates. Security policy and user access based on IP addresses lack context and allow architectures with overly permissive access. The result is a brittle security posture, which is driving the need for Zero Trust Access. Our environment has changed considerably, leaving traditional network and security architectures vulnerable to attack. The threat landscape is unpredictable: we are being hit by external threats from all over the world. But the problem is not limited to external threats. There are insider threats within a user group, and insider threats across user-group boundaries.

Therefore, we need to find ways to decouple security from the physical network and to decouple application access from the network. To do this, we need to change our mindset and invert the security model. Software-Defined Perimeter (SDP) is an extension of zero trust and a significant step forward. It provides an updated approach that current security architectures fail to address. SDP is often referred to as Zero Trust Access (ZTA). Safe-T's access control software package is called Safe-T Zero+. Safe-T offers a phased deployment model, enabling you to migrate progressively to a zero-trust network architecture while lowering the risk of technology adoption. Safe-T's Zero+ model is flexible enough to meet today's diverse hybrid IT requirements, and it satisfies the zero-trust principles used to combat today's network security challenges.


Network Challenges

  • Connect First and Then Authenticate

TCP has a weak security foundation. When clients want to communicate with and access an application, they first set up a connection. Only after the connect stage has completed can the authentication stage be carried out. Unfortunately, with this model, we have no idea who the client is until they have completed the connect phase, and there is a possibility that the requesting client is not trustworthy.
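A tiny, self-contained sketch (in Python, purely illustrative) makes the point: the TCP handshake completes, and the server is already talking to the client, before any credential is ever exchanged.

```python
import socket
import threading

# A trivial local server that accepts TCP connections and only *then*
# asks for a credential -- mirroring TCP's connect-first model.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

events = []

def serve():
    conn, _addr = server.accept()         # handshake is already complete here
    events.append("connected")
    conn.sendall(b"AUTH?")                # authentication starts only now
    cred = conn.recv(1024)
    events.append("authenticated" if cred == b"secret" else "rejected")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(("127.0.0.1", port))
challenge = client.recv(1024)             # the server talks to us before any auth
client.sendall(b"secret")
client.close()
t.join()
server.close()

print(events)  # the connect event always precedes the authentication event
```

The "connected" event fires unconditionally for any client, trustworthy or not, which is exactly the weakness the zero-trust model inverts.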


  • The Network Perimeter

We began with static domains, whereby internal and external segments are separated by a fixed perimeter. Public IP addresses are assigned to external hosts and private addresses to internal ones. If a host is assigned a private IP, it is considered more trustworthy than one with a public IP address. Therefore, trusted hosts operate inside the perimeter, while untrusted hosts operate outside it. The significant factor to consider here is that IP addresses carry no user context with which to assign and validate trust.
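To see how little the perimeter model actually knows, here is a minimal illustration of the classic logic (using Python's `ipaddress` module): a private address equals "trusted". Note that the verdict says nothing about who the user behind the address is.

```python
import ipaddress

def perimeter_trust(ip: str) -> str:
    """Classic perimeter logic: private address => 'trusted'.

    Purely illustrative -- the point is that no user identity is involved.
    """
    return "trusted" if ipaddress.ip_address(ip).is_private else "untrusted"

print(perimeter_trust("10.1.2.3"))  # trusted -- yet we know nothing of the user
print(perimeter_trust("8.8.8.8"))   # untrusted -- same lack of context
```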

Today, IT has become far more diverse. It now supports hybrid architectures with a variety of user types: humans, applications, and a proliferation of connected devices. Cloud adoption has become the norm, and an abundance of remote workers access the corporate network from a variety of devices and locations.

The perimeter approach no longer accurately reflects the typical topology of users and servers. It was built for a different era, when everything lived inside the walls of the organization. Today, however, organizations are increasingly deploying applications in public clouds located in geographically distant regions, remote from the organization's trusted firewalls and perimeter network. This certainly stretches the network perimeter.

We have a fluid network perimeter where data and users are located everywhere, so we now operate in a completely new environment. Yet the security policy controlling user access is still built for static, corporate-owned devices within the supposedly trusted LAN.


  • Lateral Movements

A major concern with the perimeter approach is that it assumes a trusted internal network. However, an estimated 80% of threats come from internal malware or malicious employees, and these often go undetected.

Besides, with the rise of phishing emails, an unintentional click will give a bad actor broad-level access. And once on the LAN, the bad actors can move laterally from one segment to another. They are likely to navigate undetected between, or within the segments.

Eventually, the bad actor can steal credentials and use them to capture and exfiltrate valuable assets. Even social media accounts can be targeted for data exfiltration, since firewalls do not often inspect them as a file transfer mechanism.


  • Issues with the Virtual Private Network (VPN)

With traditional VPN access, the tunnel creates an extension between the client's device and the application's location. The VPN rules are static and do not change dynamically with the changing levels of trust on a given device. They provide only network-level information, which is a crucial limitation.

Therefore, from a security standpoint, the traditional method of VPN access enables the clients to have broad network-level access. This makes the network susceptible to undetected lateral movements. Also, the remote users are authenticated and authorized but once permitted to the LAN they have coarse-grained access. This obviously creates a high level of risk as undetected malware on a user’s device can spread to an inner network.

Another significant challenge is that VPNs generate administrative complexity and cannot easily handle cloud or multi-network environments. They require the installation of end-user VPN client software, and users must know where the application they are accessing is located. Users would have to change their VPN client settings to gain access to applications situated at different locations. In a nutshell, traditional VPNs are complex for administrators to manage and for users to operate.


Also, a poor user experience is likely, since user traffic must be backhauled to a regional data center. This adds latency and bandwidth costs.



Can Zero Trust Access be the Solution?

The main principle that ZTA follows is that nothing should be trusted, regardless of whether the connection originates inside or outside the network perimeter. Today, we have no reason to trust any user, device, or application by default. You cannot protect what you cannot see, but it is equally true that you cannot attack what you cannot see. ZTA makes the application and the infrastructure completely undetectable to unauthorized clients, thereby creating an invisible network.

Preferably, application access should be based on contextual parameters, such as who the user is and where they are located, along with an assessment of the security posture of the device, followed by continuous assessment of the session. This moves us from network-centric to user-centric security, providing a connection-based approach. Security enforcement should be based on user context and include policies that matter to the business, unlike a policy based on subnets, which carries no such meaning. The authentication workflows should include context-aware data, such as device ID, geographic location, and the time and day when the user requests access.

It’s not good enough to provide network access. We must provide granular application access with a dynamic segment of 1, where an application micro-segment is created for every request that comes in. Micro-segmentation provides the ability to control access by subdividing the larger network into small, secure application micro-perimeters internal to the network. This abstraction layer puts a lockdown on lateral movement. In addition, zero trust access implements a policy of least privilege by enforcing controls that give users access only to the resources they need to perform their tasks.
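A hedged sketch of least-privilege, per-request authorization: a hypothetical policy table maps each user to the specific applications they may reach, with no subnets involved. The user and application names are invented for illustration.

```python
# Hypothetical policy table: each user is granted only the specific
# applications needed for their role (least privilege), never a subnet.
POLICY = {
    "alice": {"crm"},
    "bob": {"crm", "billing"},
}

def authorize(user: str, app: str) -> bool:
    """A micro-segment of 1 per request: one user, one application."""
    return app in POLICY.get(user, set())

assert authorize("alice", "crm")
assert not authorize("alice", "billing")   # no lateral access to other apps
assert not authorize("mallory", "crm")     # unknown users get nothing
```

Because the default for an unknown user is the empty set, the model is deny-by-default, which is the essence of zero trust.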


Characteristics of Safe-T

Safe-T has 3 main pillars that provide a secure application and file access solution:

1) An architecture that implements zero trust access,

2) A proprietary secure channel that enables users to remotely access/share sensitive files and

3) User behavior analytics.

Safe-T’s SDP architecture is designed to substantially implement the essential capabilities delineated by the Cloud Security Alliance (CSA) architecture. Safe-T’s Zero+ is built using these main components:

The Safe-T Access Controller is the centralized control and policy enforcement engine that enforces end-user authentication and access. It acts as the control layer, governing the flow between end-users and backend services.

Secondly, the Access Gateway acts as a front-end to all the backend services published to an untrusted network. The Authentication Gateway presents a clientless, browser-based interface to the end-user, through which a pre-configured authentication workflow provided by the Access Controller is executed. The authentication workflow is a customizable set of authentication steps involving 3rd-party IDPs (Okta, Microsoft, DUO Security, etc.) and built-in options such as captcha, username/password, No-Post, and OTP.
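Conceptually, such an authentication workflow is an ordered chain of checks that must all pass. The sketch below is hypothetical (the step names and request fields are invented for illustration) and is not Safe-T's actual implementation:

```python
# Hypothetical, simplified authentication workflow: an ordered list of
# steps, each of which must pass before access is granted.
def check_captcha(req: dict) -> bool:
    return req.get("captcha_ok", False)

def check_password(req: dict) -> bool:
    return req.get("password") == req.get("expected_password")

def check_otp(req: dict) -> bool:
    return req.get("otp") == req.get("expected_otp")

WORKFLOW = [check_captcha, check_password, check_otp]

def authenticate(request: dict) -> bool:
    """Run every configured step in order; all must succeed."""
    return all(step(request) for step in WORKFLOW)
```

Swapping steps in and out of `WORKFLOW` is the moral equivalent of customizing the workflow with different IDPs or built-in checks.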


Safe-T Zero+ Capabilities

The Safe-T Zero+ capabilities are in line with zero trust principles. With Safe-T Zero+, clients requesting access must go through authentication and authorization stages before they can access the resource. Any network resource that has not passed these steps is blackened. Here, URL rewriting is used to hide the backend services.
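URL rewriting of this kind can be pictured as a lookup table that maps public paths to hidden backend addresses. The paths and internal addresses below are hypothetical stand-ins, not Safe-T's actual configuration:

```python
# Hypothetical URL-rewriting table: public paths map to hidden backend
# services; internal hostnames never appear in anything the client sees.
REWRITE = {
    "/mail": "http://10.0.5.20:8080/owa",
    "/crm":  "http://10.0.5.21:9000/app",
}

def rewrite(public_path: str):
    """Resolve a public path to its hidden backend, or nothing at all."""
    # Unknown paths get no answer -- the service stays "blackened".
    return REWRITE.get(public_path)

assert rewrite("/mail") == "http://10.0.5.20:8080/owa"
assert rewrite("/admin") is None   # unpublished services remain invisible
```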

This reduces the attack surface to an absolute minimum and follows Safe-T's axiom: if you can't be seen, you can't be hacked. In a normal operating environment, for users to get access to services behind a firewall, ports must be opened on the firewall. This presents a security risk, as a bad actor could directly access the service via the open port and exploit any of its vulnerabilities.

Another paramount capability of Safe-T Zero+ is a patented technology called reverse access, which eliminates the need to open incoming ports in the internal firewall. This also eliminates the need to store sensitive data in the demilitarized zone (DMZ). Zero+ can extend to on-premise, public, and hybrid clouds, supporting the most diverse hybrid IT requirements. It can be deployed on-premises, as part of Safe-T's SDP services, or on AWS, Azure, and other cloud infrastructures, thereby protecting both cloud and on-premise resources.

Zero+ provides user behavior analytics that monitor the actions performed on protected web applications, allowing the administrator to inspect the details of anomalous behavior. Forensic assessment is also made easier by offering a single source for logging.

Finally, Zero+ provides a unique, native HTTPS-based file access solution for the NTFS file system, replacing the vulnerable SMB protocol. Besides, users can create a standard mapped network drive in their Windows explorer. This provides a secure, encrypted, and access-controlled channel to shared backend resources.


Deployment Strategy

Safe-T customers can select whichever architecture meets their on-premise or cloud-based requirements.


There are 3 options:

i) The customer deploys three VMs: 1) Access Controller, 2) Access Gateway, and 3) Authentication Gateway. The VMs can be deployed on-premises in an organization’s LAN, on Amazon Web Services (AWS) public cloud, or on Microsoft’s Azure public cloud.

ii) The customer deploys the 1) Access Controller VM and 2) Access Gateway VM on-premises in their LAN. The customer deploys the Authentication Gateway VM on a public cloud, such as AWS or Azure.

iii) The customer deploys the Access Controller VM on-premise in the LAN and Safe-T deploys and maintains two VMs 1) Access Gateway and 2) Authentication Gateway; both hosted on Safe-T’s global SDP cloud service.


ZTA Migration Path

Today, organizations recognize the need to move to a zero trust architecture. However, there is a difference between recognition and deployment, and new technology brings considerable risks. Traditional Network Access Control (NAC) and VPN solutions fall short in many ways, but a rip-and-replace model is a very aggressive approach.

To begin the transition from legacy to ZTA, you should look for a migration path that you feel comfortable with. Perhaps you want to run a traditional VPN in parallel with your SDP solution, and only for a group of users for a set period of time. A sensible example would be choosing a server used primarily by experienced users, such as DevOps or QA personnel. This ensures that the risk is minimal if any problem occurs during the phased deployment of SDP access in your organization.

A recent survey carried out by the CSA indicates that SDP awareness and adoption are still at an early stage. However, when you do go down the path of ZTA, selecting a vendor that provides an architecture matching your requirements is the key to successful adoption. For example, look for SDP vendors who allow you to continue using your existing VPN deployment while adding SDP/ZTA capabilities on top of your VPN. This sidesteps the risks involved in switching to a completely new technology.



Remote Browser Isolation: Complementing the SDP Story

Our digital environment has been transformed significantly. Unlike earlier times, we now have a multitude of devices, access methods, and user types accessing applications from a variety of locations, which makes it more difficult to know which communications can be trusted. The perimeter-based approach to security can no longer be limited to just the physical location of the enterprise. In this modern world, the perimeter is becoming increasingly difficult to enforce as organizations adopt mobile and cloud technologies, and it is likely to be breached; it's just a matter of time. A bad actor would then be relatively free to move laterally, potentially accessing the privileged intranet and corporate data, both on-premises and in the cloud. Hence the need for Remote Browser Isolation (RBI). We must operate under the assumption that users and resources on internal networks are as untrustworthy as those on the public internet, and design enterprise application security with this in mind.


What within the perimeter leads us to assume that it can no longer be trusted?

Security becomes less and less tenable once there are many categories of users, device types, and locations. The very diversity of users means that it is not possible, for example, to slot all vendors into one user segment with uniform permissions. As a result, access to applications should be based on contextual parameters such as who and where the user is, and sessions should be continuously assessed to ensure they are legitimate. We need to find ways to decouple security from the physical network and, more importantly, to decouple application access from the network. In short, we need a new approach to providing access to applications that is cloud, network, and device agnostic. This is where the Software-Defined Perimeter (SDP) comes into the picture.


What is a Software-Defined Perimeter (SDP)?

SDP complements zero trust, which considers both internal and external networks and actors to be completely untrusted. The network topology is divorced from trust; there is simply no concept of inside or outside of the network. As a result, users are not automatically granted broad access to resources simply by virtue of being inside the perimeter. Primarily, security pros must focus on solutions where they can set and enforce discrete access policies and protections for those requesting to use an application. SDP lays the foundation for a secure access architecture, enabling an authenticated and trusted connection between the entity and the application. Unlike security based solely on IP, SDP does not grant access to network resources based on a user's location. Access policies are instead based on the device, location, state, and associated user information, along with other contextual elements. The applications are considered in the abstract, so whether they run on-premise or in the cloud is irrelevant to the security policy.


  • Periodic Security Checking

Clients and their interactions are periodically checked to make sure they are complying with the security policy. Periodic security checking protects against any additional actions or requests that are not allowed while the connection is open. Say a user has a connection open to a financial application and starts screen-recording software to record the session. The SDP management platform can check whether such software has been started and, if so, employ protective mechanisms to ensure smooth and secure operation.
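A minimal sketch of one such periodic compliance check, assuming a hypothetical policy that forbids screen-recording software while a session is open (the session structure and process names are invented for illustration):

```python
# Sketch: re-evaluate an open session against policy on each periodic check.
# 'forbidden' is a hypothetical policy item: processes that must not be
# running while the session remains open.
def session_compliant(session: dict, forbidden: set) -> bool:
    """True if none of the forbidden processes are running in the session."""
    return not (set(session["running_processes"]) & forbidden)

session = {"running_processes": ["browser", "recorder"]}
print(session_compliant(session, {"recorder"}))  # False -> apply protections
```

In a real deployment this check would run on a timer for every open connection, triggering protective measures whenever compliance is lost.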



Front-end authentication and periodic checking are one part of the picture. However, we not only need to go a layer deeper to secure the front door to the application but also must secure the numerous doors within, which can potentially create additional access paths. Primarily, this is the job of microsegmentation. It’s not sufficient just to provide network access. We must enable granular application access for dynamic segments of 1. In this scenario, a microsegment is created for every request that comes in. Microsegmentation creates the minimal accessible network required to get specific tasks done smoothly and securely. This is accomplished by subdividing larger networks into small secure and flexible micro-perimeters.


Introducing Remote Browser Isolation (RBI)

SDP provides mechanisms to prevent the possibility of lateral movement once users are inside the network. However, we also need to address how external resources located on the internet and public clouds can be accessed while protecting end-users, their devices, and the networks to which they connect. This is where remote browser isolation (RBI) comes into the picture.

Initially, we started with local browser isolation, which protects the user from external sessions by isolating the interaction. Essentially, it generates complete browsers within a virtual machine on the endpoint, providing a proactive approach to isolating user sessions from, for example, malicious websites, emails, and links. But these solutions do not reliably isolate the web content from the end-user's device, which is, of course, on the network.

Remote browser isolation takes local browser isolation to the next level by enabling the rendering process to take place remotely from the user’s device, in the cloud. Because only a clean data stream touches the endpoint, users can securely access untrusted websites from within the perimeter of the protected area.


Diagram: Remote Browser Isolation.


SDP along with Remote Browser Isolation (RBI)

In many important ways, remote browser isolation complements the SDP approach. When you access a corporate asset, you operate within the SDP. But when you need to access external assets, RBI is needed to keep you safe.

Zero trust and SDP are about authentication, authorization, and accounting (AAA) for internal resources, but there must be secure ways to get to external resources as well. For this, RBI secures browsing elsewhere on your behalf. No SDP solution can be complete without including rules to secure external connectivity. RBI takes the essence of zero trust to the next level by securing the internet browsing perspective. If access is to an internal corporate asset we create a dynamic tunnel of one individualized connection. For external access, RBI allows information to be transferred without full, risky connectivity.

This is particularly crucial when it comes to email attacks, like phishing. Malicious actors use social engineering tactics to convince recipients to trust them enough to click on embedded links. Quality RBI solutions protect users by “knowing” when to allow user access while preventing malware from entering endpoints; entirely blocking malicious sites; or protecting users from entering confidential credentials by enabling read-only access.


The RBI Components

To understand how RBI works, let's look under the hood of Ericom Shield. With RBI, for every tab that a user opens on his or her device, the solution spins up a virtual browser in its own dedicated Linux container in a remote cloud location. For example, if the user is actively browsing in 19 open tabs on his Chrome browser, each will have a corresponding browser in its own remote container. This sounds like it takes a lot of computing power, but enterprise-class RBI solutions perform extensive optimizations to ensure that it does not eat up too many resources.

If a tab is unused for a period of time, the associated container is automatically terminated and destroyed. This frees up computing resources and also eliminates the possibility of persistence. As a result, whatever malware may have resided on the external site being browsed is destroyed, and cannot accidentally infect the endpoint, server, or cloud location. When the user shifts back to the tab, he is reconnected in a fraction of a second to the same location but with a fresh container, creating a secure enclave for internet browsing. 
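The container-per-tab lifecycle can be sketched as follows. The idle threshold, class, and method names are assumptions for illustration, not Ericom Shield's actual internals:

```python
import time

IDLE_LIMIT = 300  # seconds; hypothetical idle threshold before teardown

class BrowserFarm:
    """Sketch: one remote container per open tab, destroyed when idle."""
    def __init__(self):
        self.containers = {}   # tab_id -> last-activity timestamp

    def touch(self, tab_id: str):
        # (Re)create a fresh container on activity; nothing persists between
        # containers, so any malware dies with the old one.
        self.containers[tab_id] = time.time()

    def reap(self, now=None):
        """Terminate and destroy containers idle past the limit."""
        now = now if now is not None else time.time()
        for tab_id, last in list(self.containers.items()):
            if now - last > IDLE_LIMIT:
                del self.containers[tab_id]

farm = BrowserFarm()
farm.touch("tab1")
farm.touch("tab2")
farm.reap(now=time.time() + 400)  # both tabs idle past the limit -> destroyed
print(len(farm.containers))  # 0
```

When the user returns to a tab, `touch` simply provisions a fresh container, mirroring the sub-second reconnection described above.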

Website rendering is carried out in real-time by the remote browser. The web page is translated into a media stream, which is then streamed back to the end-user via the HTML5 protocol. In effect, the browsing experience is made up of images. When you look at the source code in the endpoint browser, you will find that the HTML consists solely of a block of Ericom-generated code. This block manages the sending and receiving of images via the media stream.

Regardless of whether the user is accessing the Wall Street Journal or YouTube, they will always get the same source code from Ericom Shield. This is ample proof that no local download, drive-by download, or any other content that may try to hook into your endpoint will ever get there, as it never comes into contact with the endpoint. It runs only remotely, in a container located outside the local LAN. The browser farm does all the heavy (and dangerous) lifting via container-bound browsers that read and execute the uniform resource locator (URL) requests coming from the user.


  • Summary

SDP vendors have figured out device and user authentication and how to continuously secure sessions. However, vendors are now looking for a way to secure the tunnel through to external resource access. If you use your desktop to access a cloud application, the session can be hacked or compromised. But with RBI, you can maintain one-to-one secure tunneling. With a dedicated container for each specific app, you are assured of an end-to-end zero-trust environment.

RBI based on hardened containers, and with a rigorous process to eliminate malware through limited persistence, forms a critical component of the SDP story. The power of RBI is that it stops both known and unknown threats, making it a natural evolution from the zero trust perspective.

Zero Trust: Single Packet Authorization | Passive authorization

Not everything in Software-Defined Perimeter (SDP) is new: Single Packet Authorization.

Even though we are looking at disruptive technology to replace the virtual private network and offer secure segmentation, one thing to keep in mind with zero trust and the software-defined perimeter (SDP) is that they are not based on entirely new protocols. SDP reverses the way TCP connects: it authenticates first and connects second, but traditional networking and protocols still play a large part. For example, we still use encryption to ensure that only the receiver can read the data we send. We can use encryption without authentication, but authentication is what validates the sender; to stand any chance in today's world, the two should go hand in hand. Attackers can circumvent many firewalls and much secure infrastructure. As a result, message authenticity is a must for zero trust: without an authentication process, a bad actor could, for example, change the ciphertext without the receiver ever knowing.

Diagram: Zero trust security. Authenticate first and then connect.


A key aspect of zero trust networking and zero trust principles is to authenticate and authorize network traffic: i.e., the flows between the requesting resource and the intended service. Simply securing communications between two endpoints is not enough. Security pros must ensure that each individual flow is authorized. This can be done by implementing a combination of security technologies such as Single Packet Authorization (SPA), Mutual Transport Layer Security (MTLS), Internet Key Exchange (IKE), and IP security (IPsec).
IPsec can use a unique security association (SA) per application, allowing security policies to be constructed in which only authorized flows are permitted. While IPsec operates at Layer 3 or 4 of the Open Systems Interconnection (OSI) model, application-level authorization can be carried out with X.509 certificates or an access token.

There is also an enhanced version of TLS known as mutually authenticated TLS (MTLS), used to validate both ends of the connection. The most common TLS configuration validates only that the client is connected to a trusted entity; the authentication does not happen the other way round, so the remote entity has no assurance that it is communicating with a trusted client. Mutual TLS goes one step further and authenticates the client as well.
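In Python's standard `ssl` module, for instance, the difference is a single setting on the server side: requiring a client certificate turns ordinary TLS into mutual TLS. The certificate file names below are placeholders, not real files:

```python
import ssl

# Sketch: a server-side TLS context that *requires* a client certificate,
# i.e., mutual TLS. Certificate paths are hypothetical placeholders.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED   # the server now authenticates the client too
# ctx.load_cert_chain("server.crt", "server.key")   # server's own identity
# ctx.load_verify_locations("clients-ca.crt")       # CA that signs client certs

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

With `CERT_REQUIRED`, the handshake fails unless the client presents a certificate signed by a CA the server trusts, which is exactly the "other way round" validation plain TLS omits.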



  • The pre-authentication stage

You can't attack what you cannot see. The mode that allows pre-authentication is Single Packet Authorization. UDP is the preferred base for pre-authentication because UDP packets, by default, do not receive a response. However, TCP and even ICMP can be used with SPA. Single Packet Authorization is a next-generation passive authentication technology, a step up from port knocking, which uses closed ports to identify trusted users.

The typical port knocking scenario is for a port knocking server to configure a packet filter to block all access to a service, for example, the SSH service, until a specific port knock sequence is sent by a port knocking client. For example, the server could require the client to send TCP SYN packets to the following ports in order: 23400 1001 2003 65501.
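The server-side logic of that knock sequence can be sketched as a small state machine, with one sequence position tracked per source IP. This is a simplified illustration, not a real packet filter:

```python
# Sketch of the server side of port knocking: the filter opens for an IP
# only after SYNs arrive on the secret ports in the exact order above.
KNOCK_SEQUENCE = [23400, 1001, 2003, 65501]

class KnockServer:
    def __init__(self):
        self.progress = {}     # source IP -> index into the knock sequence
        self.allowed = set()   # IPs for which the filter has been opened

    def observe_syn(self, src_ip: str, port: int):
        idx = self.progress.get(src_ip, 0)
        if port == KNOCK_SEQUENCE[idx]:
            idx += 1
            if idx == len(KNOCK_SEQUENCE):
                self.allowed.add(src_ip)   # reconfigure the filter for this IP
                idx = 0
            self.progress[src_ip] = idx
        else:
            self.progress[src_ip] = 0      # a wrong knock resets the sequence

srv = KnockServer()
for p in [23400, 1001, 2003, 65501]:
    srv.observe_syn("198.51.100.7", p)
print("198.51.100.7" in srv.allowed)  # True
```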

If the server observes this knock sequence, the packet filter is reconfigured to allow a connection from the originating IP address. However, port knocking has limitations, which SPA addresses; SPA retains all of the benefits of port knocking while fixing its shortcomings.


  • What is Single Packet Authorization (SPA)?

SPA was invented well over 10 years ago and was commonly used for superuser SSH access to servers, where it mitigates attacks by unauthorized users. The SPA process happens before the TLS connection, mitigating attacks targeted at the TLS ports. As already mentioned, SDP didn't invent new protocols; it binds existing ones together. SPA as used in the world of SDP is based on RFC 4226, the HMAC-based One-Time Password ("HOTP") algorithm. It is another layer of security, not a replacement for the security technologies mentioned at the start of the post.
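RFC 4226 HOTP is short enough to show in full. The Python version below implements the standard algorithm (HMAC-SHA1 plus dynamic truncation) and reproduces the RFC's Appendix D test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based One-Time Password."""
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Test vector from RFC 4226, Appendix D (ASCII secret "12345678901234567890")
print(hotp(b"12345678901234567890", 0))  # 755224
```

Each SPA packet can carry one such code, so a captured packet is useless for replay once the counter advances.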

The first step in an attack is reconnaissance, whereby an attacker is on the prowl to locate a target. This stage is easy and can be automated with tools such as NMAP. However, SPA (and port knocking) employs a default-drop stance that provides service only to those IP addresses that can prove their identity via a passive mechanism. No TCP/IP stack access is required to authenticate remote IP addresses. Therefore, NMAP cannot tell that a server is running when it is protected with SPA, and whether the attacker has a zero-day exploit is irrelevant.


Diagram: Zero trust security model.


The idea behind SPA is that a single packet is sent, and based on that packet, an authentication process is carried out. A key point to note is that nothing is listening on the service, so there are no open ports. When the client sends a SPA packet, the packet is rejected, but a second service identifies the SPA packet in the IP stack and then authenticates it. If the SPA packet is successfully authenticated, the server opens a port in its firewall, which could be based on Linux iptables, so the client can establish a secure and encrypted connection with the intended service.
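As a simplified illustration of that flow, the sketch below validates a single HMAC-authenticated packet and only then "opens" a firewall port. The key, packet format, and firewall state are hypothetical stand-ins for a real implementation such as fwknop:

```python
import hashlib
import hmac

SHARED_KEY = b"example-spa-key"   # hypothetical pre-shared key
open_ports = set()                # stand-in for actual firewall state

def make_spa_packet(client_id: str, port: int) -> bytes:
    """Client side: payload plus an HMAC tag over it."""
    payload = f"{client_id}:{port}".encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload + b"|" + tag.encode()

def handle_spa_packet(packet: bytes) -> bool:
    # Nothing listens on the protected service itself; this passive handler
    # inspects the packet and, only if authentic, opens the firewall port.
    payload, _, tag = packet.partition(b"|")
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    if hmac.compare_digest(tag, expected):
        _, _, port = payload.partition(b":")
        open_ports.add(int(port))
        return True
    return False   # forged or tampered packets change nothing

handle_spa_packet(make_spa_packet("client-1", 443))
print(443 in open_ports)  # True
```

An attacker scanning the host sees nothing, and a forged packet without the key fails the HMAC check and leaves the firewall untouched.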


A simple SPA process flow

The SDP gateway protects assets; this component could be containerized and listens for SPA packets. In an open-source SDP deployment, this could be fwknop, a popular open-source SPA implementation. When a client wants to connect to a web server, it sends a SPA packet. When the requested service receives the SPA packet, it will open the door once the credentials are verified; however, the service still does not respond to the request.

When the fwknop services receive a valid SPA packet, the contents of the packet will be decrypted for further inspection. The inspection will reveal the protocol and port numbers that the sender is requesting access to. Next, the SDP gateway adds a rule to the firewall to establish a mutual TLS connection to the intended service. Once this mutual TLS connection is established, the SDP gateway removes the firewall rules, and the service is invisible to the outside world.

Diagram: Single Packet Authorization: The process flow.


Fwknop uses this information to open up firewall rules, allowing the sender to communicate with that service on those ports. The firewall is only opened for a period of time configurable by the administrator. Any attempt to connect to the service requires knowledge of the SPA packet, and even if the packet could be recreated, the packet's sequence number would need to be established prior to the connection. This is next to impossible, considering the sequence numbers are randomly generated.

Once the firewall rules are removed, say after 1 minute, the initial MTLS session is not affected, as it is already established. However, any other sessions requesting access to the service on those ports will be blocked. This tightly couples the sender's IP address with the requested destination ports. It's also possible for the sender to include a source port, enhancing security even further.
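The time-limited opening can be sketched as firewall rules carrying expiry timestamps. The 60-second TTL mirrors the 1-minute example above, and the class is a hypothetical stand-in for real iptables manipulation:

```python
import time

RULE_TTL = 60  # seconds the firewall stays open; administrator-configurable

class TimedFirewall:
    """Sketch: (source IP, destination port) rules that expire after RULE_TTL."""
    def __init__(self):
        self.rules = {}   # (src_ip, dst_port) -> expiry timestamp

    def open(self, src_ip: str, dst_port: int, now=None):
        self.rules[(src_ip, dst_port)] = (now or time.time()) + RULE_TTL

    def allows(self, src_ip: str, dst_port: int, now=None) -> bool:
        expiry = self.rules.get((src_ip, dst_port))
        return expiry is not None and (now or time.time()) < expiry

fw = TimedFirewall()
t0 = time.time()
fw.open("198.51.100.7", 443, now=t0)
print(fw.allows("198.51.100.7", 443, now=t0 + 30))   # True, inside the window
print(fw.allows("198.51.100.7", 443, now=t0 + 90))   # False, rule has expired
```

An established MTLS session rides out the expiry because its state lives in the connection, not in the rule; only new connection attempts are blocked.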


    • What can SPA offer?

Let's face it: robust security is hard to achieve. We all know that you can never be 100% secure. Just look at OpenSSH. Some of the most security-conscious developers created OpenSSH, yet it occasionally contains exploitable vulnerabilities.

Consider some of the attacks on TLS. We have already discussed the DigiNotar forgery in a previous post on zero trust networking, but one that caused a major issue was the THC-SSL-DOS attack, in which a single host could take down a server by exploiting the asymmetric computational cost of the TLS handshake. Single Packet Authorization (SPA) overcomes many existing attacks and, combined with the enhancements of mTLS and pinned certificates, creates a robust security model. SPA defeats many DDoS attacks because only a minimal amount of server processing is required for it to operate.


SPA provides the following security benefits to the SPA-protected asset:

    • SPA blackens the gateway and the protected assets that sit behind it. The gateway does not respond to any connection attempt until an authentic SPA packet has been provided. Essentially, all network resources remain dark until the security controls are passed.
    • SPA also mitigates DDoS attacks on TLS. TLS is typically publicly reachable on the internet over HTTPS, which makes it highly susceptible to DDoS. SPA mitigates these attacks because it allows the SDP gateway to discard a TLS DoS attempt before entering the TLS handshake. As a result, there is no resource exhaustion from targeting the TLS port.
    • SPA assists with attack detection. The first packet to an SDP gateway must be a SPA packet. If a gateway receives any other type of packet, it should be viewed and treated as an attack. Therefore, SPA enables the SDP to identify an attack based on a single malicious packet.
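The first-packet rule in that last bullet reduces to a very simple dispatch at the gateway. In this sketch, `is_valid_spa` is a placeholder for the real cryptographic check described earlier; the point is the classification logic, not the crypto.

```python
def handle_first_packet(packet: bytes, is_valid_spa) -> str:
    """Classify the first packet seen from an unknown source.

    Anything other than a valid SPA packet is, by definition, an attack
    indicator: legitimate clients always knock first.
    """
    if is_valid_spa(packet):
        return "open-window"   # proceed to open the firewall for this sender
    return "alert"             # log, raise an alert, keep dropping silently

# Toy validity check for illustration; real SPA uses encryption plus an HMAC.
valid = lambda pkt: pkt.startswith(b"SPA")

print(handle_first_packet(b"SPA...credentials...", valid))  # open-window
print(handle_first_packet(b"GET / HTTP/1.1", valid))        # alert
```

Because legitimate traffic has exactly one shape, a single stray packet is enough to flag a scanner or probe, with no baseline-learning period required.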


Why You Should Embrace Zero Trust Networking

Stop malicious traffic before it even gets on the IP network.

In this world of mobile users, billions of connected things, and public cloud applications everywhere – not to mention the growing sophistication of hackers and malware – the Zero Trust Networking and zero trust remote access movement is a new reality. As the name suggests, Zero Trust means no trusted perimeter: everything is untrusted, and even after authentication and authorization, a device or user receives only least-privileged access. This is necessary to stop potential security breaches. Identity and access management (IAM) is the foundation of good IT security and the key to providing zero trust.


Diagram: Zero trust remote access.


Zero Trust Networking (ZTN) is the application of zero-trust principles to enterprise and government agency IP networks. Among other things, ZTN integrates IAM into IP routing and prohibits the establishment of a single TCP/UDP session without prior authentication and authorization. Once a session is established, ZTN ensures all traffic in motion is encrypted. To put this in the context of a common analogy, think of our road system as a network and the cars and trucks on it as IP packets. Today, anyone can leave his or her house, drive to your home, and come up your driveway. That driver may not have a key to get into your home, but he or she can case it and wait for an opportunity to enter. In a Zero Trust world, no one can leave their house to travel over the roads to your home without prior authentication and authorization. This is what’s required in the digital, virtual world to ensure security.

In the voice world, we use signaling to establish the authentication and authorization prior to connecting the call. In the data world, this can be done with TCP/UDP sessions, and in many cases, in conjunction with Transport Layer Security, or TLS. The problem is that IP routing hasn’t evolved since the mid-‘90s. IP routing protocols such as Border Gateway Protocol are standalone; they don’t integrate with directories. Network admission control (NAC) is an earlier attempt to add IAM to networking, but it requires a client and assumes a trusted perimeter. NAC is IP address-based, not TCP/UDP session state-based.
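The "authenticate before the session" idea maps directly onto mutual TLS, where the server refuses to complete a handshake unless the client proves its identity with a certificate. A minimal sketch using Python's standard `ssl` module follows; the certificate paths in the comments are placeholders you would supply in a real deployment.

```python
import ssl

# Server context: refuse any TLS session unless the client proves its identity.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED          # client certificate is mandatory
context.minimum_version = ssl.TLSVersion.TLSv1_2

# In a real deployment you would load the gateway's keypair and the CA that
# signs client certificates (paths below are placeholders):
# context.load_cert_chain("gateway.crt", "gateway.key")
# context.load_verify_locations("clients-ca.pem")

print(context.verify_mode is ssl.CERT_REQUIRED)  # True
```

With `CERT_REQUIRED` set, an unauthenticated client cannot even finish the handshake, so no application session is ever established, which is exactly the session-state discipline ZTN asks of the network itself.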


  • Move up the stack 

The solution is to make IP routing more intelligent and bring it up the OSI stack to Layer 5, where security and session state reside. The next generation of software-defined networks is taking a more intelligent approach to networking, with Layer 5 security and performance functions. Over time, organizations have added firewalls, session border controllers, WAN optimizers, and load balancers to their networks for their ability to manage session state and provide the intelligent performance and security controls required today. These devices, however, sit in the middle of the network: firewalls, for instance, stop malicious traffic in transit but can do nothing within a Layer 2 broadcast domain. Every organization has directory services based on IAM that define who is allowed access to what. ZTN takes this further by embedding that information into the network itself, enabling malicious traffic to be stopped at the source.

Diagram: Zero trust security meaning.

Another great feature of ZTN is anomaly detection. When a device starts trying to communicate with other devices, services, or applications to which it doesn’t have permission, an alert can be generated. Hackers break into systems through a process of discovery, identification, and targeting; with Zero Trust, you can prevent them from even beginning the initial discovery.
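In its simplest form, this kind of anomaly detection is an allow-list check driven by IAM policy: any flow outside a device's permitted set is both blocked and alerted on. The device and service names below are illustrative, not from any particular product.

```python
# Per-device permission map, derived from IAM/directory policy (illustrative).
PERMITTED = {
    "hr-laptop-01": {"hr-app", "mail"},
    "cctv-cam-17":  {"video-archive"},
}

def check_flow(device: str, service: str, alerts: list) -> bool:
    """Permit the flow only if policy allows it; otherwise raise an alert."""
    if service in PERMITTED.get(device, set()):
        return True
    alerts.append(f"anomaly: {device} attempted {service}")
    return False

alerts = []
print(check_flow("hr-laptop-01", "mail", alerts))   # True: policy allows it
print(check_flow("cctv-cam-17", "hr-app", alerts))  # False: discovery attempt
print(alerts)  # ['anomaly: cctv-cam-17 attempted hr-app']
```

A compromised camera probing the HR application trips the alert on its very first attempt, before any scan of the network can succeed.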