Zero Trust Access

Safe-T: A Progressive Approach to Zero Trust Access

The foundations that support our systems were built with connectivity, not security, as the essential feature. TCP connects before it authenticates. Security policy and user access based on IP addresses lack context, allowing architectures with overly permissive access. The likely result is a brittle security posture, which drives the need for Zero Trust Access. Our environment has changed considerably, leaving traditional network and security architectures vulnerable to attack. The threat landscape is unpredictable: we are hit by external threats from all over the world. But the problem is not limited to external threats. There are also insider threats within a user group, and insider threats that cross user group boundaries.

Therefore, we need to find ways to decouple security from the physical network and to decouple application access from the network. To do this, we need to change our mindset and invert the security model. Software-Defined Perimeter (SDP) is an extension of zero trust and a revolutionary development: it provides an updated approach to problems that current security architectures fail to address. SDP is often referred to as Zero Trust Access (ZTA). Safe-T's access control software package is called Safe-T Zero+. Safe-T offers a phased deployment model, enabling you to migrate progressively to a zero trust network architecture while lowering the risk of technology adoption. Safe-T's Zero+ model is flexible enough to meet today's diverse hybrid IT requirements, and it satisfies the zero trust principles used to combat today's network security challenges.

 

Network Challenges

  • Connect First and Then Authenticate

TCP has a weak security foundation. When a client wants to communicate with an application, it first sets up a connection; only after the connect stage completes can the authentication stage take place. Unfortunately, with this model we have no idea who the client is until they have completed the connect phase, and the requesting client may not be trustworthy.
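To make the weakness concrete, here is a minimal Python sketch; the hostname and the AUTH exchange are hypothetical placeholders, not any real protocol:

```python
import socket

# The three-way handshake completes before the server knows anything
# about who is connecting. "app.example.com" is a placeholder host.
sock = socket.create_connection(("app.example.com", 443), timeout=5)

# Only now, over the already-established connection, can the
# application protocol ask for credentials. An untrusted client has
# already consumed a socket and a slot in the accept queue.
sock.sendall(b"AUTH user:password\n")  # placeholder application-layer auth
reply = sock.recv(1024)
sock.close()
```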

 

  • The Network Perimeter

We began with static domains, where internal and external segments are separated by a fixed perimeter. Public IP addresses are assigned to external hosts and private addresses to internal ones. A host assigned a private IP is treated as more trustworthy than one with a public IP address: trusted hosts operate inside the perimeter, untrusted hosts outside it. The significant factor to consider here is that IP addresses lack the user knowledge needed to assign and validate trust.
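A hedged sketch of this classic perimeter logic shows the gap: the check only ever sees an address, never a user. The trusted ranges below are the standard RFC 1918 private networks.

```python
from ipaddress import ip_address, ip_network

# Classic perimeter logic: trust is inferred purely from whether the
# source address falls in a private (RFC 1918) range.
TRUSTED_NETS = [ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_trusted(src_ip: str) -> bool:
    return any(ip_address(src_ip) in net for net in TRUSTED_NETS)

# The check says nothing about WHO is behind the address:
print(is_trusted("192.168.1.50"))  # True: employee or malware on the LAN?
print(is_trusted("203.0.113.7"))   # False: could be the CEO working remotely
```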

Today, IT has become more diverse: it now supports hybrid architectures with a variety of user types, humans, applications, and a proliferation of connected devices. Cloud adoption has become the norm, and an abundance of remote workers access the corporate network from a variety of devices and places.

The perimeter approach no longer accurately reflects the typical topology of users and servers. It was built for a different era, when everything sat inside the walls of the organization. Today, however, organizations increasingly deploy applications in public clouds located in geographic regions remote from their trusted firewalls and perimeter networks. This certainly stretches the network perimeter.

We have a fluid network perimeter where data and users are located everywhere. We now operate in a completely new environment, yet the security policy controlling user access is still built for static, corporate-owned devices within the supposedly trusted LAN.

 

  • Lateral Movements

A major concern with the perimeter approach is that it assumes a trusted internal network. Yet by some estimates 80% of threats come from internal malware or malicious employees, and these often go undetected.

Besides, with the rise of phishing emails, an unintentional click can give a bad actor broad-level access. And once on the LAN, bad actors can move laterally from one segment to another, likely navigating undetected between or within segments.

Eventually, the bad actor can steal credentials and use them to capture and exfiltrate valuable assets. Even social media accounts can be targeted as a data exfiltration channel, since firewalls rarely inspect them as a file transfer mechanism.

 

  • Issues with the Virtual Private Network (VPN)

With traditional VPN access, the tunnel creates an extension between the client's device and the application's location. VPN rules are static and do not change dynamically with the changing level of trust in a given device. They operate on network information only, which is a crucial limitation.

Therefore, from a security standpoint, traditional VPN access gives clients broad network-level access, making the network susceptible to undetected lateral movement. Remote users are authenticated and authorized, but once admitted to the LAN they have coarse-grained access. This creates a high level of risk, as undetected malware on a user's device can spread to the inner network.

Another significant challenge is that VPNs generate administrative complexity and cannot easily handle cloud or multi-network environments. They require installing end-user VPN client software, and users must know where the application they are accessing is located; to reach applications at different locations, users have to change their VPN client configuration. In a nutshell, traditional VPNs are complex for administrators to manage and for users to operate.


Also, the user experience suffers, since user traffic must be backhauled to a regional data center. This adds latency and bandwidth costs.


 

Can Zero Trust Access be the Solution?

The main principle of ZTA is that nothing should be trusted, regardless of whether the connection originates inside or outside the network perimeter. Today, we have no reason to trust any user, device, or application by default. You cannot protect what you cannot see, but the converse also holds true: you cannot attack what you cannot see. ZTA makes the application and the infrastructure completely undetectable to unauthorized clients, thereby creating an invisible network.

Preferably, application access should be based on contextual parameters: who and where the user is, an assessment of the device's security posture, and continuous evaluation of the session. This moves us from network-centric to user-centric security, a connection-based approach. Security enforcement should be based on user context and include policies that matter to the business, unlike policies based on subnets, which carry no business meaning. Authentication workflows should include context-aware data, such as device ID, geographic location, and the time and day the user requests access.

It is not enough to provide network access; we must provide granular application access with a dynamic segment of one, where an application microsegment is created for every request that comes in. Microsegmentation controls access by subdividing the larger network into small, secure application micro-perimeters internal to the network. This abstraction layer locks down lateral movement. In addition, zero trust access implements a policy of least privilege, enforcing controls that give users access only to the resources they need to perform their tasks.
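As an illustration only (the attribute names are ours, not Safe-T's API), a context-aware decision that returns a per-application "segment of one" might look like this:

```python
from datetime import datetime, timezone

# Illustrative only: device_managed, disk_encrypted, geo, and role
# are invented attribute names.
def authorize(request: dict) -> set:
    """Return the set of applications this request may reach:
    a 'segment of one' per application, never network-level access."""
    posture_ok = request.get("device_managed") and request.get("disk_encrypted")
    in_hours = 8 <= datetime.now(timezone.utc).hour < 20  # assumed business hours
    if not (posture_ok and in_hours and request.get("geo") in {"US", "DE"}):
        return set()  # default deny
    # Least privilege: grant only the apps this role needs.
    return {"crm"} if request.get("role") == "sales" else {"git", "ci"}

print(authorize({"role": "sales", "device_managed": True,
                 "disk_encrypted": True, "geo": "US"}))  # {'crm'}
```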

 

Characteristics of Safe-T

Safe-T has three main pillars that provide a secure application and file access solution:

1) An architecture that implements zero trust access,

2) A proprietary secure channel that enables users to remotely access/share sensitive files and

3) User behavior analytics.

Safe-T's SDP architecture is designed to substantially implement the essential capabilities delineated by the Cloud Security Alliance (CSA) SDP architecture. Safe-T's Zero+ is built from these main components:

The Safe-T Access Controller is the centralized control and policy enforcement engine that enforces end-user authentication and access. It acts as the control layer, governing the flow between end-users and backend services.

Second, the Access Gateway acts as a front end to all backend services published to an untrusted network. The Authentication Gateway presents a clientless web browser interface to the end user, through which a pre-configured authentication workflow provided by the Access Controller is run. The authentication workflow is a customizable set of authentication steps, such as 3rd-party IdPs (Okta, Microsoft, DUO Security, etc.), plus built-in options such as captcha, username/password, No-Post, and OTP.
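A rough sketch of such a customizable workflow, assuming a simple ordered chain of pass/fail steps (the step functions are hypothetical, not Safe-T's API):

```python
# Hypothetical pass/fail steps standing in for the options named above
# (3rd-party IdP, captcha, OTP); none of this is Safe-T's actual API.
def idp_step(ctx):     return ctx.get("idp_assertion") == "valid"
def captcha_step(ctx): return ctx.get("captcha_solved", False)
def otp_step(ctx):     return ctx.get("otp") == ctx.get("expected_otp")

# The "pre-configured workflow" is just an ordered chain of steps.
WORKFLOW = [idp_step, captcha_step, otp_step]

def run_workflow(ctx: dict) -> bool:
    # Evaluate in order; the first failing step denies access.
    return all(step(ctx) for step in WORKFLOW)

print(run_workflow({"idp_assertion": "valid", "captcha_solved": True,
                    "otp": "123456", "expected_otp": "123456"}))  # True
```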

 

Safe-T Zero+ Capabilities

The Safe-T Zero+ capabilities are in line with zero trust principles. With Safe-T Zero+, clients requesting access must pass authentication and authorization stages before they can reach a resource. Any network resource remains blackened, that is, invisible, to clients that have not passed these steps; URL rewriting is used to hide the backend services.
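As a minimal illustration of the URL-rewriting idea (hostnames, paths, and backend addresses are hypothetical), the gateway maps an opaque public path to an internal backend that the client never sees:

```python
# Opaque public paths map to internal backends the client never sees.
# Paths and addresses are hypothetical.
REWRITE_MAP = {
    "/apps/hr":      "http://10.1.2.10:8080/hr-portal",
    "/apps/finance": "http://10.1.2.11:8443/fin-app",
}

def rewrite(public_path, authenticated):
    # Unauthenticated clients never learn whether a backend exists:
    # the resource stays "blackened".
    if not authenticated:
        return None
    return REWRITE_MAP.get(public_path)

print(rewrite("/apps/hr", authenticated=True))   # internal target URL
print(rewrite("/apps/hr", authenticated=False))  # None
```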

Hiding services this way reduces the attack surface to an absolute minimum and follows Safe-T's axiom: if you can't be seen, you can't be hacked. In a normal operating environment, for users to access services behind a firewall, ports must be opened on the firewall. This presents a security risk, as a bad actor could directly access the service via the open port and exploit any vulnerabilities in it.

Another paramount capability of Safe-T Zero+ is a patented technology called reverse access, which eliminates the need to open incoming ports in the internal firewall. This also eliminates the need to store sensitive data in the demilitarized zone (DMZ). The capability extends to on-premises, public, and hybrid clouds, supporting the most diverse hybrid IT requirements. Zero+ can be deployed on-premises, as part of Safe-T's SDP services, or on AWS, Azure, and other cloud infrastructures, thereby protecting both cloud and on-premises resources.
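The reverse-access pattern can be sketched conceptually as follows; this is our illustration of the dial-out idea, not Safe-T's patented implementation. The connector inside the LAN initiates an outbound connection to the gateway, so the internal firewall needs no inbound rules:

```python
import socket

# The connector inside the LAN dials OUT to the gateway in the DMZ,
# so the internal firewall needs no inbound rules. The gateway then
# relays client requests back over this outbound connection.
GATEWAY = ("gateway.dmz.example.com", 8443)  # hypothetical address

def handle_locally(request: bytes) -> bytes:
    # Placeholder: forward to the real backend service and return its reply.
    return b"HTTP/1.1 200 OK\r\n\r\n"

def internal_connector():
    link = socket.create_connection(GATEWAY)  # outbound-only connection
    while True:
        request = link.recv(4096)  # gateway pushes client requests
        if not request:
            break
        link.sendall(handle_locally(request))
```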

Zero+ provides user behavior analytics, monitoring users' actions in protected web applications. This allows the administrator to inspect the details of anomalous behavior, and forensic assessment is made easier by a single source of logging.

Finally, Zero+ provides a unique, native HTTPS-based file access solution for the NTFS file system, replacing the vulnerable SMB protocol. Users can also create a standard mapped network drive in Windows Explorer, providing a secure, encrypted, and access-controlled channel to shared backend resources.
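As a rough illustration of the concept only (the endpoint, paths, and token below are hypothetical, not Safe-T's actual API), file operations travel over an authenticated, encrypted HTTPS channel instead of SMB:

```python
import requests  # third-party HTTP client: pip install requests

# The endpoint and token are hypothetical, not Safe-T's actual API.
BASE = "https://files.example.com/share"
HEADERS = {"Authorization": "Bearer <access-token>"}

# Writing a file (analogous to saving to a mapped network drive):
with open("report.xlsx", "rb") as f:
    requests.put(f"{BASE}/reports/report.xlsx", data=f,
                 headers=HEADERS, timeout=30)

# Reading it back (analogous to opening it from the share):
r = requests.get(f"{BASE}/reports/report.xlsx", headers=HEADERS, timeout=30)
with open("report_copy.xlsx", "wb") as f:
    f.write(r.content)
```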

 

Deployment Strategy

Safe-T customers can select whichever architecture meets their on-premises or cloud-based requirements.

 

There are three options:

i) The customer deploys three VMs: 1) Access Controller, 2) Access Gateway, and 3) Authentication Gateway. The VMs can be deployed on-premises in an organization’s LAN, on Amazon Web Services (AWS) public cloud, or on Microsoft’s Azure public cloud.

ii) The customer deploys the 1) Access Controller VM and 2) Access Gateway VM on-premises in their LAN. The customer deploys the Authentication Gateway VM on a public cloud, such as AWS or Azure.

iii) The customer deploys the Access Controller VM on-premises in the LAN, and Safe-T deploys and maintains two VMs, 1) Access Gateway and 2) Authentication Gateway, both hosted on Safe-T's global SDP cloud service.

 

ZTA Migration Path

Today, organizations recognize the need to move to a zero trust architecture. However, there is a difference between recognition and deployment, and new technology brings considerable risk. Traditional Network Access Control (NAC) and VPN solutions fall short in many ways, but a rip-and-replace model is a very aggressive approach.

To begin the transition from legacy access to ZTA, look for a migration path you feel comfortable with. Perhaps you want to run a traditional VPN in parallel or in conjunction with your SDP solution, and only for a group of users for a set period of time. A sensible example: start with a server used primarily by experienced users, such as DevOps or QA personnel. This keeps the risk minimal if any problem occurs during the phased deployment of SDP access in your organization.

A recent survey carried out by the CSA indicates that SDP awareness and adoption are still at an early stage. However, when you do go down the path of ZTA, selecting a vendor whose architecture matches your requirements is the key to successful adoption. For example, look for SDP vendors who allow you to continue using your existing VPN deployment while adding SDP/ZTA capabilities on top of it. This sidesteps the risks involved in switching to a completely new technology.

 


Remote Browser Isolation: Complementing the SDP Story

Our digital environment has transformed significantly. Unlike earlier times, we now have many different devices, access methods, and types of users accessing applications from a variety of locations, which makes it harder to know which communications can be trusted. The perimeter-based approach to security can no longer be tied to the physical location of the enterprise; as organizations adopt mobile and cloud technologies, the perimeter becomes increasingly difficult to enforce. Hence, the need for Remote Browser Isolation (RBI).

Under these circumstances, a perimeter breach is just a matter of time. A bad actor would then be relatively free to move laterally, potentially accessing the privileged intranet and corporate data, both on-premises and in the cloud. Therefore, we must operate under the assumption that users and resources on internal networks are as untrustworthy as those on the public internet, and design enterprise application security with this in mind.

 

What within the perimeter leads us to assume that it can no longer be trusted?

Security becomes less and less tenable once there are many categories of users, device types, and locations. The very diversity of users means it is not possible, for example, to slot all vendors into one user segment with uniform permissions. As a result, access to applications should be based on contextual parameters such as who and where the user is, and sessions should be continuously assessed to ensure they are legitimate. We need to find ways to decouple security from the physical network and, more importantly, to decouple application access from the network. In short, we need a new approach to providing access to applications that is cloud, network, and device agnostic. This is where the Software-Defined Perimeter (SDP) comes into the picture.

 

What is a Software-Defined Perimeter (SDP)?

SDP complements zero trust, which considers both internal and external networks and actors to be completely untrusted. Network topology is divorced from trust: there is simply no concept of inside or outside the network, so users are not automatically granted broad access to resources by virtue of being inside the perimeter. Security pros must focus on solutions that let them set and enforce discrete access policies and protections for anyone requesting to use an application. SDP lays the foundation for a secure access architecture, enabling an authenticated and trusted connection between the entity and the application. Unlike security based solely on IP, SDP does not grant access to network resources based on a user's location; access policies are based instead on device, location, state, and associated user information, along with other contextual elements. Applications are considered in the abstract, so whether they run on-premises or in the cloud is irrelevant to the security policy.

 

  • Periodic Security Checking

Clients and their interactions are periodically checked to ensure they comply with the security policy. Periodic security checking protects against additional actions or requests that are not allowed while the connection is open. Say a user has a connection open to a financial application and starts recording software to capture the session. The SDP management platform can check whether such software has started and, if so, employ protective mechanisms to keep the session secure.
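A minimal sketch of such a periodic in-session check, with placeholder detection and enforcement functions of our own invention:

```python
import time

CHECK_INTERVAL = 30  # seconds between in-session posture checks (assumed)

def recording_software_running(session) -> bool:
    # Placeholder for a real endpoint-posture probe.
    return session.get("recorder_detected", False)

def enforce_protection(session):
    # Simplest possible response: terminate the session. A real
    # platform might instead mask data or step up authentication.
    session["open"] = False

def session_monitor(session):
    # Re-evaluate the client for as long as the connection stays open.
    while session["open"]:
        if recording_software_running(session):
            enforce_protection(session)
        time.sleep(CHECK_INTERVAL)
```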

 

Microsegmentation

Front-end authentication and periodic checking are only one part of the picture. We need to go a layer deeper: beyond securing the front door to the application, we must also secure the numerous doors within, which can otherwise create additional access paths. This is primarily the job of microsegmentation. It is not sufficient just to provide network access; we must enable granular application access with dynamic segments of one, where a microsegment is created for every request that comes in. Microsegmentation creates the minimal accessible network required to get specific tasks done securely, accomplished by subdividing larger networks into small, secure, flexible micro-perimeters.
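For illustration, assuming a Linux gateway that shells out to iptables (a real product would use its own policy engine rather than firewall rules), a per-request microsegment could be opened and torn down like this:

```python
import subprocess

# Each authorized request gets a rule allowing exactly one client to
# reach exactly one service; the rule is removed when the session ends.
def open_microsegment(client_ip: str, app_ip: str, app_port: int) -> list:
    rule = ["iptables", "-A", "FORWARD", "-s", client_ip, "-d", app_ip,
            "-p", "tcp", "--dport", str(app_port), "-j", "ACCEPT"]
    subprocess.run(rule, check=True)
    return rule

def close_microsegment(rule: list):
    # Same rule, with append (-A) swapped for delete (-D).
    subprocess.run(["-D" if arg == "-A" else arg for arg in rule], check=True)
```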

 

Introducing Remote Browser Isolation (RBI)

SDP provides mechanisms to prevent the possibility of lateral movement once users are inside the network. However, we also need to address how external resources located on the internet and public clouds can be accessed while protecting end-users, their devices, and the networks to which they connect. This is where remote browser isolation (RBI) comes into the picture.

Initially, we started with local browser isolation, which protects the user by isolating interactions with external sessions. Essentially, it runs complete browsers within a virtual machine on the endpoint, providing a proactive approach to isolating user sessions from, for example, malicious websites, emails, and links. But these solutions do not reliably isolate web content from the end user's device, which is, of course, on the network.

Remote browser isolation takes local browser isolation to the next level by moving the rendering process off the user's device and into the cloud. Because only a clean data stream touches the endpoint, users can securely access untrusted websites from within the perimeter of the protected area.

 

Diagram: Remote Browser Isolation.

 

SDP along with Remote Browser Isolation (RBI)

In many important ways, remote browser isolation complements the SDP approach. When you access a corporate asset, you operate within the SDP. But when you need to access external assets, RBI is needed to keep you safe.

Zero trust and SDP are about authentication, authorization, and accounting (AAA) for internal resources, but there must be secure ways to reach external resources as well. For this, RBI secures browsing on your behalf; no SDP solution is complete without rules for securing external connectivity. RBI takes the essence of zero trust to the next level by securing internet browsing. If access is to an internal corporate asset, we create a dynamic tunnel of one individualized connection; for external access, RBI allows information to be transferred without full, risky connectivity.

This is particularly crucial for email attacks such as phishing. Malicious actors use social engineering tactics to convince recipients to trust them enough to click on embedded links. Quality RBI solutions protect users by "knowing" when to allow access while preventing malware from reaching endpoints, blocking malicious sites entirely, or protecting users from entering confidential credentials by enabling read-only access.

 

The RBI Components

To understand how RBI works, let's look under the hood of Ericom Shield. With RBI, for every tab a user opens on their device, the solution spins up a virtual browser in its own dedicated Linux container in a remote cloud location. For example, if the user is actively browsing 19 open tabs in Chrome, each has a corresponding browser in its own remote container. This sounds like it takes a lot of computing power, but enterprise-class RBI solutions perform extensive optimization to ensure that endpoint resources are not eaten up.

If a tab is unused for a period of time, the associated container is automatically terminated and destroyed. This frees up computing resources and eliminates the possibility of persistence: whatever malware may have resided on the external site being browsed is destroyed and cannot infect the endpoint, server, or cloud location. When the user shifts back to the tab, they are reconnected in a fraction of a second to the same location, but with a fresh container, creating a secure enclave for internet browsing.
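A toy model of this lifecycle, with invented names and an assumed idle limit (not Ericom's code), might look like:

```python
import time

IDLE_LIMIT = 300  # assumed seconds before an unused tab's container is reaped

containers = {}  # tab_id -> {"container": name, "last_active": timestamp}

def open_tab(tab_id: str):
    # One disposable remote container per browser tab.
    containers[tab_id] = {"container": f"browser-{tab_id}",
                          "last_active": time.time()}

def touch(tab_id: str):
    # Returning to a reaped tab transparently gets a fresh container.
    if tab_id not in containers:
        open_tab(tab_id)
    containers[tab_id]["last_active"] = time.time()

def reap_idle():
    now = time.time()
    for tab_id in [t for t, c in containers.items()
                   if now - c["last_active"] > IDLE_LIMIT]:
        del containers[tab_id]  # destroyed: any malware dies with it
```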

Website rendering is carried out in real time by the remote browser. The web page is translated into a media stream, which is streamed back to the end user over HTML5. In effect, the browsing experience is made up of images: when you look at the source code in the endpoint browser, you will find that the HTML consists solely of a block of Ericom-generated code that manages the sending and receiving of images via the media stream.

Regardless of whether the user is accessing the Wall Street Journal or YouTube, they always get the same source code from Ericom Shield. This is ample proof that no local download, drive-by download, or other content that might try to hook into your endpoint ever gets there; it never comes into contact with the endpoint and runs only remotely, in a container located outside the local LAN. The browser farm does all the heavy, and dangerous, lifting via container-bound browsers that read and execute the uniform resource locator (URL) requests coming from the user.

 

  • Summary

SDP vendors have figured out device and user authentication and how to continuously secure sessions. However, vendors are now looking for a way to secure the tunnel through to external resources. If you use your desktop to access a cloud application, the session can be hacked or compromised; with RBI, you can maintain one-to-one secure tunneling. With a dedicated container for each specific app, you are assured of an end-to-end zero trust environment.

RBI based on hardened containers, with a rigorous process to eliminate malware through limited persistence, forms a critical component of the SDP story. The power of RBI is that it stops both known and unknown threats, making it a natural evolution from the zero trust perspective.