Zero Trust

The drive for Zero Trust is again gaining momentum. Bad actors are getting increasingly sophisticated, resulting in a pervasive sense of unease about traditional networking and security methods. So why are our network infrastructure and applications open to such severe security risks? This Zero Trust tutorial recaps some of the technological weaknesses driving the path to Zero Trust and a Zero Trust architecture. We give devices I.P. addresses to connect to the Internet, and those addresses signpost three pathways. None of these techniques ensures that attacks will not happen; they are like preventive medicine. With today's level of bad-actor sophistication, however, we need to be closer to total immunization, so that attacks cannot even touch your infrastructure.

 


Diagram: Define Zero Trust: The standard three pathways.

 

Introduction to Zero Trust

The idea behind the Zero Trust model and the software-defined perimeter (SDP) is a connection-based security architecture designed to stop attacks. It does not expose the infrastructure or its applications. Instead, it identifies authorized users by authenticating them, authorizing them, and validating the devices they are on before they connect to protected resources. A Zero Trust architecture also allows you to keep operating while vulnerabilities, patches, and configurations are still in progress. Essentially, it cloaks applications, or groups of applications, so they are invisible to attack.

 

Zero Trust principles

Zero Trust and SDP are a security philosophy and a set of Zero Trust principles which, taken together, represent a significant shift in how security should be approached. Foundational security elements used before Zero Trust often achieved only coarse-grained separation of users, networks, and applications. Zero Trust, on the other hand, enhances this, effectively requiring that all identities and resources be segmented from one another. Zero Trust enables fine-grained, identity- and context-sensitive access controls driven by an automated platform. Although Zero Trust started as a narrowly focused approach of not trusting any network identity until it is authenticated and authorized, it has since grown into the broader philosophy described here.

 

  • A key point: Traditional security boundaries

Traditionally, security boundaries were placed at the edge of the enterprise network in a classic “castle wall and moat” approach. However, a major issue with this was not just the design but also how we connected. Traditional non-Zero Trust security solutions have been unable to bridge the disconnect between network and application level security. Traditionally, users (and their devices) obtained broad access to networks, and applications relied upon authentication-only access control.

 

Issue 1 – We Connect First and Then Authenticate

Connect first, authenticate second

TCP/IP is a fundamentally open network protocol designed to facilitate easy connectivity and reliable communications between distributed computing nodes. It has served us well in terms of enabling our hyper-connected world but—for various reasons—doesn’t include security as part of its core capabilities.

 

TCP has a weak security foundation

Transmission Control Protocol (TCP) has been around for decades and has a weak security foundation. When it was created, security was out of scope. TCP can detect and retransmit errored packets, but left at its defaults, communication packets are not encrypted, which poses security risks. In addition, TCP operates on a connect first, authenticate second model, which is inherently insecure: it leaves the two connecting parties wide open to attack. When a client wants to communicate with and access an application, it first sets up a connection. Only once the connect stage has been carried out successfully can the authentication stage occur, and only once authentication has been completed can we begin to pass data.
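To make the sequencing concrete, here is a minimal sketch of the model in Python. The hostname, port, and credential exchange are illustrative assumptions, not a real protocol:

```python
import socket

HOST, PORT = "app.example.com", 443  # hypothetical private application

# Stage 1: the TCP three-way handshake completes with no identity involved.
# Anyone who can reach this IP and port gets a connection.
sock = socket.create_connection((HOST, PORT), timeout=5)

# Stage 2: only now, over the already-established connection, does the
# application ask who we are -- the network layer has already let us in.
sock.sendall(b"AUTH alice:secret\n")
reply = sock.recv(1024)

# Stage 3: data transfer begins only after authentication succeeds, but an
# unauthenticated attacker already had a live connection to probe.
sock.close()
```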


Diagram: Zero Trust security. The TCP model of connectivity.

 

From a security perspective, the most important thing to understand is that this connection occurs purely at the network layer, with no identity, authentication, or authorization. The beauty of this model is that it enables anyone with a browser to easily connect to any public web server without requiring any upfront registration or permission. This is a perfect approach for a public web server but a lousy approach for a private application.

 

The potential for malicious activity

With this process of connect first, authenticate second, we are essentially opening the door of the network and the application without knowing who is on the other side. With this model, we have no idea who the client is until they have carried out the connect phase, and once they have connected, they are already in the network. Perhaps the requesting client is untrustworthy and has bad intentions. If so, once connected, they have the opportunity to carry out malicious activity and potentially perform data exfiltration.

 

Developing a Zero Trust Architecture

A Zero Trust architecture requires endpoints to authenticate and be authorized before they obtain network access to protected servers. Then encrypted connections are created between the requesting systems and the application infrastructure in real time. With a Zero Trust architecture, we must first establish trust between the client and the application before the client can set up the connection. Zero Trust is all about trust: never trust, always verify. The trust is bi-directional, between the client and the Zero Trust architecture (which can take several forms) and between the application and the Zero Trust architecture. It is not a one-time check; it is a continuous mode of operation. Only once sufficient trust has been established do we move into the next stage, authentication. Once authentication has been established, we can connect the user to the application. Zero Trust access flips the entire security model and makes it more robust.

  • We have gone from connect first, authenticate second to authenticate first, connect second.


Diagram: The Zero Trust model of connectivity.

 

An example of Zero Trust network access

Single Packet Authorization (SPA)

The user cannot see or know where the applications are located. SDP hides the application and creates a “dark” network by using Single Packet Authorization (SPA) for the authorization. SPA's goal is to overcome the open and insecure nature of TCP/IP, which follows a “connect then authenticate” model. SPA is a lightweight security protocol that validates a device or user's identity before permitting network access to the SDP. The purpose of SPA is to allow a service to be darkened behind a default-deny firewall. Essentially, the system uses a One-Time Password (OTP) generated by an algorithm and embeds the current password in the initial network packet sent from the client to the server. The SDP specification mentions using the SPA packet after establishing a TCP connection. In contrast, the open-source implementation from the creators of SPA uses a UDP packet sent before the TCP connection.
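The following sketch shows the idea in Python, assuming a pre-shared key and a hypothetical gateway host; real implementations such as fwknop add encryption, replay protection, and firewall integration:

```python
# A minimal SPA-style sketch: one authenticating UDP packet first,
# TCP connection second. Key, host, and packet layout are assumptions.
import hashlib, hmac, os, socket, struct, time

PSK = b"pre-shared-key"               # provisioned out of band (assumption)
GATEWAY = ("gw.example.com", 62201)   # default-deny gateway (assumption)

def build_spa_packet(user_id: str) -> bytes:
    """One-time authorization packet: user, timestamp, nonce, HMAC."""
    body = struct.pack("!16sQ8s", user_id.encode(), int(time.time()), os.urandom(8))
    mac = hmac.new(PSK, body, hashlib.sha256).digest()
    return body + mac

# Step 1: knock with a single UDP packet; the gateway stays dark and
# silently drops anything whose HMAC does not verify.
knock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
knock.sendto(build_spa_packet("alice"), GATEWAY)

# Step 2: only after the gateway has validated the SPA packet and opened a
# pinhole for our source IP do we attempt the normal TCP connection.
conn = socket.create_connection(("gw.example.com", 443), timeout=5)
```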

 

Issue 2 – Fixed perimeter approach to networking and security

Traditionally, security boundaries were placed at the edge of the enterprise network in a classic “castle wall and moat” approach. However, as technology evolved, remote workers and remote workloads became more common. As a result, security boundaries necessarily followed, expanding beyond just the corporate perimeter.

 

The traditional world of static domains

The traditional world of networking started with static domains. Networks were initially designed to create internal segments separated from the external world by a fixed perimeter. The classical network model divided clients and users into trusted and untrusted groups: the internal network was deemed trustworthy, whereas the external one was considered hostile. The perimeter approach to networking and security has several zones, for example, the Internet, the DMZ, Trusted, and Privileged. In addition, public and private address spaces separate network access. Private addresses were deemed more secure than public addresses because they were not reachable on the Internet. However, this trust assumption, that everything with a private address is safe, is where all of our problems started!


Diagram: Zero Trust security meaning. The issues with traditional network and security.

 

The fixed perimeter 

The digital threat landscape is concerning. We are being hit by external threats to our applications and networks from all over the world. Threats also arise internally: insider threats within a user group, and insider threats across user-group boundaries. These types of threats need to be addressed one by one. One issue with the fixed-perimeter approach is that it assumes a trusted internal network and a hostile external network. However, we need to assume that the internal network is as hostile as the external one. Over 80% of threats come from internal malware or malicious employees. Even so, the fixed-perimeter approach to networking and security remains the foundation for most network and security professionals, despite how much has changed since the design's inception.

 

 

We get hacked daily!

We are now at a stage where almost half of US companies have experienced a data breach: the 2022 Thales Data Threat Report found that 45% of US companies suffered a data breach in the past year. The true figure could be higher, given the potential for as-yet-undetected breaches.

We are getting hacked daily, and major networks with skilled staff are being breached. Unfortunately, the perimeter approach to networking has failed to provide adequate security in today's digital world. It works to an extent by delaying an attack, but with enough patience and skill, a bad actor will eventually penetrate your guarded walls. If a large gate and walls guard your house, you feel safe and fully protected while inside. However large and thick the perimeter protecting your house may be, there is still a chance that someone can climb the walls, reach your front door, and gain entry to your property. But if a bad actor cannot even see your house, they never get the chance to take the next step and try to breach your security.

 

Issue 3 – Dissolved perimeter caused by the changing environment

The environment has changed with the introduction of the cloud, advanced BYOD, machine-to-machine connections, the rise in remote access, and phishing attacks. We have many internal devices and a variety of users, such as on-site contractors that need to access network resources. There is also a trend for corporate devices to move to the cloud, collocated facilities, and off-site to customer and partner locations. In addition, it is becoming more diversified with hybrid architectures.


Diagram: Zero Trust concept.

 

These changes are causing major security problems for the fixed-perimeter approach to networking and security. For example, with the cloud, the internal perimeter is stretched to the cloud, yet traditional security mechanisms are still being used for what is a completely new paradigm. The same goes for remote workers: we have an abundance of remote workers working from various devices and places, and again, traditional security mechanisms are still being used. As our environment evolves, security tools and architectures must evolve with it. Let's face it: the network perimeter has dissolved, as your remote users, things, services, applications, and data are everywhere. And as the world moves to the cloud, mobile, and the IoT, the ability to control and secure everything in the network is no longer available.

 

Phishing attacks are on the rise

We have witnessed an increase in phishing attacks, which can result in a bad actor landing on your local area network (LAN). Phishing is a type of social engineering in which an attacker sends a fraudulent message designed to trick a person into revealing sensitive information or to deploy malicious software, such as ransomware, on the victim's infrastructure. The term “phishing” was first used in 1994, when a group of teens worked manually to obtain credit card numbers from unsuspecting users on AOL.

 

Hackers are inventing new ways

By 1995, they had created a program called AOHell to automate their work. Since then, hackers have continued to invent new ways to gather details from anyone connected to the Internet. These actors have created several programs and types of malicious software that are still in use today. Recently, I was the victim of a phishing email; clicking and downloading the file is very easy if you are not educated about phishing attacks. In my case, the particular file was a .wav file. It looked safe, but it wasn't.

 

Issue 4 – Broad-level access

So, you may have heard of broad-level access and lateral movement. Remember, with traditional network and security mechanisms, when a bad actor lands on a particular segment, i.e., a VLAN (known as zone-based networking), they can see everything on that segment. This gives them broad-level access. Generally speaking, when you are on a VLAN, you can see everything in that VLAN, and VLAN-to-VLAN communication is not the hardest thing to achieve, resulting in lateral movement.

 

The issue of lateral movements

Lateral movement is the set of techniques attackers use to progress through the organizational network after gaining initial access. Adversaries use lateral movement to identify target assets and sensitive data for their attack. Lateral Movement is the tenth tactic in the MITRE ATT&CK framework: the set of techniques attackers use to move through the network, gaining access to credentials while remaining undetected.

 

No intra-VLAN filtering

This is made possible because, traditionally, no security device filters this low down in the network, i.e., inside the VLAN, known as intra-VLAN filtering. A phishing email can easily land a bad actor on the LAN with broad-level access and the capability to move laterally throughout the network. For example, a bad actor might initially access an unpatched central file-sharing server, move laterally between segments to the web developers' machines, and use a keylogger to capture the credentials needed to access critical information on the all-important database servers. They can then carry out data exfiltration over DNS or even via a social media account such as Twitter. Firewalls generally do not inspect DNS as a file-transfer mechanism, so data exfiltration over DNS will often go unnoticed.
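As a rough illustration of why DNS deserves inspection, here is a toy detector in Python; the thresholds and sample queries are illustrative assumptions, not tuned values:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Entropy in bits per character; encoded payloads score high."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_exfiltration(qname: str) -> bool:
    # Exfiltration tools typically encode stolen data into long,
    # high-entropy subdomain labels under an attacker-controlled zone.
    first_label = qname.split(".")[0]
    return len(first_label) > 40 or shannon_entropy(first_label) > 4.0

queries = [
    "www.example.com",                                                  # benign
    "4a6f686e446f6553534e3d3132332d34352d36373839aaee01.evil.example",  # encoded
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_exfiltration(q) else "ok")
```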

 


Diagram: Zero trust application access. One of the many security threats: Lateral movements.

 

Issue 5 – The challenges with traditional firewalls

 

The limited world of 5-tuple

Traditional firewalls typically control access to network resources based on source I.P. addresses. This creates a fundamental challenge: we need to solve a user-access problem, but we only have tools that control access based on I.P. addresses. As a result, you have to group users, some of whom work in different departments and roles, so they access the same service from the same I.P. addresses. The firewall rules are also static and do not change dynamically as the level of trust in a given device changes. They provide only network information.

Maybe the user moves to a riskier location, such as an Internet cafe, or their local firewall or antivirus software is turned off by malware or even by accident. Unfortunately, a traditional firewall cannot detect this; it lives in the limited world of the 5-tuple. Traditional firewalls can express only static rule sets; they cannot express or enforce rules based on identity information.
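The contrast is easy to see side by side. In the sketch below, the field names and the evaluate() logic are illustrative assumptions, not any vendor's policy language:

```python
from dataclasses import dataclass

@dataclass
class FiveTupleRule:
    # All a traditional firewall can see: network information only.
    src_ip: str
    dst_ip: str
    protocol: str
    src_port: str
    dst_port: int

@dataclass
class ZeroTrustRule:
    # The same decision, expressed in terms that track trust.
    user: str
    device_posture_required: str   # e.g. "av_running,disk_encrypted"
    application: str
    allowed_locations: tuple

static_rule = FiveTupleRule("10.1.2.0/24", "10.9.9.10", "tcp", "any", 443)
zt_rule = ZeroTrustRule("alice", "av_running,disk_encrypted",
                        "finance-app", ("office", "home"))

def evaluate(rule: ZeroTrustRule, user: str, posture: set, location: str) -> bool:
    """Deny unless identity, device posture, and context all check out."""
    return (user == rule.user
            and set(rule.device_posture_required.split(",")) <= posture
            and location in rule.allowed_locations)

# The 5-tuple rule would still match from an Internet cafe with AV disabled;
# the context-aware rule does not.
print(evaluate(zt_rule, "alice", {"disk_encrypted"}, "internet_cafe"))  # False
```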

 


 

Issue 6 – A cloud-focused environment

Upon examining the cloud, let's make a comparison with a public parking space. Using a public cloud is like leaving your car in a public parking space rather than in your own garage. In a public parking space there are multiple tenants; you don't know who is parked next to you or what they might do to your car.

Today we are very cloud-focused, but when moving applications to the cloud, we also need to be very security-focused. The cloud environment is less mature in providing the traditional security controls we are used to in legacy environments. So when you put applications in the cloud, you shouldn't leave security at its defaults. Why? Firstly, we are operating in a shared model, where the tenant next to you could steal your encryption keys or data, and there have already been plenty of cloud breaches. In addition, cloud protection still suffers from firewalls with static rule sets and from authentication and key-management issues.

 

Control point change

One of the biggest problems is that the perimeter moves when you move to a cloud-based application. Servers are no longer under your control, and mobiles and tablets exacerbate the problem, as they can be located anywhere. So trying to control the perimeter becomes very difficult. More importantly, firewalls only have access to and control over network information, when they should have more context. Defining this new perimeter is exactly what ZTNA architecture and the software-defined perimeter do. Firewalls are now managed by the cloud users moving their applications to the cloud, not by the I.T. teams within the cloud providers. So when you move applications to the cloud, even though cloud providers supply security tools, the cloud consumer has to integrate security to gain more security visibility than they have today.


Diagram: ZTNA. Zero Trust cloud security.

 

Before, we had clear network demarcation points set by a central physical firewall creating inside and outside trust zones. Anything outside was considered hostile, and anything on the inside was considered trusted.

 

1. Connection-centric model

The Zero Trust model flips this around and considers everything untrusted. To do this, there are no longer pre-defined, fixed network demarcation points. Instead, the network perimeter that was initially set in stone is now fluid and software-based. Zero Trust is connection-centric, not network-centric. Each user on a specific device connected to the network gets an individualized connection to a specific service hidden by the perimeter. Instead of one perimeter that every user shares, SDP creates many small perimeters purpose-built for users and applications, known as micro perimeters. Clients are cryptographically signed into these micro perimeters.


Diagram: Security micro perimeters.

 

2. Micro perimeters

The micro perimeter is based on user and device context and can dynamically adjust to environmental changes. As a user moves between locations or devices, the Zero Trust architecture detects this and sets the appropriate security controls based on the new context. The data center is no longer the center of the universe. Instead, the user on a specific device, along with their service requests, is the new center of the universe. Zero Trust does this by decoupling the user and device from the network. To remove the user from the network, the data plane is separated from the control plane. Authentication happens first, on the control plane; then the data plane, the client-to-application connection, transfers the data. Users therefore don't need to be on the network to gain application access. As a result, they have least privilege and no broad-level access.
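A minimal sketch of this separation, assuming a hypothetical controller API: the control plane authenticates and mints a short-lived, single-application grant, and only then does the data plane build the connection.

```python
import secrets
import time

class ControlPlane:
    """Authenticates first; never carries application data."""
    def authorize(self, user: str, device_posture: set, app: str) -> dict:
        if "av_running" not in device_posture:        # posture check
            raise PermissionError("device posture failed")
        # Short-lived, single-application grant: a micro perimeter of one.
        return {"user": user, "app": app,
                "token": secrets.token_hex(16),
                "expires": time.time() + 300}

class DataPlane:
    """Connects second, and only with a valid grant for this app."""
    def connect(self, grant: dict, app: str) -> str:
        if grant["app"] != app or grant["expires"] < time.time():
            raise PermissionError("no valid grant")
        return f"encrypted session: {grant['user']} -> {app}"

control, data = ControlPlane(), DataPlane()
grant = control.authorize("alice", {"av_running", "disk_encrypted"}, "crm")
print(data.connect(grant, "crm"))   # allowed: authenticate first, connect second
# data.connect(grant, "billing")    # would raise: the grant is per-application
```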

 

Issue 7 – The I.P. address conundrum

Everything today relies on I.P. addresses for trust, but there is a problem: I.P. addresses lack the user knowledge needed to assign and validate a device's trust. There is no way for an I.P. address to do this. I.P. addresses provide connectivity, but they do not get involved in validating the trust of the endpoint or the user. I.P. addresses also should not be used as an anchor for network location, as they are today, because when a user moves from one location to another, the I.P. address changes.

 


Diagram: Three main network security flaws.

 

Security can’t be tied to an I.P. address

But what about the security policy assigned to the old I.P. addresses? What happens when your I.P.s change? Anything tied to I.P. is ridiculous, as we don't have a valid hook to hang security policy enforcement on. When you examine policy, there are several facets: the user access policy touches on authorization, the network access policy touches on what may be connected to, and user account policies touch on authentication. With any of these, there is no policy visibility with I.P. addresses. This is also a major problem for traditional firewalling, which relies on static configurations; for example, a static configuration may state that this particular source can reach this destination using this port number.

 

Security-related issues to I.P.

  1. The rule has no meaning. There is no indication of why it exists or under what conditions a packet should be allowed from one source to another.
  2. No contextual information is taken into consideration. When creating a robust security posture, we need to look at more than ports and I.P. addresses.

For a robust security posture, you need full visibility into the network to see who connects, with what device, when, and how. Unfortunately, today's firewall is static and only contains information about the network. Zero Trust, on the other hand, enables a dynamic firewall with user and device context, opening the firewall for a single secure connection and keeping it closed at all other times. This creates a “black cloud” stance, regardless of whether the connections are made to the cloud or on-premises.

 

The rise of the next-generation firewall?

Next-generation firewalls (NGFWs) are more advanced than traditional firewalls. They use information in layers 5 through 7 (the session, presentation, and application layers) to perform additional functions, providing advanced features such as intrusion detection and prevention and virtual private networks. Today, most enterprise firewalls are “next generation” and typically include IDS/IPS, traffic analysis and malware detection for threat detection, URL filtering, and some degree of application awareness and control. Like the NAC market segment, vendors in this area began a journey toward identity-centric security around the same time Zero Trust ideas began percolating through the industry. Today, many NGFW vendors offer Zero Trust capabilities, but many still operate within the perimeter security model.

 

Still IP-based security systems

They are still IP-based systems offering limited identity- and application-centric capabilities. In addition, NGFWs are still largely static firewalls: most do not employ zero trust segmentation, they often mandate traditional perimeter-centric network architectures with site-to-site connections, and they don't offer flexible network-segmentation capabilities. As with traditional firewalls, their access-policy models are typically coarse-grained, providing users broader network access than is strictly necessary.

 

Zero Trust Access

Safe-T: A Progressive Approach to Zero Trust Access

The foundations that support our systems were built with connectivity, not security, as an essential feature. TCP connects before it authenticates. Security policy and user access based on I.P. lack context and allow architectures that exhibit overly permissive access, most likely resulting in a brittle security posture and driving the need for Zero Trust Access. Our environment has changed considerably, leaving traditional network and security architectures vulnerable to attack. The threat landscape is unpredictable: we are getting hit by external threats from all over the world. But the problem is not limited to external threats; there are also insider threats within a user group, and insider threats across user-group boundaries.

Therefore, we need to find ways to decouple security from the physical network and to decouple application access from the network. To do this, we need to change our mindset and invert the security model. The Software-Defined Perimeter (SDP) is an extension of zero trust and presents a revolutionary development: it provides an updated approach that current security architectures fail to address. SDP is often referred to as Zero Trust Access (ZTA). Safe-T's access-control software package is called Safe-T Zero+. Safe-T offers a phased deployment model, enabling you to progressively migrate to a zero-trust network architecture while lowering the risk of technology adoption. Safe-T's Zero+ model is flexible enough to meet today's diverse hybrid I.T. requirements, and it satisfies the zero-trust principles used to combat today's network security challenges.

 

Network Challenges

  • Connect First and Then Authenticate

TCP has a weak security foundation. When clients want to communicate and access an application, they first set up a connection. Only after the connect stage has been carried out can the authentication stage be accomplished. Unfortunately, with this model, we have no idea who the client is until they have completed the connect phase, and there is a possibility that the requesting client is not trustworthy.

 

  • The Network Perimeter

We began with static domains, whereby internal and external segments are separated by a fixed perimeter. Public IP addresses are assigned to external hosts and private addresses to internal ones. If a host is assigned a private IP, it is assumed to be more trustworthy than a host with a public IP address. Therefore, trusted hosts operate internally, while untrusted hosts operate externally to the perimeter. The significant factor to consider here is that IP addresses lack the user knowledge needed to assign and validate trust.

Today, I.T. has become more diverse, since it now supports hybrid architectures with a variety of user types: humans, applications, and a proliferation of connected devices. Cloud adoption has become the norm, and there is an abundance of remote workers accessing the corporate network from a variety of devices and places.

The perimeter approach no longer accurately reflects the typical topology of users and servers. It was built for a different era, when everything was inside the walls of the organization. Today, however, organizations increasingly deploy applications in public clouds located in geographically dispersed locations, remote from the organization's trusted firewalls and perimeter network. This certainly stretches the network perimeter.

We have a fluid network perimeter where data and users are located everywhere; we now operate in a completely new environment. Yet the security policy controlling user access is still built for static, corporate-owned devices within the supposedly trusted LAN.

 

  • Lateral Movements

A major concern with the perimeter approach is that it assumes a trusted internal network. However, as noted, over 80% of threats come from internal malware or malicious employees, and these will often go undetected.

Besides, with the rise of phishing emails, an unintentional click can give a bad actor broad-level access. And once on the LAN, bad actors can move laterally from one segment to another, likely navigating undetected between, or within, the segments.

Eventually, the bad actor can steal credentials and use them to capture and exfiltrate valuable assets. Even social media accounts can be targeted for data exfiltration, since the firewall does not often inspect them as a file-transfer mechanism.

 

  • Issues with the Virtual Private Network (VPN)

With traditional VPN access, the tunnel creates an extension between the client's device and the application's location. The VPN rules are static and do not change dynamically with the changing levels of trust on a given device; they provide only network information, which is a crucial limitation.

Therefore, from a security standpoint, the traditional method of VPN access gives clients broad network-level access. This makes the network susceptible to undetected lateral movement. Moreover, remote users are authenticated and authorized, but once admitted to the LAN, they have coarse-grained access. This obviously creates a high level of risk, as undetected malware on a user's device can spread to the inner network.

Another significant challenge is that VPNs generate administrative complexity and cannot easily handle cloud or multi-network environments. They require the installation of end-user VPN client software, and users must know where the applications they access are located. Users would have to make changes to their VPN client software to gain access to applications situated at different locations. In a nutshell, traditional VPNs are complex for administrators to manage and for users to operate.


Also, a poor user experience is likely, as user traffic must be backhauled to a regional data center. This adds latency and bandwidth costs.


 

Can Zero Trust Access be the Solution?

The main principle that ZTA follows is that nothing should be trusted, regardless of whether the connection originates inside or outside the network perimeter. Today, we have no reason to implicitly trust any user, device, or application. You cannot protect what you cannot see, but it is equally true that you cannot attack what you cannot see. ZTA makes the application and the infrastructure completely undetectable to unauthorized clients, thereby creating an invisible network.

Preferably, application access should be based on contextual parameters, such as who and where the user is, a judgment of the security posture of the device, and a continuous assessment of the session. This moves us from network-centric to user-centric security, providing a connection-based approach. Security enforcement should be based on user context and include policies that matter to the business, unlike a policy based on subnets, which has no meaning. The authentication workflows should include context-aware data, such as device ID, geographic location, and the day and time the user requests access.

It's not good enough to provide network access; we must provide granular application access with a dynamic segment of one, where an application micro-segment gets created for every request that comes in. Micro-segmentation creates the ability to control access by subdividing the larger network into small, secure application micro perimeters internal to the network. This abstraction layer puts a lockdown on lateral movement. In addition, Zero Trust access implements a policy of least privilege by enforcing controls that give users access only to the resources they need to perform their tasks.

 

Characteristics of Safe-T

Safe-T has three main pillars that provide a secure application and file access solution:

1) An architecture that implements zero trust access,

2) A proprietary secure channel that enables users to remotely access and share sensitive files, and

3) User behavior analytics.

Safe-T’s SDP architecture is designed to substantially implement the essential capabilities delineated by the Cloud Security Alliance (CSA) architecture. Safe-T’s Zero+ is built using these main components:

The Safe-T Access Controller is the centralized control and policy enforcement engine that enforces end-user authentication and access. It acts as the control layer, governing the flow between end-users and backend services.

Secondly, the Access Gateway acts as a front-end to all the backend services published to an untrusted network, and the Authentication Gateway presents itself to the end-user in a clientless web browser. A pre-configured authentication workflow is provided by the Access Controller. The authentication workflow is a customizable set of authentication steps, drawing on 3rd-party IDPs (Okta, Microsoft, DUO Security, etc.) as well as built-in options such as captcha, username/password, No-Post, and OTP.
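To illustrate how such a customizable workflow might compose, here is a sketch in Python; the step names and the check interface are illustrative assumptions, not Safe-T's API:

```python
from typing import Callable

AuthStep = Callable[[dict], bool]

def password_step(ctx: dict) -> bool:
    # Placeholder: verify against an identity store in a real deployment.
    return ctx.get("password") == "correct horse battery staple"

def otp_step(ctx: dict) -> bool:
    return ctx.get("otp") == ctx.get("expected_otp")

def captcha_step(ctx: dict) -> bool:
    return ctx.get("captcha_solved", False)

def run_workflow(steps: list[AuthStep], ctx: dict) -> bool:
    """All steps must pass, in order, before any connection is allowed."""
    return all(step(ctx) for step in steps)

# Workflows are assembled per application or user group, not hard-coded.
finance_workflow = [captcha_step, password_step, otp_step]

ctx = {"password": "correct horse battery staple",
       "otp": "123456", "expected_otp": "123456", "captcha_solved": True}
print(run_workflow(finance_workflow, ctx))  # True only if every step passes
```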

 

Safe-T Zero+ Capabilities

The Safe-T Zero+ capabilities are in line with zero trust principles. With Safe-T Zero+, clients requesting access must pass the authentication and authorization stages before they can access the resource. Any network resource that has not passed these steps is blackened; here, URL rewriting is used to hide the backend services.
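A sketch of the URL-rewriting idea: the client only ever sees an opaque, per-session path, never the internal address. The mapping scheme and helper names below are illustrative assumptions:

```python
import secrets

BACKENDS = {"crm": "http://10.0.5.20:8080", "hr": "http://10.0.5.21:8080"}
SESSION_MAP: dict[str, str] = {}   # opaque public path -> internal URL

def publish(app: str) -> str:
    """Issue an unguessable public path for an authorized session."""
    token = secrets.token_urlsafe(12)
    SESSION_MAP[f"/s/{token}"] = BACKENDS[app]
    return f"/s/{token}"

def resolve(public_path: str) -> str | None:
    # The gateway rewrites /s/<token>/... to the internal backend; an
    # unauthorized client has no valid token and sees nothing at all.
    return SESSION_MAP.get(public_path)

path = publish("crm")
print(path, "->", resolve(path))     # e.g. /s/AbC123... -> http://10.0.5.20:8080
print(resolve("/s/guessed-token"))   # None: the backend stays dark
```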

This reduces the attack surface to an absolute minimum and follows Safe-T's axiom: if you can't be seen, you can't be hacked. In a normal operating environment, for users to access services behind a firewall, ports must be opened on the firewall. This presents security risks, as a bad actor could directly access the service via the open port and exploit any vulnerabilities in it.

Another paramount capability of Safe-T Zero+ is a patented technology called reverse access, which eliminates the need to open incoming ports in the internal firewall. This also eliminates the need to store sensitive data in the demilitarized zone (DMZ). It can extend to on-premise, public, and hybrid clouds, supporting the most diverse hybrid I.T. requirements. Zero+ can be deployed on-premises, as part of Safe-T's SDP services, or on AWS, Azure, and other cloud infrastructures, thereby protecting both cloud and on-premise resources.
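The essence of reverse access is that the internal side only ever dials out. Below is a minimal single-file sketch; the ports, framing, and single request/response flow are illustrative assumptions, not Safe-T's implementation:

```python
import socket
import threading
import time

GATEWAY_FOR_CONNECTOR = ("127.0.0.1", 9100)  # the connector dials OUT to this
GATEWAY_FOR_CLIENTS = ("127.0.0.1", 9200)    # external clients connect here

def gateway():
    # The DMZ gateway accepts the outbound leg from the internal connector
    # first, then a client connection, and relays one request/response pair.
    back = socket.socket(); back.bind(GATEWAY_FOR_CONNECTOR); back.listen(1)
    front = socket.socket(); front.bind(GATEWAY_FOR_CLIENTS); front.listen(1)
    connector, _ = back.accept()
    client, _ = front.accept()
    connector.sendall(client.recv(4096))   # client -> internal service
    client.sendall(connector.recv(4096))   # internal service -> client
    connector.close(); client.close()

def internal_connector():
    # Runs inside the LAN: no listening socket, no inbound firewall ports.
    time.sleep(0.2)                        # let the gateway bind first
    s = socket.create_connection(GATEWAY_FOR_CONNECTOR)
    request = s.recv(4096)
    s.sendall(b"internal service answered: " + request)
    s.close()

threading.Thread(target=gateway, daemon=True).start()
threading.Thread(target=internal_connector, daemon=True).start()

time.sleep(0.5)                            # let both legs come up
client = socket.create_connection(GATEWAY_FOR_CLIENTS)
client.sendall(b"hello")
print(client.recv(4096))                   # b'internal service answered: hello'
```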

Zero+ provides user behavior analytics that monitor the actions of protected web applications, allowing the administrator to inspect the details of anomalous behavior. Forensic assessment is also easier, since there is a single source for logging.

Finally, Zero+ provides a unique, native HTTPS-based file access solution for the NTFS file system, replacing the vulnerable SMB protocol. Users can create a standard mapped network drive in Windows Explorer, providing a secure, encrypted, and access-controlled channel to shared backend resources.

 

Deployment Strategy

Safe-T customers can select whichever architecture meets their on-premise or cloud-based requirements.

 

There are 3 options:

i) The customer deploys three VMs: 1) Access Controller, 2) Access Gateway, and 3) Authentication Gateway. The VMs can be deployed on-premises in an organization’s LAN, on Amazon Web Services (AWS) public cloud, or on Microsoft’s Azure public cloud.

ii) The customer deploys the 1) Access Controller VM and 2) Access Gateway VM on-premises in their LAN. The customer deploys the Authentication Gateway VM on a public cloud, such as AWS or Azure.

iii) The customer deploys the Access Controller VM on-premise in the LAN and Safe-T deploys and maintains two VMs 1) Access Gateway and 2) Authentication Gateway; both hosted on Safe-T’s global SDP cloud service.

 

ZTA Migration Path

Today, organizations recognize the need to move to a zero trust architecture. However, there is a difference between recognition and deployment, and new technology brings considerable risk. Traditional Network Access Control (NAC) and VPN solutions fall short in many ways, but a rip-and-replace model is a very aggressive approach.

To begin the transition from legacy to ZTA, look for a migration path you feel comfortable with. Perhaps you want to run a traditional VPN in parallel with, or in conjunction with, your SDP solution, and only for a group of users for a set period of time. A probable example: choose a server used primarily by experienced users, such as DevOps or QA personnel. This ensures the risk is minimal if any problem occurs during the phased deployment of SDP access in your organization.

A recent survey carried out by the CSA indicates that SDP awareness and adoption are still at an early stage. However, when you do go down the path of ZTA, selecting a vendor whose architecture matches your requirements is the key to successful adoption. For example, look for SDP vendors who allow you to continue using your existing VPN deployment while adding SDP/ZTA capabilities on top of your VPN. This can sidestep the possible risks involved in switching to a completely new technology.