
Zero Trust SASE

In today's digital age, where remote work and cloud-based applications are becoming the norm, traditional network security measures are no longer sufficient to protect sensitive data. Enter Zero Trust Secure Access Service Edge (SASE), a revolutionary approach that combines the principles of Zero Trust security with the flexibility and scalability of cloud-based architectures. In this blog post, we will delve into the concept of Zero Trust SASE and explore its benefits and implications for the future of network security.

Zero Trust is a security model that operates on "never trust, always verify." It assumes that no user or device should be granted automatic trust within a network, whether inside or outside the perimeter. Instead, every user, device, and application must be continuously authenticated and authorized based on various contextual factors, such as user behavior, device health, and location.

SASE is a comprehensive security framework that combines networking and security capabilities into a single cloud-based service. It aims to simplify and unify network security by providing secure access to applications and data, regardless of the user's location or device.

SASE integrates various security functions, such as secure web gateways, cloud access security brokers, and data loss prevention, into a single service, reducing complexity and improving overall security posture.

Highlights: Zero Trust SASE

Innovative Security Framework

Zero Trust SASE is an innovative security framework that combines Zero Trust principles with Secure Access Service Edge (SASE) architecture. It emphasizes continuous verification and validation of every user, device, and network resource attempting to access an organization’s network, regardless of location. By adopting a zero-trust approach, organizations can enhance security by eliminating the assumption of trust and implementing stricter access controls.

1. Note: Zero Trust SASE is built upon several key components to create a robust and comprehensive security framework. These components include identity and access management, multi-factor authentication, network segmentation, encryption, continuous monitoring, and threat intelligence integration. Each element is crucial in strengthening network security and protecting against evolving cyber threats.

2. Note: Both SASE and ZTNA are essential components of modern security architecture. However, they are two different solutions. SASE provides a comprehensive, multi-faceted security framework, while ZTNA is a more narrowly focused model centered on limiting resource access, and it forms one part of SASE.

**Challenge: The Lag in Security** 

Today’s digital transformation and strategy initiatives require speed and agility in I.T. However, there is a lag, and that lag is security. Security can either hold these initiatives back or fail to align with the fluidity that agility demands. As a result, an organization’s security posture decreases, posing a risk that must be managed. We have a lot to deal with, such as the rise in phishing attacks, mobile malware, fake public Wi-Fi networks, malicious apps, and data leaks. Therefore, we have new requirements that SASE can help with.

Zero Trust Security

Zero Trust Security is a paradigm shift from the traditional perimeter-based security model. It operates on the principle of “never trust, always verify.” Unlike the old approach, where users and devices were granted broad access once inside the network, Zero Trust Security treats every user, device, and network segment as potentially untrusted. This enhanced approach minimizes the risk of unauthorized access and lateral movement within the network.

Continuous Verification & Strict Access Control

Zero Trust is a security model that operates on the principle of never trusting any network or user by default. It emphasizes continuous verification and strict access control to mitigate potential threats. With Zero Trust, organizations adopt a granular approach to security, ensuring that every user, device, and application is authenticated and authorized before accessing any resources.
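To make the verification flow concrete, here is a minimal Python sketch of a contextual access decision. The signal names, weights, and threshold are illustrative assumptions rather than any particular vendor's policy engine.

```python
# Minimal sketch of a zero-trust access decision based on contextual signals.
# The signal names, weights, and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # e.g., MFA completed
    device_compliant: bool        # e.g., disk encrypted, OS patched
    location_trusted: bool        # e.g., not from a known-bad geography
    behavior_score: float         # 0.0 (anomalous) .. 1.0 (normal)

def evaluate(request: AccessRequest, risk_threshold: float = 0.7) -> bool:
    """Never trust, always verify: every request is scored, none is implicit."""
    if not (request.user_authenticated and request.device_compliant):
        return False  # hard requirements fail closed
    score = 0.5 * request.behavior_score + 0.5 * (1.0 if request.location_trusted else 0.3)
    return score >= risk_threshold

# Example: a compliant device from an untrusted location with normal behavior
print(evaluate(AccessRequest(True, True, False, 0.9)))  # False under this threshold
```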

Knowledge Checks: Microsegmentation

**Understanding Microsegmentation**

Microsegmentation is a network security technique that involves creating secure zones within a data center, enabling organizations to isolate different workloads from each other. This isolation limits the potential for lateral movement by attackers, thereby reducing the attack surface. Unlike traditional network segmentation, which often relies on physical barriers, microsegmentation uses software-defined policies to manage and secure traffic between segments.

**Benefits of Microsegmentation**

The primary advantage of microsegmentation is its ability to provide granular security controls. By implementing policies at the workload level, organizations can tailor security measures to specific applications and data. This not only enhances protection but also improves compliance with regulatory standards. Additionally, microsegmentation offers greater visibility into network traffic, allowing for more effective monitoring and threat detection.
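As a rough illustration of workload-level policy, the following Python sketch models a default-deny allowlist between labeled workloads; the labels and permitted flows are hypothetical.

```python
# Minimal sketch of workload-level microsegmentation policy evaluation.
# Labels and the allowlist are hypothetical examples, not a vendor API.
ALLOWED_FLOWS = {
    ("web-frontend", "payments-api"),   # frontend may call the payments API
    ("payments-api", "payments-db"),    # API may reach its own database
}

def is_flow_permitted(src_label: str, dst_label: str) -> bool:
    """Default deny: only explicitly allowed workload-to-workload flows pass."""
    return (src_label, dst_label) in ALLOWED_FLOWS

print(is_flow_permitted("web-frontend", "payments-db"))   # False: lateral move blocked
print(is_flow_permitted("web-frontend", "payments-api"))  # True
```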

 

Challenge: Large Segments with VLANs

Example Technology: Network Endpoint Groups

#### What Are Network Endpoint Groups?

Network Endpoint Groups are collections of endpoints, which can be instances, IP addresses, or other network entities, that share the same configuration within Google Cloud. NEGs are pivotal for managing traffic and balancing loads across multiple endpoints, ensuring that applications remain available and performant even under heavy demand. This flexibility allows businesses to seamlessly scale their operations without compromising on efficiency or security.

#### Types of Network Endpoint Groups

Google Cloud provides different types of NEGs to cater to various networking needs:

1. **Zonal NEGs**: These are used for backend services within a single zone, offering reliability and low-latency connections.

2. **Internet NEGs**: Designed for external services, these endpoints can reside outside of Google Cloud, enabling effective hybrid cloud architectures.

3. **Serverless NEGs**: Perfect for serverless applications, these allow you to integrate Cloud Run, App Engine, and Cloud Functions with your load balancer.

Each type serves distinct use cases, making it vital to choose the right NEG for your specific application requirements.
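For a quick look at NEGs in practice, the sketch below lists the zonal NEGs in a project using the google-cloud-compute Python client. The project and zone values are placeholders, and the client calls should be verified against the current library reference.

```python
# Sketch: listing zonal Network Endpoint Groups with the google-cloud-compute
# client library (compute_v1). Assumes the library is installed and Application
# Default Credentials are configured; project/zone values are placeholders.
from google.cloud import compute_v1

def list_zonal_negs(project_id: str, zone: str) -> None:
    client = compute_v1.NetworkEndpointGroupsClient()
    for neg in client.list(project=project_id, zone=zone):
        # Each NEG reports its endpoint type (e.g., GCE_VM_IP_PORT for zonal NEGs)
        print(neg.name, neg.network_endpoint_type, neg.size)

if __name__ == "__main__":
    list_zonal_negs("my-project", "us-central1-a")  # placeholder values
```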

**Understanding Microsegmentation**

Microsegmentation is a critical strategy in modern network management, providing a method to improve security by dividing a network into smaller, isolated segments. This approach ensures that any potential security breaches are contained and do not spread across the network. In the context of Google Cloud, NEGs can be effectively used to implement microsegmentation. By creating smaller, controlled segments, you can enforce security policies more rigorously, reducing the risk of unauthorized access and enhancing the overall security posture of your applications.



**The SASE Concept**

Gartner coined the SASE concept after seeing a pattern emerge in cloud and SD-WAN projects where full security integration was needed. We now refer to SASE as a framework and a security best practice. SASE leverages multiple security services into a framework approach.

The idea of SASE was not far from what we already did, which was integrating numerous security solutions into a stack that ensured a comprehensive, layered, secure access solution. By calling it a SASE framework, the approach to a complete solution somehow felt more focused than what the industry recognized as a best security practice.

The security infrastructure and its decisions must become continuous and adaptive, not the static posture that formed the basis of traditional security methods. Consequently, we must enable real-time decisions that balance risk, trust, and opportunity. As a result, security has moved beyond a simple access control list (ACL) and zone-based segmentation built on VLANs. In reality, no single network point acts as an anchor for security.


Many current network security designs and technologies were not designed to handle all the traffic and security threats we face today. This has forced many organizations to adopt multiple point products to address the different requirements. Remember that for every point product, there is an architecture to deploy, a set of policies to configure, and a bunch of logs to analyze. Correlating logs across multiple point-product solutions used in different domains is hard.

For example, one team may operate the secure web gateways (SWG) while a different team runs the virtual private network (VPN) appliances. These teams may work in silos and in different locations.

Zero Trust SASE requirements:

  1. Information hiding: SASE requires clients to be authenticated and authorized before accessing protected assets, regardless of whether the connection is inside or outside the network perimeter.
  2. Mutually encrypted connections: SASE uses the full TLS standard to provide mutual, two-way cryptographic authentication, going one step further than standard TLS to authenticate the client as well as the server.
  3. Need-to-know access model: SASE employs a need-to-know access model. As a result, SASE permits the requesting client to view only the resources appropriate to the assigned policy.
  4. Dynamic access control: SASE deploys a dynamic firewall that starts with one rule – deny all. Then, requested communication is dynamically inserted into the firewall, providing an active firewall security policy instead of static configurations.
  5. Identity-driven access control: SASE provides adaptive, identity-aware, precision access for those seeking more precise access and session control to applications on-premises and in the cloud.

Starting Zero Trust

Endpoint Security 

Understanding ARP (Address Resolution Protocol)

ARP is a vital network communication protocol that maps an IP address to a physical MAC address. By maintaining an ARP table, endpoints can efficiently communicate within a network. 

Routes and gateways act as the pathways for data transmission between networks. Safeguarding these routes is crucial to ensure network integrity. We will discuss the significance of secure routing protocols, such as OSPF and BGP, and how they contribute to endpoint security. 

Netstat, short for Network Statistics, is a powerful command-line tool providing detailed information about network connections and statistics. This section will highlight the importance of using netstat for monitoring endpoint security. From identifying active connections to detecting suspicious activities, netstat empowers administrators to protect their networks proactively.
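The following Python sketch approximates a netstat-style review using the third-party psutil library: it lists listening sockets and the most frequently contacted remote peers, both useful starting points when hunting for unexpected services or suspicious connections.

```python
# Sketch: a netstat-style check using psutil (third-party; pip install psutil).
# Flags listening ports and counts established connections per remote address.
from collections import Counter
import psutil

def summarize_connections() -> None:
    listening, remotes = [], Counter()
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN:
            listening.append((conn.laddr.ip, conn.laddr.port))
        elif conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            remotes[conn.raddr.ip] += 1
    print("Listening sockets:", sorted(set(listening)))
    print("Top remote peers:", remotes.most_common(5))

if __name__ == "__main__":
    summarize_connections()  # may require elevated privileges to see all processes
```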

Understanding SELinux

SELinux is a robust security framework built into the Linux kernel. It provides fine-grained access control policies and mandatory access controls (MAC) to enforce system-wide security policies. Unlike traditional Linux discretionary access controls (DAC), SELinux operates on the principle of least privilege, ensuring that only authorized actions are allowed.

Organizations can establish a robust security posture for their endpoints by combining SELinux with zero trust principles. SELinux provides granular control over system resources, enabling administrators to define strict policies based on user roles, processes, and system components. This ensures that even if an endpoint is compromised, the attacker’s lateral movement and potential damage are significantly limited.

### Understanding Authentication in Vault

Authentication is the process of verifying the identity of a user or system. In Vault, this is achieved through various authentication methods such as tokens, AppRole, LDAP, GitHub, and more. Each method serves different use cases, allowing flexibility and scalability in managing access. Vault ensures that only authenticated users can access sensitive data, thus mitigating the risk of unauthorized access.
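As a simple illustration, the sketch below authenticates to Vault with the hvac Python client using token auth and reads a KV v2 secret. The address, token handling, and secret path are placeholders; confirm the method names against the hvac documentation for your version.

```python
# Sketch: authenticating to HashiCorp Vault with the hvac client (pip install hvac)
# and reading a secret from the KV v2 engine. URL, token, and secret path are
# placeholders; verify the calls against the hvac docs for your version.
import os
import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.com:8200"),
    token=os.environ.get("VAULT_TOKEN"),   # token auth; AppRole/LDAP are alternatives
)

if client.is_authenticated():
    # Only an authenticated identity ever reaches the secret material.
    secret = client.secrets.kv.v2.read_secret_version(path="myapp/config")
    print(secret["data"]["data"])
else:
    raise SystemExit("Vault authentication failed: access denied by default")
```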

### The Role of Authorization

While authentication verifies identity, authorization determines what authenticated users can do. Vault uses policies to define the actions that users and applications can perform. These policies are written in HashiCorp Configuration Language (HCL) or JSON, and they provide a fine-grained control over access to secrets. By segregating duties and defining clear access levels, Vault helps prevent privilege escalation and minimizes the risk of data exposure.

### Managing Identity with Vault

Vault’s identity management capabilities allow organizations to unify identities across various platforms. By integrating with identity providers and managing roles and entities, Vault simplifies user management and enhances security. This integration ensures that user credentials are consistently verified and that access rights are updated as roles change, reducing the risk of stale credentials being exploited.


Use Case: WAN Edge Performance Routing

SASE & Performance-Based Routing

Performance-based routing is a dynamic routing technique that selects the best path for network traffic based on real-time performance metrics. Traditional routing protocols often follow static routes, leading to suboptimal network performance. However, performance-based routing leverages latency, packet loss, and bandwidth availability metrics to make informed routing decisions. By continuously evaluating these metrics, networks can adapt and reroute traffic to ensure optimal performance.
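A minimal Python sketch of the idea: each candidate WAN path is scored on live latency, loss, and free bandwidth, and the best composite score wins. The metrics, weights, and path names are illustrative assumptions.

```python
# Sketch of performance-based path selection: score each candidate WAN path on
# live latency, loss, and available bandwidth, then prefer the best composite.
# Metric values and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str
    latency_ms: float
    loss_pct: float
    free_mbps: float

def score(p: PathMetrics) -> float:
    # Lower latency/loss and higher free bandwidth produce a better (higher) score.
    return (1000.0 / (p.latency_ms + 1)) - (50.0 * p.loss_pct) + (0.1 * p.free_mbps)

def best_path(paths: list[PathMetrics]) -> str:
    return max(paths, key=score).name

paths = [
    PathMetrics("mpls", latency_ms=35, loss_pct=0.1, free_mbps=40),
    PathMetrics("broadband", latency_ms=22, loss_pct=0.8, free_mbps=200),
    PathMetrics("lte", latency_ms=60, loss_pct=1.5, free_mbps=30),
]
print(best_path(paths))  # re-evaluated continuously as the metrics change
```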

Google Cloud & IAP

**Understanding the Basics of IAP**

At its core, Identity-Aware Proxy is a security service that acts as a gatekeeper for applications and resources. It ensures that only authenticated and authorized users can access specific web applications hosted on cloud platforms. Unlike traditional security models that rely on network-level access controls, IAP takes a user-centric approach, verifying identity and context before granting access. This method not only strengthens security but also simplifies access management across distributed environments.

**The Role of IAP in Google Cloud**

Google Cloud offers a versatile and integrated approach to using IAP, making it an attractive option for organizations leveraging cloud services. With Google Cloud’s IAP, businesses can secure their web applications and VMs without the need for traditional VPNs or complex network configurations. This section will delve into how Google Cloud implements IAP, highlighting its seamless integration with other Google Cloud services and the ease with which it can be deployed. By utilizing Google Cloud’s IAP, businesses can streamline their security operations and focus on delivering value to their customers.

**Benefits of Using Identity-Aware Proxy**

The advantages of implementing IAP are manifold. Firstly, it enhances security by enforcing granular access controls based on user identity and context. This reduces the risk of unauthorized access and potential data breaches. Secondly, IAP simplifies the user experience by enabling single sign-on (SSO) capabilities, allowing users to access multiple applications with a single set of credentials. Additionally, IAP’s integration with existing identity providers ensures that businesses can maintain a consistent security policy across their entire IT ecosystem.


Related: For pre-information, you may find the following helpful:

  1. SD-WAN SASE
  2. SASE Model
  3. SASE Solution
  4. Cisco Secure Firewall
  5. SASE Definition

Zero Trust SASE

Many challenges to existing networks and infrastructure create big security holes and decrease security posture. In reality, several I.T. components give an entity more access than required. Using IP addresses and static network locations as security anchors is a considerable security flaw; the virtual private network (VPN) and demilitarized zone (DMZ) architectures used to establish access are often configured to allow excessive implicit trust.

## Challenge 1: The issue with a DMZ

The DMZ is the neutral network between the Internet and your organization’s private network. It’s protected by a front-end firewall that limits Internet traffic to specific systems within its zone. The DMZ can have a significant impact on security if not appropriately protected. Remote access technologies such as VPN or RDP, often located in the DMZ, have become common targets of cyberattacks. One of the main issues I see with the DMZ is that the bad actors know it’s there. It may be secured, but it’s visible.

## Challenge 2: The issue with the VPN

In basic terms, a VPN provides an encrypted tunnel to a remote server and hides your IP address. However, the VPN does not secure users once they land on a network segment, and it relies on coarse-grained access control where the user has access to entire network segments and subnets. Traditionally, once you are on a segment, there is no intra-segment filtering. That means all users in that segment need the same security level and access to the same systems, but that is not always the case.


## Challenge 3: Permissive network access

VPNs generally provide broad, overly permissive network access with only fundamental access control limits based on subnet ranges. So, the traditional VPN delivers overly permissive access, with security based on IP subnets. Note: the issue with VLAN-based segmentation is large broadcast domains with free-for-all access. This represents a larger attack surface where lateral movement can take place. A typical example is a standard VLAN-based network running Spanning Tree Protocol (STP).

## Challenge 4: Security based on trust

Much of today’s non-zero-trust security architecture is based on trust, which bad actors abuse. A SASE overview, on the other hand, includes zero trust networking and remote access as one of its components, which can adaptively offer only the trust required at the time and nothing more. It is like providing narrow segmentation based on many contextual parameters that are continuously assessed for risk, to ensure users are who they claim to be and that entities, whether internal or external to the network, are doing what they are supposed to do.

**Removes excessive trust**

A core feature of SASE and Zero Trust is that they remove the excessive trust once required to allow entities to connect and collaborate. Within a zero-trust environment, the implicit trust of traditional networks is replaced with explicit, identity-based trust and a default denial. With an identity-based trust solution, we are not just looking at IP addresses to determine trust levels. After all, IP addresses are just binary values that we deem either securely private or less trustworthy public, and that assumption is where all of our problems started. They are just ones and zeros.

## Challenge 5: IP for Location and Identity 

To improve your security posture, it would be best to stop relying primarily on IP addresses and network locations as a proxy for trust. We have been doing this for decades. There is minimal context in placing a policy with legacy constructs. To determine the trust of a requesting party, we need to examine multiple contextual aspects, not just IP addresses.

And the contextual aspects are continuously assessed for security posture. This is a much better way to manage risk and allows you to look at the entire picture before deciding to enter the network or access a resource.
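The sketch below models that continuous assessment in Python: a session's context is re-evaluated on an interval and access is revoked as soon as the posture no longer meets policy. The signal names, thresholds, and data sources are hypothetical.

```python
# Sketch: continuous, context-driven trust evaluation for an active session.
# Instead of trusting an IP address once at connect time, the session posture is
# re-checked on an interval and access is revoked when the context degrades.
# The signal names, thresholds, and data sources are illustrative assumptions.
import time

def current_context(session_id: str) -> dict:
    # Placeholder: in practice this would query the IdP, MDM, and telemetry feeds.
    return {"device_patched": True, "mfa_age_minutes": 30, "geo_velocity_ok": True}

def still_trusted(ctx: dict) -> bool:
    return ctx["device_patched"] and ctx["mfa_age_minutes"] < 60 and ctx["geo_velocity_ok"]

def monitor_session(session_id: str, checks: int = 10, interval_s: float = 1.0) -> None:
    for _ in range(checks):
        if not still_trusted(current_context(session_id)):
            print(f"revoking session {session_id}: context no longer meets policy")
            return
        time.sleep(interval_s)
    print(f"session {session_id} remained within policy for this window")

monitor_session("sess-42")
```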


1) SASE: The first attempt

Organizations have adopted different security technologies to combat these changes and included them in their security stack. Many of these security technologies are cloud-based services, including the cloud-based secure web gateway (SWG), the content delivery network (CDN), and the web application firewall (WAF).

A secure web gateway (SWG) protects users from web-based threats and applies and enforces acceptable corporate use policies. A content delivery network (CDN) is a geographically distributed group of servers that works together to deliver Internet content quickly. A WAF, or web application firewall, helps protect web applications by filtering and monitoring HTTP traffic between them and the Internet.

The data center is the center of the universe.

However, even with these welcomed additions to security, the general trend was that the data center is still the center of most enterprise networks and network security architectures. Let’s face it: These designs are becoming ineffective and cumbersome with the rise of cloud and mobile technology. Traffic patterns have changed considerably, and so has the application logic.

2) SASE: The second attempt

The next attempt was a converged, cloud-delivered secure access service edge (SASE) to address this shift in the landscape. And that is what the SASE architecture does. As you know, the SASE architecture relies on multiple contextual aspects to establish and adapt trust for application-level access. It does not concern itself with large VLANs and broad-level access, nor does it believe that the data center is the center of the universe. Instead, the SASE architecture is often based on PoPs, where each PoP acts as the center of the universe.

The SASE definition and its components describe a transformational architecture that can combat many of the challenges discussed. A SASE solution converges networking and security services into one unified, cloud-delivered solution that includes the following core SASE capabilities.

From the network side of things, SASE in networking includes:

    1. Software-defined wide area network (SD-WAN)
    2. Virtual private network (VPN)
    3. Zero trust network (ZTN)
    4. Quality of service (QoS)
    5. Software-defined perimeter (SDP)

Example SDP Technology: VPC Service Controls

**What are VPC Service Controls?**

VPC Service Controls are a security feature offered by Google Cloud that allows organizations to define a security perimeter around their cloud resources. This perimeter helps prevent unauthorized access to sensitive data and provides an additional layer of protection against potential threats. By using VPC Service Controls, you can restrict access to your resources based on specific criteria, ensuring that only trusted entities can interact with your data.

**Key Benefits of Implementing VPC Service Controls**

Implementing VPC Service Controls offers several key benefits for organizations seeking to enhance their cloud security:

1. **Enhanced Data Security**: By creating a security perimeter around your cloud resources, you can reduce the risk of data breaches and ensure that sensitive information remains protected.

2. **Granular Access Control**: VPC Service Controls allow you to define access policies based on various factors such as source IP addresses, user identities, and more. This granular control ensures that only authorized users can access your resources.

3. **Simplified Compliance**: For organizations operating in regulated industries, compliance with data protection laws is critical. VPC Service Controls simplify the process of meeting regulatory requirements by providing a robust security framework.

4. **Seamless Integration**: Google Cloud’s VPC Service Controls integrate seamlessly with other Google Cloud services, allowing you to maintain a consistent security posture across your entire cloud environment.

**Setting Up VPC Service Controls**

Getting started with VPC Service Controls is a straightforward process. First, identify the resources you want to protect and define the security perimeter around them. Next, configure access policies to control who can access these resources and under what conditions. Google Cloud provides detailed documentation and tools to guide you through the setup process, ensuring a smooth implementation.

**Best Practices for Using VPC Service Controls**

To maximize the effectiveness of your VPC Service Controls, consider the following best practices:

– **Regularly Review Access Policies**: Periodically review and update access policies to ensure they align with your organization’s security requirements and industry standards.

– **Monitor and Audit Activity**: Use Google Cloud’s monitoring and logging tools to track access to your resources and identify any potential security incidents.

– **Educate Your Team**: Ensure that your team is well-versed in the use of VPC Service Controls and understands the importance of maintaining a secure cloud environment.


From the security side of things, SASE capabilities in security include:

    1. Firewall as a service (FWaaS)
    2. Domain Name System (DNS) security
    3. Threat prevention
    4. Secure web gateways
    5. Data loss prevention (DLP)
    6. Cloud access security broker (CASB)

Example Technology: The Web Security Scanner

### How Google Cloud’s Web Security Scanner Works

Google Cloud’s Web Security Scanner is a robust solution that integrates seamlessly with the Google Cloud environment. It automatically scans your web applications for common vulnerabilities, such as cross-site scripting (XSS), mixed content, and outdated libraries. The scanner’s intuitive interface provides detailed reports, highlighting potential issues and offering actionable recommendations for mitigation. This automation not only saves time but also ensures that your applications remain secure as you continue to develop and deploy new features.

### Key Features and Benefits

One of the standout features of Google Cloud’s Web Security Scanner is its ability to perform authenticated scans. This means it can test parts of your web application that require user login, ensuring a comprehensive security assessment. Additionally, the scanner is designed to work seamlessly with other Google Cloud services, making it a convenient choice for those already invested in the Google ecosystem. Its cloud-native architecture ensures that it scales efficiently to meet the needs of businesses, big and small.

### Best Practices for Using Web Security Scanners

To get the most out of your web security scanner, it’s important to integrate it into your continuous integration and deployment processes. Regularly scanning your applications ensures that any new vulnerabilities are promptly identified and addressed. Additionally, consider using the scanner alongside other security tools to create a multi-layered defense strategy. Training your development team on common security pitfalls can also help prevent vulnerabilities from being introduced in the first place.


SASE changes the focal point to the identity of the user and device. With traditional network design, the on-premises data center is considered the center of the universe. SASE changes this architecture to match today’s environment and moves the perimeter to the actual user, device, or, in some SASE designs, the PoP. In traditional enterprise networks and security architectures, by contrast, the internal data center remains the focal point for access.

Example Product: Cisco Meraki

### What is Cisco Meraki?

Cisco Meraki is a suite of cloud-managed IT solutions that include wireless, switching, security, EMM (Enterprise Mobility Management), and security cameras, all centrally managed from the web. The Meraki dashboard provides powerful and intuitive tools to manage your entire network from a single pane of glass. This holistic approach ensures that businesses can maintain robust security protocols without compromising on ease of management.

### Key Features of Cisco Meraki

#### Cloud-Based Management

One of the standout features of Cisco Meraki is its cloud-based management. This allows for real-time monitoring, configuration, and troubleshooting from anywhere in the world. With automatic updates and seamless scalability, businesses can ensure their network is always up-to-date and secure.

#### Advanced Security Features

Cisco Meraki offers a range of advanced security features designed to protect your network from various threats. These include intrusion detection and prevention systems (IDS/IPS), advanced malware protection (AMP), and content filtering. By leveraging these tools, businesses can safeguard their data and maintain the integrity of their network.

#### Simplified Deployment

Deploying a traditional network can be a complex and time-consuming task. Cisco Meraki simplifies this process with zero-touch provisioning, which allows devices to be pre-configured and managed remotely. This reduces the need for on-site technical expertise and accelerates the deployment process.

### Benefits of Using Cisco Meraki for Network Security

#### Centralized Control

The centralized control offered by the Meraki dashboard enables IT teams to manage multiple sites from a single interface. This not only streamlines operations but also ensures consistent security policies across all locations.

#### Scalability

As businesses grow, their network needs evolve. Cisco Meraki’s scalable solutions allow for easy expansion without the need for significant infrastructure changes. This flexibility ensures that businesses can adapt to changing demands without compromising on security.

#### Cost Efficiency

By reducing the need for on-site hardware and simplifying management, Cisco Meraki can lead to significant cost savings. Additionally, the reduced need for technical expertise can lower operational costs, making it an attractive option for businesses looking to optimize their IT budget.

VPN Security Scenario 

  • Challenge: Traditional remote access VPNs

Remote access VPNs are primarily built to allow users outside the perimeter firewall to access resources inside the perimeter firewall. As a result, they often follow a hub-and-spoke architecture, with users connected by tunnels of various lengths depending on their distance from the data center. Traditional VPNs introduce a lot of complexity. For example, what do you do if you have multiple sites where users need to access applications? In this scenario, the cost of management would be high. 

  • Challenge: Tunnels based on IP

What’s happening here is that the tunnel creates an extension between the client device and the application location. The tunnel is based on the IP addresses of the client device and the remote application. Now that there is IP connectivity between the client and the application, the network where the application is located is extended to the client.

However, the client might be sitting in a hotel room or at home. Such locations may not be sufficiently protected and should be considered insecure. The traditional VPN has many other issues to deal with: it is user-initiated, and policy often permits split-tunnel VPNs without Internet or cloud traffic inspection.

SASE: A zero-trust VPN solution

A SASE solution encompasses VPN services and enhances them with the capabilities of a cloud-based infrastructure to route traffic. With SASE, the client connects to the SASE PoP, which carries out security checks and forwards the request to the application. A SASE design still allows clients to access the application, but they can access only that specific application and nothing more, like a stripped-down VLAN, an approach known as microsegmentation.

Restricting Lateral Movements

Clients must pass security controls, and there is no broad-level access susceptible to lateral movement. Access control is based on an allowlist rather than the traditional blocklist rule. Also, other variables present in the request context are used instead of IP addresses as the client identifier. As a result, the application, not the network, is now the access path.

Simplified Management & Policy Control

So, no matter what type of VPN services you use, SASE provides a unified cloud to connect to instead of backhauling to a VPN gateway, simplifying management and policy control. Well-established technologies such as VPN, secure web gateway, and firewall are being reviewed and reassessed in Zero Trust remote access solutions as organizations revisit approaches that have been in place for over a decade.

A recommendation: SASE and SD-WAN

The value of SD-WAN is high. However, it also brings many challenges, including new security risks. In some of my consultancies, I have seen unreliable performance and increased complexity due to the need for multiple overlays. These overlays also need to terminate somewhere, and that will be at a hub site. However, when combined with SASE, the SD-WAN edge devices can be connected to a cloud-based infrastructure rather than to physical SD-WAN hubs. This brings the value of interconnectivity between branch sites without the complexity of deploying or managing physical hub sites.

Zero Trust SASE: Vendor considerations

SASE features converge various individual components into one connected, cloud-delivered service, making it easy to control policies and behaviors. The SASE architecture is often based on a PoP design. When examining the SASE vendor, the vendor’s PoP layout should be geographically diverse, with worldwide entry and exit points. 

Also, considerations should be made regarding the vendor’s edge/physical infrastructure providers or colocation facilities. We can change your security posture, but we can’t change the speed of light and the laws of physics.

Consider how the SASE vendor routes traffic in their PoP fabric. Route optimization should be performed at each PoP. Some route optimizations are for high availability, while others are for performance. Does the vendor offer cold-potato or hot-potato routing? Cold-potato routing means bringing the end user’s traffic into the provider’s network as soon as possible, while hot-potato routing means the end user’s traffic traverses more of the public Internet.

The following is a list of considerations to review when discussing SASE with your preferred cybersecurity vendor:

A. Zero Trust SASE requirements: Information hiding:

Secure access service requires clients to be authenticated and authorized before accessing protected assets, regardless of whether the connection is inside or outside the network perimeter. Then, real-time encrypted connections are created between the requesting client and the protected asset. As a result, all SASE-protected servers and services are hidden from all unauthorized network queries and scan attempts.

You can’t attack what you can’t see.

The basis of network security started with limiting visibility: you cannot attack what you cannot see. Public and private IP address ranges separated networks from one another. This was the biggest mistake we ever made, as IP addresses are just binary, whether they are deemed public or private. If a host were assigned a public address and wanted to communicate with a host with a private address, it would need to go through a network address translation (NAT) device and have a permit policy set.

Understanding Port Knocking

Port knocking is a technique that enables secure and controlled access to network services. Traditionally, network ports are open and accessible, leaving systems vulnerable to unauthorized access. However, with port knocking, access to specific ports is only granted after a predefined sequence of connection attempts is made to other closed ports. This sequence acts as a virtual “knock” on the door, allowing authorized users to gain access while keeping malicious actors at bay.

To fully comprehend port knocking, let’s explore its inner mechanics. When users wish to access a specific service, they must first send connection attempts to a series of closed ports in a particular order. This sequence acts as a secret handshake, notifying the server that the user is authorized. Once the correct sequence is detected, the server dynamically opens the desired port, granting access to the requested service. It’s like having a hidden key that unlocks the door to a secure sanctuary.
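A minimal Python sketch of the client side of port knocking is shown below; the knock sequence, target host, and service port are hypothetical, and a knock daemon such as knockd would have to be configured on the server for the sequence to open anything.

```python
# Sketch of a port-knocking client: send connection attempts to a secret
# sequence of closed ports, then try the protected service port. The knock
# sequence and host are hypothetical placeholders.
import socket

KNOCK_SEQUENCE = [7000, 8000, 9000]   # the "secret handshake" (assumption)
TARGET_HOST = "203.0.113.10"          # documentation/example address
SERVICE_PORT = 22

def knock(host: str, port: int, timeout: float = 0.5) -> None:
    # The connection attempt is expected to fail; only the attempt matters.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((host, port))
        except OSError:
            pass

for port in KNOCK_SEQUENCE:
    knock(TARGET_HOST, port)

# After a correct sequence, the server-side daemon opens the service port.
try:
    with socket.create_connection((TARGET_HOST, SERVICE_PORT), timeout=3):
        print("service port opened after the knock sequence")
except OSError:
    print("service port still filtered (no knock daemon, or wrong sequence)")
```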

Security based on visibility

Network address translation is mapping an IP address space into another by modifying network address information in the IP header of packets while they are in transit across a traffic routing device. Limiting visibility this way works to a degree, but we cannot ignore the fact that a) if you have someone’s IP address, you can reach them, and b) if a port is open, you can potentially connect to it.

Therefore, the traditional security method can leave your network wide open to compromise, especially when bad actors have all the tools. After all, finding, downloading, and running a port-scanning tool is not hard.

“Nmap,” short for Network Mapper, is the most widely used port scanning tool. Nmap works by probing a network for hosts and services. Once they are found, the software sends packets to those hosts and services, which respond. Nmap reads and interprets the responses and uses the data to create a network map.
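Conceptually, the core of such a scan is just a loop of TCP connection attempts, as in this Python sketch (scan only hosts you are authorized to test):

```python
# Sketch of the TCP connect scan that tools like Nmap automate: attempt a
# connection to each port and record which ones accept. Host and port range
# are placeholders.
import socket

def scan(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

print(scan("127.0.0.1", range(20, 1025)))
```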

Example: Understanding Lynis

Lynis is an open-source security auditing tool for discovering vulnerabilities on Unix, Linux, and macOS systems. It comprehensively analyzes the system’s configuration and provides valuable insights into potential security weaknesses. By scanning the system against a vast database of known security issues, Lynis helps identify areas for improvement.

Lynis runs a series of tests and audits on the target system. It examines various aspects, including file permissions, system settings, available software packages, and network configurations. Lynis generates a detailed report highlighting any identified vulnerabilities or potential security gaps by analyzing these factors. This report becomes a valuable resource for system administrators and security professionals to take necessary actions and mitigate risks.

Example: Single Packet Authorization

Zero-trust network security hides information and infrastructure through lightweight protocols such as single-packet authorization (SPA). No internal IP addresses or DNS information is shown, creating an invisible network. As a result, we have zero visibility and connectivity, only establishing connectivity after clients prove they can be trusted to allow legitimate traffic. Now, we can have various protected assets hidden regardless of location: on-premise, public or private clouds, a DMZ, or a server on the internal LAN, in keeping with today’s hybrid environment.

Default-drop dynamic firewall

This approach mitigates denial-of-service attacks. Anything internet-facing is reachable on the public Internet and, therefore, susceptible to bandwidth and server denial-of-service attacks. The default-drop firewall is deployed, with no visible presence to unauthorized users. Only good packets are allowed. Single packet authorization (SPA) also provides for attack detection.

If a host receives anything other than a valid SPA packet or similar construct, it views that packet as part of a threat. The first packet to a service must be a valid SPA packet or similar security construct.

If it receives another packet type, it views this as an attack, which is helpful for bad packet detection. Therefore, SPA can determine an attack based on a single malicious packet, a highly effective way to detect network-based attacks. Thus, external network and cross-domain attacks are detected.
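To illustrate the principle, the following Python sketch builds and verifies an SPA-style message: a single datagram whose HMAC covers the client identity, requested service, and timestamp. Real implementations such as fwknop add encryption and replay protection; the format and key handling here are simplified assumptions.

```python
# Sketch of a single packet authorization (SPA) style message: one datagram
# carrying an HMAC over the client identity, requested service, and timestamp.
# The format, key handling, and freshness window are simplified assumptions.
import hashlib
import hmac
import json
import time

SHARED_KEY = b"rotate-and-distribute-out-of-band"   # placeholder secret

def build_spa_payload(client_id: str, service: str) -> bytes:
    body = json.dumps({"client": client_id, "service": service,
                       "timestamp": int(time.time())}).encode()
    digest = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body + b"|" + digest.encode()

def verify_spa_payload(payload: bytes, max_age_s: int = 30) -> bool:
    body, _, digest = payload.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(digest, expected):
        return False                                  # bad packet: treat as an attack
    return int(time.time()) - json.loads(body)["timestamp"] <= max_age_s

packet = build_spa_payload("alice-laptop", "ssh")
print(verify_spa_payload(packet))                     # True for a fresh, valid packet
# In practice this payload would be sent as a single UDP datagram to the gateway.
```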

B. Zero Trust SASE architecture requirements: Mutually encrypted connections:

Transport Layer Security (TLS) is an encryption protocol that protects data when it moves between computers. When two computers send data, they agree to encrypt the information in a way they both understand. TLS was designed to provide mutual device authentication before enabling confidential communication over the public Internet. However, the standard TLS configuration only validates that the client is connected to a trusted entity. So, typical TLS deployments authenticate servers to clients, not clients to servers.

Mutually encrypted connections

SASE uses the full TLS standard to provide mutual, two-way cryptographic authentication. Mutual TLS goes one step further and authenticates the client as well. Mutual TLS connections are set up between all components in the SASE architecture. Mutual Transport Layer Security (mTLS) establishes an encrypted TLS connection in which both parties use X.509 digital certificates to authenticate each other.

mTLS can help mitigate the risk of moving services to the cloud and prevent malicious third parties from imitating genuine apps. It offers robust device and user authentication, as connections from unauthorized users and devices are rejected. Secondly, forged certificates, used in attacks aimed at credential theft, are disallowed. This reduces impersonation attacks, where a bad actor forges a certificate from a compromised authority.
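For a concrete picture of the handshake requirements, here is a server-side mutual TLS sketch using Python's standard ssl module; the certificate and key file paths are placeholders.

```python
# Sketch of mutual TLS on the server side with Python's ssl module: the server
# presents its certificate and also requires a client certificate signed by a
# trusted CA. File paths are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="client-ca.crt")
context.verify_mode = ssl.CERT_REQUIRED        # reject clients without a valid cert

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()       # handshake fails for unauthenticated clients
        print("mutually authenticated client:", conn.getpeercert().get("subject"))
        conn.close()
```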

C. Need-to-know access model: Zero Trust SASE architecture requirements

Thirdly, SASE employs a need-to-know access model. As a result, SASE permits the requesting client to view only the resources appropriate to the assigned policy. Users are associated with their devices, which are validated based on policy. Only connections to the specifically requested service are enabled, and no other connection is allowed to any other service. SASE provides additional information, such as who made the connection, from what device, and to what service.

This gives you complete visibility into all the established connections, which is hard to achieve with an IP-based solution. So now we have a contextual way of determining the level of risk, which makes forensics easier. The SASE architecture only accepts good packets; bad packets can be analyzed and tracked for forensic activities.

Key Point: Device validation

SASE also enforces device validation, which helps against threats from unauthorized devices. We can examine the requesting user and perform device validation. Device validation ensures that the machine runs on trusted hardware and is used by the appropriate user.

Finally, suppose a device becomes compromised. In that case, lateral movements are entirely locked down, as a user is only allowed access to the resource it is authorized to. Or they could be placed into a sandbox zone where human approval must intervene and assess the situation.

D. Dynamic access control: Zero Trust SASE architecture requirements

The traditional firewall is limited in scope because it cannot express or enforce rules based on identity information, which you can with zero trust identity. Rather than attempting to model identity-centric control within the limitations of the 5-tuple, SASE can be used alongside traditional firewalls and take over the network access control enforcement that we try to achieve with conventional firewalls. SASE deploys a dynamic firewall that starts with one rule: deny all.

Then, requested communication is dynamically inserted into the firewall, providing an active firewall security policy instead of static configurations. For example, every packet hitting the firewall is inspected with a single packet authentication (SPA) and then quickly verified for a connection request. 

Key Point: Dynamic firewall

Once the connection is established, the firewall closes again; it is dynamically opened only for a specific period. The connections made are not seen by rogues outside the network or within the user domain inside the network. This allows dynamic, membership-based enclaves that prevent network-based attacks.

SASE dynamically binds users to devices, enabling those users to access protected resources by dynamically creating and removing firewall rules. Access to protected resources is facilitated by dynamically creating and removing inbound and outbound access rules. Therefore, we now have more precise access control mechanisms and considerably fewer firewall rules.
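The following Python sketch models that behaviour in memory: the policy starts as deny-all, a narrow pinhole is inserted only after an authorized request, and it expires or is revoked afterwards. It is a conceptual model, not an interface to any real firewall.

```python
# Sketch of the dynamic, default-deny firewall behaviour described above: the
# policy starts as "deny all", a pinhole is inserted only after an authorized
# request, and it is removed again when the session ends or expires.
import time

class DynamicFirewall:
    def __init__(self) -> None:
        self._pinholes: dict[tuple[str, str, int], float] = {}   # (user, dst, port) -> expiry

    def authorize(self, user: str, dst: str, port: int, ttl_s: int = 300) -> None:
        self._pinholes[(user, dst, port)] = time.time() + ttl_s

    def revoke(self, user: str, dst: str, port: int) -> None:
        self._pinholes.pop((user, dst, port), None)

    def permits(self, user: str, dst: str, port: int) -> bool:
        expiry = self._pinholes.get((user, dst, port))
        return expiry is not None and time.time() < expiry      # otherwise: deny all

fw = DynamicFirewall()
print(fw.permits("alice", "crm.internal", 443))   # False: nothing is open by default
fw.authorize("alice", "crm.internal", 443)        # inserted after a valid SPA/authz check
print(fw.permits("alice", "crm.internal", 443))   # True, until the rule expires or is revoked
```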

**Micro perimeter**

Traditional applications were grouped into VLANs whether they offered similar services or not. Everything on that VLAN was reachable. The VLAN was a performance construct to break up broadcast domains, but it was pushed into the security world and never meant to be there. 

Its prime use was to increase performance. However, it was used for security in what we know as traditional zone-based networking. The segments in zone-based networks are too large and often have different devices with different security levels and requirements.

Key Points:

A. Logical-access boundary: SASE enables this by creating a logical access boundary encompassing a user and an application or set of applications. And that is it—nothing more and nothing less. Therefore, we have many virtual micro perimeters specific to the business instead of the traditional main inside/outside perimeter. Virtual perimeters allow you to grant access to the particular application, not the underlying network or subnet.

B. Reduce the attack surface: The smaller micro perimeters reduce the attack surface and limit the need for excessive access to all ports and protocols or all applications. These individualized “virtual perimeters” encompass only the user, the device, and the application. They are created specifically for the session and then closed again when it is over or if the risk level changes and the device or user needs to perform step-up authentication.

C. Software-defined perimeter (SDP): SASE only grants access to the specific application at an application layer. The SDP part of SASE now controls which devices and applications can access distinctive services at an application level. Permitted by a policy granted by the SDP part of SASE, machines can only access particular hosts and services and cannot access network segments and subnets.

**Reduced: Broad Network Access**

Broad network access is eliminated, reducing the attack surface to an absolute minimum. SDP provides a fully encrypted application communication path, and application binding permits only authorized applications to communicate through the established encrypted tunnels, blocking all other applications from using them. This creates a dynamic perimeter around the application, including the connected users and devices, and offers a narrow access path.

E. Identity-driven access control: Zero Trust SASE architecture requirements

Traditional network solutions provide coarse-grained network segmentation based on someone’s IP address. However, someone’s IP address is not a good security hook and does not provide much information about user identity. SASE enables the creation of microsegmentation based on user-defined controls, allowing a 1-to-1 mapping, unlike with a VLAN, where there is the potential to see everything within that VLAN.

Identity-aware access: SASE provides adaptive, identity-aware, precision access for those seeking more precise access and session control to applications on-premises and in the cloud. Access policies are primarily based on user, device, and application identities. The policy is applied independently of the user’s physical location or the device’s IP address, except where policy explicitly prohibits it. This brings much more context to policy enforcement. Therefore, if a bad actor gains access to one segment in the zone, they are prevented from compromising any other network resource.

Detecting Authentication Failures in Logs:

Syslog: Useful Security Technology

Syslog, short for System Logging Protocol, is a standard for message logging within computer systems. It collects various log entries from different sources and stores them in a centralized location. Syslog is a valuable resource for detecting security events as it captures information about system activities, errors, and warnings.

Auth.log is a specific type of log file that focuses on authentication-related events in Unix-based operating systems. It records user logins, failed login attempts, password changes, and other authentication activities. Analyzing auth.log can provide vital insights into potential security breaches, such as brute-force attacks or suspicious login patterns.

Now that we understand the importance of syslog and auth.log, let’s delve into some effective techniques for detecting security events in these files. One widely used approach is log monitoring, where automated tools analyze log entries in real time, flagging suspicious or malicious activities. Another technique is log correlation, which involves correlating events across multiple log sources to identify complex attack patterns.
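As a simple example of log monitoring, the Python sketch below scans an auth.log-style file for repeated failed SSH logins and flags source addresses that cross a threshold; the log path and regular expression match the common OpenSSH format and may need adjusting for your system.

```python
# Sketch: scanning an auth.log-style file for repeated failed SSH logins and
# flagging source addresses that exceed a threshold. The pattern matches the
# common OpenSSH "Failed password ... from <ip>" line; adjust path and regex
# for your distribution.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def suspicious_sources(log_path: str = "/var/log/auth.log", threshold: int = 5) -> dict:
    failures = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                user, source_ip = match.groups()
                failures[source_ip] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

if __name__ == "__main__":
    print(suspicious_sources())   # e.g. {"203.0.113.7": 42} would suggest brute force
```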

Summary: Zero Trust SASE

Traditional security measures are no longer sufficient in today’s rapidly evolving digital landscape, where remote work and cloud-based applications have become the norm. Enter Zero Trust Secure Access Service Edge (SASE), a revolutionary approach that combines network security and wide-area networking into a unified framework. In this blog post, we explored the concept of Zero Trust SASE and its implications for the future of cybersecurity.

Understanding Zero Trust

Zero Trust is a security framework that operates under the principle of “never trust, always verify.” It assumes no user or device should be inherently trusted, regardless of location or network. Instead, Zero Trust focuses on continuously verifying and validating identity, access, and security parameters before granting any level of access.

The Evolution of SASE

Secure Access Service Edge (SASE) represents a convergence of network security and wide-area networking capabilities. It combines security services, such as secure web gateways, firewall-as-a-service, and data loss prevention, with networking functionalities like software-defined wide-area networking (SD-WAN) and cloud-native architecture. SASE aims to provide comprehensive security and networking services in a unified, cloud-delivered model.

The Benefits of Zero Trust SASE:

a) Enhanced Security: Zero Trust SASE brings a holistic approach to security, ensuring that every user and device is continuously authenticated and authorized. This reduces the risk of unauthorized access and mitigates potential threats.

b) Improved Performance: By leveraging cloud-native architecture and SD-WAN capabilities, Zero Trust SASE optimizes network traffic, reduces latency, and enhances overall performance.

c) Simplified Management: A unified security and networking framework can streamline organizations’ management processes, reduce complexity, and achieve better visibility and control over their entire network infrastructure.

Implementing Zero Trust SASE

a) Comprehensive Assessment: Before adopting Zero Trust SASE, organizations should conduct a thorough assessment of their existing security and networking infrastructure, identify vulnerabilities, and define their security requirements.

b) Architecture Design: Organizations must design a robust architecture that aligns with their needs and integrates Zero Trust principles into their existing systems. This may involve deploying virtualized security functions, adopting SD-WAN technologies, and leveraging cloud services.

c) Continuous Monitoring and Adaptation: Zero Trust SASE is an ongoing process that requires continuous monitoring, analysis, and adaptation to address emerging threats and evolving business needs. Regular security audits and updates are crucial to maintaining a solid security posture.

Conclusion: Zero Trust SASE represents a paradigm shift in cybersecurity, providing a comprehensive and unified approach to secure access and network management. By embracing the principles of Zero Trust and leveraging the capabilities of SASE, organizations can enhance their security, improve performance, and simplify their network infrastructure. As the digital landscape continues to evolve, adopting Zero Trust SASE is not just an option; it is a necessity for safeguarding the future of our interconnected world.
