Computer Networks

Computer Networking: Building a Strong Foundation for Success

Computer Networking

Computer networking has revolutionized how we communicate and share information in today's digital age. Computer networking offers many possibilities and opportunities, from the Internet to local area networks. This blog post will delve into the fascinating world of computer networking and discover its key components, benefits, and prospects.

Computer networking is essentially the practice of connecting multiple devices to share resources and information. It involves using protocols, hardware, and software to establish and maintain these connections. Understanding networking fundamentals, such as IP addresses, routers, and switches, is crucial for anyone venturing into this field.

The Birth of Networking: In the early days of computer networking, it was primarily used for military and scientific purposes. The advent of ARPANET in the late 1960s laid the foundation for what would eventually become the internet. This pioneering effort allowed multiple computers to communicate with each other, setting the stage for the interconnected world we know today.

The Internet Era Begins: The 1990s marked a significant turning point in computer networking with the emergence of the World Wide Web. Tim Berners-Lee's creation of the HTTP protocol and the first web browser fueled the rapid growth and accessibility of the internet. Suddenly, information could be shared and accessed with just a few clicks, transforming the way we gather knowledge, conduct business, and connect with others.

From Dial-Up to Broadband: Remember the days of screeching dial-up modems? As technology progressed, so did our means of connecting to the internet. The widespread adoption of broadband internet brought about faster speeds and more reliable connections. With the introduction of DSL, cable, and fiber-optic networks, users could enjoy seamless online experiences, paving the way for streaming media, online gaming, and the rise of cloud computing.

Wireless Networking and Mobility: Gone are the days of being tethered to a desktop computer. The advent of wireless networking technologies such as Wi-Fi and Bluetooth opened up a world of mobility and convenience. Whether it's connecting to the internet on our smartphones, laptops, or IoT devices, wireless networks have become an indispensable part of our daily lives, enabling us to stay connected wherever we go.

Highlights: Computer Networking

Network Components

Creating a computer network requires careful preparation and knowledge of the right components. One of the first steps in computer networking is identifying which components to use and where to place them. This includes selecting the proper hardware, such as Layer 3 routers, Layer 2 switches, and, on older networks, Layer 1 hubs, along with the right software, such as operating systems, applications, and network services. You also need to decide whether advanced computer networking techniques, such as virtualization and firewalling, are required.

Diagram: Cloud Application Firewall.

Network Structure

Once the network components are identified, it’s time to plan the network’s structure. This involves deciding where each piece will be placed and how the pieces will be connected. Most networks you will see today are Ethernet-based. Larger networks need a formal design process, but for smaller networks, such as a home network, you are usually ready as soon as everything is physically connected, because the local service provider sets up the necessary network services on the WAN router for you.

Network Design

To embark on our journey into network design, it’s crucial to grasp the fundamental concepts. This section will cover topics such as network topologies, protocols, and the different layers of the OSI model. By establishing a solid foundation, you’ll be better equipped to make informed decisions in your network design endeavors.

Assessing Requirements and Goals

Before exploring the technical aspects of network design, it’s essential to identify your specific requirements and goals. This section will explore the importance of conducting a thorough needs analysis, considering factors such as scalability, security, and bandwidth. By aligning your network design with your objectives, you can build a robust and future-proof infrastructure.

Choosing the Right Equipment and Technologies

With a clear understanding of your requirements, it’s time to select the appropriate equipment and technologies for your network. We’ll delve into the world of routers, switches, firewalls, and wireless access points, discussing the criteria for evaluating different options. Additionally, we’ll explore emerging technologies like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) that can revolutionize network design.

Designing for Efficiency and Redundancy

Efficiency and redundancy are vital aspects of network design that ensure reliable and optimized performance. This section will cover load balancing, fault tolerance, and network segmentation strategies. We’ll explore techniques like VLANs (Virtual Local Area Networks), link aggregation, and the implementation of redundant paths to minimize downtime and enhance network resilience.

Securing Your Network

Network security is paramount in an era of increasing cyber threats. This section will address best practices for securing your network, including firewalls, intrusion detection systems, and encryption protocols. We’ll also touch upon network access control mechanisms and the importance of regular updates and patches to safeguard against vulnerabilities.

Firewall types
Diagram: Displaying the different firewall types.

 

 

Related: Additional links to internal content for pre-information:

  1. Data Center Topologies
  2. Distributed Firewalls
  3. Internet of Things Access Technologies
  4. LISP Protocol and VM Mobility.
  5. Port 179
  6. IP Forwarding
  7. Forwarding Routing Protocols
  8. Technology Insight for Microsegmentation
  9. Network Security Components
  10. Network Connectivity

Computer Networks

Key Computer Networking Discussion Points:


  • Introduction to computer networks and what is involved.

  • Highlighting the details of how you connect up networks.

  • Technical details on approaching computer networking and the importance of security.

  • Scenario: The main network devices: Layer 2 switches and Layer 3 routers.

  • The different types of protocols used in computer networks.

Back to Basics: Computer Networks

A network is a collection of interconnected systems that share resources. Networks connect IoT (Internet of Things) devices, desktop computers, laptops, and mobile phones. A computer network will consist of standard devices such as APs, switches, and routers, the essential network components.

Network services

You can connect your network’s devices to other computer networks and to the Internet, a global system of interconnected networks. When we connect to the Internet, we connect the Local Area Network (LAN) to the Wide Area Network (WAN). As we move between computer networks, we must consider security.

You will need a security device between these segments, such as a stateful inspection firewall. If you are running IPv4, as most networks still are, you will also need a network service called Network Address Translation (NAT). IPv6, the latest version of the IP protocol, does not need NAT but may need a translation service to communicate with IPv4-only networks.
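To make NAT more concrete, here is a minimal Python sketch of the kind of translation table a NAT device keeps: outbound connections from private addresses are rewritten to a shared public address, and the mapping is remembered so return traffic can be delivered. The addresses, ports, and helper names are illustrative assumptions, not a real implementation.

    # Minimal sketch of a NAT translation table (illustrative only).
    public_ip = "203.0.113.10"     # assumed public address of the NAT router
    nat_table = {}                 # (private_ip, private_port) -> public_port
    next_public_port = 40000

    def translate_outbound(private_ip, private_port):
        """Return the (public_ip, public_port) used for this inside host."""
        global next_public_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_public_port
            next_public_port += 1
        return public_ip, nat_table[key]

    def translate_inbound(public_port):
        """Map a returning packet back to the inside host, if state exists."""
        for (priv_ip, priv_port), pub_port in nat_table.items():
            if pub_port == public_port:
                return priv_ip, priv_port
        return None  # no state: unsolicited traffic is dropped

    print(translate_outbound("192.168.1.20", 51515))  # ('203.0.113.10', 40000)
    print(translate_inbound(40000))                   # ('192.168.1.20', 51515)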

Network Address Translation

♦Types of Networks

There are various types of computer networks, each serving different purposes. Local Area Networks (LANs) connect devices within a limited geographical area, such as homes or offices. Wide Area Networks (WANs) span larger areas, connecting multiple LANs. The internet itself can be considered the most extensive WAN, connecting countless networks across the globe.

Computer networking brings numerous benefits to individuals and businesses. It enables seamless communication, file sharing, and resource access among connected devices. In industry, networking enhances productivity and collaboration, allowing employees to work together efficiently regardless of physical location. Moreover, networking facilitates company growth and expansion by providing access to global markets.

Computer Networking

Computer Networking Main Components


  •  A network is a collection of interconnected systems that share resources. One of the earliest use cases of a network was sharing printers.

  • A network must offer a range of network services such as NAT.

  • Various types of computer networks, each serving different purposes. LAN vs WAN.

  • Protecting sensitive data, preventing unauthorized access, and mitigating potential threats are constant challenges.

Security and Challenges

With the ever-increasing reliance on computer networks, security becomes a critical concern. Protecting sensitive data, preventing unauthorized access, and mitigating potential threats are constant challenges. Network administrators employ various security measures such as firewalls, encryption, and intrusion detection systems to safeguard networks from malicious activities.

As technology continues to evolve, so does computer networking. Emerging trends such as cloud computing, the Internet of Things (IoT), and software-defined networking (SDN) are shaping the future of networking. The ability to connect more devices, handle massive amounts of data, and provide faster and more reliable connections opens up new possibilities for innovation and advancement.

Local Area Network

A Local Area Network (LAN) is a computer network that connects computers and other devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Ethernet cables typically connect LANs but may also be connected through wireless connections. LANs are usually used within a single organization or business but may connect multiple locations. The equipment in your LAN is in your control.

computer networking

Wide Area Network

Then, we have the Wide Area Network (WAN). In contrast to the LAN, a WAN is a computer network covering a wide geographical area, typically connecting multiple locations. Your LAN may only consist of Ethernet and a few network services.

However, a WAN may consist of various communications equipment, protocols, and media that provide access to multiple sites and users. WANs usually use private leased lines, such as T-carrier lines, to connect geographically dispersed locations. The equipment in the WAN is out of your control.

Computer Networks
Diagram: Computer Networks with LAN and WAN.

LAN

  • LAN means local area network.

  • It connects users and applications in close geographical proximity (the same building).

  • LANs use OSI Layer 1 and Layer 2 data connection equipment for transmission.

  • LANs use local connections like Ethernet cables and wireless access points.

  • LANs are faster because they span less distance and have less congestion.

  • LANs are good for private IoT networks, bot networks, and small business networks.

WAN

  • WAN means wide area network.

  • It connects users and applications in geographically dispersed locations (across the globe).

  • WANs use Layer 1, 2, and 3 network devices for data transmission.

  • WANs use wide area connections like MPLS, VPNs, leased lines, and the cloud.

  • WANs are slightly slower, but your users may not perceive the difference.

  • WANs are good for disaster recovery, applications with global users, and large corporate networks.

Virtual Private Network ( VPN )

We use a VPN to connect LAN networks over a WAN. A virtual private network (VPN) is a secure and private connection between two or more devices over a public network such as the Internet. Its purpose is to provide fast, encrypted communication over an untrusted network.

VPNs are commonly used by businesses and individuals to protect sensitive data from prying eyes. One of the primary benefits of using a VPN is that it can protect your online privacy by masking your IP address and encrypting your internet traffic. This means that your online activities are hidden from your internet service provider (ISP), hackers, and other third parties who may be trying to eavesdrop on your internet connection.

Example: VPN Technology

An example of a VPN technology is Cisco DMVPN. DMVPN operates in phases, from Phase 1 to Phase 3. For a true hub-and-spoke design, you would implement Phase 1; today, however, Phase 3 is the most popular, offering spoke-to-spoke tunnels. The screenshot below shows DMVPN Phase 1 running an OSPF network type of broadcast.

DMVPN

Computer Networking

Once the network’s components and structure have been determined, the next step is configuring computer networking. This involves setting up network parameters, such as IP addresses and subnets, and configuring routing tables.

Remember that security is paramount, especially when connecting to the Internet, an untrusted network with a lot of malicious activity. Firewalls help you create boundaries and secure zones for your networks. Different firewall types exist for the other network parts, making a layered approach to security.

Once the computer networking is complete, the next step is to test the network. This can be done with tools such as network analyzers, which can detect any errors or issues present. You can also conduct manual tests using Internet Control Message Protocol (ICMP) tools such as ping and traceroute. Testing for performance is only half of the picture; it is also imperative to regularly monitor the network for potential security vulnerabilities. So you must have antivirus software, a computer firewall, and other endpoint security controls.
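As a small illustration of manual ICMP testing, the sketch below shells out to the system ping command and reports whether the target answered. The target address is a placeholder, and the -c count flag assumes a Linux or macOS ping; adjust it for your platform.

    import subprocess

    def ping(host, count=4):
        """Send ICMP echo requests with the system ping tool and report success."""
        result = subprocess.run(
            ["ping", "-c", str(count), host],   # use "-n" instead of "-c" on Windows
            capture_output=True, text=True
        )
        return result.returncode == 0, result.stdout

    ok, output = ping("192.0.2.1")   # documentation address; replace with a real host
    print("reachable" if ok else "packet loss or unreachable")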

Finally, it’s critical to keep the network updated. This includes updating the operating system and applications and patching any security vulnerabilities as soon as possible. It’s also crucial to watch for upcoming or emerging technologies that may benefit the network.

packet loss testing
Diagram: Packet loss testing.

Lab Guide: Endpoint Networking and Security

Address Resolution Protocol (ARP)

The first command you will want to become familiar with is arp.

At its core, ARP is a protocol that maps an IP address to a corresponding MAC address. It enables devices within a local network to communicate with each other by resolving the destination MAC address for a given IP address. Devices store these mappings in an ARP table for efficient and quick communication.
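Before looking at the command output, it helps to picture the ARP cache as nothing more than a lookup table from IP address to MAC address. The Python sketch below models that behavior with invented addresses; a real host builds the table dynamically from ARP replies.

    # Illustrative model of an ARP cache: IP address -> MAC address.
    arp_cache = {
        "192.168.18.1":   "52:54:00:aa:bb:01",   # assumed gateway entry
        "192.168.18.135": "52:54:00:aa:bb:02",   # assumed neighbour entry
    }

    def resolve(ip):
        """Return the cached MAC, or None to signal that an ARP request is needed."""
        mac = arp_cache.get(ip)
        if mac is None:
            # A real host would broadcast "who has <ip>?" and cache the reply.
            print(f"ARP request: who has {ip}?")
        return mac

    print(resolve("192.168.18.135"))   # cache hit
    print(resolve("192.168.18.200"))   # cache miss -> an ARP request would be sent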

Analysis: What you see are 5 column headers explained as follows:

  • Address: The IP address of a device on the network learned through ARP, resolved to a hostname where possible.

  • HWtype: This describes the type of hardware facilitating the network connection. In this case, it is an Ethernet interface rather than a Wi-Fi interface.

  • HW address: The MAC address assigned to the hardware interface responding to ARP requests.

  • Flags Mask: A flag value that indicates how the entry was learned or set (for example, C for a complete, dynamically learned entry).

  • Iface: Lists the interface’s name associated with the hardware and IP address.


Analysis: The output contains the same columns and information, with additional information about the contents of the cache. The -v flag is for verbose mode and provides additional information about the entries in the cache. Focus on the Address. The -n flag tells the command not to resolve the address to a hostname; the result is seeing the Address as an IP.

Note: The IP and Mac address returned is an additional VM running Linux in this network. This is significant because if a device is within the same subnet or layer two broadcast domain as a device identified by its local ARP cache, it will simply address traffic to the designated MAC address. In this way, if you can change the ARP cache, you can change where the device sends traffic within its subnet.

Locally, you can change the ARP cache directly by adding entries yourself.  See the screenshot above:

Analysis: Now you see the original entry and the entry you just set within the local ARP cache. When your device attempts to send traffic to the address 192.168.18.135, the packets will be addressed at layer 2 to the corresponding MAC address from this table. Generally, MAC address to IP address mappings are learned dynamically through the ARP network protocol activity, indicated by the “C” under the Flags Mask column. The CM reflects that the entry was manually added.

Note: Additional Information on ARP

  • ARP Request and Response

When a device needs to communicate with another device on the same network, it initiates an ARP request. The requesting device broadcasts an ARP request packet containing the target IP address for which it seeks the MAC address. The device with the matching IP address responds with an ARP reply packet, providing its MAC address. This exchange allows the requesting device to update its ARP table and establish a direct communication path.

  • ARP Cache Poisoning

While ARP serves a critical purpose in networking, it is vulnerable to attacks like ARP cache poisoning. In this type of attack, a malicious entity spoofs its MAC address, tricking devices on the network into associating an incorrect MAC address with an IP address. This can lead to various security issues, including interception of network traffic, data manipulation, and unauthorized access.

  • Address Resolution Protocol in IPv6

While ARP is predominantly used in IPv4 networks, IPv6 networks utilize a similar protocol called Neighbor Discovery Protocol (NDP). NDP performs functions identical to ARP but with additional features such as stateless address autoconfiguration and duplicate address detection. Although NDP differs from ARP in several ways, its purpose of mapping IP addresses to link-layer addresses remains the same.

Computer Networking & Data Traffic

Computer networking aims to carry data traffic so we can share resources. The first use case of computer networks was to share printers; now, we have a variety of use cases that revolve around data traffic. Data traffic can be generated by online activities such as streaming videos, downloading files, surfing the web, and playing online games. It is also generated by behind-the-scenes activities such as system updates and background software downloads.

The Importance of Data Traffic

Data traffic is the amount of data transmitted over a network or the Internet. It is typically measured in bits, bytes, or packets per second. Data traffic can be both inbound and outbound. Inbound traffic is data coming into a network or computer, and outbound traffic is data leaving a network or computer. Inbound data traffic should be inspected by a security device, such as a firewall, which can sit either at the network’s perimeter or on your computing device, whereas outbound traffic is generally unfiltered.

To keep up with the increasing demand, companies must monitor data traffic to ensure the highest quality of service and prevent network congestion. With the right data traffic monitoring tools and strategies, organizations can improve network performance and ensure their data is secure.

 

The Issues of Best Efforts or FIFO

Network devices don’t care what kind of traffic they have to forward. Ethernet frames are received by your switch, which looks for the destination MAC address before forwarding them. Your router does the same thing: it gets an IP packet, checks the routing table for the destination, and forwards the packet.

Whether the frame or packet carries data from a user downloading the latest songs from Spotify or delay-sensitive speech traffic from a VoIP phone doesn’t matter to the switch or router. This forwarding logic is called best effort or FIFO (First In, First Out). This can become an issue when applications are hungry for bandwidth.

Example: Congestion

The serial link is likely to become congested when the host and IP phone on one side transmit data and voice packets to the host and IP phone on the other side. The router cannot hold packets queued for transmission indefinitely.

When the queue is full, how should the router proceed? Are data packets being dropped? Voice packets? If voice packets are dropped, there will be complaints about poor voice quality on the other end. If data packets are dropped, users may complain about slow transfer speeds.

You can change how the router or switch handles packets using QoS tools. For example, the router can prioritize voice traffic over data traffic.
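One hedged way to picture "prioritize voice traffic over data traffic" is a simple priority queue: under congestion, packets marked with a higher-priority class are transmitted first. The sketch below models only that scheduling decision; it is not how a real router implements queuing.

    import heapq

    # Lower number = higher priority (transmitted first).
    PRIORITY = {"voice": 0, "video": 1, "data": 2}

    queue = []
    order = 0   # tie-breaker keeps FIFO order within a class

    def enqueue(traffic_class, packet):
        global order
        heapq.heappush(queue, (PRIORITY[traffic_class], order, packet))
        order += 1

    def transmit_next():
        _, _, packet = heapq.heappop(queue)
        return packet

    enqueue("data",  "Spotify download chunk")
    enqueue("voice", "VoIP frame 1")
    enqueue("data",  "web page")
    enqueue("voice", "VoIP frame 2")

    while queue:
        print(transmit_next())   # both VoIP frames leave before either data packet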

The Role of QoS

Quality of Service (QoS) is a popular technique used in computer networking. QoS can segment applications so that different types have different priority levels. For example, voice traffic is often considered more critical than web surfing traffic, especially as it is sensitive to packet loss. So, when there is congestion on the network, QoS allows administrators to prioritize traffic so users have the best experience.

Quality of Service (QoS) refers to techniques and protocols prioritizing and managing network traffic. By allocating resources effectively, QoS ensures that critical applications and services receive the necessary bandwidth, low latency, and minimal packet loss while maintaining a stable network connection. This optimization process considers factors such as data type, network congestion, and the specific requirements of different applications.

Expedited Forwarding (EF)

Expedited Forwarding (EF) is a network traffic management model that provides preferential treatment to certain types of traffic. The EF model prioritizes traffic, specifically real-time traffic such as voice, video, and streaming media, over other types of traffic, such as email and web browsing. This allows these real-time applications to function more reliably and efficiently by reducing latency and jitter.

The EF model works by assigning a traffic class to each data packet based on the type of data it contains. The assigned class dictates how the network treats the packet. The EF model has two categories: EF for real-time traffic and Best Effort (BE) for other traffic. EF traffic is given preferential treatment, meaning it is prioritized over BE traffic, resulting in a higher quality of service for the EF traffic.

The EF model is an effective and efficient way to manage computer network traffic. By prioritizing real-time traffic, the EF model allows these applications to function more reliably, with fewer delays and a higher quality of service. Additionally, the EF model is more efficient, reducing the amount of traffic that needs to be managed by the network.

Lab Guide: QoS and Marking Traffic

TOS ( Type of Service )

In this Lab, we’ll take a look at marking packets. Marking means we set the TOS (Type of Service) byte with an IP Precedence or DSCP value.

Marking and classification take place on R2. R1 is the source of the ICMP and HTTP traffic, and R3 has an HTTP server installed. As traffic (both Telnet and HTTP packets) is sent from R1 and traverses R2, classification takes place.

Note:

To ensure each application gets the treatment it requires, we must implement QoS (Quality of Service). The first step when implementing QoS is classification.

QoS classification

We will mark the traffic and apply a QoS policy once it has been classified. Marking and configuring QoS policies are a whole different story, so we’ll stick to classification in this lesson.

On IOS routers, there are a couple of methods we can use for classification:

  • Header inspection
  • Payload inspection

We can use some fields in our headers to classify applications. For example, telnet uses TCP port 23, and HTTP uses TCP port 80. Using header inspection, you can look for:

  • Layer 2: MAC addresses
  • Layer 3: source and destination IP addresses
  • Layer 4: source and destination port numbers and protocol
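The header fields listed above map directly onto a simple classification function. The sketch below classifies a packet by its Layer 4 header and attaches an illustrative DSCP value; the port-to-class mapping and the assumed RTP voice port range are examples only, not a recommended policy (EF = 46 is the standard Expedited Forwarding codepoint).

    # Sketch of header-based classification (ports and DSCP choices are assumptions).
    DSCP_EF = 46     # Expedited Forwarding, typically used for voice
    DSCP_AF21 = 18   # an Assured Forwarding codepoint
    DSCP_BE = 0      # best effort / default

    def classify(protocol, dst_port):
        """Classify a packet from its Layer 4 header fields and return (class, DSCP)."""
        if protocol == "udp" and 16384 <= dst_port <= 32767:   # assumed RTP voice range
            return "voice", DSCP_EF
        if protocol == "tcp" and dst_port == 23:               # Telnet
            return "interactive", DSCP_AF21
        if protocol == "tcp" and dst_port == 80:               # HTTP
            return "web", DSCP_BE
        return "best-effort", DSCP_BE

    print(classify("tcp", 23))   # ('interactive', 18)
    print(classify("tcp", 80))   # ('web', 0)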

QoS

♦Benefits of Quality of Service

A) Bandwidth Optimization:

One of the primary advantages of implementing QoS is the optimized utilization of available bandwidth. By classifying and prioritizing traffic, QoS ensures that bandwidth is allocated efficiently, preventing congestion and bottlenecks. This translates into smoother and uninterrupted network experiences, especially when multiple users or devices access the network simultaneously.

B) Enhanced User Experience:

With QoS, users can enjoy a seamless experience across various applications and services. Whether streaming high-quality video content, engaging in real-time online gaming, or participating in video conferences, QoS helps maintain low latency and minimal jitter, resulting in a smooth and immersive user experience.

♦Implementing Quality of Service

To implement QoS effectively, network administrators need to understand the specific requirements of their network and its users. This involves:

A) Traffic Classification:

Different types of network traffic require different levels of priority. Administrators can allocate resources by classifying traffic based on its nature and importance.

B) Traffic Shaping and Prioritization:

Once traffic is classified, administrators can prioritize it using various QoS mechanisms such as traffic shaping, packet queuing, and traffic policing. These techniques ensure critical applications receive the necessary resources while preventing high-bandwidth applications from monopolizing the network.

C) Monitoring and Fine-Tuning:

Regular monitoring and fine-tuning of QoS parameters are essential to maintain optimal network performance. By analyzing network traffic patterns and adjusting QoS settings accordingly, administrators can adapt to changing demands and ensure a consistently high level of service.

Computer Networking Components – Devices:

First, the devices. The media interconnecting these devices provides the channel over which data travels from source to destination. Many devices are virtualized today, meaning they no longer exist as separate hardware units.

One physical device can emulate multiple end devices. An emulated computer system has its own operating system and required software and operates as if it were a separate physical unit. Devices can be further divided into endpoints and intermediary devices.

Endpoint: 

An endpoint is a device that forms part of a computer network, such as a PC, laptop, tablet, smartphone, video game console, or television. Endpoints can be physical hardware units, such as file servers, printers, sensors, cameras, manufacturing robots, and smart home components, and nowadays we also have virtualized endpoints.

Computer Networking Components – Intermediate Devices

Layer 2 Switches:

These devices enable multiple endpoints, such as PCs, file servers, printers, sensors, cameras, and manufacturing robots, to connect to the network. Switches allow devices to communicate on the same network. A switch attempts to forward a message from the sender so that only the destination receives it, unlike a hub, which floods traffic out of all ports. The switch operates with MAC addresses and works at Layer 2 of the OSI model.
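To make the switch-versus-hub distinction concrete, here is a minimal sketch of the MAC learning logic a Layer 2 switch applies: learn the source MAC on the ingress port, then forward only to the known egress port, flooding when the destination is unknown. It is a simplified model with made-up addresses, not real switch code.

    # Simplified model of Layer 2 switch forwarding (illustrative only).
    mac_table = {}   # MAC address -> port number

    def handle_frame(src_mac, dst_mac, in_port, all_ports):
        mac_table[src_mac] = in_port                      # learn the source MAC
        if dst_mac in mac_table:                          # known destination: one port
            return [mac_table[dst_mac]]
        return [p for p in all_ports if p != in_port]     # unknown: flood like a hub

    ports = [1, 2, 3, 4]
    print(handle_frame("AA", "BB", in_port=1, all_ports=ports))   # flood -> [2, 3, 4]
    print(handle_frame("BB", "AA", in_port=2, all_ports=ports))   # learned -> [1]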

Usually, all the devices that connect to a single switch or a group of interconnected switches belong to a common network and can therefore exchange information directly with each other. If an end device wants to communicate with a device on a different network, it requires the services of a device known as a router. Routers connect different networks, work higher up in the OSI model at Layer 3, and use the IP protocol.

Routers

Routers’ primary function is to route traffic between computer networks. For example, you need a router to connect your office network to the Internet. Routers connect computer networks and intelligently select the best paths between them, holding destinations in what is known as a routing table. There are different routing protocols for different-sized networks, and each has different routing convergence times.

routing convergence
The well-known steps in routing convergence.

More recently, Layer 2 and Layer 3 functionality have been combined. We can have a Layer 3 router with a Layer 2 switch module inserted, or a multilayer switch that combines Layer 3 routing and Layer 2 switching on a single device.

Computer Networks
Diagram: Computer Networks with Switch and Routers.

Wi-Fi access points

These devices allow wireless devices to connect. They usually connect to switches but can also be integrated into routers. My WAN router has everything in one box: Wi-Fi, an Ethernet LAN, the WAN connection, and network services such as NAT. Wi-Fi access points provide wireless internet access within a specified area.

Wi-Fi access points are typically found in coffee shops, restaurants, libraries, and airports in public settings. These access points allow anyone with a Wi-Fi-enabled device to access the Internet without needing additional hardware. 

WLAN controllers: 

WLAN controllers are devices used to automate the configuration of wireless access points. They provide centralized management of wireless networks and act as a gateway between wireless and wired networks. Administrators can monitor and manage the entire WLAN, set up security policies, and configure access points through the controller. WLAN controllers also authenticate users, allowing them to access the wireless network.

In addition, the WLAN controller can also detect and protect against malicious activities such as unauthorized access, denial-of-service attacks, and interference from other wireless networks. By using the controller, administrators can also monitor the usage of the wireless network and make sure that the network is secure.

Network firewalls:

Then, we have firewalls, which are the cornerstone of security. Depending on your requirements, there will be different firewall types. Firewalls range from basic packet filtering to advanced next-generation firewalls and come in virtual and physical forms.

Generally, a firewall monitors and controls incoming and outgoing traffic according to predefined security rules. The firewall ships with a default rule set in which some interfaces are more trusted than others, broadly restricting traffic from outside to inside, but you still need to set up a policy for the firewall to be effective.

A firewall typically establishes a barrier between a trusted, secure internal network and another outside network, such as the Internet, which is assumed not to be secure or trusted. Firewalls are typically deployed in a layered approach, meaning multiple security measures are used to protect the network. Firewalls provide application, protocol, and network layer protection.

data center firewall
Diagram: The data center firewall.
  • Application layer protection:

The next layer is the application layer, designed to protect the network from malicious applications, such as viruses and malware. The application layer also includes software like firewalls to detect and block malicious traffic.

  • Protocol layer protection: 

The third layer is the protocol layer. This layer focuses on ensuring that the data traveling over a network is encrypted and that it is not allowed to be modified or corrupted in any way. This layer also includes authentication protocols that prevent unauthorized users from accessing the network.

  • Network Layer protection

Finally, the fourth layer is network layer protection. This layer focuses on controlling access to the network and ensuring that users cannot access resources or applications they are not authorized to use.

A network intrusion prevention system (IPS): 

An IPS or IDS analyzes network traffic to search for signs that a particular behavior is suspicious or malicious. If the IPS detects such behavior, it can take protective action immediately. In addition, the IPS and firewall can work together to protect a network. So, if an IPS detects suspicious behavior, it can trigger a policy or rule for the firewall to implement.

An intrusion prevention system can alert administrators of suspicious activity, such as attempts to gain unauthorized access to confidential files or data. Additionally, it can block malicious activity if necessary, providing a layer of defense against malicious actors and cyber attacks. Intrusion prevention systems are essential to any organization’s security plan.

Cisco IPS
Diagram: Traditional Intrusion Detection. With Cisco IPS.

Computer Networking Components – Media

Next, we have the media. The media connects network devices. Different media have different characteristics, and selecting the most appropriate medium depends on the circumstances, including the environment in which the media is used and the distances that need to be covered.

The media will need some connectors. A connector makes it much easier to attach wired media to network devices. A connector is a plug attached to each end of the cable; the RJ-45 connector is the most common type on an Ethernet LAN.

Ethernet: Wired LAN technology.

The term Ethernet refers to an entire family of standards. Some standards define how to send data over a particular type of cabling and at a specific speed. Other standards define protocols or rules that the Ethernet nodes must follow to be a part of an Ethernet LAN. All these Ethernet standards come from the IEEE and include 802.3 as the beginning of the standard name.

Introducing Copper and Fiber

Ethernet LANs use cables for the links between nodes on a computer network. Because many types of cables use copper wires, Ethernet LANs are often called wired LANs. Ethernet LANs also use fiber-optic cabling, which has a glass fiber core that devices use to send data as light.

Materials inside the cable: UTP and Fiber

The most fundamental cabling choice concerns the materials used inside the cable to transmit bits physically: either copper wires or glass fibers. 

  • Unshielded twisted pair (UTP) cabling: devices transmit data as electrical signals over the copper wires inside the cable.
  • Fiber-optic cabling, the more expensive alternative, allows Ethernet nodes to send light over the glass fibers in the cable’s center. 

Although more expensive, optical cables typically allow longer cabling distances between nodes. So you have UTP cabling in your LAN and Fiber-optic cabling over the WAN.

UTP and Fiber

The most common copper cabling for Ethernet is UTP. Unshielded twisted pair (UTP) is cheaper than the alternatives and easier to install and troubleshoot. Many UTP-based Ethernet standards support cable lengths of up to 100 meters, which means most Ethernet cabling in an enterprise uses UTP cables.

The distance from an Ethernet switch to every endpoint on a building’s floor is likely to be less than 100 m. In some cases, however, an engineer might prefer to use fiber cabling for some links in an Ethernet LAN to reach greater distances.

Fiber Cabling

Then we have fiber-optic cabling: a glass core that carries light pulses and is immune to electrical interference. Fiber-optic cabling is typically used as a backbone between buildings. Fiber cables are high-speed transmission media containing tiny glass or plastic filaments through which light passes.

Cabling types: Multimode and Single Mode

There are two main types of fiber-optic cable: single-mode fiber (SMF) for longer distances and multimode fiber (MMF) for shorter distances. Multimode improves the maximum distances possible over UTP and uses less expensive transmitters than single-mode. Standards vary; for instance, the criteria for 10 Gigabit Ethernet over fiber allow for distances up to 400 m, often enough to connect devices in different buildings in the same office park.

Network Services and Protocols

We need to follow these standards and the rules of the game. We also need protocols so we have the means to communicate. If you use your web browser, you use the HTTP protocol. If you send an email, you use other protocols, such as IMAP and SMTP.

A protocol establishes a set of rules that determine how data is transmitted between different devices in the network. Both ends must speak the same protocol, such as HTTP at one end and HTTP at the other.

Think of a protocol the same way you would think of speaking the same language: we need to communicate in the same language. Then we have standards we need to follow for computer networking, such as the TCP/IP suite.
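As a quick illustration of both ends speaking the same protocol, the sketch below makes a plain HTTP request with Python's standard library: the client sends an HTTP GET, and the server answers in HTTP. The URL is only an assumed, reachable test address.

    from urllib.request import urlopen

    # Client and server both speak HTTP: a GET request goes out,
    # and a status line, headers, and body come back.
    with urlopen("http://example.com/") as response:   # assumed test URL
        print(response.status)      # e.g. 200
        print(response.read(80))    # first bytes of the HTML body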

Types of protocols

We have different types of protocols. The following are the main types of protocols used in computer networking.

  • Communication Protocols

For example, we have routing protocols on our routers that help you forward traffic. This would be an example of a communication protocol that allows different devices to communicate with each other. Another example of a communication protocol would be instant messaging.

Instant messaging is instantaneous, text-based communication that you have probably used on your smartphone. There are several instant messaging network protocols; for example, Short Message Service (SMS) is a communications protocol created to send and receive text messages over cellular networks.

  • Network Management

Network management protocols define and describe the various operating procedures of a computer network. These protocols affect multiple devices on a single network—including computers, routers, and servers—to ensure that each one and the network as a whole perform optimally.

  • Security Protocols

Security protocols, also called cryptographic protocols, ensure that the network and the data sent over it are protected from unauthorized users. Security protocols are implemented everywhere, not just on your network security devices. A standard function of security network protocols is encryption: encryption protocols protect data and secure areas by requiring users to provide a secret key or password to access that information.

The following screenshot is an example of an IPsec tunnel offering end-to-end encryption. Notice that the first packet in the ping ( ICMP request ) was lost due to ARP working in the background. Five pings are sent, but only four are encapsulated/decapsulated.

Site to Site VPN

Characteristics of a network

Network Topology:

In a carefully designed network, data flows are optimized, and the network performs as intended based on the network topology. Network topology is the arrangement of a computer network’s elements (links, nodes, etc.). It can be used to illustrate a network’s physical and logical layout and how it functions. 

what is spine and leaf architecture

Bitrate or Bandwidth:

Bitrate is often referred to as bandwidth or speed in device configurations. It measures the data rate, in bits per second (bps), of a given link in the network. What matters is the number of bits transmitted per second rather than the speed at which any single bit travels over the link, which is determined by the physical properties of the medium propagating the signal. Many link bit rates are common today, including 1 and 10 gigabits per second (1 and 10 billion bits per second), and some links reach 100 or even 400 gigabits per second.
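A short worked example of what bit rate means in practice: the ideal time to move a file is simply the number of bits divided by the link's bits per second. The file size and link speeds below are arbitrary assumptions, and real transfers add protocol overhead and congestion.

    def transfer_time(size_bytes, bitrate_bps):
        """Ideal transfer time in seconds: bits to move divided by bits per second."""
        return (size_bytes * 8) / bitrate_bps

    file_size = 500 * 10**6                           # a 500 MB file (assumed)
    for rate in (1_000_000_000, 10_000_000_000):      # 1 Gbps and 10 Gbps links
        print(f"{rate / 1e9:.0f} Gbps -> {transfer_time(file_size, rate):.1f} s")
    # 1 Gbps -> 4.0 s, 10 Gbps -> 0.4 s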

Network Availability: 

Network availability is determined by several factors, including the type of network being used, the number of users, the complexity of the network, the physical environment, and the availability of network resources. Network availability should also be addressed in terms of redundancy and backup plans. Redundancy helps to ensure that the system is still operational even if one or more system components fail. Backup plans should also be in place in the event of a system failure.

A network’s availability is calculated as the percentage of time it is accessible and operational. To calculate this percentage, divide the number of minutes the network was available by the total number of minutes in the agreed measurement period, then multiply by 100. In other words, availability is the ratio of uptime to total time, expressed as a percentage. 
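Expressed as a formula, availability is uptime divided by total time, multiplied by 100. A one-function Python version, using an assumed 30-day month with 30 minutes of downtime:

    def availability_percent(uptime_minutes, total_minutes):
        """Availability = uptime / total time, expressed as a percentage."""
        return uptime_minutes / total_minutes * 100

    total = 30 * 24 * 60                                        # 43,200 minutes in a 30-day month
    print(round(availability_percent(total - 30, total), 3))    # 99.931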

Gateway Load Balancer Protocol

Network High Availability: 

High availability is a critical component of a successful IT infrastructure. It ensures that systems and services remain available and accessible to users and customers. High availability is achieved by using redundancies, such as multiple servers, systems, and networks, to ensure that if one component fails, a backup component is available.

High availability is also achieved through fault tolerance, which involves designing systems that respond to failures without losing data or becoming unavailable. Various strategies, such as clustering, virtualization, and replication, can achieve high availability.

Network Reliability:

Network reliability can be achieved by implementing a variety of measures, most often redundancy. Redundancy is a crucial factor in ensuring a reliable network: it means having multiple components in place to provide a backup in case of failure. Redundancy can include having multiple servers, routers, switches, and other hardware devices. It can also involve having multiple sources of power, such as separate power supplies or batteries, and multiple paths for data to travel through the network.

For adequate network reliability, you also need to consider network monitoring. Network monitoring uses software and hardware tools to track the network’s performance continuously. Monitoring can detect potential performance issues or failures and alert administrators. A newer term, observability, better reflects how systems are tracked in today’s environments.

Network Characteristics
Diagram: Network Characteristics

Network Scalability:

A network’s scalability indicates how easily it can accommodate more users and data transmission requirements without affecting performance. Designing and optimizing a network only for the current conditions can make it costly and challenging to meet new needs when the network grows.

Several factors must be taken into account in terms of network scalability. First and foremost, the network must be designed with the expectation that the number of devices or users will increase over time. This includes hardware and software components, as the network must support the increased traffic. Additionally, the network must be designed to be flexible so that it can easily accommodate changes in traffic or user count. 

Network Security: 

Network security is protecting the integrity and accessibility of networks and data. It involves a range of protective measures designed to prevent unauthorized access, misuse, modification, or denial of a computer network and its processing data. These measures include physical security, technical security, and administrative security. A network’s security tells you how well it protects itself against potential threats.

The subject of security is essential, and defense techniques and practices are constantly evolving. The network infrastructure and the information transmitted over it should also be protected. Whenever you take actions to affect the network, you should consider security. An excellent way to view network security is to take a zero-trust approach.

Software Defined Perimeter and Zero Trust
Software Defined Perimeter and Zero Trust

Virtualization: 

Virtualization can be done at the hardware, operating system, and application level. At the hardware level, physical hardware can be divided into multiple virtual machines, each running its operating system and applications.

At the operating system level, virtualization can run multiple operating systems on the same physical server, allowing for more efficient resource use. At the application level, multiple applications can run on the same operating system, allowing for better resource utilization and scalability. 

container based virtualization

Overall, virtualization can provide several benefits, including improved efficiency, utilization, flexibility, security, and scalability. It can consolidate and simplify hardware management or ease application movement between different environments. Virtualization can also make different environments easier to manage and provide better security by isolating applications from one another.

Computer Networking

Characteristics of a Network



  • Network Topology – The arrangement of a computer network’s elements (links, nodes, etc.).

  • Bitrate or Bandwidth – A measure of the data rate, in bits per second (bps), of a given link in the network.

  • Network Availability – Calculated as the percentage of time the network is accessible and operational.

  • High Availability – Ensures that systems and services remain available and accessible to users and customers.

  • Reliability – Achieved by implementing a variety of measures, often through redundancy.

  • Scalability – Indicates how easily the network can accommodate more users and data transmission needs without affecting performance.

  • Security – Protects the integrity and accessibility of networks and data, and tells you how well the network defends itself against potential threats.

  • Virtualization – Helps improve efficiency, utilization, and flexibility, as well as security and scalability.

Computer Networking and Network Topologies

Physical and logical topologies exist in networks. The physical topology describes the physical layout of the devices and cables. A physical topology may be the same in two networks but may differ in distances between nodes, physical connections, transmission rates, or signal types.

There are various types of physical topologies you may encounter in wired networks. Identifying the kind of cabling used is essential when describing the physical topology. Physical topology can be categorized into the following categories:

Bus Topology:

In a bus topology, every workstation is connected to a common transmission medium: a single cable called a backbone or bus. In early bus topologies, computers and other network devices were connected directly to a central coaxial cable via connectors.

Ring Topology:

In a ring topology, computers and other network devices are cabled in succession, with the last device connected to the first to form a circle or ring. Every device has exactly two neighbors and no direct connection to any other device. When one node sends data to another, the data passes through each node between them until it reaches its destination.

Star Topology:

A star topology is the most common physical topology: network devices are connected to a central device through point-to-point connections. It is also known as the hub-and-spoke topology, and a spoke device has no direct physical connection to another spoke. The design can be extended into an extended star topology, in which one or more spoke devices are replaced by a device with spokes of its own.

Mesh Topology:

One device can be connected to more than one other in a mesh topology. Multiple paths are available for one node to reach another. Redundant links enhance reliability and self-healing. In a full mesh topology, all nodes are connected. In partial mesh, some nodes do not connect to all other nodes.

Introducing Switching Technologies

Devices connect to Layer 2 switches to communicate with one another. Switches work at layer two of the Open Systems Interconnection (OSI) model, the data link layer. Switches are ready to use right out of the box. In contrast to a router, a switch doesn’t require configuration settings by default. When you unbox the switch, it does not need to be configured to perform its role, which is to provide connectivity for all devices on your network. Once the switch is powered on and the systems are connected, it will forward traffic to each connected device as needed.

Switch vs. Hubs

Moreover, you learned that switches have replaced hubs, since they provide more advanced capabilities and are better suited to today’s computer networks. This advanced functionality includes filtering traffic by sending data only to the destination port (while a hub always sends data to all ports).

Full Duplex vs. Half Duplex

With full duplex, both parties can talk and listen at the same time, making it more efficient than half-duplex communication, where only one side can talk at a time. Full-duplex transmission is also more reliable since it is less likely to experience interference or distortion. Until switches became available, devices connected to hubs could only communicate in half duplex: a half-duplex device can send and receive, but not both at the same time.

VLAN: Logical LANs

Virtual Local Area Networks (VLANs) are computer networks that divide a single physical local area network (LAN) into multiple logical networks. This partitioning allows for the segmentation of broadcast traffic, which helps to improve network performance and security.

VLANs enable administrators to set up multiple networks within a single physical LAN without needing separate cables or ports. This benefits businesses that need to separate data and applications between various teams, departments, or customers.

In a VLAN, each segment is identified by a unique identifier or VLAN ID. The VLAN ID is used to associate traffic with a particular VLAN segment. For example, if a user needs to access an application on a different VLAN, the packet must be tagged with the VLAN ID of the destination segment to be routed correctly.

In the screenshot below, we have an overlay with VXLAN. VXLAN, short for Virtual Extensible LAN, is an overlay network technology that enables the creation of virtual Layer 2 networks over an existing Layer 3 infrastructure. It addresses traditional VLANs’ limitations by extending network virtualization’s scalability and flexibility. By encapsulating Layer 2 frames within UDP packets, VXLAN allows for creating up to 16 million logical networks, overcoming the limitations imposed by the 12-bit VLAN identifier field.

VXLAN
Diagram: Changing the VNI
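The scaling difference mentioned above falls straight out of the field widths: a VLAN ID is a 12-bit value, while a VXLAN VNI is 24 bits. A two-line check in Python:

    # VLAN ID is 12 bits, the VXLAN VNI is 24 bits.
    print(2 ** 12)   # 4096 values (4094 usable VLAN IDs; 0 and 4095 are reserved)
    print(2 ** 24)   # 16,777,216 possible VXLAN segments ("up to 16 million")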

VLANs also provide security benefits. A VLAN can help prevent malicious traffic from entering a segment by segmenting traffic into logical networks. This helps prevent attackers from gaining access to the entire network. Additionally, VLANs can isolate critical or confidential data from other users on the same network. VLANs can be implemented on almost any network, including wired and wireless networks. They can also be combined with other network technologies, such as routing and firewalls, to improve security further.

Overall, VLANs are powerful tools for improving performance and security in a local area network. With the right implementation and configuration, businesses can enjoy improved performance and better protection.

Switching Technologies

Switching Technologies


  •  Switch vs. Hubs- Switches replaced hubs since they provide more advanced capabilities and are better suited to today’s computer networks.

  • Full Duplex vs. Half Duplex- In half-duplex mode, a device can send and receive data, but only one at a time. In full-duplex mode, a device can send and receive data simultaneously.

  •  VLAN: Logical LANs- VLANs are a powerful tool to help improve performance and security in a local area network.

IP Routing Process

IP routing works by examining the IP address of each packet and determining where it should be sent. Routers are responsible for this task and use routing protocols such as RIP, OSPF, EIGRP, and BGP to decide the best route for each packet. In addition, each router contains a routing table, which includes information on the best path to a given destination.

When a router receives a packet, it looks up the destination in its routing table. If the destination is known, the router makes a forwarding decision based on that routing information. If the destination is unknown, the router will use a default route (default gateway) to forward the packet.

Routing Protocol
Diagram: Routing Protocol. ISIS.

To route packets successfully, routers must be configured appropriately and able to communicate with one another. They must also be able to detect any changes to the network, such as link failures or changes in network topology.

IP routing is essential to any network, ensuring packets are routed as efficiently as possible. Therefore, it is crucial to ensure that routers are correctly configured and maintained.

IP Forwarding Example
Diagram: IP Forwarding Example.

Routing Table

A routing table is a data table stored in a router or a networked computer that lists the possible routes a packet of data can take when traversing a network. The routing table contains information about the network’s topology and decides which route a packet should take when leaving the router or computer. Therefore, the routing table must be updated to ensure data packets are routed correctly.

The routing table usually contains entries that specify which interface to use when forwarding a packet. Each entry may have network destination addresses and associated metrics, such as the route’s cost or hop count. In addition to the destination address, each entry can include a subnet mask, a gateway address, and a list of interface addresses.

Routers use the routing table to determine which interface to use when forwarding packets. When a router receives a packet, it looks at the packet’s destination address and compares it to the entries in the routing table. Once it finds a match, it forwards the packet to the corresponding interface.
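The lookup described above is a longest-prefix match: the router chooses the most specific route that contains the destination address. Here is a hedged sketch using Python's ipaddress module; the routes and next hops are invented for the example.

    import ipaddress

    # (prefix, next hop / interface) - invented example routes
    routing_table = [
        ("0.0.0.0/0",    "default via 10.0.0.1"),
        ("10.1.0.0/16",  "via 10.0.0.2"),
        ("10.1.20.0/24", "out interface eth1"),
    ]

    def lookup(destination):
        """Return the most specific (longest prefix) route that matches."""
        dest = ipaddress.ip_address(destination)
        matches = [(ipaddress.ip_network(prefix), next_hop)
                   for prefix, next_hop in routing_table
                   if dest in ipaddress.ip_network(prefix)]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(lookup("10.1.20.55"))   # out interface eth1 (the /24 wins over the /16)
    print(lookup("8.8.8.8"))      # default via 10.0.0.1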

Lab Guide: Networking and Security

Routing Tables and Netstat

Routing tables are essentially databases stored within networking devices, such as routers. These tables contain valuable information about the available paths and destinations within a network. Each entry in a routing table consists of various fields, including the destination network address, next-hop address, and interface through which the data packet should be forwarded.

One of the fundamental features of Netstat is its ability to display active connections. Using the appropriate flags, you can view the list of established connections, their local and remote IP addresses, ports, and the protocol being used. This information is invaluable for identifying suspicious or unauthorized connections.

Get started by running the route command.

Analysis: Seem familiar? Yet another table with the following column headers:

    • Destination: This refers to the destination of traffic from this device. The default refers to anything not explicitly set.

    • Gateway: The next hop for traffic headed to the specific destination.

    • Genmask: The netmask of the destination.

      Note: For more detailed explanations of all the columns and results, run man route.

Run netstat to get a stream of information relating to network socket connections and UNIX domain sockets.

Note: UNIX domain sockets are a mechanism that allows processes local to the devices to exchange data.

  1. To clean this up, you can view just the network traffic using netstat -at.

    • -a displays all sockets, including IPv4 and IPv6

    • -t displays only TCP sockets

Analysis: When routes are created in different ways, they display differently. In the most recently added route, you can see that no metric is listed and the scope differs from the other, automatically created routes. That is the kind of information we can use for detection.

The route table will send traffic to the designated gateway regardless of the route’s validity. Threat actors can use this to intercept traffic destined for another location, making it a crucial place to look for indicators of compromise.

How Routing Tables Work:

Routing tables utilize various routing protocols, such as OSPF (Open Shortest Path First) or BGP (Border Gateway Protocol), to gather information about network topology and make informed decisions about the best paths for data packets. These protocols exchange routing information between routers, ensuring that each device has an up-to-date understanding of the network’s structure.

Routing Table Entries and Metrics:

Each entry in a routing table contains specific metrics that determine the best path for forwarding packets. Metrics can include hop count, bandwidth, delay, or reliability. By evaluating these metrics, routers can select the most optimal route based on network conditions and requirements.

Summary: Computer Networking

Computer networking is the backbone of modern communication, from browsing the internet to sharing files across devices. In this blog post, we delved into the fascinating world of computer networking, exploring its key concepts, benefits, and future prospects.

Section 1: What is Computer Networking?

Computer networking refers to connecting multiple computers and devices to facilitate data sharing and communication. It involves hardware components such as routers, switches, cables, and software protocols that enable seamless data transmission.

Section 2: The Importance of Computer Networking

Computer networking has revolutionized how we work, communicate, and access information. It enables efficient collaboration, allowing individuals and organizations to share resources, communicate in real-time, and access data from anywhere in the world. Whether a small local network or a global internet connection, networking plays a pivotal role in our digital lives.

Section 3: Types of Computer Networks

There are various types of computer networks, each serving different purposes. Local Area Networks (LANs) connect devices within a limited area, such as a home, office, or school. Wide Area Networks (WANs) span larger geographical areas, connecting multiple LANs together. Additionally, there are Metropolitan Area Networks (MANs), Wireless Networks, and the vast Internet itself.

Section 4: Key Concepts in Computer Networking

To understand computer networking, you must familiarize yourself with key concepts like IP addresses, protocols (such as TCP/IP), routing, and network security. These concepts form the foundation of how data is transmitted, received, and protected within a network.

Section 5: The Future of Computer Networking

As technology advances, so does the world of computer networking. Emerging trends such as the Internet of Things (IoT), 5G networks, and cloud computing are reshaping the networking landscape. These developments promise faster speeds, increased connectivity, and enhanced security, paving the way for a more interconnected future.

Conclusion:

In conclusion, computer networking is a fascinating field that underpins our digital world. Its importance cannot be overstated, as it enables seamless communication, resource sharing, and global connectivity. Understanding the key concepts and staying updated with the latest trends in computer networking will empower individuals and organizations to make the most of this ever-evolving technology.

Diagram: Cloud Application Firewall.

Cisco CloudLock

Cisco CloudLock

In today's digital age, data security is of utmost importance. With the increasing reliance on cloud-based services, organizations need robust solutions to protect their sensitive information. Enter Cisco Cloudlock, a cutting-edge cloud security platform that offers comprehensive data protection. In this blog post, we will explore the key features and benefits of Cisco Cloudlock, and how it can help businesses secure their valuable data.

Cisco Cloudlock is a cloud-native security platform that provides visibility, control, and threat protection for cloud-based applications and services. It offers a wide range of features including data loss prevention, access controls, encryption, and advanced threat intelligence.

By integrating seamlessly with popular cloud platforms like Google Workspace, Microsoft 365, and Salesforce, Cloudlock ensures data security across multiple environments.


Highlights: Cisco CloudLock

Lack of Visibility

Cloud computing is becoming more popular due to its cost savings, scalability, and accessibility. However, there is a drawback when it comes to security posture. You no longer have as much visibility or control as you had with on-premise application access: the more the cloud provider manages for you, the more risk it assumes and the less visibility you have into the environment.

A critical security concern is that you may not know what is being done in the cloud, or when. In addition, the cloud now hosts your data, which raises questions about what information is there, who can access it, where it goes, and whether it is being stolen. Cloud platforms' security challenges are unique, and Cisco has several solutions that can help alleviate these challenges.

Examples: Cloud Security Solutions.

  1. Cisco CloudLock
  2. Cisco Umbrella 
  3. Cisco Secure Cloud Analytics
  4. Cisco Duo Security

Cisco CloudLock

Related: Before you proceed, you may find the following posts helpful for pre-information:

  1. Cisco Secure Firewall
  2. Dropped Packet Test
  3. Network Security Components
  4. Cisco Umbrella CASB
  5. CASB Tools
  6. SASE Definition
  7. Open Networking
  8. Distributed Firewalls
  9. Kubernetes Security Best Practice

Cloud Security Solutions. 

Key Cisco CloudLock Discussion Points:


  • Introduction to Cisco CloudLock and what is involved.

  • Highlighting the challenging landscape’s details and issues with moving to the cloud.

  • Technical details on approaching cloud security with the different cloud security solutions.

  • Scenario: The future of cloud security with SASE.

  • Details on Cisco CloudLock CASB.

Back to basics: Cisco CloudLock

♦ Key Features and Capabilities

Cloudlock offers a wide range of powerful features to ensure the highest level of data security. These include advanced threat protection, data loss prevention, access controls, and security analytics. Let’s delve deeper into each of these capabilities:

-Advanced Threat Protection: Cloudlock leverages advanced machine learning algorithms to detect and prevent various threats, including malware, phishing attempts, and account compromise. It continuously monitors user behavior and identifies suspicious activities to neutralize potential risks.

– Data Loss Prevention: Protecting sensitive data is crucial for any organization. Cloudlock’s data loss prevention (DLP) capabilities help you identify, classify, and protect sensitive information across your cloud applications. It enables you to define policies, enforce encryption, and prevent unauthorized sharing of critical data.

– Access Controls: With Cloudlock, you have granular control over who can access specific files, folders, or applications within your cloud environment. You can define access policies based on user roles, departments, or other criteria, ensuring only authorized personnel can view or edit sensitive data.

– Security Analytics: Cloudlock’s robust security analytics provide valuable insights into your cloud environment. It provides detailed reports on user activities, data usage patterns, and potential security gaps. This helps you identify and proactively address vulnerabilities to strengthen your overall security posture.


Cisco CloudLock Main Components 

  • Cloudlock leverages advanced machine-learning algorithms to detect and prevent various threats.

  • Cloudlock’s data loss prevention (DLP) capabilities help you identify, classify, and protect sensitive information.

  • With Cloudlock, you have granular control over who can access specific files, folders, or applications.

  • Cloudlock’s robust security analytics provide valuable insights into your cloud environment.

♦Benefits of Cisco Cloudlock

Implementing Cisco Cloudlock offers numerous benefits for organizations of all sizes:

– Enhanced Security: By leveraging Cloudlock’s advanced threat protection and data loss prevention capabilities, organizations can significantly enhance their security posture and reduce the risk of data breaches or cyber-attacks.

– Compliance and Regulatory Requirements: Cloudlock helps organizations meet various compliance and regulatory requirements by providing comprehensive visibility and control over their cloud environment. It assists in enforcing data privacy regulations and ensures adherence to industry-specific security standards.

– Improved Productivity: With Cloudlock’s robust access controls and security policies, organizations can confidently embrace cloud collaboration and empower employees to work seamlessly across cloud applications. This leads to improved productivity and collaboration while maintaining data security.

Lab Guide: Social Engineering Toolkit

Below, we have an example of a phishing attack. I'm using the Social Engineering Toolkit to perform a phishing attack with a web template. Follow the screenshots and notice we have a hit at the end.

New Security Challenges

Our approach to technology has changed as a result of cloud technology. But unfortunately, bad actors have also exploited vulnerabilities in digital infrastructure to create a new set of security challenges that we must deal with. Firstly, enforcing corporate security policies becomes more challenging since third-party hosted SaaS applications do not guarantee that users will pass through corporate security infrastructure where traditional security screening would have occurred.

The result is reduced visibility. Due to the gaps in visibility and coverage, a breach can go undetected for months. We can employ several cloud security controls to close this gap, all of which fall under the Cisco CloudLock cloud security solution.

We have user and entity behavior analytics (UEBA), data loss protection (DLP), and application firewalls, which are today’s SaaS applications’ most important security controls. Cisco offers these security services as part of the Cisco CloudLock, the Cisco CASB offering. In addition, Cisco Cloudlock provides security across multiple cloud environments.

Cloud Security Concepts.

Before we go any further, let us brush up on some critical security concepts. The principle of least privilege states that people or automated tools should be able to access only the information they need to do their jobs. When least privilege is applied in practice, access policies typically deny by default.

Users are not granted any privileges by default and must request and approve any required privileges. The concept of defense in depth acknowledges that almost any security control can fail, either because a bad actor is sufficiently determined or because the security control is implemented incorrectly.

By overlapping security controls, defense in depth prevents bad actors from gaining access to sensitive information if one fails. In addition, you should remember who will most likely cause you trouble. These are your potential “threat actors,” as cybersecurity professionals call them.

Examples: Threat actors.

  1. Organized crime or independent criminals interested in making money
  2. Hacktivists, interested primarily in discrediting you by releasing stolen data, committing acts of vandalism, or disrupting your business
  3. Inside attackers, usually interested in disrupting your business or making money.
  4. State actors who may steal secrets or disrupt your business

Nmap is a tool that bad actors can also use. Notice below that you can use stealth scans designed to slip under the radar of firewalls.
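
For comparison, the Python sketch below performs a basic TCP connect scan against a handful of common ports. Unlike Nmap's half-open SYN scan, a connect scan completes the full handshake and is therefore noisier, but it illustrates the underlying idea; the target address is a placeholder, and you should only scan systems you are authorized to test.

    import socket

    TARGET = "127.0.0.1"            # placeholder: scan only hosts you are authorized to test
    PORTS = [22, 53, 80, 443, 3389]

    for port in PORTS:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        # connect_ex returns 0 when the TCP handshake completes (port open).
        result = sock.connect_ex((TARGET, port))
        print(f"Port {port}: {'open' if result == 0 else 'closed or filtered'}")
        sock.close()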

Cloud Security Solutions

Authentication and group-based access control policies defined in the application are part of the security the SaaS environment provides. However, SaaS providers significantly differ regarding security features, functionality, and capabilities. It is far from one size fits all regarding security across the different SaaS providers.

For example, behavioral analytics, data loss prevention, and application firewalling are not among most SaaS providers’ main offerings – or capabilities. We will discuss these cloud security features in just a moment.

Organizations cannot directly deploy custom firewalls or other security mechanisms into SaaS environments because SaaS providers do not expose the infrastructure below the application layer. Some SaaS platforms allow limited control through provider-supplied tools, but not all.

Cloud Security Solutions: Data Loss Prevention (DLP)

Let us start with DLP. Data loss prevention (DLP) aims to prevent critical data from leaving your business unauthorizedly. This presents a significant challenge for security because the landscape and scope are complex, particularly when multiple cloud environments are involved.

Generally, people think of firewalls, load balancers, email security systems, and host-based antimalware solutions as protecting their internal users. However, organizations use data loss prevention (DLP) to prevent internal threats, whether deliberate or unintentional.

DLP solutions are specifically designed to address “inside-out” threats, whereas firewalls and other security solutions are not positioned to be experts in detecting those types of threats.

Data loss prevention solutions address the challenge of preventing authorized users, performing otherwise legitimate actions on approved devices, from moving data outside authorized realms. Such breaches, whether intentional or accidental, are not uncommon.

Example of Threat

Let us examine a typical threat: an intentional insider breach. A user at a financial credit services company could possess legitimate access to large volumes of credit card numbers and personally identifiable information (PII). It is also likely that the insider has access to email, so attachments can be sent that way.

Even firewalls and email security solutions can't prevent this insider from emailing an Excel spreadsheet with credit card numbers and other personal information from their corporate email account to their personal email address.

Those tools are not inspecting message content for that type of data. A DLP solution, however, is aligned precisely with this type of threat. With adequately configured data loss prevention, unacceptable data transfers can be detected, alerted on, and prevented.
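
As a highly simplified illustration of that kind of content inspection, the sketch below flags card-number-like strings in an outbound message using a regular expression; real DLP engines add validation such as the Luhn check, keyword context, and far richer policies, and the sample message here is invented.

    import re

    # Crude pattern for 16-digit card-like numbers with optional separators.
    # Real DLP engines add Luhn validation, keywords, and document context.
    CARD_PATTERN = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

    outbound_message = "Q3 numbers attached. Card on file: 4111-1111-1111-1111."  # invented sample

    matches = CARD_PATTERN.findall(outbound_message)
    if matches:
        print(f"Policy violation: {len(matches)} card-like number(s) found; blocking send.")
    else:
        print("No sensitive patterns detected; message allowed.")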

Remember that disaster recovery and data loss prevention go hand in hand. Data you cannot access is effectively lost to you until you regain access. In other words, preventing data loss is a worthwhile goal, but recovering from data loss and from disasters that prevent you from accessing your data (whether caused by malware or something more straightforward, such as a forgotten domain renewal) requires planning.

  • A key point: It boils down to a lack of visibility

In on-premises DLP systems, visibility is limited to network traffic and does not extend to cloud environments, such as SaaS-bound traffic. Additionally, given the ease with which users can distribute information in cloud environments and their highly collaborative nature, distributing sensitive information to external parties is easy for employees.

However, it is difficult for security analysts to detect this with traditional mechanisms. Cloudlock's data loss prevention technology continuously monitors cloud environments to detect and secure sensitive information. Cloudlock, for instance, can see whether files stored in an application are shared outside specific organizational groups or outside the organization entirely.

Cloud Security Solutions: Application Firewalls

Next, we have application firewalls. How does an application firewall differ from a “traditional firewall”? What is its difference from a “next-generation firewall”? First, an application firewall focuses on the application, not the user or the network. Its logic differs entirely from a non-application firewall, and it can create policies based on different objects. Establishing policies based on traditional objects, such as IP addresses and ports, is of little use in cloud environments.

Application Firewall vs. Traditional Firewall.

Many traditional approaches to protecting applications with a firewall will not work for cloud applications. Because your cloud application needs to be accessible from anywhere, it is not feasible to configure rules for “Source IP.” You might be able to geo-fence using IP blocks assigned by IANA, but what about a traveling user or someone on vacation who needs remote assistance? Source IP addresses cannot be used to write security policies for cloud applications.

Your toolkit just became ineffective when it came to Layer 3 and Layer 4 security controls. In addition, the attack could originate from anywhere in the world using IPv4 or IPv6. So, how you secure your cloud applications and data must change from a traditional firewall to an application firewall focusing directly on the application and nothing below.

The Issue of Static Firewall Rules

In addition, you face challenges when writing firewall policies based on user IDs. If your cloud application must be accessible from anywhere, by anyone, firewall rules based on directory services like LDAP or Active Directory are of limited use.

Compared with an on-premise solution, you have fewer options for filtering traffic between clients and the cloud application. In an application firewall, data is exchanged, and access is controlled to (or from) an application. Security of IP networks and Layer 4 ports are not the focus of application firewalls but rather the protection of applications and services.

A firewall at the application layer cares little about how data is received and connected to the application or how it is formatted or encrypted. And this is what a traditional firewall would focus on. Instead, an application firewall monitors data exchanges between applications and other entities. Data exchange methods rather than location are examined when determining if policy violations have occurred.

Diagram: Cloud Application Firewall.

The road to Cisco CloudLock or multiple products.

It is possible to enable security microservices such as UEBA, DLP, and the application firewall to protect your SaaS environment by deploying multiple products for each capability and then integrating them with different SaaS vendors and offerings.

This approach provides additional capabilities but at the cost of managing multiple products per environment and application. Adding other security products to the cloud environment increases security capabilities. Still, there comes the point where the additional security capabilities become unmanageable due to time, financial costs, and architectural limitations. 

Cisco can help customers cut through the complexity of multiple point products and introduce additional security services for SaaS environments under one security solution, Cisco CloudLock. It includes UEBA, an application firewall, DLP, and CASB functionality. This has been extended to secure access service edge (SASE) with Cisco Umbrella, which we will touch on at the end of the post.

Cloud Security Solutions: Cloud Access Security Broker

Cloud access security brokers (CASBs) sit between users and the cloud services they interact with, such as SaaS applications and IaaS and PaaS environments. Moreover, they help you comply with security policies and enforce them, so we can now enforce policy in settings that we do not control.

CASBs safeguard cloud data, applications, and user accounts, regardless of where the user is or how they access the cloud application. Where other security mechanisms focus on protecting the endpoint or the network, CASB solutions focus on protecting the cloud environment. They are purpose-built for the job of cloud protection.

CASB solutions negotiate access security between the user and the cloud application on its behalf. CASB solutions go beyond merely “permitting” or “denying” access. A CASB solution can enable users to access cloud applications, monitor user behavior, and protect organizations from risky cloud applications by providing visibility into user behavior.

The cloud application continues to be accessible to end users in the same way as before CASB deployment. Applications are still advertised and served by cloud application service providers in the same manner as before the implementation of CASB. Cloud applications do not change, nor does the user environment.

Additionally, due to a lack of control, more visibility is needed: many SaaS environments lack a mechanism for tracking and controlling user behavior (although most cloud providers have their own UEBA systems).


Identifying the Different CASB Categories

CASB architectures generally fall into two categories: In-line deployment or out-of-band deployment.

Reverse proxies and forward proxies are two types of in-line CASB deployments. Proxy servers provide security services to users as they connect to resources. Reverse proxies are usually located in front of the resource being accessed, while with forward proxies, users connect to remote resources through the proxy, which provides security services along the path.

CASB solutions based on in-line CASBs are susceptible to data path problems if interruptions occur in the CASB environment or the services on which the CASB solution depends. Forward Proxies have another drawback: You must know where your users are to place the proxy appropriately.

In addition, proxy-based CASB security capabilities are limited, given the nature of cloud usage. For instance, proxy-based CASBs can’t secure cloud-to-cloud traffic, and users and devices within the cloud are unmanaged. These deficiencies create potential security gaps. 

CASB Categories

It is possible to categorize out-of-band CASBs into API-based CASBs and log-based CASBs, which live outside the path between users and cloud applications. Compared to a log-based CASB, an API-based CASB exchanges API calls with the cloud application environment rather than ingesting log data. SIEMs or other reporting tools typically ingest log data, but API calls allow the CASB solution to control cloud applications directly. API-based CASBs integrate with cloud applications but remain external to their environments.

CASB solutions based on logs are limited because they can only take action once logs have been parsed by a SIEM or another tool. API-based CASBs monitor cloud usage whether users are on or off the corporate network and whether they use managed or unmanaged devices. Cloud-to-cloud traffic, which never reaches the corporate network, can also be protected using an API-based CASB.

So, Cloudlock is an API-based CASB. Therefore, it doesn’t need to be in the user traffic path to provide security, unlike proxy-based CASBs. As a result, there is no need to worry about undersizing or oversizing a proxy. Also, you don’t have to maintain proxy rulesets, cloud application traffic doesn’t have to be routed through another security layer, and traffic doesn’t have to circumvent the proxy, which is a significant value-add to cloud application security.

  • A key point: CloudLock and machine learning

To detect anomalies, Cloudlock uses advanced machine learning algorithms. It also flags actions that appear to occur across distances at impossible speeds, as well as activity outside allowlisted countries. Identifying suspicious behavior and behavioral anomalies is one of the critical features of Cisco Cloudlock.

Diagram: Cisco CASB

The Evolution of Cloud Security Service

Cisco CloudLock is now part of Cisco SASE. People are now calling this the evolution of cloud security. Cisco SASE includes a secure web gateway, firewall, CASB functionality, DNS-layer security, and interactive threat intelligence, all delivered from one cloud security service so organizations can embrace direct Internet access. The cloud security service Cisco Umbrella provides multiple security functions and integrates well with Cisco SD-WAN and Cisco Thousand Eyes.

Cisco Umbrella Features:

DNS-Layer Security

Using Umbrella’s DNS-layer security, you can improve your security quickly and easily. Its ability to stop threats over any port or protocol before they reach your network or endpoints improves security visibility, detects compromised systems, and protects your users.

For some background on DNS functionality, notice the ports used by DNS along with the different DNS record types.
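
As a quick illustration, the Python sketch below queries a few common record types for a placeholder domain, assuming the third-party dnspython package is installed; standard DNS queries travel over UDP or TCP port 53.

    import dns.resolver  # third-party package: pip install dnspython

    DOMAIN = "example.com"  # placeholder domain

    for record_type in ("A", "MX", "TXT"):
        try:
            answers = dns.resolver.resolve(DOMAIN, record_type)
            for rdata in answers:
                print(f"{DOMAIN} {record_type}: {rdata}")
        except dns.resolver.NoAnswer:
            print(f"{DOMAIN} has no {record_type} records")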

Secure Web Gateway

With Umbrella’s secure web gateway, you can view and inspect web traffic, control URLs and applications, and protect yourself against malware. To enforce acceptable use policies and block advanced threats, use IPsec tunnels, PAC files, or proxy chaining.

Firewall

With Umbrella’s firewall, all activity is logged, and unwanted traffic is blocked using IP, port, and protocol rules. An IPsec tunnel can be configured on any network device to forward traffic. Policies automatically apply to newly created tunnels to ensure consistent enforcement and easy setup.

Cloud Access Security Broker

You can discover and report on cloud applications used throughout your organization through Cisco Umbrella. To better manage cloud adoption and reduce risk, you can view details on risk levels for discovered apps and block or control usage. 

Cisco Umbrella
Diagram: Cisco Umbrella. Source is Cisco

Summary of Cisco CloudLock’s main features:

User security: Cloudlock uses advanced machine learning algorithms to detect anomalies based on multiple factors. It also identifies activities outside allowed countries and spots actions that occur at impossible speeds across distances.

App security: The Cloudlock Apps Firewall discovers and controls cloud apps connected to your corporate environment. You can see a crowd-sourced Community Trust Rating for individual apps, and you can ban or allowlist them based on risk.

Data security: Cloudlock’s data loss prevention (DLP) technology continuously monitors cloud environments to detect and secure sensitive information. It provides countless out-of-the-box policies as well as highly tunable custom policies. SaaS applications can come from many sources, both reliable and unreliable. Therefore, Data Security is a primary concern when using SaaS applications in the cloud.

 

 

Summary: Cisco CloudLock

In today’s digital age, businesses increasingly rely on cloud-based platforms to store and manage their data. However, with this convenience comes the need for robust security measures to protect sensitive information from potential threats. One such solution that stands out in the market is Cisco Cloudlock. In this blog post, we delved into the features, benefits, and implementation of Cisco Cloudlock, empowering you to safeguard your cloud environment effectively.

Section 1: Understanding Cisco Cloudlock

Cisco Cloudlock is a comprehensive cloud access security broker (CASB) solution that provides visibility, control, and security for cloud-based applications like Google Workspace, Microsoft 365, and Salesforce. By integrating seamlessly with these platforms, Cloudlock enables organizations to monitor and protect their data, ensuring compliance with industry regulations and mitigating the risk of data breaches.

Section 2: Key Features and Benefits

a) Data Loss Prevention (DLP): Cloudlock’s DLP capabilities allow businesses to define and enforce policies to prevent sensitive data from being shared or leaked outside of approved channels. With customizable policies and real-time scanning, Cloudlock ensures your critical information remains secure.

b) Threat Protection: Recognizing the evolving threat landscape, Cloudlock employs advanced threat intelligence and machine learning algorithms to detect and block malicious activities in real time. From identifying compromised accounts to detecting anomalous behavior, Cloudlock is a proactive shield against cyber threats.

c) Compliance and Governance: Maintaining regulatory compliance is a top priority for organizations across various industries. Cloudlock assists in achieving compliance by providing granular visibility into data usage, generating comprehensive audit reports, and enforcing data governance policies, thereby avoiding potential penalties and reputational damage.

Section 3: Implementing Cisco Cloudlock

Implementing Cisco Cloudlock is a straightforward process that involves a few key steps. Firstly, organizations need to integrate Cloudlock with their chosen cloud platforms—once integrated, Cloudlock scans and indexes data to gain visibility into the cloud environment. Organizations can then define policies, configure alerts, and set up automated responses based on specific security requirements. Regular monitoring and fine-tuning of policies ensure optimal protection.

Conclusion:

Cisco Cloudlock emerges as a powerful solution for safeguarding your cloud environment. With its robust features, including data loss prevention, threat protection, and compliance capabilities, Cloudlock empowers organizations to embrace the cloud securely. By implementing Cisco Cloudlock, businesses can unlock the full potential of cloud-based platforms while ensuring their valuable data’s confidentiality, integrity, and availability.


Network Connectivity

Network Connectivity

Network connectivity has become integral to our lives in today's digital age. A reliable and efficient network is crucial, from staying connected with loved ones to conducting business operations. In this blog post, we will explore the significance of network connectivity and how it has shaped our world.

Over the years, network connectivity has evolved significantly. Gone are the days of dial-up connections and limited bandwidth. Today, we have access to high-speed internet connections, enabling us to connect with people around the globe instantly. This advancement has revolutionized communication, work, learning, and entertainment.

Network connectivity is the ability of devices or systems to connect and communicate with each other. It allows data to flow seamlessly, enabling us to access information, engage in online activities, and collaborate across vast distances. Whether through wired connections like Ethernet or wireless technologies such as Wi-Fi and cellular networks, network connectivity keeps us interconnected like never before.

The Navigators of Networks: Routers are the heart of any network, directing traffic and ensuring data packets reach their intended destinations. They analyze network addresses, make decisions, and establish connections across different networks. With their advanced routing protocols, routers enable efficient and secure data transmission.

Switches - The Traffic Managers: While routers handle traffic between different networks, switches manage the flow of data within a network. They create multiple paths for data to travel, ensuring efficient data transfer between devices. Switches also enable the segmentation of networks, enhancing security and network performance.

The Lifelines of Connectivity: Behind the scenes, network cables provide the physical connections that transmit data between devices. Ethernet cables, such as Cat5e or Cat6, are commonly used for wired connections, offering high-speed and reliable data transmission. Fiber optic cables, on the other hand, provide incredibly fast data transfer over long distances.

Wireless Access Points - Unleashing the Power of Mobility: In an era of increasing wireless connectivity, wireless access points (WAPs) are vital components. WAPs enable wireless devices to connect to a network, providing flexibility and mobility. They use wireless communication protocols like Wi-Fi to transmit and receive data, allowing users to access the network without physical connections.

Highlights: Network Connectivity

Firewalls – The Gatekeepers of Networks

Firewalls act as the first line of defense against unauthorized access to a network. These security devices monitor incoming and outgoing network traffic based on predetermined rules, allowing or blocking data packets accordingly. By implementing firewalls, organizations can prevent potential threats and maintain control over their network’s security posture.

Intrusion Detection Systems (IDS) – Detecting Suspicious Activities

Intrusion Detection Systems (IDS) are designed to identify and respond to potential security breaches. They monitor network traffic, looking for signs of malicious activity or unauthorized access attempts. IDS can be host- or network-based, providing real-time alerts and helping network administrators take prompt action to mitigate potential threats.

Virtual Private Networks (VPNs) – Securing Remote Connections

The need for secure remote connections arises as the workforce becomes increasingly mobile. Virtual Private Networks (VPNs) establish encrypted tunnels over public networks, allowing remote users to access company resources securely. By encrypting data, VPNs provide confidentiality and integrity, ensuring that sensitive information remains protected during transmission.

Access Control Systems – Limiting Unauthorized Entry

Access Control Systems ensure only authorized personnel can access sensitive data and resources. This includes various authentication mechanisms such as passwords, biometrics, or smart cards. Organizations can significantly reduce the risk of unauthorized access to critical systems or information by implementing access control systems.


Network and Security Components: Rules of the game

To understand network connectivity, we will break networking down into layers. Then, we can fit the different networking and security components that make up a network into each layer. This is the starting point for understanding how networks work and carrying out the advanced stages of troubleshooting.

Networking does not just magically happen; we need to follow protocols and rules so that two endpoints can communicate and share information. These rules and protocols don’t just exist on the endpoint, such as your laptop; they also need to exist on the network and security components in the path between the two endpoints. 

OSI Model

TCP/IP Suite and OSI Model

We have networking models, such as the TCP/IP suite and the OSI model, to help you understand what rules and protocols are needed on all components. These networking models are like a blueprint for building a house: they define a structure to follow, and in networking, the protocols at each layer fill the roles the blueprint lays out.

For example, when you know the destination’s IP address, you use the Address Resolution Protocol (ARP) to find the MAC address. So, we have rules and standards to follow. By learning these rules, you can install, configure, and troubleshoot the main networking components of routers, switches, and security devices.

Related: Useful links to pre-information

  1. Network Security Components
  2. IP Forwarding
  3. Cisco Secure Firewall
  4. Distributed Firewalls
  5. Virtual Firewalls
  6. IPv6 Attacks
  7. Layer 3 Data Center
  8. SD WAN SASE

Back to Basics: What is Network Connectivity?

♦ Types of Network Connectivity

1. Wired Connectivity: Wired connections provide reliable and high-speed data transmission. Ethernet cables, fiber optics, and powerline adapters are typical examples of wired network connectivity. They offer stability and security, making them ideal for tasks requiring consistent and fast data transfer.

2. Wireless Connectivity: Wireless network connectivity has revolutionized how we connect. Wi-Fi networks have become ubiquitous, allowing us to access the internet wirelessly within a specific range. Additionally, cellular networks enable us to stay connected on the go, providing internet access even in remote areas.

Despite its numerous benefits, network connectivity can face challenges. Signal interference, network congestion, and security threats can hinder smooth connectivity. However, advancements in technology have paved the way for solutions. Mesh networks, signal boosters, and encryption protocols are tools and techniques to overcome these challenges and ensure reliable connectivity.

Network Connectivity

Network Connectivity Components

Main Connectivity Types: Wired vs Wireless

  • Wired connections provide reliable and high-speed data transmission.

  • Wireless connections utilize radio waves to transmit data between devices without needing physical cables.

  • Wireless networks also eliminate the need for physical infrastructure

  • Wired networks are less susceptible to interference and congestion, resulting in faster and more stable data transfer.

Section 1: Understanding Wired Connections

Wired connections have a long-standing history and are widely used in various settings. They involve physical cables that connect devices to a network. Ethernet cables, for instance, are commonly used to establish wired connections. These cables transmit data through electrical signals, ensuring reliable and secure connections. Wired connections are often preferred when stability and speed are crucial, such as in offices, data centers, and gaming setups.

Section 2: Pros and Cons of Wired Connections

While wired connections offer several advantages, they also come with their own set of limitations. One notable advantage is the consistent and reliable speed that wired connections provide. They are less susceptible to interference and congestion, resulting in faster and more stable data transfer. However, the downside of wired connections lies in their lack of mobility. Users are tethered to the physical connection point, limiting their freedom to move while remaining connected.

Section 3: Embracing Wireless Technology

On the other hand, wireless connections have revolutionized how we connect to networks. They utilize radio waves to transmit data between devices without needing physical cables. Wi-Fi networks have become incredibly popular, enabling users to connect multiple devices simultaneously. Wireless connections offer the convenience of mobility, allowing users to move freely within the coverage area while staying connected.

Section 4: Pros and Cons of Wireless Connections

Wireless connections have undoubtedly brought us unparalleled convenience, but they have some drawbacks. One of the main advantages is their flexibility, allowing users to connect devices without the hassle of cables. Wireless networks also eliminate the need for physical infrastructure, making them more cost-effective and more accessible to set up. However, wireless connections can be affected by interference from other devices, walls, and distance limitations, leading to potential signal drops and slower speeds.

1st Lab Guide: Networking Scanning

PowerShell and TNC

There are multiple ways to scan a network to determine live hosts and open ports. PowerShell supports variables and advanced scripting. Below, I am using TNC to test connectivity to my own Ubuntu VM and the WAN gateway.

Note:

TNC is the alias for the Test-NetConnection cmdlet. This will display a summary of the request and a timeout. If the PingSucceeded value is False, the output can indicate port filtering or that the target machine is powered off. The statuses can vary between operating systems even when the results appear to be the same.

You can scan for the presence of multiple systems on the network with the following: 1..2 | % {tnc 192.168.0.$_}

Analysis:

    • This command will attempt to scan 2 IP addresses in the range 192.168.0.1 and 192.168.0.2. The number range 1..2 can be extended, for example, 1..200, although it will take longer to complete.
    • RDP is a prevalent protocol for administrative purposes on machines within a corporate network. This will display a summary of the request. If TcpTestSucceeded equals True, the system is active and a service is listening on port 3389, which is typically used for administration and remote desktop access.

In the following example, we have a PowerShell code to create a variable called $ports by typing $ports = 22,53,80,445,3389 and pressing the Return key. This variable will store multiple standard ports found on the target system.

Then scan the machine using the new variable $ports with the command $ports | ForEach-Object {$port = $_; if (tnc 192.168.0.2 -Port $port ) {“$port is open” } else {“$port is closed”} }.

Analysis:

    • This code will scan the IP address 192.168.0.2 and test each port number within the previously created $ports variable. For each open port found, an open-port message is displayed.
    • For any port that is not open, the port is shown as closed. According to the output, several ports should be open on the machine.

Network Connectivity: Technical Details

Source and Destination

Networking, or computer networking, transports and exchanges data between nodes over a shared medium in an information system. It's about moving information from your application across and within your network. Generally speaking, the essence of network connectivity is a source and a destination that can communicate.

There are different modes of communication, such as unicast, broadcast, and multicast. But for now, consider a network and the infrastructure used within a network to support communication between a single source and destination.

The source can be the application you use on your computer, such as your web browsers that use HTTP protocol. So, there are rules that your web browser software needs to follow, and the HTTP protocol specifies these. The destination could be elsewhere, such as an application hosted in the cloud or another network from your on-premise Local Area Network (LAN). In this case, we are moving from an on-premise network to a cloud network.

Diagram: What is network connectivity?

2nd Lab Guide: IGMPv1

IGMPv1 is a network-layer protocol that enables hosts to join or leave multicast groups on an Internet Protocol (IP) network. It is primarily designed to manage multicast group membership within a local area network (LAN). Using IGMPv1, hosts can receive information from a single sender and distribute it to multiple receivers, optimizing network traffic and improving efficiency.

IGMP (Internet Group Management Protocol) version 1 is the first version hosts can use to announce to a router that they want to receive multicast traffic from a specific group. It’s a simple protocol that uses only two messages:

  • Membership report
  • Membership query

Below, we have one router and two hosts. We will enable multicast routing and IGMP on the router's Gigabit 0/1 interface. All modern operating systems support IGMP.

  1. First, we enabled multicast routing globally; this is required for the router to process IGMP traffic.
  2. We enabled PIM on the interface. PIM is used for multicast routing between routers and is also required for the router to process IGMP traffic.
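
From the host side, simply joining a multicast group is enough to trigger an IGMP membership report; the operating system sends it on the application's behalf. The Python sketch below is a minimal example of such a join, with an arbitrary example group address and port.

    import socket
    import struct

    GROUP = "239.1.1.1"  # arbitrary example multicast group
    PORT = 5000          # arbitrary example port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Joining the group causes the OS to send an IGMP membership report.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Blocks until a datagram arrives for the group.
    data, addr = sock.recvfrom(1500)
    print(f"Received {len(data)} bytes from {addr}")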


 
debug ip igmp
Diagram: Debug IP IGMP

Network Connectivity: Edges of Control

In the world of computer networking and network connectivity, there are different types of edges of control. In this case, if you are sitting in your home network, the edge of control is your home router provided by a service provider in your area, along with a firewall device positioned at each of these perimeters, marking the points between internal and external networks.

In your home network, this perimeter is static. In larger networks, however, the perimeter is more dissolved, and you would need multiple firewalls and firewall types positioned throughout the local area network, creating a defense-in-depth approach to security.

Network connectivity
Diagram: Sample network for network connectivity.

One way to create the boundary between the external and internal networks is with a firewall. The example below shows a Cisco ASA firewall configured with zones; the zones create the border. Below, Gig0/0 is the inside zone, which on the ASA typically has a security level of 100, while the outside zone has a security level of 0. By default, traffic from a higher security level can reach a lower security level, but traffic from a lower level cannot initiate connections to a higher one.

ASA security zones are virtual boundaries created within your network infrastructure to control and monitor traffic flow. These zones provide an added layer of defense, segregating different network segments based on their trust levels. Administrators can apply specific security policies and access controls by classifying traffic into zones, reducing the risk of unauthorized access or malicious activities.

Cisco ASA configuration
Diagram: Cisco ASA Configuration

3rd Lab Guide: DMVPN – Network connectivity over the WAN.

The lab guide below shows a Dynamic Multipoint VPN topology based on a GRE tunnel. R12 is the hub, and R11 is the spoke. Usually, additional spokes are located across the WAN. DMVPN provides Layer 3 network connectivity over the Wide Area Network, built as a Layer 3 overlay with GRE.

The underlay network connectivity is the SP network, and the overlay network connectivity is based on GRE. In the diagram below, we can see the tunnel configuration with icon 1. We have a tunnel source and destination with the point-to-point GRE tunnel protocol.

Note:

DMVPN operates in different phases. Point-to-point GRE on the spokes is DMVPN Phase 1, while multipoint GRE on the spokes is used in Phases 2 and 3. Icon 2 displays the routing protocol running over the GRE (overlay) tunnel, and icon 3 shows the traceroute capture. We only see one hop because the original IP header, including its TTL, is encapsulated inside the GRE tunnel.

DMVPN configuration
Diagram: DMVPN Configuration

Network Connectivity with Network Models

So, as I said, computer networks enable connected hosts—computers—to share and access resources. When you think of a network, think of an area that exists for sharing. The first purpose of network connectivity was to share printers; it has since been extended to many other devices, but sharing remains the primary use case.

You need to know how all the connections happen and all the hardware and software that enables that exchange of resources. We do this using a networking model. So, we can use network models to conceptualize the many parts of a network, relying primarily on the Open Systems Interconnection (OSI) seven-layer model to help you understand networking. 

Remember that we don’t implement the OSI; we implement the TCP/IP suite. However, the OSI is a great place to start learning, as everything is divided into individual layers. You can place the network and security components at each layer to help you understand how networks work. Let us start with the OSI model before we move to the TCP/IP suite.

Why use the OSI Model?

The open systems interconnection (OSI) model is based on splitting a communication system into seven abstract layers, each stacked upon the last. What can you use the OSI model for? Understanding OSI enables a tech to determine quickly at what layer a problem can occur. Second, the OSI model provides a common language techs use to describe specific network functions.

Understanding the functions of each OSI layer is very important when troubleshooting network components and network communication. Once you understand these functions and the troubleshooting tools available to you at the various layers of the model, troubleshooting network-related problems and understanding will be much easier.

 

Highlighting the OSI layers

  • Layer 7 Application

The application layer provides the user interface. Software applications like web browsers and email clients, to name a few, rely on the application layer to initiate communications. Application layer protocols include HTTP and SMTP (Simple Mail Transfer Protocol is one of the protocols enabling email communications).

  • Layer 6 Presentation

The presentation layer determines how data is represented to the user. This layer is primarily responsible for preparing data so the application layer can use it; in other words, layer 6 makes the data presentable for applications to consume. Encryption and compression work at this layer.

  • Layer 5 Session

This layer is responsible for opening and closing communication between the two devices. The time between open and closed communication is known as the session. 

  • Layer 4 Transport

Layer 4, the transport layer, is responsible for end-to-end communication between the two devices. These activities include taking data from the session layer and breaking it into segments before sending it to layer 3. Layer 4 is also responsible for flow control and error control. 

  • Layer 3 Network

The network layer facilitates data transfer between different networks. It is unnecessary if the two devices communicating are on the same network. 

  • Layer 2 Data Link

The data link layer is very similar to the network layer, except it facilitates data transfer between two devices on the same network. The data link layer takes packets from the network layer and breaks them into smaller pieces called frames. 

  • Layer 1 Physical

The physical layer defines physical properties for connections and communication: repeaters and hubs operate here. Wireless solutions are defined at the physical layer. 

4th Lab Guide: Data link layer and MAC addresses

The following lab guide will explore Media Access Control (MAC) addresses. The MAC address works at the data link layer of the OSI model. It may also be called the physical address since it's the identifier assigned to a Network Interface Card (NIC).

While this is typically a physical card or controller that you might plug the ethernet or fiber into, MACs are also used to identify a pseudo-physical address for logical interfaces. This example shows the MAC changes seen in virtual machines or docker containers. 

Note:

We have a Docker container running a web service and mapped port 80 on the container to 8000 on the Docker host, which is an Ubuntu VM. Also, notice the assigned MAC addresses; we will change these immediately. I’m also running a TCPDump, which will start a packet capture on Docker0.


Analysis:

    • For this challenge, we will focus on the virtual network between your local endpoint and a web application running locally inside a docker container. The docker0 interface is your endpoint’s interface for communication with docker containers. The “veth…” interfaces are the virtual interfaces for web applications.
    • Even though the MAC address is supposed to be a statically assigned identifier for a specific NIC, it is straightforward to change. In the following screenshots, we change the MAC address and bring docker0 down.

Note:

Typically, attackers will spoof a MAC to mimic a desired type of device or use randomization software to mask their endpoint.


Now that you have seen how MAC addresses work, we can look at the ARP process.

Note:

When endpoints communicate across networks, they use logical IP addresses to track where the requests come from and the intended destination. Once a packet arrives internal to an environment, networking devices must convert that IP address to the more specific “physical” location the packets are destined for. That “physical” location is the MAC address you analyzed in the last challenge. The Address Resolution Protocol (ARP) is the protocol that makes that translation.

Analysis:

Let's take this analysis step-by-step. When you send the curl request or any traffic, the first thing that must occur is determining the intended destination. We know the destination IP address, but we don't know the Layer 2 MAC address; ARP is the process of finding it.

Where did the initial ARP request come from?

    • It looks like the first packet has a destination MAC of “ff:ff:ff:ff:ff:ff.”  Since your endpoint doesn’t know the destination MAC address, the first ARP packet is broadcast. Although this works, it is a bit of a security concern.
    • A broadcast packet will be sent to every host within the local network. Unfortunately, the ARP protocol was not developed with security in mind, so in most configurations, the first host to respond to the ARP request will be the “winner.” This makes it very simple for an attacker who controls a host within an environment to spoof their own MAC, respond faster, and effectively perform a Man-In-The-Middle (MITM) attack. Notice the “Request: who has…” message above.
    • The requesting IP address must be found in the payload of the packet. This is an important distinction since most packets are returned to the requesting IP address found in the IPv4 header. This allows adversaries to use attacks such as ARP spoofing and MAC flooding since the original requester doesn’t have to be the intended destination. Notice we have a “Reply” at the end of the ARP process.

Understanding ARP:

ARP bridges the Network Layer (Layer 3) and the OSI model’s Data Link Layer (Layer 2). Its primary function is to map an IP address to a corresponding MAC address, allowing devices to exchange data efficiently.


The ARP Process:

1. ARP Request:

When a device wants to communicate with another on the same network, it sends an ARP request broadcast packet. This packet contains the target device’s IP address and the requesting device’s MAC address.

2. ARP Reply:

Upon receiving the ARP request, the device with the matching IP address sends an ARP reply containing its MAC address. This reply is unicast to the requesting device.

3. ARP Cache:

Devices store the ARP mappings in an ARP cache to optimize future communications. This cache contains IP-to-MAC address mappings, eliminating the need for ARP requests for frequently accessed devices.

4. Gratuitous ARP:

In specific scenarios, a device may send a Gratuitous ARP packet to announce its presence or update its ARP cache. This packet contains the device’s IP and MAC address, allowing other devices to update their ARP caches accordingly.
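
On a Linux host, the resulting ARP cache can be inspected at /proc/net/arp; the short Python sketch below, assuming a Linux system, prints the current IP-to-MAC mappings.

    # Print the local ARP cache (Linux exposes it at /proc/net/arp).
    with open("/proc/net/arp") as arp_file:
        next(arp_file)  # skip the header row
        for line in arp_file:
            fields = line.split()
            ip_address, mac_address, device = fields[0], fields[3], fields[5]
            print(f"{ip_address:15} -> {mac_address}  ({device})")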

5th Lab Guide: Host Enumeration

Linux Host Enumeration

In a Linux environment, it is common practice to identify the host network details. A standalone, isolated machine is rare these days; most systems are interconnected to other systems somehow. Run the following command to display IP information, saving the output to a text file instead of the popular method of displaying text on the screen.

Note:

1. Below, you can see that a lot of helpful network information is displayed. The screenshot shows the network device ens33, and the MAC address is also listed.

2. hping3 is a command-line tool that can craft and send customized network packets. It offers various options and functionalities, making it an invaluable asset for network discovery, port scanning, and firewall testing tasks.

3. One of hping3’s critical strengths lies in its advanced features. From TCP/IP stack fingerprinting to traceroute mode, hping3 goes beyond basic packet crafting and provides robust network analysis and troubleshooting techniques.

Analysis:

    • The w command shows who is logged in, what they are doing, and where they are connecting from; in the above screenshot, a user is connecting from a remote location, which highlights how interconnected we are today; the connection could be from anywhere in the world. The output also shows the user has an open bash terminal and is running the w command.
    • Use the hping command to ping your machine using sudo hping3 127.0.0.1 -c 57; the -c option sets the number of packets to send.
    • The sudo is needed as elevated privileges are required to run hping3. The IP address 127.0.0.1 is the loopback address, meaning this is your machine. We work in a secure lab environment and cannot ping systems online.
    • In the screenshot, errors will be displayed if there are any connection issues on the network. Generally, ping helps identify interconnected systems on the network. Hping is a much more advanced tool with many features beyond this challenge; it can also perform advanced techniques such as firewall testing and port scanning, helping penetration testers look for weaknesses. A potent tool!

Highlighting the TCP/IP Suite: Protocols

TCP/IP is a protocol suite—meaning multiple protocols exist to provide network connectivity. Each protocol in the suite has a specific purpose and function, and protocols work at different layers. TCP/IP is a suite of protocols, the most popular of which are Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Internet Protocol (IP), and Address Resolution Protocol (ARP).

IP performs logical addressing so your computer can be found and reached across different networks. ARP converts these logical addresses to a physical MAC address to be transmitted on the wire. We can use the ICMP protocol for troubleshooting and diagnostics, which is the status- and error-reporting protocol.  

IP is the Internet's address system and delivers packets of information from a source device to a target device. It is the primary way network connections are made and establishes the basis of the Internet. IP does not handle packet ordering or error checking; such functionality requires another protocol, often TCP.

For example, when an email is sent over TCP, a connection is established via a 3-way handshake. First, the source sends a SYN (“initial request”) packet to the target server to start the dialogue. Then, the target server replies with a SYN-ACK packet to agree to the process. Lastly, the source sends an ACK packet to the target to confirm the process, after which the message contents can be sent.
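
To see this from an application's point of view, a few lines of Python are enough; the kernel performs the SYN, SYN-ACK, and ACK exchange the moment the connection is opened. The host and port below are placeholder values, not taken from the example above:

    # Minimal sketch: the OS completes the TCP three-way handshake when the
    # connection is opened; only then can application data be sent.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        # At this point the SYN / SYN-ACK / ACK exchange has already happened.
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(sock.recv(200).decode(errors="replace"))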

  • TCP/IP: Networking Model: 4-layer model vs. 7-layer OSI

The TCP/IP model is a four-layer model similar in concept to the seven-layer OSI Reference Model. The four layers of the TCP/IP model map onto the seven layers of the OSI model; the TCP/IP model simply combines several OSI layers into one. When starting out with networking, it's good to begin with the OSI model, as none of its layers are combined.

Moving through Layers: Enabling Network Connectivity

Each OSI model layer is responsible for communicating with the layers directly above and below, receiving data from or passing it to its neighboring layers. For example, the presentation layer will receive information from the application layer, format it appropriately, which could be encryption, as we mentioned, or compression, and then pass it to the session layer. The presentation layer will never deal directly with the transport, network, data link, or physical layers. The same idea is valid for all layers regarding their communication with other layers.

OSI Layer Example: Computers communicate with a server.

Let’s look at the layers from the point of view of two computers sending data to each other. The data is called different things at each layer due to the encapsulation process, but we will call it data for now.

So we have Host A and Host B, who want to send files to each other and will therefore exchange data on the network. Or perhaps Host B runs a local web server, and Host A types the IP address of Host B into their browser. Either way, we need a source and a destination for network connectivity.

So, Host A is the sending computer, the source, and Host B is the receiving computer, the destination. The data exchange starts with Host A sending a request to Host B in the application layer. So, we have Host A initiating the request.

At the receiving end, the destination, on host B, the data moves back up through the layers to the application layer, which passes the data to the appropriate application or service on the system. Port numbers will identify the proper service.

Starting to move through the layers

Network connectivity starts at the application layer of the OSI model on the sending system, in our case Host A, and works its way down through the layers to the physical layer. The information then passes across the communication medium, physical cabling such as copper or fiber, or wireless, until it reaches the far-end system, which works back up through the layers, starting at the physical layer and ending at the application layer.

Action at one layer undone at another layer

When you think of two devices communicating, such as two computers, it is crucial to understand that whatever action is done at one layer of the sending computer is undone at the same layer on the receiving computer. For example, if the presentation layer compresses or encrypts the information on the sending computer, the data is decompressed or decrypted on the receiving computer.

Network Connectivity and Network Security

So, we have just looked at generic connectivity. However, these networking and security devices will have two main functions. First, there is the network connectivity side of things. 

So, we will have network devices that need to forward your traffic so it can reach its destination. Traffic is delivered based on IP. Keep in mind that IP is not guaranteed. Enabling reliable network connectivity is handled further up the stack. The primary version of IP used on the Internet today is Internet Protocol Version 4 (IPv4).

Due to size constraints with the total number of possible addresses in IPv4, a newer protocol was developed. The latest protocol is called IPv6. It makes many more addresses available and is increasing in adoption.

Network Security and TCPdump

Secondly, we will need to have network security devices. These devices allow traffic to pass through their interfaces if they deem it safe, and policy permits the traffic to pass through that zone in the network. The threat landscape is dynamic, and bad actors have many tools to disguise their intentions. Therefore, we have many different types of network security devices to consider.

Tcpdump is a powerful command-line packet analyzer that allows users to capture and examine network traffic in real time. It captures packets from a network interface and displays their content, offering a detailed glimpse into the intricacies of data transmission.

Getting Started with tcpdump

To utilize TCPdump effectively, it is crucial to understand its primary usage and command syntax. By employing a combination of command-line options, filters, and expressions, users can tailor their packet-capturing experience to suit their specific needs. We will explore various TCPdump commands and parameters, including filtering by source or destination IP, port numbers, or protocol types.

Analyzing Captured Packets

Once network packets are captured using TCPdump, the next step is to analyze them effectively. This section will explore techniques for examining packet headers and payload data and extracting relevant information. We will also explore how to interpret and decode different protocols, such as TCP, UDP, ICMP, and more, to better understand network traffic behavior.

6th Lab Guide: tcpdump

Capturing Traffic with tcpdump

Note:

Remember that starting tcpdump requires elevated permissions and initiates a continuous traffic capture by default, resulting in an ongoing display of network packets scrolling across your screen. To save the output of tcpdump to a file, use the following command:

sudo tcpdump -vw test.pcap

Tip: Learn tcpdump arguments

  • sudo Run tcpdump with elevated permissions

  • -v Use verbose output

  • -w Write the output to a file

tcpdump

Analysis:

    • Running TCPdump is an invaluable tool for network analysis and troubleshooting. It lets you capture and view the live traffic flowing through your network interfaces. This real-time insight can be crucial for identifying issues, understanding network behavior, and detecting security threats.

Next, to capture traffic from a specific IP address, at the terminal prompt, enter:

sudo tcpdump ip host 192.168.18.131

Tip: Learn tcpdump arguments

  • ip the protocol to capture

  • host <ip address> limit the capture to a single host’s IP address

To capture a set number of packets, type the following command:

sudo tcpdump -c20

tcpdump

Analysis:

    • Filtering tcpdump on a specific IP address streamlines the analysis by focusing only on the traffic involving that address. This targeted approach can reveal patterns, potential security threats, or performance issues related to that host.
    • Limiting the packet count in a tcpdump capture, such as 20 packets, creates a more focused and manageable dataset for analysis. This can be particularly useful in isolating incidents or behaviors without being overwhelmed by continuous information.
    • Tcpdump finds practical applications in various scenarios. Whether troubleshooting network connectivity issues, detecting network intrusions, or performing forensic analysis, tcpdump is an indispensable tool.

Components for network connectivity

In general, we have routers forwarding the traffic based on IP, and they usually work with switches that help connect all the devices. Switches work with MAC addresses and not IP addresses. Then we have the security devices such as firewalls that help with the security side of things. Generally, a firewall device will allow all traffic to leave the network, but only traffic you permit can enter the network.

♦ Starting with the Layer 3 Router

Routers, the magic boxes that act as the interconnection points, have all the built-in smarts to inspect incoming packets and forward them toward their eventual LAN destination.  Routers are, for the most part, automatic. A router is any hardware or software that forwards packets based on their destination IP address.

Routers work at the OSI model’s Network layer (Layer 3). Classically, routers are dedicated boxes with at least two connections, although many contain many more connections and offer various network connectivity options.

The router inspects each packet’s destination IP address and then sends the IP packet out to the correct port. To perform this inspection, every router has a routing table that tells the router exactly where to send the packets. This table is the key to understanding and controlling forwarding packets to their proper destination.
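
As a rough illustration of what a routing-table lookup does, the Python sketch below performs a longest-prefix match against a handful of made-up routes; the prefixes and next hops are example values only, not a real router's table:

    # Toy routing table: destination prefix -> next hop, resolved by longest-prefix match.
    import ipaddress

    routing_table = {
        ipaddress.ip_network("10.0.0.0/8"): "192.168.1.1",
        ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
        ipaddress.ip_network("0.0.0.0/0"): "192.168.1.254",  # default route
    }

    def lookup(destination):
        dest = ipaddress.ip_address(destination)
        matches = [net for net in routing_table if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)  # the most specific route wins
        return routing_table[best]

    print(lookup("10.1.2.3"))  # 192.168.1.2 - the /16 beats the /8
    print(lookup("8.8.8.8"))   # 192.168.1.254 - falls back to the default route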

♦ Starting with Switches

Then we have switches, which control traffic and send frames only to their proper destination. This reduces the number of devices receiving each frame, reducing the chance of collisions. With switches we have a star topology, but the link between each end host and its switch port is treated as point-to-point.

This allows full-duplex communication, which effectively disables the CSMA/CD process between the switch port and the attached device. The ability to transmit and receive simultaneously exists only between the switch port and the end station; in practice, full duplex roughly doubles the usable bandwidth compared with half duplex and eliminates collisions on that link. The switch port also acts as a boundary for collisions.

Building a small network: Network and Security Components

Information on Hub 

With the information you learned from the OSI, let’s look at some networking components in more detail. Networking started with hubs. A hub is an older network device you hopefully do not encounter on your networks because more effective and secure switches have replaced them. A network hub has three pitfalls: 

  • No filtering 

When a system sends data to another system, the hub receives the data and then sends it to all other ports on the hub. A switch operating at Layer 2 will understand MAC addresses to make better forwarding decisions. 

  • Collisions 

Because data was sent to all other ports and any system could transmit at any time, many network collisions resulted. A collision occurs when two transmissions overlap on the wire; both must be retransmitted, which degrades application performance.

  • Security For Hubs 

Because the data was sent to all ports on the hub, all systems receive all data. Systems look at the destination address in the frame to decide whether to process or discard the data. 

Packet Sniffer

But if someone were running a packet sniffer such as Wireshark or tcpdump on a system connected to a hub, they would receive all packets and be able to read them. Sniffers examine streams of data packets that flow between computers on a network, and between networked computers and the wider Internet. This created a huge security concern.

The solution to the hub problem was to replace network hubs with switches with better filtering capabilities and the capability to carve a switch into multiple switches using VLANs. This improves security and performance.

7th Lab Guide: Network Scanning with Python

Python and NMAP

In this lab guide, I am scanning my local network, looking for targets and potential weaknesses. Knowing where my weaknesses lie will help strengthen the overall security posture. I am scanning, and attempting to gain access to services, with Python.

Network scanning involves identifying and mapping the devices and resources within a network. It helps identify potential vulnerabilities, misconfigurations, and security loopholes. Python, a versatile scripting language, provides several modules and libraries for network scanning tasks.

Note:

  1. Python offers various libraries and modules that can be used with Nmap for network scanning. One such library is “python-nmap,” which provides a Pythonic way to interact with Nmap. By leveraging this library, we can easily automate scanning tasks, customize scan parameters, and retrieve results for further analysis.
  2. The code will import the Nmap library used to provide Nmap functionality. Then, the most basic default scan will be performed against the Target 1 virtual machine.

Steps:

  1. Using the nano editor, create a new text file called scannetwork.py by typing nano scannetwork.py. This is where the Python script will be made.
  2. With nano open, enter the following Python code to perform a basic default port scan using Nmap with Python. Add your IP address for the Target 1 virtual machine to the script.
    # Basic default Nmap scan using the python-nmap library
    import nmap
    import subprocess  # imported in the original lab steps, though not used in this snippet

    nm = nmap.PortScanner()
    print('Perform default port scan')
    nm.scan('add.ip.address.here')  # replace with the Target 1 IP address
    print(nm.scaninfo())

Note: The code will import the Nmap library to provide Nmap functionality, and then the most basic default scan will be performed against the Target 1 virtual machine.
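
If you want to go a step beyond scaninfo(), the sketch below (still assuming the python-nmap library and the same placeholder target address) walks the results and prints the state of each discovered port:

    # Possible follow-on step: list each host and the state of its scanned ports.
    import nmap

    nm = nmap.PortScanner()
    nm.scan('add.ip.address.here')  # replace with the Target 1 IP address

    for host in nm.all_hosts():
        print(f"Host {host} is {nm[host].state()}")
        for proto in nm[host].all_protocols():
            for port in sorted(nm[host][proto].keys()):
                print(f"  {proto}/{port}: {nm[host][proto][port]['state']}")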


Analysis:

    • Scan results may vary. The output shows many numbers, which are port numbers, and the scan completes quickly. With this type of full scan run without arguments, and given how quickly Python returns, the results may well include errors.
    • Remember that you need to have Nmap (and the python-nmap library) installed first.

Conclusion:

Python network scanning has numerous real-world applications. From security audits to vulnerability assessments, Python-based network scanning tools can greatly assist in identifying potential risks and strengthening overall network security. Additionally, network administrators can automate routine scanning tasks, saving time and effort.

Python and tools like Nmap empower network security professionals to conduct comprehensive and efficient network scans. By automating the scanning process and leveraging Python’s flexibility, developers can create robust solutions tailored to their specific needs. Whether for security auditing or network exploration, Python network scanning opens up a world of possibilities.

Network connectivity: Start with switches 

At Layer 2, we have switches that reduce collisions, optimize traffic, and are better from a security point of view. LAN switches are among the most common devices used on networks today. All other devices connect to the switch to gain access to the network. For example, you will connect workstations, servers, printers, and routers to a switch so that each device can send and receive data to and from other devices. The switch acts as the central network connectivity point for all devices on the network.

Layer 2 Switch: How switches work

The switch tracks every device’s MAC address (the physical address burned into the network card) and then associates that device’s MAC address with the port on the switch to which the device is connected. The switch stores this information in a MAC address table in memory on the switch. The switch then acts as a filtering device by sending data only to the port for which the data is destined.
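
A toy model makes this behavior clear. The Python sketch below mimics a switch's MAC address table: it learns the source MAC on the ingress port, forwards to the learned port when the destination is known, and floods when it is not. The MAC addresses and port numbers are invented examples:

    # Toy switch: learn source MACs per port, forward to a known port, otherwise flood.
    mac_table = {}  # MAC address -> switch port

    def receive_frame(ingress_port, src_mac, dst_mac):
        mac_table[src_mac] = ingress_port  # learning step
        egress = mac_table.get(dst_mac)
        if egress is None:
            return f"flood to all ports except {ingress_port}"
        return f"forward out port {egress}"

    print(receive_frame(1, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # unknown -> flood
    print(receive_frame(2, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # known -> port 1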

Collision Domains and Broadcast Domains 

Collision Domain: Hub – Single Collision Domain 

In a collision domain, data transmission collisions can occur. For example, suppose you are using a hub to connect ten systems to a network. Because traffic is sent to all ports on the hub, the data could collide on the network if several systems send data simultaneously. For this reason, all network ports on a hub (and any devices connected to those ports) are considered part of a single collision domain. This also means that when you cascade one hub off another, all hubs are part of the same collision domain. Connect 100 hubs and, even though they are different physical devices, it is still one collision domain.

Switches: Break down Collision Domains

If you were using a switch to connect the ten systems, each port on the switch would create its own network segment. When data is sent by a system connected to the switch, the switch sends the data only to the port on which the destination system resides. For this reason, if another system were to send data simultaneously, the data would not collide. As a result, each port on the switch creates a separate collision domain.

Controlling Broadcast Domain

A broadcast domain is a group of systems that can receive one another’s broadcast messages. When using a hub to connect five systems in a network environment, if one system sends a broadcast message, the message is received by all other systems connected to the hub. For this reason, all ports on the hub create a single broadcast domain. Likewise, if all five systems were connected to a switch and one sent a broadcast message, all other systems on the network would receive the broadcast message. 

Therefore, when using a switch, all ports are part of the same broadcast domain. If you wanted to control which systems received broadcast messages, you would have to use a router, because routers do not forward broadcast messages to other networks. You could also use virtual LANs (VLANs) on a switch, with each VLAN being a different broadcast domain.

Network Connectivity: Starting with Routers

A switch connects all systems in a LAN setup, but what if you want to send data from your network to another network or across the Internet? That is the job of a router. Routers work at Layer 3 of the OSI model. A router sends or routes data from one network to another until the data reaches its final destination. Note that although switches look at the MAC address to decide where to forward a frame, routers use the IP address to determine what network to send the data to. 

Network Connectivity with Network Routing

Network routing is selecting a path across one or more networks. Routing principles can apply to any network, from telephone to public transportation. In packet-switching networks, such as the Internet, routing selects the paths for Internet Protocol (IP) packets to travel from origin to destination. These Internet routing decisions are made by specialized network hardware called routers.  

Routing Tables and Layer 3 Connectivity

Routers refer to internal routing tables to decide how to route packets along network paths. A routing table records the paths packets should take to reach every destination the router is responsible for; in other words, it lists all the networks the router can reach. Routing tables are populated either dynamically by routing protocols or manually through static routes.

Routing tables can either be static or dynamic. Static routing tables do not change. A network administrator manually sets up static routing tables. This sets in stone the routes data packets take across the network unless the administrator manually updates the tables. Dynamic routing tables update automatically. 

Layer 3 connectivity
Diagram: Layer 3 connectivity. Source is geeksforgeeks

Dynamic routers use various routing protocols (see below) to determine the shortest and fastest paths. They also make this determination based on how long it takes packets to reach their destination. Dynamic routing requires more computing power, so smaller networks may rely on static routing. However, dynamic routing is much more efficient for medium-sized and large networks.

Understanding NAT Static

NAT static, also known as static NAT, maps an internal private IP address to a specific public IP address. Unlike dynamic NAT, which dynamically assigns public IP addresses from a pool, NAT static uses a fixed mapping configuration. This means the private IP address is consistently associated with the same public IP address, ensuring reliability.

One of the critical advantages of NAT static is enhanced security. A one-to-one mapping between private and public IP addresses creates a clear separation between internal and external networks. This adds a layer of protection against potential cyber threats. Moreover, NAT static enables organizations to host services or applications on internal servers by exposing them to the public using a dedicated public IP address.

Implementation of NAT Static

Implementing NAT static requires configuration settings on a network device, typically a router or firewall. The process involves specifying the mapped internal IP address and the corresponding public IP address. Port forwarding rules can also be set up to direct incoming traffic to specific services or applications within the internal network.

NAT static finds valuable applications in various scenarios. For instance, it is widely used in organizations that require external access to internal resources, such as web servers, FTP servers, or VPN gateways. By utilizing NAT static, these resources can be accessed securely from the internet while maintaining the privacy of the internal network.

8th Lab Guide: Static NAT in Cisco IOS

In the lab guide below, you will see three routers called Host, NAT, and Web1. There are two segments to this network: an internal and an external segment. The NAT device creates the network boundary; in our case this is a Cisco IOS router, whereas in a production network it would typically be a firewall. Imagine our host is on our LAN and the web server is somewhere on the Internet. Our NAT router in the middle is our connection to the Internet.

Note:

1. It is possible to disable “routing” on a router, which turns it into a typical host that requires a default gateway. This is very convenient because it saves you the hassle of connecting real computers or laptops to your lab.

2. Use no ip routing to disable the routing capabilities.

3. I use debug ip packet to see the IP packets that I receive. Don’t do this on a production network, or you’ll be overburdened with debug messages!

Analysis:

    • You can use the show ip nat translations command to verify the configuration. The packet from the host arrives at the web server with a source IP address of 192.168.23.2.
    • And when the web server responds, the destination IP address is 192.168.23.2. Now we know that static NAT is working.

Conclusion:

In conclusion, NAT static offers a reliable and secure method of connecting internal networks to the external world. With its fixed mapping configuration and enhanced security features, organizations can confidently expose services and applications while protecting their network infrastructure.

What is Dynamic NAT?

Dynamic NAT translates private IP addresses to public ones using a pool of public addresses, rather than the fixed one-to-one mapping of static NAT. A closely related technique, NAPT (Network Address Port Translation, often called PAT or NAT overload), goes a step further and also translates port numbers, which is what allows many private IP addresses to share a single public IP address. In both cases, the mapping is created on demand based on the availability of addresses in the NAT pool.

When a device from a private network initiates a connection to the internet, Dynamic NAT dynamically assigns a public IP address from the NAT pool to that device. This dynamic mapping is stored in a NAT translation table, which keeps track of the private IP address, the assigned public IP address, and the associated ports. As the connection terminates, the mapping is released, making the public IP address available for other devices.
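
To make the mechanism concrete, the sketch below is a toy model of a dynamic NAT table: it hands out public addresses from a pool, records the translation, and returns the address to the pool when the connection ends. The address ranges are documentation examples, not addresses from the labs below:

    # Toy dynamic NAT: allocate a public address from a pool and track the translation.
    public_pool = [f"203.0.113.{i}" for i in range(10, 21)]
    translations = {}  # private IP -> public IP

    def allocate(private_ip):
        if private_ip in translations:
            return translations[private_ip]
        if not public_pool:
            raise RuntimeError("NAT pool exhausted")
        public_ip = public_pool.pop(0)
        translations[private_ip] = public_ip
        return public_ip

    def release(private_ip):
        public_ip = translations.pop(private_ip, None)
        if public_ip:
            public_pool.append(public_ip)  # the address becomes available again

    print(allocate("192.168.123.2"))  # e.g. 203.0.113.10
    release("192.168.123.2")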

Benefits and Use Cases of Dynamic NAT

Dynamic NAT offers several advantages, making it a popular choice for organizations and network administrators. First, it allows the conservation of public IP addresses by sharing a limited pool among many devices. This scalability is particularly useful for large networks with limited public IP resources. Dynamic NAT also adds a layer of security, as private IP addresses are not exposed directly to the Internet, making it a good fit for securing internal networks.

Use cases for Dynamic NAT range from small office setups to large enterprise networks. It is commonly used in scenarios where there is a need for multiple devices to access the internet simultaneously while sharing a limited number of public IP addresses. This includes home networks, small businesses, and service providers managing large-scale networks.

Implementing Dynamic NAT

Implementing Dynamic NAT involves configuring NAT policies on networking devices such as routers or firewalls. These policies define the NAT pool, which includes the range of public IP addresses available for dynamic mapping. Additionally, access control lists (ACLs) can be used to specify which devices are eligible for Dynamic NAT. Careful planning and network design are essential to ensure smooth operation and efficient utilization of available resources.

9th Lab Guide on Dynamic NAT

It’s time to configure dynamic NAT, where we use a pool of IP addresses for translation. I’ll use a fairly simple topology with two hosts and one router performing NAT. This time, we have 2 host routers on the left side, and I’m using another subnet. The subnet 192.168.123.0/24 is the internal network, and 192.168.23.0 is the external network.

Note:

  • The ip nat pool command lets us create a pool; I have called it “MYPOOL.” For this pool, I am using IP addresses 192.168.23.10 up to 192.168.23.20. We can now select the hosts we want to translate, which is done with an access list.
  • The access list above matches network 192.168.123.0/24. That’s where host1 and host2 are located. The last step is to put the access list and pool together:
  • The command above selects access-list 1 as the source, and we will translate it to the pool called “MYPOOL.” This ensures that host1 and host2 are translated to an IP address from our pool. Remember that the unmanaged switch is used only for port connectivity and has no VLAN configuration.

Analysis:

    • And as you can see, host2 has been translated to IP address 192.168.23.11.
    • As you can see above, host1 has been translated to IP address 192.168.23.10.
    • Inside global is the translated (public) IP address that represents your inside host to the outside network.
    • Inside local is the IP address of one of your inside hosts translated with NAT.
    • Outside local is the IP address of the device you are trying to reach, in our example, the web server (Web1).
    • Outside global is also the IP address of the device you are trying to reach, such as the webserver (Web1).

Why are the outside local and outside global IP addresses the same? With NAT, it’s possible to translate more than just from “inside” to “outside.” It’s possible to create an entry on our NAT router so that whenever one of the hosts sends a ping to an IP address (say 10.10.10.10), it is forwarded to Web1. In this example, the “outside” web server is “locally” seen by our hosts as 10.10.10.10, not 192.168.23.3.

Conclusion:

Dynamic NAT serves as a versatile and efficient solution for network address translation. By allowing multiple devices to draw on a shared pool of public IP addresses, it offers scalability, security, and optimized resource utilization.

Network Connectivity: Starting with Firewalls 

A firewall is a security system that monitors and controls network traffic based on security rules. It usually sits between trusted and untrusted networks, often the Internet. For example, office networks often use a firewall to protect their networks from online threats. Firewalls control which traffic is allowed to enter a network or system and which traffic should be blocked.

When configuring a firewall, you create the rules for allowing and denying traffic based on the traffic protocol, port number, and direction. Firewalls work at Layer 3 and Layer 4 of the OSI model. We know now that Layer 3 is the Network Layer where IP works. Then we have Layer 4, the Transport Layer, where TCP and UDP work. 

Stateful Inspection Firewall

Packet filtering firewall 

A packet-filtering firewall can filter traffic based on the source and destination IP addresses, the source and destination port numbers, and the protocol used. The downfall of a simple packet-filtering firewall is that it does not understand the context of the conversation, making it easier for a bad actor to craft a packet that passes through the firewall.

Stateful packet inspection

Next, we have stateful packet inspection firewalls. Like a packet-filtering firewall, a stateful packet inspection firewall filters traffic based on source and destination IP addresses, source and destination port numbers, and the protocol in use, but it also understands the context of a conversation. Stateful firewalls rely heavily on this context when making decisions.

For example, if the firewall records outgoing packets on a connection requesting a certain kind of response, it will only allow incoming packets on that connection if they provide the requested type of response. Stateful firewalls can also protect ports by keeping them all closed unless incoming packets request access to a specific port. This can mitigate an attack known as port scanning.
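
A stripped-down model of that behavior is shown below: outbound connections are recorded, and an inbound packet is only accepted if it is the mirror image of a flow we initiated. Real stateful firewalls track far more (TCP state, timeouts, related flows), so treat this purely as a sketch with invented addresses:

    # Toy stateful inspection: allow inbound packets only for connections we initiated.
    active_connections = set()  # (src_ip, src_port, dst_ip, dst_port)

    def record_outbound(src_ip, src_port, dst_ip, dst_port):
        active_connections.add((src_ip, src_port, dst_ip, dst_port))

    def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
        # A legitimate reply has its addresses and ports mirrored.
        return (dst_ip, dst_port, src_ip, src_port) in active_connections

    record_outbound("192.168.1.10", 51000, "198.51.100.5", 443)
    print(inbound_allowed("198.51.100.5", 443, "192.168.1.10", 51000))  # True: a reply
    print(inbound_allowed("203.0.113.9", 443, "192.168.1.10", 51000))   # False: unsolicited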

Lab Guide: Traffic flow and NAT

NAT operates as a middleman between a local network and the internet. When a device within a private network wants to communicate with an external device on the internet, NAT translates the private IP address of the sending device into the public IP address assigned to the network.

This translation process allows the device to establish a connection and send data packets across the Internet. In the below example, the ASAv performs NAT as traffic flows from R1 to R2. R1 is in an internal zone, while R2 is outside.

Firewall traffic flow
Diagram: Firewall traffic flow and NAT

Next-generation firewall 

A next-generation firewall (NGFW) is a Layer 7 firewall that can inspect application data and detect malicious packets. A regular firewall filters traffic based on it being HTTP or FTP traffic (using port numbers), but it cannot determine whether there is malicious data inside the HTTP or FTP packet.

An application-layer NGFW can inspect the application data in the packet and determine whether there is questionable content inside. NGFWs are firewalls with the capabilities of traditional firewalls but also employ a host of added features to address threats on other OSI model layers. Some NGFW-specific features include: 

  1. Deep packet inspection (DPI) – NGFWs perform much more in-depth inspection of packets than traditional firewalls. This deep inspection can examine packet payloads and which application the packet accesses. This allows the firewall to enforce more granular filtering rules. 
  2. Application awareness – Enabling this feature makes the firewall aware of which applications are running and which ports those applications use. This can protect against certain types of malware that aim to terminate a running process and then take over its port. 
  3. Identity awareness lets a firewall enforce rules based on identity, such as which computer is being used, which user is logged in, etc. 
  4. Sandboxing – Firewalls can isolate pieces of code associated with incoming packets and execute them in a “sandbox” environment to ensure they are not behaving maliciously. The results of this sandbox test can then be used as criteria when deciding whether or not to let the packets enter the network.

Web Application Firewalls (WAF)

While traditional firewalls help protect private networks from malicious traffic, WAFs help protect web applications from malicious users. A WAF filters and monitors HTTP traffic between a web application and the Internet, protecting web applications from attacks such as cross-site request forgery (CSRF), cross-site scripting (XSS), file inclusion, and SQL injection.
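
As a deliberately simplified illustration of what a WAF rule does, the sketch below checks a request parameter against a few classic SQL injection and XSS patterns. Production WAFs rely on normalization, parsing, and large rule sets rather than a handful of regular expressions, so this is illustrative only:

    # Trivially simplified WAF-style check; illustrative only.
    import re

    SUSPICIOUS = [
        re.compile(r"(?i)union\s+select"),
        re.compile(r"(?i)<script\b"),
        re.compile(r"(?i)'\s*or\s*'1'\s*=\s*'1"),
    ]

    def looks_malicious(value):
        return any(pattern.search(value) for pattern in SUSPICIOUS)

    print(looks_malicious("id=42"))                        # False
    print(looks_malicious("name=' OR '1'='1"))             # True
    print(looks_malicious("q=<script>alert(1)</script>"))  # True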

web application firewall

Intrusion Prevention System 

An intrusion prevention system (IPS) is a security device that monitors activity, logs any suspicious activity, and then takes corrective action. For example, if someone is doing a port scan on the network, the IPS would discover this suspicious activity, log the action, and then disconnect the system performing the port scan from the network.

Summary: Network Connectivity

Network connectivity is crucial in our daily lives in today’s digital age. From smartphones to home devices, staying connected and communicating seamlessly is essential. In this blog post, we delved into the fascinating world of network connectivity, exploring its different types, the challenges it faces, and the future it holds.

Section 1: Understanding Network Connectivity

Network connectivity refers to the ability of devices to connect and communicate with each other, either locally or over long distances. It forms the backbone of modern communication systems, enabling data transfer, internet access, and various other services. To comprehend network connectivity better, it is essential to explore its different types.

Section 2: Wired Connectivity

As the name suggests, wired connectivity involves physical connections between devices using cables or wires. This traditional method provides a reliable and stable network connection. Ethernet, coaxial, and fiber optic cables are commonly used for wired connectivity. They offer high-speed data transfer and are often preferred when stability is crucial, such as in offices and data centers.

Section 3: Wireless Connectivity

Wireless connectivity has revolutionized the way we connect and communicate. It eliminates physical cables and allows devices to connect over the airwaves. Wi-Fi, Bluetooth, and cellular networks are well-known examples of wireless connectivity. They offer convenience, mobility, and flexibility, enabling us to stay connected on the go. However, wireless networks can face challenges such as signal interference and limited range.

Section 4: Challenges in Network Connectivity

While network connectivity has come a long way, it still faces particular challenges. One of the significant issues is network congestion, where increased data traffic leads to slower speeds and reduced performance. Security concerns also arise, with the need to protect data from unauthorized access and cyber threats. Additionally, the digital divide remains a challenge, with disparities in access to network connectivity across different regions and communities.

Section 5: The Future of Network Connectivity

As technology continues to evolve, so does network connectivity. The future holds exciting prospects, such as the widespread adoption of 5G networks, which promise faster speeds and lower latency. The Internet of Things (IoT) will also play a significant role, with interconnected devices transforming various industries. Moreover, satellite communication and mesh network advancements aim to bring connectivity to remote areas, bridging the digital divide.

Conclusion:

In conclusion, network connectivity is an integral part of our modern world. Whether wired or wireless, it enables us to stay connected, access information, and communicate effortlessly. While challenges persist, the future looks promising with advancements like 5G and IoT. As we embrace the ever-evolving world of network connectivity, we must strive for inclusivity, accessibility, and security to create a connected future for all.

Cisco Snort

Cisco Firewall with Cisco IPS

Cisco Firewall with IPS

In today's digital landscape, the need for robust network security has never been more critical. With the increasing prevalence of cyber threats, businesses must invest in reliable firewall solutions to safeguard their sensitive data and systems. One such solution that stands out is the Cisco Firewall. In this blog post, we will explore the key features, benefits, and best practices of Cisco Firewall to help you harness its full potential in protecting your network.

Cisco Firewall is an advanced network security device designed to monitor and control incoming and outgoing traffic based on predetermined security rules. It is a barrier between your internal network and external threats, preventing unauthorized access and potential attacks. With its stateful packet inspection capabilities, the Cisco Firewall analyzes traffic at the network, transport, and application layers, providing comprehensive protection against various threats.

Cisco Firewall with IPS functions offers a plethora of features designed to fortify network security. These include:

1. Signature-based detection: Cisco's extensive signature database enables the identification of known threats, allowing for proactive defense.

2. Anomaly-based detection: By monitoring network behavior, Cisco Firewall with IPS functions can detect anomalies and flag potential security breaches.

3. Real-time threat intelligence: Integration with Cisco's threat intelligence ecosystem provides up-to-date information and protection against emerging threats.

The combination of Cisco Firewall with IPS functions offers several enhanced security measures, such as:

1. Intrusion Prevention: Proactively identifies and blocks intrusion attempts, preventing potential network breaches.

2. Application Awareness: Deep packet inspection allows for granular control over application-level traffic, ensuring secure usage of critical applications.

3. Virtual Private Network (VPN) Protection: Cisco Firewall with IPS functions offers robust VPN capabilities, securing remote connections and data transmission.

Highlights: Cisco Firewall with IPS

Introducing Cisco Firewall

Cisco Firewall is renowned for its industry-leading performance and comprehensive security features. This section will examine Cisco Firewall’s key features, such as stateful packet inspection, application control, VPN support, and intrusion prevention system (IPS) integration.

Unleashing the Power of Cisco IPS

As mentioned earlier, Cisco IPS integration is a notable feature that sets Cisco Firewall apart from its counterparts. This section will focus on Cisco IPS, its purpose, and how it seamlessly integrates with Cisco Firewall to provide enhanced threat detection and prevention capabilities.

Deploying Cisco Firewall in Your Network

Implementing a Cisco Firewall requires careful planning and configuration. This section will discuss best practices for deploying Cisco Firewall, including network topology considerations, rule management, and the importance of regular updates and patches.

To showcase the effectiveness of the Cisco Firewall in real-world scenarios, we will highlight success stories from organizations that have implemented it and experienced significant improvements in their network security posture. These case studies will inspire and demonstrate the tangible benefits of deploying the Cisco Firewall.

Cisco Firewall and Zero Trust

Cisco Firewall offers a robust set of features and capabilities that align with the principles of Zero Trust. These include advanced threat detection and prevention mechanisms, granular access control policies, identity-based access management, and seamless integration with other security tools.

firewalling device

Implementing Cisco Firewall in a Zero Trust Environment

Deploying a Cisco Firewall within a Zero Trust framework involves careful planning and configuration. Organizations must define their security policies, segment their network resources, and establish strict access controls based on user roles and least privilege principles.

Related: Before you proceed, you may find the following posts helpful for pre-information:

  1. Cisco Secure Firewall
  2. WAN Design Considerations
  3. Routing Convergence
  4. Distributed Firewalls
  5. IDS IPS Azure
  6. Stateful Inspection Firewall
  7. Cisco Umbrella CASB

Cisco IPS

Key Cisco Firewall Discussion Points:


  • Introduction to the Cisco Firewall and what is involved in the solution.

  • Highlighting the details of the challenging threat landscape along with recent trends.

  • Technical details on how to approach implementing a Cisco IPS based on Snort.

  • Scenario: Different types of network security vantage points. Cisco Secure Endpoint and Cisco Secure Malware.

  • Details on the different Snort releases and the issues with Snort 2.

  • Technical details on Cisco Snort 3.

Back to basics: Cisco Firewall and Cisco IPS

Key Features and Benefits

1. Robust Threat Defense: Cisco Firewall employs various security measures, including intrusion prevention system (IPS), VPN support, URL filtering, and advanced malware protection. This multi-layered approach ensures comprehensive threat defense, effectively detecting and mitigating known and emerging threats.

2. Scalability and Performance: Cisco Firewall solutions are built to cater to the needs of organizations of all sizes. From small businesses to large enterprises, Cisco offers various firewall models with varying performance levels, ensuring scalability and optimal network performance without compromising security.

3. Simplified Management: Cisco Firewall solutions have intuitive management interfaces, allowing network administrators to configure and monitor firewall policies easily. Advanced features like centralized management platforms and automation capabilities further streamline security operations, saving time and effort.

Cisco Firewall

Cisco Firewall Main Components

Cisco Firewall Features and Benefits 

  • Cisco Firewall employs various security measures.

  • Cisco Firewall solutions are built to cater to the needs of organizations of all sizes

  • Cisco Firewall solutions have intuitive management interfaces

  • Establish a robust security policy that aligns with your organization’s requirements

  • Keep your Cisco Firewall up to date by regularly installing firmware updates and security patches

  • Implement strict access control measures to restrict network access only to authorized personnel.

Best Practices for Deploying Cisco Firewall

1. Comprehensive Security Policy: Establish a robust security policy that aligns with your organization’s requirements. Define and enforce rules for traffic filtering, application control, user access, and more.

2. Regular Firmware Updates: Keep your Cisco Firewall up to date by regularly installing firmware updates and security patches. This ensures your firewall has the latest threat intelligence and vulnerability fixes.

3. Access Control: Implement strict access control measures to restrict network access only to authorized personnel. For enhanced security, utilize user-based access control lists (ACLs) and two-factor authentication.

Integration with Cisco IPS

Cisco Firewall can be seamlessly integrated with Cisco Intrusion Prevention System (IPS) to enhance network security. While the firewall acts as the first line of defense, IPS adds a layer of protection by actively monitoring network traffic for suspicious activities and automatically taking action to prevent potential threats.

attack vectors

The Security Landscape: Key Points

Range of Attack Vectors

We are constantly under pressure to ensure mission-critical systems are thoroughly safe from bad actors that will try to penetrate your network and attack critical services with a range of attack vectors. So, we must create a reliable way to detect and prevent intruders. Adopting a threat-centric network security approach with the Cisco intrusion prevention system is viable. The Cisco IPS is an engine based on Cisco Snort that is an integral part of the Cisco Firewall, specifically, the Cisco Secure Firewall.

 

The Role of the Firewall

Firewalls have been around for decades and come in various sizes and flavors. The most typical idea of a firewall is a dedicated system or appliance that sits in the network and segments an “internal” network from the “external” Internet. The traditional Layer 3 firewall has baseline capabilities that generally revolve around the inside being good and the outside being bad. However, we must move from just meeting our internal requirements to meeting the dynamic threat landscape in which the bad actors are evolving.  There are various firewall security zones, each serving a specific purpose and catering to different security requirements. Let’s explore some common types:

1. DMZ (Demilitarized Zone):

The DMZ is a neutral zone between the internal and untrusted external networks, usually the Internet. It acts as a buffer zone, hosting public-facing services such as web servers, email servers, or FTP servers. By placing these services in the DMZ, organizations can mitigate the risk of exposing their internal network to potential threats.

2. Internal Zone:

The internal zone is the trusted network segment where critical resources, such as workstations, servers, and databases, reside. This zone is typically protected with strict access controls and security measures to safeguard sensitive data and prevent unauthorized access.

3. External Zone:

The external zone represents the untrusted network, which is usually the Internet. It serves as the gateway through which traffic from the external network is filtered and monitored before reaching the internal network. By maintaining a secure boundary between the internal and external zones, organizations can defend against external threats and potential attacks.

Firewall traffic flow

Numerous Attack Vectors

We have Malware, social engineering, supply chain attacks, advanced persistent threats, denial of service, and various man-in-the-middle attacks. And nothing inside the network should be considered safe. So, we must look beyond Layer 3 and incorporate multiple security technologies into firewalling.

We have the standard firewall that can prevent some of these attacks, but we need to add additional capabilities to its baseline. Hence, we have a better chance of detection and prevention. Some of these technologies that we layer on are provided by Cisco Snort, which enables the Cisco intrusion prevention system ( Cisco IPS ) included in the Cisco Firewall solution that we will discuss in this post.

Cisco Umbrella Firewall
Diagram: Cisco CASB

Intrusion Detection.

An intrusion detection system (IDS) can assist in detecting intrusions and intrusion attempts within your network, allowing you to take suitable mitigation and remediation steps. However, remember that a pure IDS will not prevent these attacks; instead, it will let you know when they occur.

So an IDS solves only half of the puzzle. An IDS will parse and interpret network traffic and host activities. This data can vary from network packet analysis to the contents of log files from routers, firewalls, and servers, local system logs, access calls, and network flow data, to name a few.

Attack Signatures

Likewise, an IDS often stores a database of known attack signatures and can compare patterns of activity, traffic, or behavior it sees in the data it’s monitoring against those signatures to recognize when a close match between a signature and current or recent behavior occurs.

It is possible to distinguish IDSes by the types of activities, traffic, transactions, or systems they monitor. For example, IDSes that monitor network links and backbones looking for attack signatures are called network-based IDSes. In contrast, those that operate on hosts and defend and monitor the operating and file systems for signs of intrusion are called host-based IDSes.

Cisco IPS
Diagram: Traditional Intrusion Detection. With Cisco IPS.

Cisco Firewall

The Cisco Firewall is a next-generation firewall that brings several compelling threat detection and prevention technologies to the security professional’s toolbox. The Cisco Firewall solution is more than just Firewall Threat Defense (FTD); several components make up the security solution. Firstly, we have the Firewall Management Center (FMC), which provides the GUI and configures the policy and operational activities for the FTD. We also include several services.

Cisco Secure Endpoint

We have two key pieces around malware. First, the Cisco Secure Endpoint cloud is a database of known bad and good files and maintains a file hash for all those entries. So, as files pass through the firewall, they can decide on known files. These hashes can be calculated at the line rate, and the Cisco firewall can do quick lookups. This allows you to hold the last packet of the file and determine whether it is good, bad, or unknown.

Cisco Secure Malware Analytics

So, we can make a policy by checking the hash if you like. However, you can extract the file if you have not seen it before, and it can be submitted to Cisco Secure Malware Analytics. This is a sandbox technology. The potentially bad file is placed in a VM-type world, and we can get a report with a score sent back. So this is a detection phase and not prevention, as it can take around 15 mins to get the score sent back to us.

These results can then be fed back into the Cisco Secure Endpoint cloud. Now everyone, including other organizations signed up to the Cisco Secure Endpoint cloud, can block a file that was seen in just one place. No file data is shared; it’s just the hash. We also have Talos intelligence, Cisco’s threat research organization and the secret sauce behind these feeds, with over 250 highly skilled researchers. It provides intelligence such as Indicators of Compromise (IoCs), bad domains, and signatures looking for exploits, and it feeds all the security products.

Cisco Firewall
Diagram: Components of the Cisco Firewall solution.

Cisco IPS

We need several network security technologies that can work together. First, we need a Cisco IPS that provides protocol-aware deep packet inspection and detection, which Cisco Snort can provide and which we will discuss soon. You also need lists of bad IPs, domains, and file hashes so you can tune your policy based on them. For example, for networks that are a source of spam you will want a different response than for networks known to host bad actors’ command-and-control (C&C) infrastructure.

Also, for URL filtering. When you think about URL filtering, we think about content filtering in the sense that users should not access specific sites from work. However, the URL is valuable from a security and threat perspective. Often, transport is only over HTTP, DNS is constantly changing, and the bad actors rely only on a URL to connect to, for example, a C&C. So this is a threat intelligence area that can’t be overlooked.

We also need to look at file hashing and run engines on the firewall that can identify malware without sending it to the cloud for checking. Finally, you also need real-time network awareness and indicators of compromise. The Cisco Firewall can watch all traffic; you tell it which networks the firewall protects and who the top talkers are, so it can potentially notice abnormal behavior.

Cisco Snort

This is where Cisco Snort comes into play. Snort can carry out more or less all of the above with its pluggable architecture, more specifically Snort 3. Cisco now develops and maintains Snort, known as Cisco Snort. Snort is an open-source network intrusion prevention system. In its most straightforward terms, Snort monitors network traffic, examining each packet closely to detect a harmful payload or suspicious anomalies.

As an open-source prevention system, Cisco Snort can perform real-time traffic analysis and packet logging. So, the same engine runs in commercial products as in open-source development. The open-source core engine has over 5 million downloads and 500,000 registered users. Snort is a leader in its field. Before the Cisco IPS team got their hands on it, Snort was released in 1998, and the program was meant to be a packet logger. You can still download the first version. It has come a long way since then. So Snort is so much more than a Cisco IPS.

In reality, Snort is a flexible, high-performance packet-processing engine. The latest version, Snort 3, is pluggable, so you can add modules to make it adaptable to cover different security aspects. The evolution from Snort 2 to Snort 3 took two years. With release 7, Cisco Secure Firewall Threat Defense introduced Snort 3 on FMC-managed devices. Now we can have Snort 3 with the Cisco Firewall, along with rule groups and rule recommendations. Combined, these will help you use the Cisco firewall better and improve your security posture.

Snort 2

So we start with Snort 2, even though Snort 3 has been out for a few years. Snort 2 has four primary, or essential, components:

  1. It starts with the decoder. This is where some minor decoding is performed once the packets are pulled off the wire. This is what you might see with tcpdump.
  2. Then, we have the pre-processors, the secret sauce of Snort 2. These are responsible for normalization and reassembly. Their primary role is to present data to the next component, the detection engine.
  3. The detection engine is where the Snort rules live, and this is where the rules are processed against the observed traffic.
  4. The log module. Based on the rules applied to the traffic, if something is found, the log module enables you to create a unified alert.
  • A key point: Snort Rule tree

When Snort loads a rule set, it doesn’t start at the top and run every packet through every rule; it breaks the rules up into what are known as rule trees based on, for example, source port or destination port. So, when a packet is evaluated, it only passes through a few rules. Cisco Snort, which provides the Cisco IPS for the Cisco Firewall, is efficient because it only runs packets through the rules that might apply to them.
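
A crude way to picture this is a lookup table keyed by destination port, so only the bucket that could possibly match a packet is ever evaluated. The rules below are invented placeholders, not real Snort signatures:

    # Toy illustration of rule grouping: index rules by destination port.
    rules_by_dst_port = {
        80:  ["suspicious HTTP user-agent", "directory traversal in URI"],
        53:  ["oversized DNS TXT response"],
        445: ["SMB exploit attempt"],
    }

    def rules_for_packet(dst_port):
        # Only the bucket matching the packet's destination port is evaluated.
        return rules_by_dst_port.get(dst_port, [])

    print(rules_for_packet(80))  # two HTTP-related rules to check
    print(rules_for_packet(22))  # no matching bucket, so nothing to evaluate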

  • A key point: Knowledge check for Packet Sniffing

Capturing network traffic is often a task during a penetration testing engagement or while participating in a bug bounty. One of the most popular packet capture tools (sniffer) is Wireshark. If you are familiar with Linux, you know about another lightweight but powerful packet-capturing tool called tcpdump. The packet sniffing process involves a cooperative effort between software and hardware. This process can be broken down into three steps:

Collection: The packet sniffer collects raw binary data from the wire. Generally, this is accomplished by switching the selected network interface into promiscuous mode. In this mode, the network card listens to all traffic on a network segment, not only the traffic addressed to it.

Conversion: The captured binary data is converted into readable form. This is as far as most command-line packet sniffers go; at this point, the network data can only be interpreted at a basic level, leaving most of the analysis to the end user.

Analysis: Finally, the packet sniffer analyzes the captured and converted data. The sniffer verifies the protocol of the captured network data based on the information extracted and begins its analysis of that protocol’s distinguishing features.
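
The collection and conversion steps can be sketched in a few lines of Python. This assumes a Linux host (AF_PACKET sockets) and root privileges, and it only decodes the Ethernet header, leaving the deeper protocol analysis to tools such as Wireshark or tcpdump:

    # Minimal sniffer sketch: grab one frame off the wire and decode its Ethernet header.
    import socket
    import struct

    ETH_P_ALL = 0x0003
    sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

    frame, _ = sniffer.recvfrom(65535)  # collection: raw binary data from the wire
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])  # conversion: readable fields

    def fmt(mac):
        return ":".join(f"{b:02x}" for b in mac)

    print(f"{fmt(src)} -> {fmt(dst)}, EtherType 0x{ethertype:04x}")
    sniffer.close()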

tcpdump

Snort 3

Then, we have a new edition of the Cisco IPS. Snort 3.0 is an updated design and a superset of Snort 2. Snort 3 includes additional functionality that improves efficacy, performance, scalability, usability, and extensibility. In addition, Snort 3 aims to address some of the limitations of Snort 2. For example, Snort 2 is packet-based; it inspects traffic packet by packet, so statefulness, awareness of fragments, and the fact that an HTTP GET’s boundaries are not packet boundaries and can spread over multiple packets all have to be built in on top.

HTTP Protocol Analyzer

Snort 3 has a good HTTP protocol analyzer that can detect HTTP running over any port. Many IPS providers only look at ports 80, 8080, and 443; HTTP on any other port is treated as plain TCP. The Cisco IPS, based on Cisco Snort, can detect HTTP over any port. Once it knows the traffic is HTTP, Snort can set up pointers into the different parts of the message, so when you get to the IPS rules looking for patterns, you don’t need to repeat the lookup and calculation, which is essential when you are running at line rate.

  • A key point: Snort is pluggable

Also, within the Cisco firewall, Cisco Snort is pluggable and does much more than protocol analysis. It can perform additional security functions such as network discovery, a type of passive detection, along with advanced malware protection and application identification, identifying applications not by ports and protocols but by deep packet inspection. Now you can have a policy based on the application. An identity engine can also map users to IP addresses, allowing identity-based firewalling. So, Cisco Snort does much of the heavy lifting for the Cisco Firewall.

Cisco Snort
Diagram: Cisco Snort typical deployment.

Snort 2 architecture: The issues

Snort 3 has a modern architecture for handling all of the Snort 2 packet-based evasions. It also supports HTTP/2, whereas Snort 2 only supports HTTP/1. The process architecture is the most meaningful difference between Snort 2 and Snort 3. To go faster in Snort 2, you run more Snort processes on the box. Depending on the product, a connection arrives and is hashed based on a 5-tuple or a 6-tuple; I believe the 5-tuple is used for the open-source product and the 6-tuple for commercial products.

Connections with the same hash go to the same CPU. To improve Snort 2 performance on a box with a single Snort process, you add another Snort process on another CPU core and get roughly double the performance with no extra overhead. Snort 2 works with multiple Snort processes, each affiliated with an individual CPU core, and within each Snort process there is a separate thread for management and data handling.

But we are loading Snort over and over again. We get linear scalability, which is good, but duplicated memory structures, which is bad. Every time we load Cisco Snort, we load the rules again, and each process runs in its own isolated world.
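To illustrate why connections with the same hash always land on the same Snort process (or, in Snort 3, the same packet thread), here is a small Python sketch of 5-tuple flow hashing. The hash function and worker count are illustrative only, not what Snort actually uses.

```python
# Sketch of 5-tuple flow hashing: every packet of a connection carries the same
# 5-tuple, so it always hashes to the same worker (process or packet thread).
import hashlib

NUM_WORKERS = 4  # e.g. one Snort 2 process (or Snort 3 packet thread) per core

def pick_worker(src_ip, src_port, dst_ip, dst_port, proto):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_WORKERS

# A real implementation would normalize the tuple so both directions of a flow
# map to the same worker; only the forward direction is shown here.
print(pick_worker("10.0.0.5", 51000, "192.0.2.10", 443, "tcp"))
```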

Snort 3 architecture: Resolving the issues

On the other hand, Snort 3 is multi-threaded, unlike Snort 2. This means we have one control thread and multiple packet threads. A packet arrives and is distributed using the same 5-tuple or 6-tuple connection hashing. Snort 3 runs as a single process, with each packet thread affiliated with an individual CPU core, backed by one control thread that handles configuration and data for all packet-processing threads. Connections are still pinned to a core, but now by packet threads: each packet thread runs on its own CPU core while sharing the control thread, and with it the rules.

The new Snort 3 architecture eliminates the need for a control thread per process and facilitates configuration/data sharing among all threads. As a result, less overhead is required to orchestrate the collaboration among packet-processing threads. We get better memory utilization, and reloads are much faster.

Snort 3 inspectors

Snort 3 now has inspectors. In Snort 2, we had pre-processors; for example, we now have an HTTP inspector instead of an HTTP pre-processor. Packets are also processed differently in Snort 3 than in Snort 2. In Snort 2, the packet moves linearly through specific steps, handled by a preprocessing stage.

The packet has to go through, and every field of the packet is decoded. If it is HTTP, the pre-processor looks at the GET, the body, and the header, for example. All of this is decoded in case a rule needs that data. In the case of RPC, there are many fields in an RPC packet, so it could decode fields that no rule ever needs. That decoding time is wasted.

Parallel resource utilization

On the other hand, Snort 3 uses what is known as parallel resource utilization. We have plugins and a publish-and-subscribe model in the packet inspection process. So, when it looks at a packet, there are things it can decode. When the packet gets to a rule, the rule might say it needs only the body and no other fields; then only the body is decoded. This is referred to as just in time instead of just in case: you don't waste time decoding fields in the packet that no rule needs.
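The "just in time, not just in case" idea can be sketched in a few lines of Python using lazily computed fields. This is conceptual only and is not Snort 3's actual implementation.

```python
# Conceptual "just in time" decoding: fields are parsed only when a rule asks
# for them, instead of decoding every field of every packet "just in case".
from functools import cached_property

class HttpMessage:
    def __init__(self, raw: bytes):
        self.raw = raw  # keep the raw bytes; decode nothing up front

    @cached_property
    def header(self) -> bytes:
        print("decoding header...")
        return self.raw.split(b"\r\n\r\n", 1)[0]

    @cached_property
    def body(self) -> bytes:
        print("decoding body...")
        return self.raw.split(b"\r\n\r\n", 1)[1]

msg = HttpMessage(b"POST /login HTTP/1.1\r\nHost: example.com\r\n\r\nuser=admin")
# A rule that only matches on the body never pays for decoding the header field.
assert b"admin" in msg.body
```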

Rule Group Security Levels

With Snort 2, you have only a few rule-set options. For example, you can pick a policy with no rules active, which is not recommended. There is also a connectivity-based rule set (connectivity over security). We also have balanced security and connectivity, and then security over connectivity. With Snort 3, you get more than just these policy sets. We have rule groups that we can use to set the security levels individually. So, the new feature is Rule Groups, making it easier to adjust your policy.

With rule groups, we can assign security levels to each sub-group. So you can adjust based on your usage, such as a more aggressive rule set for Chrome or not for Internet Explorer. So, the security level can be set on a per-group basis. However, Snort 2 offers this only in the base policy. 

  • Level 1 – Connectivity over Security 
  • Level 2 – Balanced Security and Connectivity 
  • Level 3 – Security over connectivity 
  • Level 4 – Maximum Detection

Now, there is no need to set individual rule states. We have levels that equate to policy. With Snort 2, you would have to change the entire base policy, but with Snort 3, we can change just the groups related to the rule set. What I like about this is the trade-off: you can, for example, keep rules for a browser that is not common on your network but still exists.

Summary: Cisco Firewall and IPS

In today’s rapidly evolving digital landscape, cybersecurity is of paramount importance. With increasing cyber threats, organizations must employ robust security measures to safeguard their networks and sensitive data. One such solution that has gained immense popularity is the Cisco Firewall and IPS (Intrusion Prevention System). This blog post dives deep into Cisco Firewall and IPS, exploring their capabilities, benefits, and how they work together to fortify your network defenses.

Section 1: Understanding Cisco Firewall

Cisco Firewall is a formidable defense mechanism that acts as a barrier between your internal network and external threats. It carefully inspects incoming and outgoing network traffic, enforcing security policies to prevent unauthorized access and potential attacks. By leveraging advanced technologies such as stateful packet inspection, network address translation, and application-level filtering, Cisco Firewall provides granular control over network traffic, allowing only legitimate and trusted communication.

Section 2: Exploring Cisco IPS

On the other hand, Cisco IPS takes network security to the next level by actively monitoring network traffic for potential threats and malicious activities. It uses a combination of signature-based detection, anomaly detection, and behavior analysis to identify and mitigate various types of attacks, including malware, DDoS attacks, and unauthorized access attempts. Cisco IPS works in real-time, providing instant alerts and automated responses to ensure a proactive defense strategy.

Section 3: The Power of Integration

While Cisco Firewall and IPS are powerful, their true potential is unleashed when they work together synchronously. Integration between the two enables seamless communication and sharing of threat intelligence. When an IPS identifies a threat, it can communicate this information to the Firewall, immediately blocking the malicious traffic at the network perimeter. This collaborative approach enhances the overall security posture of the network, reducing response time and minimizing the impact of potential attacks.

Section 4: Benefits of Cisco Firewall and IPS

The combined deployment of Cisco Firewall and IPS offers numerous benefits to organizations. Firstly, it provides comprehensive visibility into network traffic, allowing security teams to identify and respond to threats effectively. Secondly, it offers advanced threat detection and prevention capabilities, reducing the risk of successful attacks. Thirdly, integrating Firewall and IPS streamlines security operations, enabling a proactive and efficient response to potential threats. Lastly, Cisco’s continuous research and updates ensure that Firewalls and IPS remain up-to-date with the latest vulnerabilities and attack vectors, maximizing network security.

Conclusion:

In conclusion, the Cisco Firewall and IPS duo are formidable forces in network security. By combining the robust defenses of a Firewall with the proactive threat detection of an IPS, organizations can fortify their networks against a wide range of cyber threats. With enhanced visibility, advanced threat prevention, and seamless integration, Cisco Firewall and IPS empower organizations to stay one step ahead in the ever-evolving cybersecurity landscape.


Data Center Security

ACI Security: L4-L7 Services

Data centers are crucial in storing and managing vast information in today's digital age. However, with increasing cyber threats, ensuring robust security measures within data centers has become more critical. This blog post will explore how Cisco Application Centric Infrastructure (ACI) can enhance data center security, providing a reliable and comprehensive solution for safeguarding valuable data.

Cisco ACI segmentation is a cutting-edge approach that divides a network into distinct segments, enabling granular control and segmentation of network traffic. Unlike traditional network architectures, which rely on VLANs (Virtual Local Area Networks), ACI segmentation leverages the power of software-defined networking (SDN) to provide a more flexible and efficient solution. By utilizing the Application Policy Infrastructure Controller (APIC), administrators can define and enforce policies to govern communication between different segments.


Highlights: Data Center Security

Understanding Network Segmentation

Network segmentation involves dividing a network into multiple smaller segments or subnetworks, isolating different types of traffic, and enhancing security. Cisco ACI offers an advanced network segmentation framework beyond traditional VLAN-based segmentation. It enables the creation of logical network segments based on business policies, applications, and user requirements.

Benefits of Cisco ACI Network Segmentation

– Enhanced Security: With Cisco ACI, network segments are isolated, preventing lateral movement of threats. Segmentation also enables micro-segmentation, allowing fine-grained control over traffic flow and access policies.

– Improved Performance: By segmenting the network, organizations can prioritize critical applications, allocate resources efficiently, and optimize network performance.

– Simplified Management: Cisco ACI’s centralized management allows administrators to define policies for network segments, making it easier to enforce consistent security policies and streamline network operations.

Endpoint Groups

Cisco ACI is one of many data center topologies that need to be secured. It follows a zero-trust model but does not include a full data center firewall; more is required, and the policy must explicitly state what is allowed to happen. Firstly, we must create a policy. You have Endpoint Groups (EPGs) and a contract. These are the initial security measures. Think of a contract as the policy statement and an Endpoint Group as a container or holder for applications of the same security level.

Micro-segmentation

Micro-segmentation has become a buzzword in the networking industry. Leaving the term and marketing aside, it is easy to understand why customers want its benefits.

Micro-segmentation’s primary advantage is reducing the attack surface by minimizing lateral movement in the event of a security breach. With traditional networking technologies, this isn’t easy to accomplish. However, SDN technologies enable an innovative approach by allowing degrees of flexibility and automation that are impossible with traditional network management and operations. This makes micro-segmentation possible.

For those who haven’t explored this topic yet, Cisco ACI has ESG. ESGs are an alternative approach to segmentation that decouples it from the early concepts of forwarding and security associated with Endpoint Groups. Thus, segmentation and forwarding are handled separately by ESGs, allowing for greater flexibility and possibilities.

Cisco ACI and ACI Service Graph

The ACI service graph is how Layer 4 to Layer 7 functions or devices can be integrated into ACI. It lets ACI redirect traffic between different security zones through a firewall or load balancer. The ACI L4-L7 services can be anything from load balancing and firewalling to advanced security services. Then, we have ACI segments that reduce the attack surface to an absolute minimum.

ACI Service Graph

Then, you can add an ACI service graph to insert your security function, which consists of ACI L4-L7 services. Now, we are heading into the second stage of security. What we like about this is the ease of use. If your application is removed, all the associated objects, such as the contract, EPG, ACI service graph, and firewall rules, are released. Cisco calls this security embedded in the application; it allows automatic remediation, a tremendous advantage for security function insertion.

Related: For pre-information, you may find the following posts helpful:

  1. Cisco ACI 
  2. ACI Cisco
  3. ACI Networks
  4. Stateful Inspection Firewall
  5. Cisco Secure Firewall
  6. Segment Routing

Back to basic: Cisco ACI Foundations 

The ACI, an application-centric infrastructure SDN solution, consists of a spine-leaf fabric in which the spine switches connect the leaf switches, and the leaf switches connect the workloads and the security services. The controller manages all of this. To create policy, we need groups, and here we have the EPG. Within an EPG, all applications can talk to each other by default.

Cisco ACI is a software-defined networking (SDN) solution offering a holistic data center security approach. With its policy-driven framework, ACI provides centralized control over security policies, making it easier to manage and enforce consistent security measures across the entire data center infrastructure. By automating security policies, ACI minimizes human error and ensures a robust security posture.

Data Center Security

Cisco ACI Main Security Components 

  • Cisco ACI provides granular visibility into application traffic flows.

  • With ACI’s micro-segmentation capabilities, data centers can be divided into smaller, isolated segments.

  • ACI integrates with threat intelligence systems, leveraging real-time threat feeds and anomaly detection mechanisms.

  • Cisco ACI integrates seamlessly with existing data center infrastructure.

Key Features and Benefits of Cisco ACI

Application Visibility and Control

Cisco ACI provides granular visibility into application traffic flows, allowing administrators to identify potential security vulnerabilities and take necessary actions promptly. This visibility enables better control and enforcement of security policies, effectively reducing the attack surface and mitigating threats.

Micro-Segmentation

With ACI’s micro-segmentation capabilities, data centers can be divided into smaller, isolated segments, ensuring the rest remain secure even if one segment is compromised. This approach limits lateral movement within the network, preventing the spread of threats and reducing the overall impact of potential security breaches.

Threat Intelligence and Automation

Cisco ACI integrates with sophisticated threat intelligence systems, leveraging real-time threat feeds and anomaly detection mechanisms. By automating threat response and mitigation, ACI enhances the data center’s ability to detect and neutralize threats promptly, providing a proactive security approach.

Seamless Integration and Scalability

One of Cisco ACI’s critical advantages is its seamless integration with existing data center infrastructure, including virtualized environments and third-party security tools. This flexibility allows organizations to leverage their existing investments while enhancing security measures. Additionally, ACI’s scalability ensures that data center security can evolve alongside business growth and changing threat landscapes.

EPG communication with ACI segments

To control endpoints, we have ACI segments based on Endpoint Groups. Devices within an Endpoint group can communicate, provided they have IP reachability, which the Bridge Domain or VRF construct can supply. Communication between Endpoint groups is not permitted by default. The defaults can be changed, for example, with intra-EPG isolation.

Now, with intra-EPG isolation we have a more fine-grained ACI segment, and the endpoints in a single Endpoint Group cannot communicate. They need a contract, which behaves like a stateless, reflexive access list, for external communication. There is no full handshake inspection, so the ACI contract construct is not a complete data center firewall and does not provide stateful inspection firewall features.

ACI and application-centric infrastructure

ACI security addresses security concerns with several application-centric infrastructure security options. You may have heard of the allowlist policy model. This is the ACI security starting point, meaning only something can be communicated if policy allows it. This might prompt you to think that a data center firewall is involved. Still, although the ACI allowlist model does change the paradigm and improves how you apply security, it is only analogous to access control lists within a switch or router. 

We need additional protection. So, there is still a need for further protocol inspection and monitoring, which data center firewalls and intrusion prevention systems (IPSs) do very well and can be easily integrated into your ACI network. Here, we can introduce Cisco Firepower Threat Defence (FTD) to improve security with Cisco ACI.

ACI L4-L7 Services

ACI and Policy-based redirect: ACI L4-L7 Services

The ACI L4–L7 policy-based redirect (PBR) concept is similar to policy-based routing in traditional networking. In conventional networking, policy-based routing classifies traffic and steers desired traffic from its actual path to a network device as the next-hop route (NHR). For decades, this feature was used in networking to redirect traffic to service devices such as firewalls, load balancers, IPSs/IDSs, and Wide-Area Application Services (WAAS).

In ACI, the PBR concept is similar: You classify specific traffic to steer to a service node by using a subject in a contract. Then, other traffic follows the regular forwarding path, using another subject in the same contract without the PBR policy applied.

ACI L4-l7 services
Diagram: ACI PBR. Source is Cisco

Deploying PBR for ACI L4-L7 services

With ACI policy-based redirect ( ACI L4-L7 services ), firewalls and load balancers can be provisioned as managed or unmanaged nodes without requiring Layer 4 to Layer 7 packages. The typical use cases include providing appliances that can be pooled, tailored to application profiles, scaled quickly, and are less prone to service outages. 

In addition, by enabling consumer and provider endpoints to be located in the same virtual routing and forwarding instance (VRF), PBR simplifies the deployment of service appliances. To deploy PBR, you must create an ACI service graph template that uses the route and cluster redirect policies. 

After deploying the ACI service graph template, the service appliance enables endpoint groups to consume the service graph endpoint group. Using vzAny can be further simplified and automated. Dedicated service appliances may be required for performance reasons, but PBR can also be used to deploy virtual service appliances quickly.

ACI L4-L7 services
Diagram: ACI Policy-based redirect. Source is Cisco

ACI Segments with Cisco ACI ESG

ACI Segments

We also have an ESG, which is different from an EPG. The EPG is mandatory and is how you attach workloads to the fabric. Then we have the ESG, which is an abstraction layer. Now, we are connected to a VRF, not a bridge domain, so we have more flexibility.

As of ACI 5.0, Endpoint Security Groups (ESGs) are Cisco ACI’s new network security component. Although Endpoint Groups (EPGs) have been providing network security in Cisco ACI, they must be associated with a single bridge domain (BD) and used to define security zones within that BD. 

This is because the EPGs define both forwarding and security segmentation simultaneously. The direct relationship between the BD and an EPG limits the possibility of an EPG spanning more than one BD. The new ESG constructs resolve this limitation of EPGs.

ACI Segments
Diagram: Endpoint Security Groups. The source is Cisco.

Standard Endpoint Groups and Policy Control

As discussed in ACI security, devices are grouped into Endpoint groups, creating ACI segments. This grouping allows the creation of policy enforcement of various types, including access control. Once we have our EPGs defined, we need to create policies to determine how they communicate with each other.

For example, a contract typically refers to one or more ‘filters’ to describe specific protocols & ports allowed between EPGs. We also have ESGs that provide additional security flexibility with more fine-grained ACI segments. Let’s dig a little into the world of contracts in ACI and how these relate to old access control of the past.

data center security
Diagram: Data center security. With Cisco ACI.

Starting ACI Security

ACI Contract

In network terminology, contracts are a mechanism for creating access lists between two groups of devices. This function was initially developed in the network via network devices using access lists and then eventually managed by firewalls of various types, depending on the need for deeper packet inspection. As the data center evolved, access-list complexity increased.

Adding devices to the network that required new access-list modification could become increasingly more complex. While contracts satisfy the security requirements handled by access control lists (ACLs) in conventional network settings, they are a more flexible, manageable, and comprehensive ACI security solution.

Contracts control traffic flow within the ACI fabric between EPGs and are configured between EPGs or between EPGs and L3out. Contracts are assigned a scope of Global, Tenant, VRF, or Application Profile, which limits their accessibility.
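To make the contract and filter constructs a little more concrete, here is a rough Python sketch that builds the kind of JSON objects the APIC REST API expects. The class names (vzFilter, vzEntry, vzBrCP, vzSubj, fvRsProv, fvRsCons) follow the commonly documented ACI object model, but the exact attributes and URLs should be verified against your APIC version; this is an illustration, not an official configuration example.

```python
# Rough sketch of an ACI contract expressed as APIC REST-style JSON objects.
# Verify class and attribute names against your APIC version before use.
import json

allow_https_filter = {
    "vzFilter": {
        "attributes": {"name": "allow-https"},
        "children": [{
            "vzEntry": {
                "attributes": {
                    "name": "https", "etherT": "ip", "prot": "tcp",
                    "dFromPort": "443", "dToPort": "443"
                }
            }
        }]
    }
}

web_contract = {
    "vzBrCP": {
        "attributes": {"name": "web-to-app", "scope": "context"},  # VRF scope
        "children": [{
            "vzSubj": {
                "attributes": {"name": "https-subject"},
                "children": [
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "allow-https"}}}
                ]
            }
        }]
    }
}

# The provider EPG would reference the contract with fvRsProv and the consumer
# EPG with fvRsCons, and the payloads would be POSTed to the APIC
# (e.g. /api/mo/uni/tn-<tenant>.json).
print(json.dumps(web_contract, indent=2))
```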

Issues with ACL with traditional data center security

With traditional data center security design, we have standard access control lists (ACLs) with several limitations the ACI fabric security model addresses and overcomes. First, the conventional ACL is very tightly coupled with the network topology. They are typically configured per router or switch ingress and egress interface and are customized to that interface and the expected traffic flow through those interfaces. 

Due to this customization, they often cannot be reused across interfaces, much less across routers or switches. In addition, traditional ACLs can be very complicated because they contain lists of specific IP addresses, subnets, and protocols that are allowed and many that are not authorized. This complexity means they are challenging to maintain and often grow as administrators are reluctant to remove any ACL rules for fear of creating a problem.

The ACI fabric security model addresses these ACL issues. Cisco ACI administrators use contract, filter, and label managed objects to specify how groups of endpoints are allowed to communicate. 

ACI Security
Diagram: ACI security with policy controls.

ACI Security: Topology independence

The critical point is that these managed objects are not tied to the network’s topology because they are not applied to a specific interface. Instead, they are rules that the network must enforce irrespective of where these endpoints are connected.

So, security follows the workloads, allowing topology independence. Furthermore, this topology independence means these managed objects can easily be deployed and reused throughout the data center, not just as specific demarcation points.

The ACI fabric security model uses the endpoint grouping construct directly, so allowing groups of servers to communicate with one another is simple. With a single rule in a contract, we can allow an arbitrary number of sources to communicate with an equally random number of destinations. 

ACI Segments with Micro-segmentation in ACI

We know that perimeter security is insufficient these days: lateral movement can allow bad actors to move within large segments to compromise more assets once breached. Traditional segmentation based on large zones gives bad actors a large surface to play with. Keep in mind that identity attacks are hard to detect.

How can you tell if a bad actor moves laterally through the network with compromised credentials or if an IT administrator is carrying out day-to-day activities?  Micro-segmentation can improve the security posture inside the data center. Now, we can perform segmentation to minimize segment size and provide lesser exposure for lateral movement due to a reduction in the attack surface.

ACI Segments

ACI microsegmentation refers to segmenting an application-centric infrastructure into smaller, more granular units. This segmentation allows for better control and management of network traffic, improved security measures, and better performance. Organizations implementing an ACI microsegmentation solution can isolate different applications and workloads within their network. This allows them to reduce the attack surface of their network, as well as improve the performance of their applications.

Creating ACI segments based on ACI microsegmentation works by segmenting the network infrastructure into multiple subnets. This allows for fine-grained control over network traffic and security policies. Furthermore, it will enable organizations to quickly identify and isolate different applications and workloads within the network.

The benefits of ACI microsegmentation are numerous. By segmenting the network infrastructure into multiple subnets, organizations can create a robust security solution that reduces the attack surface of their network. Additionally, by isolating different applications and workloads, organizations can improve the performance of their applications and reduce the potential for malicious traffic.

Microsegmentation with Cisco ACI

Microsegmentation with Cisco ACI adds the ability to group endpoints in existing application EPGs into new microsegment (uSeg) EPGs and configure the network or VM-based attributes for those uSeg EPGs. This enables you to filter with those attributes and apply more dynamic policies. 

We can use various attributes to classify endpoints into a uSeg EPG (µEPG):

  • Network-based attributes: IP and MAC address.
  • VM-based attributes: guest OS, VM name, VM identifier, vNIC, DVS, and data center.

aci segments
Diagram: Cisco ACI Security with microsegmentation

Example: Microsegmentation for Endpoint Quarantine 

Let us look at a use case. You might have separate EPGs for web and database servers, each containing both Windows and Linux VMs. Suppose a virus affecting only Windows threatens your network, not the Linux environment.

In that case, you can isolate Windows VMs across all EPGs by creating a new EPG called, for example, “Windows-Quarantine” and applying the VM-based operating systems attribute to filter out all Windows-based endpoints. 

This quarantined EPG could have more restrictive communication policies, such as limiting allowed protocols or preventing communication with other EPGs by not having any contract. A microsegment EPG can have a contract or not have a contract.
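As a rough illustration of the attribute matching behind this quarantine example, the following Python sketch reclassifies endpoints whose guest OS attribute matches "Windows" into a quarantine group, regardless of their base EPG. The names and data are illustrative; the real classification is performed by the fabric based on the uSeg EPG attribute rules.

```python
# Conceptual sketch of attribute-based micro-segmentation: endpoints whose
# guest OS matches "Windows" are placed into a quarantine uSeg EPG,
# regardless of which base EPG (web, db, ...) they belong to.
endpoints = [
    {"name": "web-01", "base_epg": "web", "guest_os": "Windows Server 2019"},
    {"name": "web-02", "base_epg": "web", "guest_os": "Ubuntu 22.04"},
    {"name": "db-01",  "base_epg": "db",  "guest_os": "Windows Server 2022"},
]

def classify(ep):
    # VM-attribute rule: guest OS containing "Windows" -> quarantine group.
    if "Windows" in ep["guest_os"]:
        return "Windows-Quarantine"
    return ep["base_epg"]

for ep in endpoints:
    print(f'{ep["name"]}: {classify(ep)}')
# web-01: Windows-Quarantine, web-02: web, db-01: Windows-Quarantine
```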

Improving ACI Security

Cisco ACI includes many tools to implement and enhance security and segmentation from day 0. We already mentioned tenant objects like EPGs, and then for policy, we have contracts permitting traffic between them. We also have micro-segmentation with Cisco ACI.

Even though the ACI fabric can deploy zoning rules with filters and act as a distributed data center firewall, the result is comparable to a stateless set of access control lists (ACLs). As a result, they provide only coarse security for traffic flowing through the fabric.

However, for better security, we can introduce deep traffic inspection capabilities like application firewalls, intrusion detection (prevention) systems (IDS/IPS), or load balancers, which often secure application workloads. 

ACI service graph

ACI’s service graph and policy-based redirect (PBR) objects bring advanced traffic steering capabilities to universally utilize any Layer 4 – Layer 7 security device connected in the fabric, even without needing it to be a default gateway for endpoints or part of a complicated VRF sandwich design and VLAN network stitching. So now it has become much easier to implement a Layer 4 – Layer 7 inspection.

You won’t be limited to a single L4-L7 appliance; ACI can chain many of them together or even load-balance between multiple active nodes according to your needs. The critical point here is that it can be utilized universally: the security functions can sit in their own pod, connected to a leaf switch or a pair of leaf switches dedicated to security appliances, rather than being located at strategic network choke points.

An ACI service graph represents the network using the following elements:

  • Function node—A function node represents a function that is applied to the traffic, such as a transform (SSL termination, VPN gateway), filter (firewalls), or terminal (intrusion detection systems). A function within the ACI service graph might require one or more parameters and have one or more connectors.
  • Terminal node—A terminal node enables input and output from the service graph.
  • Connector—A connector enables input and output from a node.
  • Connection—A connection determines how traffic is forwarded through the network.
ACI Service Graph
Diagram: ACI Service Graph. Source is Cisco

ACI Service graph: Cisco FTD

With these features, we can now have additional security from Cisco FTD. FTD is available as a hardware appliance; if you don’t want a physical device, it can run as a virtual appliance on public and private cloud platforms. As you know, ACI can be extended to AWS, and you can use the same data center firewall there.

FTD, which stands for Firepower Threat Defense, is a converged solution: a combined NGFW/NGIPS image running on the newer Firepower and ASA 5500-X platforms. We now also have a single management point with the Firepower Management Center (FMC). So, we take two images and combine them into one.

Data Center Firewall: Cisco Security Firewall

We can use the Cisco secure firewall for a data center firewall. The architecture of the Cisco secure firewall is modular. A high-end single chassis comprises multiple blade servers, also known as security modules. In addition, the threat defense software runs on a supervisor. 

The data center firewall is a highly flexible security solution. Multiple ways exist to enable scalability and ensure resiliency in a Secure Firewall deployment, such as clustering, multi-instance, high availability, and more.

Datacenter firewall: Routed mode

The Cisco secure firewall has different modes of operation. First, it can be deployed in routed mode, in which every interface has an IP address. This design enables you to deploy a Secure Firewall threat defense as a default gateway for your network so that the end users can use the threat defense to communicate with a different subnet or connect to the Internet.

In routed mode, a threat defense acts like a Layer 3 hop. Each interface on a threat defense can be connected to a different subnet, and the threat defense can serve as the default gateway. In addition, the threat defense can route traffic between subnets, like a Layer 3 router.

data center firewall
Diagram: The data center firewall.

Data center firewall: Transparent Mode

You can also deploy a threat defense transparently to remain invisible to your network hosts. In transparent mode, a threat defense bridges the inside and outside interfaces into a single Layer 2 network and remains transparent to the hosts. We have no IP addresses on the interfaces and need to change the VLAN between interfaces.

When a threat defense is transparent, the management center does not allow you to assign an IPv4 address to a directly connected interface. As a result, the hosts cannot communicate with any connected interfaces on the threat defense. Unlike with routed mode, you cannot configure the connected interfaces as the default gateway for the hosts.

Data center firewall: FDT Multi-instance DC use case

The higher Cisco secure firewall models also offer multi-instance capability powered by the Docker container technology. It enables you to create and run multiple application instances using a small subset of the total hardware resources of a chassis.

In addition, you can independently manage the threat defense application instances as separate threat defense devices. We are slicing one physical device into multiple logical devices, allocating each instance its own CPU, memory, and disk. This multi-instance FTD use case helps when you want a separate firewall for different traffic flows in the data center.

Let’s say that, for compliance, you want a separate firewall for north-south traffic and another for east-west traffic. You can also use VRF-lite instead of multi-instance, which gives you more scalability, as you can only have a certain number of FTD instances. These two features can also be used together. If you have a physical device, you can slice it, and each slice can sit in a different management domain.

Data center security with Service Insertion

In ACI, service devices can also be connected in traditional Layer 2 Transparent/Bridge mode or Layer 3 Routed mode by a front-end and back-end endpoint group (EPG), commonly known as a sandwich design. This type of service integration is called service insertion or service chaining.

Data center security with Service Graph

The concept of a service graph differs from the concept of service insertion. Instead, the service graph specifies that the path from one EPG (the source) to another EPG (the destination) must pass through certain functions by using a contract and internal and external EPGs, also known as “shadow EPGs,” to communicate to service nodes.

Cisco designed the service graph technology to automate the deployment of L4–L7 services in the network. Cisco ACI does not provide the service device itself, whether physical or virtual; still, the device is configured as part of the same logical constructs that create tenants, bridge domains, EPGs, and so on. When deploying an L4–L7 ACI service graph, you can choose the following deployment methods:

  • Transparent mode: Deploy the L4–L7 device in transparent mode when it bridges the two bridge domains. In Cisco ACI, this mode is called Go-Through mode.
  • Routed mode: Deploy the L4–L7 device in Routed mode when the L4–L7 device is routing between the two bridge domains. In Cisco ACI, this mode is called the Go-To mode.
  • One-Arm mode: Deploy the L4–L7 device when a load balancer is on a dedicated bridge domain with a single interface.
  • Two-Arm mode: Deploy the L4–L7 device in Two-Arm mode when a load balancer is located on a dedicated bridge domain with two interfaces.
  • Policy-based redirect (PBR): Deploy the L4–L7 device on a separate bridge domain from the clients or the servers and redirect traffic to it based on protocol and port number.

With policy-based redirect (PBR), the Cisco ACI fabric can redirect traffic between security zones to ACI L4-L7 services, such as a firewall, intrusion-prevention system (IPS), or load balancer, without the need for the L4-L7 device to be the default gateway for the servers or the need to perform traditional networking configuration such as virtual routing and forwarding (VRF) sandwiching or VLAN stitching.

PBR simplifies design because the VRF sandwich configuration is not required to insert a Layer 3 firewall between security zones. The traffic is instead redirected to the node based on the PBR policy.

Data Center Firewall: Secure Firewall Insertion and PBR

Let’s say you have a single application design. We have an EPG that groups applications. These EPGs are tied to the bridge domain, and each bridge domain has a different subnet. This could be a simple 3-tier application with each tier in its own EPG. The fabric performs the routing. Now, we need to introduce additional security and insert a firewall. So, we must have FTD between each EPG, representing the application tiers.

So, what happens is that you create an ACI service graph on top of the contract that will influence the routing decisions. In this case, the ACI relies on PBR to redirect traffic defined in the contract to the security service. So when traffic hits the leaf switch, the firewall will be waiting in a different bridge domain and subnet. 

aci l4-l7 services
Diagram: ACI l4-l7 services and PBR. Source is Cisco

The fabric will create whatever is needed to forward the traffic to the firewall, have it inspected, and return it to the destination. If you remove the firewall, the ACI returns to regular ACI routing, more or less instantaneously. So, PBR is not routing; it is switching. Here, we can pre-empt the switching decision and forward traffic to the firewall. Because traffic goes to the leaf switch where the PBR rules are enforced, it will be sent to the security service defined in the service graph.

We can also use this for microsegmentation, even if all workloads are in the same EPG. We can leverage PBR to redirect traffic within an EPG/ESG; for example, attaching a service graph to redirect intra-EPG traffic to the FTD is possible.

Closing Highlights of ACI Security 

Application-centric policy model: ACI security provides an abstraction using endpoint groups (EPGs) and contracts to define policies more easily using the language of applications rather than network topology. This overcomes many of the problems we have with standard access lists.

The ACI security allowlist policy approach supports a zero-trust model by denying traffic between EPGs unless a policy explicitly allows it. Make sure you have applications of the same security level in each EPG.

Unified Layer 4 through 7 security policy management with ACI L4-L7 services and ACI service graph: Cisco ACI automates and centrally manages Layer 4 through 7 security policies in the context of an application using a unified application-centric policy model that works across physical and virtual boundaries and third-party devices. 

Policy-based segmentation with ACI segments: Cisco ACI enables detailed and flexible segmentation of physical and virtual endpoints based on group policies, thereby reducing the scope of compliance and mitigating security risks.

Integrated Layer 4 security for east-west traffic: The Cisco ACI fabric includes a built-in distributed Layer 4 stateless firewall to secure east-west traffic between application components and across tenants in the data center. 

Summary: Data Center Security

In today’s digital landscape, network security is of utmost importance. Organizations constantly seek ways to protect their data and infrastructure from cyber threats. One solution that has gained significant attention is Cisco Application Centric Infrastructure (ACI). In this blog post, we explored the various aspects of Cisco ACI Security and how it can enhance network security.

Section 1: Understanding Cisco ACI

Cisco ACI is a policy-based automation solution providing a centralized network management approach. ACI offers a flexible and scalable network infrastructure combining software-defined networking (SDN) and network virtualization.

Section 2: Key Security Features of Cisco ACI

2.1 Micro-Segmentation:

One of Cisco ACI’s standout features is micro-segmentation. It allows organizations to divide their network into smaller segments, providing granular control over security policies. This helps limit threats’ lateral movement and contain potential breaches.

2.2 Integrated Security Services:

Cisco ACI integrates seamlessly with various security services, such as firewalls, intrusion prevention systems (IPS), and threat intelligence platforms. This integration ensures a holistic security approach and enables real-time threat detection and prevention.

Section 3: Policy-Based Security

3.1 Policy Enforcement:

With Cisco ACI, security policies can be defined and enforced at the application level. This means that security rules can follow applications as they move across the network, providing consistent protection. Policies can be defined based on application requirements, user roles, or other criteria.

3.2 Automation and Orchestration:

Cisco ACI simplifies security management through automation and orchestration. Security policies can be applied dynamically based on predefined rules, reducing the manual effort required to configure and maintain security settings. This agility helps organizations respond quickly to emerging threats.

Section 4: Threat Intelligence and Analytics

4.1 Real-Time Monitoring:

Cisco ACI provides comprehensive monitoring capabilities, allowing organizations to gain real-time visibility into their network traffic. This includes traffic behavior analysis, anomaly detection, and threat intelligence integration. Proactively monitoring the network can identify and mitigate potential security incidents promptly.

4.2 Centralized Security Management:

Cisco ACI offers a centralized management console where security policies and configurations can be easily managed. This streamlines security operations, simplifies troubleshooting, and ensures consistent policy enforcement across the network.

Conclusion:

Cisco ACI is a powerful solution for enhancing network security. Its micro-segmentation capabilities, integration with security services, policy-based security enforcement, and advanced threat intelligence and analytics make it a robust choice for organizations looking to protect their network infrastructure. By adopting Cisco ACI, businesses can strengthen their security posture and mitigate the ever-evolving cyber threats.

Identity Security

In today's digitized world, where everything from shopping to banking is conducted online, ensuring identity security has become paramount. With cyber threats rising, protecting our personal information from unauthorized access has become more critical than ever. This blog post will delve into identity security, its significance, and practical steps to safeguard your digital footprint.

Identity security is the measures taken to protect personal information from being accessed, shared, or misused without authorization. It encompasses a range of practices designed to safeguard one's identity, such as securing online accounts, protecting passwords, and practicing safe online browsing habits.

Maintaining robust identity security is crucial for several reasons. Firstly, it helps prevent identity theft, which can have severe consequences, including financial loss, damage to one's credit score, and emotional distress. Secondly, identity security safeguards personal privacy by ensuring that sensitive information remains confidential. Lastly, it helps build trust in online platforms and e-commerce, enabling users to transact confidently.


Highlights: Identity Security

 

Sophisticated Attacks

Identity security has pushed authentication to a new, more secure landscape, reacting to improved technologies and sophisticated attacks. The need for more accessible and secure authentication has led to the wide adoption of zero-trust identity management and zero-trust authentication technologies such as risk-based authentication (RBA), Fast Identity Online (FIDO2), and just-in-time (JIT) techniques.

New Attack Surface

If you examine our identities, applications, and devices, they are in the crosshairs of bad actors, making them probable threat vectors. In addition, we are challenged by the sophistication of our infrastructure, which increases our attack surface and creates gaps in our visibility. Controlling access and the holes created by complexity is the basis of all healthy security. Before we jump into the zero-trust authentication and components needed to adopt zero-trust identity management, let’s start with the basics of identity security.

 

Related: Before you proceed, you may find the following posts helpful

  1. SASE Model
  2. Zero Trust Security Strategy
  3. Zero Trust Network Design
  4. OpenShift Security Best Practices
  5. Zero Trust Networking
  6. Zero Trust Network
  7. Zero Trust Access

 

Zero Trust Identity 

Key Identity Security Discussion Points:


  • Introduction to identity security and what is involved.

  • Highlighting the details of the challenging landscape along with recent trends.

  • Technical details on how to approach implementing a zero trust identity strategy.

  • Scenario: Different types of components make up zero trust authentication management. 

  • Details on starting a zero trust identity security project.

 

Back to basics: Identity Security

In its simplest terms, an identity is an account or a persona that can interact with a system or application. And we can have different types of identities.

  1. Human Identity: Human identities are the most common. These identities could be users, customers, or other stakeholders requiring various access levels to computers, networks, cloud applications, smartphones, routers, servers, controllers, sensors, etc. 
  2. Non-Human: Identities are also non-human as operations automate more processes. These types of identities are seen in more recent cloud-native environments. Applications and microservices use these machine identities for API access, communication, and the CI/CD tools. 

 

Tips for Ensuring Identity Security:

1. Strong Passwords: Create unique, complex passwords for all your online accounts. Passwords should contain a combination of upper- and lowercase letters, numbers, and special characters. Do not use easily guessable information, such as birthdates or pet names.

2. Two-Factor Authentication (2FA): Enable 2FA whenever possible. This adds an extra layer of security by requiring an additional verification step, such as a temporary code sent to your phone or email.

3. Keep Software Up to Date: Regularly update your operating system, antivirus software, and other applications. These updates often include security patches that address known vulnerabilities.

4. Be Cautious with Personal Information: Be mindful of the information you share online. Avoid posting sensitive details on public platforms or unsecured websites, such as your full address or social security number.

5. Secure Wi-Fi Networks: When connecting to public Wi-Fi networks, ensure they are secure and encrypted. Avoid accessing sensitive information, such as online banking, on public networks.

6. Regularly Monitor Accounts: Keep a close eye on your financial accounts, credit reports, and other online platforms where personal information is stored. Report any suspicious activity immediately.

7. Use Secure Websites: Look for the padlock symbol and “https” in the website address when providing personal information or making online transactions. This indicates that the connection is secure and encrypted.

 

Example: Identity Security: The Workflow 

The concept of identity security is straightforward and follows a standard workflow that can be understood and secured. First, a user logs into their employee desktop and is authenticated as an individual who should have access to this network segment. This is the authentication stage.

They have appropriate permissions assigned so they can navigate to the required assets (such as an application or file servers) and are authorized as someone who should have access to this application. This is the authorization stage.

As they move across the network to carry out their day-to-day duties, all of this movement is logged, and all access information is captured and analyzed for auditing purposes. Anything outside of normal behavior is flagged. Splunk UEBA has good features here.

identity security
Diagram: Identity security workflow.

 

  • Identity Security: Stage of Authentication

Authentication: You need to authenticate every human and non-human identity accurately. However, once an identity is authenticated and confirmed, that does not give it free rein to access the system with impunity.

  • Identity Security: Stage of Re-Authentication

Identities should be re-authenticated if the system detects suspicious behavior, or before completing tasks and accessing data deemed highly sensitive. If an identity acts outside of its normal baseline behavior, it must re-authenticate.

  • Identity Security: Stage of Authorization

Then we need to move to the authorization: It’s necessary to authorize the user to ensure they’re allowed access to the asset only when required and only with the permissions they need to do their job. So we have authorized each identity on the network with the proper permissions so they can access what they need and not more. 

  • Identity Security: Stage of Access

Then we look at access: provide access for that identity to authorized assets in a structured manner. How can the appropriate access be given to the person, user, device, bot, script, or account and nothing more? Follow the practices of zero-trust identity management and least privilege. Ideally, access is granted to microsegments instead of large VLANs based on traditional zone-based networking.

  • Identity Security: Stage of Audit

Finally, Audit: All identity activity must be audited or accounted for. Auditing allows insight and evidence that Identity Security policies are working as intended. How do you monitor the activities of identities? How do you reconstruct and analyze the actions an identity performed?

An auditing capability ensures visibility into activities performed by an identity, provides context for the identity’s usage and behavior, and enables analytics that identify risk and provide insights to make smarter decisions about access.
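To tie the stages together, here is a deliberately simplified Python sketch of the authenticate, authorize, and audit flow described above. The function names and policy data are illustrative only; a real deployment would use an identity provider, hashed credentials, MFA, and central log collection.

```python
# Minimal sketch of the authenticate -> authorize -> audit flow described above.
# Names and policy data are illustrative only.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

PERMISSIONS = {"alice": {"hr-app": "read", "file-server": "read"}}

def authenticate(user, password, password_store):
    # Real systems verify a salted hash and require a second factor.
    return password_store.get(user) == password

def authorize(user, resource, action):
    return PERMISSIONS.get(user, {}).get(resource) == action

def access(user, resource, action, password, password_store):
    ok = authenticate(user, password, password_store) and authorize(user, resource, action)
    # Audit stage: every attempt is recorded, allowed or denied.
    audit_log.info("%s %s %s on %s -> %s",
                   datetime.now(timezone.utc).isoformat(), user, action, resource,
                   "ALLOW" if ok else "DENY")
    return ok

access("alice", "hr-app", "read", "s3cret", {"alice": "s3cret"})        # ALLOW
access("alice", "file-server", "write", "s3cret", {"alice": "s3cret"})  # DENY
```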

 

Starting Zero Trust Identity Management

Now, we have an identity as the new perimeter compounded by identity being the new target. Any identity is a target. Looking at the modern enterprise landscape, it’s easy to see why. Every employee has multiple identities and uses several devices.

What makes this worse is that identity-driven attacks are hard to detect, which is the primary issue for security teams. For example, how do you know whether a bad actor or a sysadmin is using the privileged controls? As a result, security teams must find a reliable way to monitor suspicious user behavior to determine the signs of compromised identities.

We now have identity sprawl, which might be acceptable if those identities only had standard user access. However, they don’t; they most likely have privileged access. All of this widens the attack surface by creating additional human and machine identities that can gain privileged access under certain conditions, establishing new pathways for bad actors.

We must adopt a different approach to secure our identities regardless of where they may be. Here, we can look for a zero-trust identity management approach based on identity security. Next, I’d like to discuss your common challenges when adopting identity security.

 

zero trust identity management
Diagram: Zero trust identity management. The challenges.

 

Challenges to zero trust identity management

  • Challenge: Zero trust identity management and privilege credential compromise

Current environments may result in anonymous access to privileged accounts and sensitive information. Unsurprisingly, 80% of breaches start with compromised privilege credentials. If left unsecured, attackers can compromise these valuable secrets and credentials to gain possession of privileged accounts and perform advanced attacks or use them to exfiltrate data.

  • Challenge: Zero trust identity management and exploiting privileged accounts

So, we have two types of bad actors: external attackers and malicious insiders, both of whom can exploit privileged accounts to orchestrate a variety of attacks. Privileged accounts are used in nearly every cyber attack. With privileged access, bad actors can disable systems, take control of IT infrastructure, and gain access to sensitive data. So, we face several challenges when securing identities, namely protecting, controlling, and monitoring privileged access.

  • Challenge: Zero trust identity management and lateral movements

Lateral movements will happen. A bad actor has to move throughout the network. They will never land directly on a database or important file server. The initial entry point into the network could be an unsecured IoT device, which does not hold sensitive data. As a result, bad actors need to pivot across the network.

They will move laterally throughout the network with these privileged accounts, looking for high-value targets. They then use their elevated privileges to steal confidential information and exfiltrate data. There are many ways to exfiltrate data, with DNS being a common vector that often goes unmonitored. How do you know a bad actor is moving laterally with admin credentials, using admin tools built into standard Windows desktops?

  • Challenge: Zero trust identity management and distributed attacks

These attacks are distributed, and there will be many dots to connect to understand threats on the network. Look at ransomware: deploying the malware needs elevated privileges, and it’s better to detect this before the encryption starts. Some ransomware families perform partial encryption very quickly, and once encryption starts, it’s game over, so you need to catch this early in the kill chain, in the detection phase.

The best way to approach zero trust authentication is to know who accesses the data, ensure the users they claim to be, and operate on the trusted endpoint that meets compliance. There are plenty of ways to authenticate to the network; many claim password-based authentication is weak.

The core of identity security is understanding that passwords can be phished; essentially, using a password means sharing it. So, we need to add multifactor authentication (MFA). MFA gives a big lift, but it needs to be done well. You can still get breached even if you have an MFA solution in place.

 

Knowledge Check: Multi-factor authentication (MFA)

More than simple passwords are needed for healthy security. A password is a single authentication factor: anyone who has it can use it. No matter how strong it is, it is useless for keeping information private once it is lost or stolen. You must use a different secondary authentication factor to secure your data appropriately.

Here’s a quick breakdown:

  • Two-factor authentication: the use of two factor classes to provide authentication. It is also known as ‘2FA’ and ‘TFA.’

  • Multi-factor authentication: the use of two or more factor classes to provide authentication. This is also represented as ‘MFA.’

  • Two-step verification: a method of authentication that involves two independent steps but does not necessarily require two separate factor classes. It is also known as ‘2SV.’

  • Strong authentication: authentication beyond simply a password. It may be represented by the usage of ‘security questions’ or layered security such as two-factor authentication.
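To show what the “temporary code” factor actually is under the hood, here is a minimal Python sketch of TOTP (RFC 6238), the algorithm most authenticator apps use. The secret shown is a placeholder; in practice, the shared secret is provisioned once (usually via a QR code) and kept on both the authenticator and the server.

```python
# Minimal TOTP (RFC 6238) sketch: the "temporary code" used as a second factor
# is an HMAC of the current 30-second time step, derived from a shared secret.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step               # current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Both the phone app and the server compute the same code from the shared secret.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret
```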

 

The Move For Zero Trust Authentication

No MFA solution is an island. Every MFA solution is just one part of multiple components, relationships, and dependencies. Each piece is an additional area where an exploitable vulnerability can occur.

Essentially, any component in the MFA’s life cycle, from provisioning to de-provisioning and everything in between, is subject to exploitable vulnerabilities and hacking. And like the proverbial chain, it’s only as strong as its weakest link.

  • The need for zero trust authentication: Two or More Hacking Methods Used

Many MFA attacks use two or more of the leading hacking methods. Often, social engineering is used to start the attack and get the victim to click on a link or to activate a process, which then uses one of the other methods to accomplish the necessary technical hacking. 

For example, a user gets a phishing email directing them to a fake website, which accomplishes a man-in-the-middle (MitM) attack and steals credential secrets. Or physical theft of a hardware token is performed, and the token is then forensically examined to find the stored authentication secrets. MFA hacking usually requires combining two or more of these main hacking methods.

You can’t rely on MFA alone; you must validate privileged users with context-aware Adaptive Multifactor Authentication and secure access to business resources with Single Sign-On. Unfortunately, credential theft remains the No. 1 area of risk. And bad actors are getting better at bypassing MFA using a variety of vectors and techniques.

For example, in the context of gaining access, a user can be tricked into accepting a push notification on their smartphone, granting a bad actor admission. You are also still susceptible to man-in-the-middle attacks. This is why MFA and IDP vendors introduce risk-based authentication and step-up authentication. These techniques limit the attack surface, which we will talk about soon.

 

Considerations for zero trust authentication 

Think like a bad actor.

By thinking like a bad actor, we can attempt to identify suspicious activity, restrict lateral movement, and contain threats. Try envisioning what you would look for if you were a bad external actor or malicious insider. For example, are you looking to steal sensitive data to sell to competitors, launch ransomware binaries, or use the infrastructure for illicit crypto mining?

Attacks will happen

The harsh reality is that attacks will happen, and you can never fully secure every application and piece of infrastructure wherever they exist. So it's not a matter of 'if' but of 'when.' Protection from all the various methods that attackers use is virtually impossible, and there will occasionally be zero-day attacks. Attackers will get in eventually; it's all about what they can do once they are in.

 

zero trust authentication
Diagram: Zero trust authentication. Key considerations.

 

The first action is to protect Identities.

Therefore, the very first thing you must do is protect identities and prioritize what matters most – privileged access. Infrastructure and critical data are only fully protected if privileged accounts, credentials, and secrets are secured.

The bad actor needs privileged access.

We know that about 80% of breaches tied to hacking involve using lost or stolen credentials. Compromised identities are the common denominator in virtually every severe attack. The reason is apparent: 

The bad actor needs privileged access to the network infrastructure to steal data. Without privileged access, an attacker is severely limited in what they can do; they may be unable to pivot from one machine to another, and the chances of landing on a high-value target are slim.

The malware requires admin access.

Malware used to pivot requires admin access to gain persistence, and privileged access without vigilant management creates an ever-growing attack surface around privileged accounts.

 

Adopting Zero Trust Authentication 

Zero trust authentication: Technology with Fast Identity Online (FIDO2)

With all of this in mind, where can you start with identity security? First, we can look at a zero trust authentication protocol: we need an authentication protocol that is phishing-resistant. This is FIDO2 (Fast Identity Online 2), which is built on two protocols. FIDO authentication is a challenge-response protocol that uses public-key cryptography. Rather than using certificates, it manages keys automatically and beneath the covers.

The FIDO2 standards

FIDO2 uses two standards. The Client to Authenticator Protocol (CTAP) describes how a browser or operating system establishes a connection to a FIDO authenticator. The WebAuthn protocol is built into browsers and provides an API that JavaScript from a web service can use to register a FIDO key, send a challenge to the authenticator, and receive a response to the challenge.

So there is an application the user wants to access, and then we have the client, which is often the system's browser but can be any application that speaks WebAuthn. FIDO provides a secure and convenient way to authenticate users without passwords, SMS codes, or TOTP authenticator applications. Modern computers, smartphones, and most mainstream browsers understand FIDO natively.
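
To make the registration flow concrete, here is a minimal sketch of a browser-side WebAuthn call. The relying-party details, user handle, and chosen key algorithm are illustrative assumptions rather than any particular vendor's configuration; in practice, the challenge comes from your web service.

```typescript
// Minimal sketch of WebAuthn registration from the browser.
async function registerFidoCredential(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: "Example Corp", id: "example.com" },        // relying party (assumed)
    user: {
      id: new TextEncoder().encode("user-1234"),            // opaque user handle (assumed)
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],    // ES256 key pair
    authenticatorSelection: { userVerification: "preferred" },
    timeout: 60000,
  };

  // The browser reaches the authenticator via CTAP; the returned credential
  // holds a new public key that the web service stores for future logins.
  const credential = await navigator.credentials.create({ publicKey });
  console.log("New FIDO credential:", credential);
}
```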

FIDO2 addresses phishing by cryptographically proving that the end user has physical possession of the authenticator. There are two types of authenticators. The first is a roaming authenticator, such as a USB security key or a mobile device; these devices need to be certified by FIDO2 vendors.

The other is a platform authenticator, such as Touch ID or Windows Hello. While roaming authenticators are available, platform authenticators are sufficient for most use cases. This makes FIDO an easy, inexpensive way for people to authenticate. The biggest impediment to its widespread use is that people don't believe something so easy can be secure.

 

Zero trust authentication: Technology with risk-based authentication

Risk is not a static attribute; it needs to be recalculated and re-evaluated so you can make intelligent decisions about step-up and user authentication. Cisco Duo, for example, reacts to risk-based signals at the point of authentication.

These risk signals are processed in real time to detect signs of known account takeover patterns. Such signals may include push bombs, push sprays, and fatigue attacks. A change of location can also signal high risk. Risk-based authentication (RBA) is usually coupled with step-up authentication.

For example, let's say your employees are under attack. RBA can detect this as a credential stuffing attack and move from the classic authentication approach to a verified push, which is more secure than the standard push.

This adds more friction but results in better security: for example, a three- to six-digit code is displayed on the login screen, and the user must enter this code in the authenticator application. This eliminates fatigue attacks. The verified push approach can be enabled at an enterprise level or just for a group of users.
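
The logic behind risk-based step-up can be sketched in a few lines. The signal names, weights, and thresholds below are hypothetical and only illustrate the idea of scoring risk at the point of authentication and stepping up to a verified push; they do not reflect any vendor's actual engine.

```typescript
// Hypothetical risk-based authentication (RBA) step-up sketch.
type AuthDecision = "standard-push" | "verified-push" | "deny";

interface RiskSignals {
  failedAttemptsLastHour: number;   // credential stuffing indicator
  pushesSentLastHour: number;       // push bombing / fatigue indicator
  newLocation: boolean;             // unusual geo-location
}

function scoreRisk(s: RiskSignals): number {
  let score = 0;
  if (s.failedAttemptsLastHour > 5) score += 50;
  if (s.pushesSentLastHour > 3) score += 40;
  if (s.newLocation) score += 20;
  return score;
}

function decide(s: RiskSignals): AuthDecision {
  const risk = scoreRisk(s);
  if (risk >= 90) return "deny";            // clear attack pattern
  if (risk >= 40) return "verified-push";   // step up, defeats fatigue attacks
  return "standard-push";
}

// Example: a burst of pushes from a new location steps up to a verified push.
console.log(decide({ failedAttemptsLastHour: 1, pushesSentLastHour: 4, newLocation: true }));
```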

 

Conditional Access

Then, we move towards conditional access, a step beyond authentication. Conditional access goes beyond authentication to examine the context and risk of each access attempt. For example, contextual factors may include consecutive login failures, geo-location, type of user account, or device IP to either grant or deny access. Based on those contextual factors, it may be granted only to specific network segments. 

 

  • A key point: Risk-based decisions and recommended capabilities

The identity security solution should be configurable to allow SSO access, challenge the user with MFA, or block access based on predefined conditions set by policy. You should look for a solution that offers a broad range of conditions, such as IP range, day of the week, time of day, time range, device OS, browser type, country, and user risk level. 

These context-based access policies should be enforceable across users, applications, workstations, mobile devices, servers, network devices, and VPNs. A key question is whether the solution makes risk-based access decisions using a behavior profile calculated for each user.
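
A rough sketch of how such a policy engine might evaluate context is shown below. The condition names, the allowed country, and the thresholds are assumptions for illustration only, not a real product's policy schema.

```typescript
// Hypothetical conditional access evaluation: allow SSO, step up to MFA, or block.
type Decision = "allow-sso" | "challenge-mfa" | "block";

interface AccessContext {
  consecutiveLoginFailures: number;
  country: string;
  deviceManaged: boolean;
  userRiskLevel: "low" | "medium" | "high";
}

function evaluateAccess(ctx: AccessContext): Decision {
  // Hard blocks first.
  if (ctx.consecutiveLoginFailures >= 5) return "block";
  if (ctx.userRiskLevel === "high") return "block";

  // Step up when the context is uncertain.
  if (!ctx.deviceManaged) return "challenge-mfa";
  if (ctx.country !== "US") return "challenge-mfa";   // assumed allowed geo
  if (ctx.userRiskLevel === "medium") return "challenge-mfa";

  // Trusted context: pass through with SSO.
  return "allow-sso";
}

console.log(evaluateAccess({
  consecutiveLoginFailures: 0,
  country: "US",
  deviceManaged: true,
  userRiskLevel: "low",
})); // "allow-sso"
```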

 

Zero trust authentication: Technology with JIT techniques

Secure privileged access and manage entitlements. For this reason, many enterprises employ a least privilege approach, where access is restricted to the resources necessary for the end-user to complete their job responsibilities with no extra permission. A standard technology here would be Just in Time (JIT). Implementing JIT ensures that identities have only the appropriate privileges, when necessary, as quickly as possible and for the least time required. 

JIT techniques that dynamically elevate rights only when needed are a technology to enforce the least privilege. The solution allows for JIT elevation and access on a “by request” basis for a predefined period, with a full audit of privileged activities. Full administrative rights or application-level access can be granted, time-limited, and revoked.
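
The mechanics of a time-boxed grant can be illustrated with a small sketch. The role names, duration, and audit format are assumptions; real PAM tools add approval workflows and far richer auditing.

```typescript
// Hypothetical just-in-time (JIT) privilege elevation: grants expire automatically.
interface Grant {
  user: string;
  role: string;
  expiresAt: number;   // epoch milliseconds
}

const grants: Grant[] = [];
const auditLog: string[] = [];

// Elevate a user "by request" for a predefined period, with an audit record.
function elevate(user: string, role: string, minutes: number): void {
  const expiresAt = Date.now() + minutes * 60_000;
  grants.push({ user, role, expiresAt });
  auditLog.push(`${new Date().toISOString()} GRANT ${role} to ${user} for ${minutes}m`);
}

// Every access check re-evaluates the grant, so rights lapse on their own.
function hasPrivilege(user: string, role: string): boolean {
  return grants.some(g => g.user === user && g.role === role && g.expiresAt > Date.now());
}

// Example: grant an admin role for 30 minutes, then let it expire.
elevate("alice", "db-admin", 30);
console.log(hasPrivilege("alice", "db-admin")); // true only within the window
```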

 

Final Notes For Identity Security 

Zero trust identity management is where we continuously verify users and devices so that access and privileges are granted only when needed. The backbone of zero-trust identity security starts by assuming that any human or machine identity with access to your applications and systems may have been compromised.

The “assume breach” mentality requires vigilance and a Zero Trust approach to security centered on securing identities. With identity security as the backbone of a zero-trust process, teams can focus on identifying, isolating, and stopping threats from compromising identities and gaining privilege before they can harm.

 

Identity Security
Diagram: Identity Security: Final notes.

 

Zero Trust Authentication

The identity-centric focus of zero trust authentication ensures that every person and every device granted access is who and what they claim to be. It achieves this by focusing on the following key components:

  1. The network is always assumed to be hostile.
  2. External and internal threats always exist on the network.
  3. Network locality is not sufficient for deciding trust in a network; other contextual factors, as discussed, must be taken into account.
  4. Every device, user, and network flow is authenticated and authorized. All of this must be logged.
  5. Security policies must be dynamic and calculated from as many data sources as possible.

 

Zero Trust Identity: Validate Every Device

Not just the user

Validate every device. While user verification adds a level of security, more is needed. We must ensure that the devices are authenticated and associated with verified users, not just the users.

Risk-based access

Risk-based access intelligence should reduce the attack surface after a device has been validated and verified as belonging to an authorized user. This allows aspects of the security posture of endpoints, like device location, a device certificate, OS, browser, and time, to be used for further access validation. 

Device Validation: Reduce the attack surface

Remember that while device validation helps limit the attack surface, device validation is only as reliable as the endpoint’s security. Antivirus software to secure endpoint devices will only get you so far. We need additional tools and mechanisms that can tighten security even further.

 

 

 

Summary: Identity Security

In today’s interconnected digital world, protecting our identities online has become more critical than ever. From personal information to financial data, our digital identities are vulnerable to various threats. This blog post aimed to shed light on the significance of identity security and provide practical tips to enhance your online safety.

Section 1: Understanding Identity Security

Identity security refers to the measures taken to safeguard personal information and prevent unauthorized access. It encompasses protecting sensitive data such as login credentials, financial details, and personal identification information (PII). By ensuring robust identity security, individuals can mitigate the risks of identity theft, fraud, and privacy breaches.

Section 2: Common Threats to Identity Security

In this section, we’ll explore some of the most prevalent threats to identity security. This includes phishing attacks, malware infections, social engineering, and data breaches. Understanding these threats is crucial for recognizing potential vulnerabilities and taking appropriate preventative measures.

Section 3: Best Practices for Strengthening Identity Security

Now that we've highlighted the importance of identity security and identified common threats, let's delve into practical tips to fortify your online presence:

1. Strong and Unique Passwords: Utilize complex passwords that incorporate a combination of letters, numbers, and special characters. Avoid using the same password across multiple platforms.

2. Two-Factor Authentication (2FA): Enable 2FA whenever possible to add an extra layer of security. This typically involves a secondary verification method, such as a code sent to your mobile device.

3. Regular Software Updates: Keep all your devices and applications current. Software updates often include security patches that address known vulnerabilities.

4. Beware of Phishing Attempts: Be cautious of suspicious emails, messages, or calls asking for personal information. Verify the authenticity of requests before sharing sensitive data.

5. Secure Wi-Fi Networks: When connecting to public Wi-Fi networks, use a virtual private network (VPN) to encrypt your internet traffic and protect your data from potential eavesdroppers.

Section 4: The Role of Privacy Settings

Privacy settings play a crucial role in controlling the visibility of your personal information. Platforms and applications often provide various options to customize privacy preferences. Take the time to review and adjust these settings according to your comfort level.

Section 5: Monitoring and Detecting Suspicious Activity

Remaining vigilant is paramount in maintaining identity security. Regularly monitor your financial statements, credit reports, and online accounts for any unusual activity. Promptly report any suspicious incidents to the relevant authorities.

Conclusion:

In an era where digital identities are constantly at risk, prioritizing identity security is non-negotiable. By implementing the best practices outlined in this blog post, you can significantly enhance your online safety and protect your valuable personal information. Remember, proactive measures and staying informed are key to maintaining a secure digital identity.

data center firewall

Cisco Secure Firewall with SASE Cloud


In today's rapidly evolving digital landscape, ensuring robust network security while maintaining optimal performance and scalability is paramount. Cisco Secure Firewall with Secure Access Service Edge (SASE) Cloud offers a comprehensive solution that combines advanced firewall capabilities with the agility and flexibility of cloud-based architecture. This blog post will delve into the key features and benefits of Cisco Secure Firewall with SASE Cloud, shedding light on its transformative potential for organizations of all sizes.

Cisco SASE offers an alternative to traditional data center-oriented security. It unifies networking and security services into a cloud-delivered service to provide access and protection from edge to edge — including the data center, remote offices, roaming users, and beyond.

SASE Cloud takes network security to the next level by leveraging cloud-native architecture. Integrating networking and security functionalities offers a unified and scalable approach to protecting the entire network infrastructure. With Cisco Secure Firewall seamlessly integrated into the SASE Cloud framework, organizations can achieve enhanced security and performance while simplifying network management.

Highlights: Cisco Firewall and SASE Cloud

Cisco Secure Firewall

The secure firewall is an integral part of Cisco SASE. Cisco Secure Firewall is an advanced security solution designed to provide comprehensive protection for your network. It provides access control, intrusion prevention, and application security features that protect your network from malicious attacks.

With Cisco Secure Firewall, you can control who has access to your network and what types of activity are allowed. The firewall also provides detailed analytics and reporting so you can quickly identify any suspicious activity.

SASE Cloud

Cisco SASE (Secure Access Service Edge) is an integrated platform that provides secure access to applications, data, and users while supporting cloud-native architectures. It is a cloud-native platform built on a microservices architecture and designed to enable secure access for mobile, distributed, and cloud-native applications. It combines zero trust network access, secure web gateway, cloud access security broker, and advanced threat protection services in one unified platform.

Related: For additional pre-information, you may find the following helpful:

  1. SD WAN SASE
  2. SASE Model
  3. Zero Trust SASE
  4. SASE Solution
  5. Distributed Firewalls
  6. SASE Definition

SASE Cloud.

Key Cisco Secure Firewall Discussion Points:


  • Introduction to the Cisco Secure Firewall and what is involved.

  • Highlighting the details of the challenging landscape along with recent trends.

  • Technical details on how to approach implementing a firewalling strategy.

  • Scenario: Different types of network security vantage points. Cisco Secure Workload.

  • Details on starting a SASE project with Cisco Umbrella Firewall alongside Cisco Secure Firewall.

Back to Basics: SASE and Secure Firewall

♦ Key Features and Benefits

Unified Policy Enforcement: Cisco Secure Firewall with SASE Cloud enables organizations to enforce consistent security policies across all network edges, including branch offices, remote workers, and cloud environments. This unified policy enforcement ensures that security measures are applied uniformly, reducing the risk of vulnerabilities.

Scalability and Flexibility: With SASE Cloud, organizations can scale their network security effortlessly as their business grows. The cloud-native architecture allows for seamless deployment and management of firewalls across multiple locations, providing unparalleled flexibility and agility.

Advanced Threat Intelligence: Cisco Secure Firewall leverages threat intelligence feeds and machine learning algorithms to detect and mitigate emerging threats in real-time. Continuously analyzing network traffic identifies anomalous behavior and blocks malicious activities, ensuring comprehensive protection against evolving cyber threats.

  • A key point: Challenging Landscape

In the past, network security was typically delivered from within the network using the firewall. These days, however, network security extends well beyond just firewalling. We now have different points in the infrastructure that we can use to expand our security posture while reducing the attack surface.

You will commonly hear of the Cisco Umbrella Firewall and SASE, along with Cisco Secure Workload, which can be used alongside the Cisco Secure Firewall still deployed at the network's edge. Unfortunately, you can't send everything to the SASE cloud.

You will still need an on-premise firewall, such as the Cisco Secure Firewall, that can perform standard stateful filtering, intrusion detection, and threat protection. This post will examine the Cisco Secure Firewall and its integration with Cisco Umbrella via the SASE Cloud. Firstly, let us address some firewalling basics.

Basics of Firewalling

A firewall is an entity or obstacle deployed between two structures to prevent fire from spreading from one system to another. This term has been taken into computer networking, where a firewall is a software or hardware device that enables you to filter unwanted traffic and restrict access from one network to another. The Firewall is a vital network security component in securing network infrastructure and can take many forms. For example, we can have a host-based or network-based Firewall.

Firewall types
Diagram: Firewall types. Source is IPwithease

Host-based Firewall

A host-based firewall service is installed locally on a computer system. In this case, the end user’s computer system takes the final action—to permit or deny traffic. Every operating system has some Firewall. It consumes the resources of a local computer to run the firewall services, which can impact the other applications running on that particular computer. Furthermore, in a host-based firewall architecture, traffic traverses all the network components and can consume the underlying network resources until the traffic reaches its target.

Network-based Firewall

On the other hand, a network-based firewall can be entirely transparent to an end user and is not installed on the computer system. Typically, you deploy it in a perimeter network or at the Internet edge where you want to prevent unwanted traffic from entering your network. The end-user computer system remains unaware of any traffic control by an intermediate device performing the filtering. In a network-based firewall deployment, you do not need to install additional software or daemons on the end-user computer systems. However, you should use both firewall types for a defense-in-depth approach.

Firewall types
Diagram: Displaying the different firewall types.

The early generation of firewalling

The early generation of firewalls could allow or block packets only based on their static elements, such as a packet’s source address, destination address, source port, destination port, and protocol information. These elements are also known as the 5-tuple.

When an early-generation firewall examined a particular packet, it was unaware of any prior packets that passed through it because it was agnostic of the Transmission Control Protocol (TCP) states that would have signaled this. Due to the nature of its operation, this type of Firewall is called a stateless firewall.

A stateless firewall is unable to distinguish the state of a particular packet. So, for example, it could not determine if a packet is part of an existing connection, trying to establish a legitimate new connection, or whether it is a manipulated, rogue packet. We then moved to a stateful inspection firewall and an application-aware form of next-generation firewalling.

Stateful inspection tracks the state of TCP connections (and pseudo-state for UDP flows), while an application-aware firewall examines Layer 7. So now we are at a stage where the firewall does some of everything, such as the Cisco Secure Firewall.
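
To illustrate the limitation just described, here is a minimal sketch of stateless 5-tuple filtering: each packet is judged in isolation against static rules, with no notion of connection state. The rule set and field names are illustrative only.

```typescript
// Hypothetical stateless 5-tuple packet filter.
interface Packet {
  srcIp: string;
  dstIp: string;
  srcPort: number;
  dstPort: number;
  protocol: "tcp" | "udp";
}

type Rule = Partial<Packet> & { action: "allow" | "deny" };

const rules: Rule[] = [
  { dstPort: 443, protocol: "tcp", action: "allow" },  // permit HTTPS
  { dstPort: 53, protocol: "udp", action: "allow" },   // permit DNS
  { action: "deny" },                                   // implicit deny-all
];

function filter(pkt: Packet): "allow" | "deny" {
  const fields: (keyof Packet)[] = ["srcIp", "dstIp", "srcPort", "dstPort", "protocol"];
  for (const rule of rules) {
    // A rule matches when every field it specifies equals the packet's field.
    if (fields.every(f => rule[f] === undefined || rule[f] === pkt[f])) {
      return rule.action;
    }
  }
  return "deny";
}

// Each packet is evaluated on its own; the filter has no idea whether this
// packet belongs to an existing, legitimate connection.
console.log(filter({ srcIp: "10.0.0.5", dstIp: "203.0.113.9", srcPort: 51000, dstPort: 443, protocol: "tcp" }));
```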

Cisco Secure Firewall
Diagram: The transition to the Cisco Secure Firewall

Cisco Secure Firewall 3100

Cisco has the Cisco Secure Firewall 3100, a mid-range model that can be an Adaptive Security Appliance (ASA) for standard stateful firewall inspection or Firewall Threat Defense (FTD) software.

So it can perform one or the other. It also supports clustering, multi-instance firewalling, and high availability, which we will discuss. In addition, the 3100 Series' throughput range addresses use cases from the Internet edge to the data center and private cloud.

Highlights of the Cisco Secure Firewall

Cisco Secure Firewall 3100 is an advanced next-generation firewall that provides comprehensive security and high performance for businesses of all sizes. Its advanced security features can protect the most critical assets of an organization, from data, applications, and users to the network infrastructure. Cisco Secure Firewall 3100 offers an integrated threat defense system that combines intrusion prevention, application control, and advanced malware protection. This firewall is designed to detect and block malicious traffic and protect your network from known and unknown threats.

Secure Firewall
Diagram: Cisco Secure Firewall. The source is Cisco.

Adaptive Security Appliance (ASA) and Firewall Threat Defense (FTD)

The platforms can be deployed in Firewall (ASA) and dedicated IPS (FTD) modes. In addition, the 3100 series supports Q-in-Q (stacked VLAN) up to two 802.1Q headers in a packet for inline sets and passive interfaces. The platform also supports FTW (fail-to-wire) network modules.

Remember that you cannot mix and match ASA and FTD modes. You can, however, make FTD operate close to how the ASA works. The heart of the Cisco Secure Firewall is Snort—one of the most popular open-source intrusion detection and prevention systems, capable of real-time traffic inspection. 

CPU Core Allocation

What's powerful about the Cisco Secure Firewall is its high decryption performance due to the Crypto Engine. The firewall has an architecture built around decrypting traffic and delivers impressive performance. In addition, you can tune your CPU cores to do more traditional ASA functionality, such as terminating IPsec and performing stateful firewall inspection.

In such a scenario, we have an IPS engine (based on Snort) but give it only, let's say, 10%. We can provide 90% of the data plane to traditional firewalling in this case. So, a VPN headend or basic stateful firewall would use more data plane cores.

On the other hand, any heavy IPS and file inspection would be biased toward more “Snort” Cores. Snort provides the IPS engine. So, the performance profiles can be tailored to how you see fit. So, we have configurable CPU Core allocation, which can be set statically, not dynamically.

  • Knowledge Check: Cisco’s Firewalling

Cisco integrated its original Sourcefire’s next-generation security technologies into Cisco’s existing firewall solutions called the Adaptive Security Appliances (ASA). Sourcefire technologies were running as a separate service module in that early implementation. Later, Cisco designed new hardware platforms to support Sourcefire technologies natively.

They are named Cisco Firepower, later rebranded as Cisco Secure Firewall, which is the current implementation of Firewalling. In the new implementation, Cisco converges Sourcefire’s next-generation security features, open-source Snort, and ASA’s firewall functionalities into a unified software image. This unified software is called the Firepower Threat Defense (FTD). After rebranding, this software is now known as the Cisco Secure Firewall.

Secure Firewalling Feature: Clustering

Your Secure Firewall deployment can also expand as your organization grows to support its network growth. You do not need to replace your existing devices for additional horsepower; you can add threat defense devices to your current deployment and group them into a single logical cluster to support additional throughput. 

A clustered logical device offers higher performance, scalability, and resiliency at the same time. You can create a cluster between multiple chassis or numerous security modules of the same chassis. When a cluster is built with various independent chassis, it is called inter-chassis clustering.

Secure Firewalling Feature: Multi-Instance

The Secure Firewall offers multi-instance capability powered by the Docker container technology. It enables you to create and run multiple application instances using a small subset of the total hardware resources of a chassis. You can independently manage the threat defense application instances as separate threat defense devices. Multi-instance capability enables you to isolate many critical elements.

Secure Firewalling Feature: High Availability

In a high-availability architecture, one device operates actively while the other stays on standby. A standby device does not actively process traffic or security events. For example, suppose a failure is detected in the active device, or there’s any discontinuation of keepalive messages from the active device.

In that case, the standby device takes over the role of the active device and starts operating actively to maintain continuity in firewall operations. An active device periodically sends keepalive messages and replicates its configurations to the standby device. Therefore, the communication channel between the peers of a high-availability pair must be robust and low-latency. 

Evolution of the Network Security

Let’s examine the evolution of network security before we get into some inbound and outbound traffic use cases. Traditionally, the Firewall was placed at the network edge, acting as a control point for the network’s ingress/egress point. The Firewall was responsible for validating communications with rule sets and policies created and enforced at this single point of control to ensure that desired traffic was allowed into and out of the network and undesirable traffic was prevented. This type of design was known as the traditional perimeter approach to security.

SASE Cloud
Diagram: Network challenges and the need for SASE Cloud.

Firewalling challenges

Today, branch office locations, remote employees, and the increasing use of cloud services drive more data away from the traditional “perimeter.” The cloud-first approach completely bypasses the conventional security control point.

Further, the overwhelming majority of business locations and users also require direct access to the Internet, where an increasing prevalence of cloud-based critical applications and data now lives. As a result, applications and data become further de-centralized, and networks become more diverse.

This evolution of network architectures has dramatically increased our attack surfaces and made the job of protecting them more complicated. Organizations started to answer this challenge with point solutions, typically adding the “best” point security product to address each new problem as it emerged. 

Because of this approach, we have seen tremendous device sprawl. Multiple security products across different vendors can pose significant management problems for network security teams, which will eventually lead to complexity and then blind spots.

Consequently, our “traditional” firewall devices are being augmented by a mixture of physical and virtual appliances—some embedded in the network, others delivered as a service, host-based, or included within public cloud environments. Regardless of the design, you will still have inbound and outbound traffic to protect.

Inbound Use case

The firewall picks up every packet, looks at different fields, examines it for signatures that could signal an attack in progress, and then repacks and sends the packet out its interfaces. This technique is still relevant: it tracks inbound traffic to tell whether someone outside or inside is accessing the private applications you want to keep secure. So, looking at every packet remains relevant for the inbound traffic use case. 

Since almost everything is encrypted these days, you need to decrypt traffic to get security value from inspection. Deep Packet Inspection (DPI) is still very relevant for inbound traffic. So, we will continue to decrypt inbound traffic for complete application threat protection, with the aim of minimal functional impact.

Outbound Use Case

Then, we need to look at outbound traffic. Here, things have changed considerably. Users often no longer pass through a firewall at all before reaching applications hosted outside the protection of your on-premises security stack and network. These are applications in the cloud, such as SaaS applications, which do not react well when network devices in the middle interfere with their traffic.

Therefore, applications such as Office 365 are designed to reduce the chances of any network or security device peeking into their traffic. For example, you could have mutual certificate authentication with the service in the cloud. So, there are a couple of options here besides the traditional DPI approach used for inbound traffic.

SASE Cloud

One way to examine SaaS-based applications and introduce some cloud security is by using Cisco Umbrella with the SASE Cloud. The SASE Cloud includes a cloud access security broker known as Cloudlock. The Cisco Umbrella CASB, delivered through the Cisco Cloudlock solution, acts as a broker that hooks into the application's backend to determine users' actions. It does this by querying the service via an Application Programming Interface (API) call, not by DPI.

Cisco Umbrella
Diagram: Cisco Umbrella. Source is Cisco

SASE Cloud and CloudLock

Cisco Cloudlock is part of the SASE cloud and provides a cloud-native cloud access security broker (CASB) that protects your cloud users, data, and apps. Cloudlock's simple, open, and automated approach uses APIs to manage the risks in your cloud app ecosystem. With Cloudlock, you can combat data breaches more quickly while meeting compliance regulations.

Cisco Umbrella also has a firewall known as the Cisco Umbrella Firewall. We can take the Cisco Umbrella Firewall to improve its policy decision using information gleaned from the CASB. In addition, we map network flows to a specific user action via cloud applications and CASB solutions. So this is one area you can look into.

Cisco Umbrella Firewall
Diagram: Cisco Umbrella Firewall with the CASB

Endpoint controls

Then, we have the endpoint, such as your desktop computer or phone. We can collect a wealth of information about each network connection, and this information can be fed to the firewall as metadata. So you can feed both the Cisco Umbrella Firewall and the Cisco Secure Firewall, again for improved policy.

The firewall, whether the Cisco Secure Firewall or the Cisco Umbrella Firewall, does not need to decrypt any traffic. Instead, we can get client context discovery via passive fingerprinting using an agent on the endpoint. We can gather a wealth of attributes you can't get with DPI. So we can move away from applying DPI to everything and augment it with all these other components to get better visibility.

Data Center Security. Use Case:

Regarding data center security, network firewalls are difficult to insert for two main reasons. Firstly, because of encrypted traffic, developers implement different overlay solutions to help protect their applications. For example, we could have a service mesh overlay technology.

How does the firewall look at this traffic? The network will still have an entry point, so there will always be an edge, and that edge still needs a firewall—physical, virtual, or cloud-delivered via a SASE solution.  

In this use case, we have a private or cloud-delivered firewall that inspects the application edge. We can implement Zero Trust Network Access (ZTNA) and continuously apply a stack of relevant inline security services.

  • A key point: Client Zero Trust Network Access (ZTNA)

ZTNA has expanded well beyond network admission control. Admission control is no longer a binary yes or no. With ZTNA, user activity must be continuously tracked throughout the application session. Cisco has a Secure Client, formerly AnyConnect, which delivers ZTNA alongside the firewall. We can apply a range of technologies here, from dynamic policies and access lists for granular, posture-driven app access to single sign-on with SAML for unified authentication. ZTNA also supports certificate-based and Cisco Duo Passwordless authentication.

Cisco Secure Workload

Then, we go deeper into hybrid cloud data center use cases. First, we need to look at Cisco Secure Workload. We have network security that spins up a firewall next to the application, so instead of 30,000 signatures, you can spin up only what you need. So, these tiny firewalls and enforcement points can protect relevant workloads.

For this space, Cisco has what’s known as the Cisco Secure Workload feature. Cisco Secure workload protects the host OS and file levels in this case.

The main difference is that instead of doing the entire inspection, we can selectively inspect network and service mesh traffic with an inline firewall and API controls. This Cisco Secure Workload feature from Cisco integrates with the public cloud and cloud-native orchestrators. 

Cisco Secure Workload
Diagram: Cisco Secure Workload and FMC integration
  • A key point: Cisco Secure Workload.

With a Cisco secure workload, we ingest network telemetry from agents, Netflow/IPFIX, and VPC logs. Then, we can have policy recommendations based on observed communication. So, with all these components, we get end-to-end application protection.

This solution will help you reduce your attack surface to an absolute minimum with zero trust microsegmentation. With this approach to segmentation, we can stop threats from spreading and protect the application with zero-trust microsegmentation on any workload across any environment.

Extending the Firewall with SASE Cloud: Cisco Umbrella Firewall

The SASE Cloud with the Cisco Umbrella firewall is a good solution that can be combined with the on-premise firewall. So, if you have FTD at the edge of your network, why would you need to introduce a Cisco Umbrella Firewall or any other SASE technologies? Or if you have a SASE cloud with Cisco Umbrella, why would you need FTD?

First, it makes sense to process specific traffic locally. But the two categories of traffic that Cisco Umbrella excels in beyond any firewall are DNS and CASB. Your edge firewall is less effective against some outbound traffic, such as dynamically changing DNS and undecryptable TLS connections. DNS is the bread and butter of Cisco Umbrella.

  • Knowledge Check: Cisco DNS-layer security.

DNS requests precede the IP connection, enabling DNS resolvers to log requested domains over any port or protocol for all network devices, office locations, and roaming users. As a result, you can monitor DNS requests and subsequent IP connections to improve the accuracy and detection of compromised systems, security visibility, and network protection. 

You can also block requests to malicious destinations before a connection is even established, thus stopping threats before they reach your network or endpoints. Under the hood, Cisco Umbrella can clean your DNS traffic and stop attacks before any malicious connection is made. 
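
Conceptually, DNS-layer blocking looks something like the sketch below: log every request, and answer known-bad domains with a sinkhole address before any IP connection is made. The blocklist entries, client names, and addresses are placeholders, not Umbrella's implementation.

```typescript
// Hypothetical DNS-layer security sketch: log requests, sinkhole bad domains.
const blockedDomains = new Set<string>([
  "malware-c2.example",        // placeholder threat-intel entries
  "phishing-login.example",
]);

interface DnsRequest {
  client: string;              // device, office location, or roaming user
  domain: string;
  recordType: "A" | "AAAA" | "TXT";
}

function upstreamResolve(req: DnsRequest): string {
  return "198.51.100.10";      // stand-in for real upstream resolution
}

function resolve(req: DnsRequest): string {
  // DNS precedes the IP connection, so logging here gives visibility over
  // any port or protocol for every client.
  console.log(`${req.client} requested ${req.recordType} ${req.domain}`);

  if (blockedDomains.has(req.domain)) {
    return "203.0.113.1";      // sinkhole: the threat is stopped pre-connection
  }
  return upstreamResolve(req);
}

console.log(resolve({ client: "laptop-42", domain: "malware-c2.example", recordType: "A" }));
```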

DNS Reflection Attack
Diagram: DNS Reflection Attack.

SASE Cloud: Cisco Umbrella CASB.

The same applies to SaaS-based applications and CASB: you cannot decrypt that traffic on the edge firewall, and the firewall can't detect whether a user is carrying out data exfiltration.

With the SASE cloud, Cisco Umbrella, and its integrated CASB offering, we get better visibility into this type of traffic and can apply a risk category to certain kinds of activity. So now we have an excellent combination: the cloud security stack does what it does best and offloads processing cycles from the firewall.

Cisco Umbrella Integration

The Cisco Secure Firewall offers DNS redirection to the Cisco Umbrella Firewall. The on-premise firewall communicates with Cisco Umbrella over an API and pulls in the existing DNS policy, so the Umbrella DNS policies can be used alongside the current firewall policies. More recently, Cisco has gone one step further, and you can have a SIG tunnel, configured through the Firewall Management Center (FMC), from the Cisco Secure Firewall to Cisco Umbrella.

So there is a tunnel with a per-tunnel IKE ID, and you can bundle multiple tunnels to Umbrella. We can then load balance across multiple spoke tunnels using the per-tunnel custom IKE ID. Once set up, specific kinds of traffic can be sent down each tunnel.

 

Summary: Cisco Firewall and SASE Cloud

In today’s rapidly evolving digital landscape, organizations face the challenge of ensuring robust security while embracing the benefits of cloud-based solutions. Cisco Secure Firewall with SASE (Secure Access Service Edge) Cloud offers a comprehensive and streamlined approach to address these concerns. This blog post delved into the features and benefits of this powerful combination, highlighting its ability to enhance security, simplify network management, and optimize performance.

Section 1: Understanding Cisco Secure Firewall

Cisco Secure Firewall serves as the first line of defense against cyber threats. Its advanced threat detection capabilities and deep visibility into network traffic provide proactive protection for organizations of all sizes. By preventing unauthorized access, blocking malicious content, and detecting and mitigating advanced threats, Cisco Secure Firewall ensures a secure network environment.

Section 2: Introducing SASE Cloud

On the other hand, SASE Cloud revolutionizes how organizations approach network and security services. SASE Cloud offers a scalable and agile solution by converging network functions and security services into a unified cloud-native platform. It combines features such as secure web gateways, data loss prevention, firewall-as-a-service, and more, all delivered from the cloud. This eliminates the need for costly on-premises infrastructure and allows businesses to scale their network and security requirements effortlessly.

Section 3: The Power of Integration

When Cisco Secure Firewall integrates with SASE Cloud, it creates a formidable combination that enhances security posture while delivering optimal performance. The integration allows organizations to extend their security policies seamlessly across the entire network infrastructure, including remote locations and cloud environments. This unified approach ensures consistent security enforcement, reducing potential vulnerabilities and simplifying management overhead.

Section 4: Simplified Network Management

One of the key advantages of Cisco Secure Firewall with SASE Cloud is its centralized management and control. Administrators can easily configure and enforce security policies, monitor network traffic, and gain valuable insights through a single pane of glass. This simplifies network management, reduces complexity, and enhances operational efficiency, enabling IT teams to focus on strategic initiatives rather than mundane tasks.

Conclusion:

In conclusion, the combination of Cisco Secure Firewall with SASE Cloud provides organizations with a robust and scalable security solution that meets the demands of modern networks. By integrating advanced threat detection, cloud-native architecture, and centralized management, this potent duo empowers businesses to navigate the digital landscape confidently. Experience the benefits of enhanced security, simplified management, and optimized performance by adopting Cisco Secure Firewall with SASE Cloud.

SASE Model

SASE Model | Zero Trust Identity


In today's rapidly evolving digital landscape, businesses face numerous challenges securing their networks. The traditional security model is no longer sufficient to protect against sophisticated cyber threats. This is where the Secure Access Service Edge (SASE) model comes into play. In this blog post, we will delve into the world of SASE, exploring its key concepts, benefits, and how it revolutionizes network security.

The SASE model brings together networking and security into a unified cloud-based architecture. It combines wide area networking (WAN) capabilities with advanced security functions, creating a holistic approach to network security. The SASE model simplifies management and enhances overall security posture by consolidating and centralizing security functions.

Highlights: SASE Model and Identity

Cisco Umbrella

Once you have a SASE solution, you need to evolve it. The SASE model is unlike installing a firewall and configuring policies; you can add and enhance your SASE technology in many ways to increase your security posture. With Umbrella SASE, we are moving our security to the cloud and expanding this with the Cisco Umbrella platform and Zero Trust Identity from Cisco Duo. First, Cisco Umbrella provides the core SASE technology security functionality, such as DNS-layer filtering, and then Cisco Duo focuses on the Zero Trust Identity side.

Traditional Security Devices

Firewalls and other security services will still have a crucial role, but we must modernize the solution, especially regarding encrypted traffic and applying policies on an enterprise-wide scale. It's a good idea to start offloading functions from these devices to the SASE solution, such as Umbrella SASE. The SASE model is more of a journey than a product you can simply switch on, and it could take 3–5 years.

New Cloud Locations

The virtual private network (VPN) to the enterprise data center must remain. Even though most applications are SaaS-based, on-premise applications will still be around for compliance and security reasons, or because they are too complex to offload to the Internet. These could also be partner resources. We need a solution that satisfies all these access requirements: cloud and on-premises application access. So, we need VPN access to enterprise applications in the data center and protected DIA for SaaS-based applications.

Related: Before you proceed, you may find the following posts helpful:

  1. SD WAN SASE
  2. Zero Trust SASE
  3. SASE Definition
  4. SASE Visibility

Zero Trust Identity.

Key SASE Model Discussion Points:


  • Introduction to the SASE model and what is involved.

  • Highlighting the details of the challenging landscape along with recent trends.

  • Technical details on how to approach SASE with the SASE technology.

  • Scenario: Identity-based controls with zero trust identity.

  • Details on starting a SASE project with Umbrella SASE.

  • Discuss Cisco Duo and its key components, such as MFA and adaptive policies. 

Back to Basics: SASE Model | Zero Trust Identity

Zero Trust and SASE

Zero Trust is essential to protecting IT systems, data, and infrastructure because all organizations must move away from the traditional perimeter-based approach to security, which no longer fits the reality of an era of cloud computing and remote working. Because Zero Trust is one of the security components that enables SASE, they are complementary, but their relationship is a little more complicated.

For instance, SASE solutions often include ZTNA as one of the capabilities. Still, it may be debated whether the dependence on SD-WAN as the underlying infrastructure stands in contrast to the basic principles of Zero Trust. The risk is to assume that SD-WAN is always secure and can be trusted, but trusting a single element in the multi-layered security stack is the exact opposite of what zero trust is about.

SASE Model

SASE and Zero Trust

SASE and Zero Trust Identity Main Components 

  • SASE offers components that deliver comprehensive security and networking capabilities.

  • Seamless and secure access to applications from anywhere, anytime, and on any device.

  • SASE reduces complexity by consolidating network and security functions.

  • With its cloud-native approach, SASE enables organizations to adapt to changing network demands.

♦ Key Components of SASE

SASE is built on several key components that deliver comprehensive security and networking capabilities. These components include cloud-native security services, software-defined wide area networking (SD-WAN), zero-trust network access (ZTNA), and secure web gateways (SWG). Each component provides secure and efficient access to applications and resources.

Implementing the SASE model brings a multitude of benefits to organizations. Firstly, it enables seamless and secure access to applications from anywhere, anytime, and on any device. SASE also reduces complexity by consolidating security functions, leading to simplified management and improved operational efficiency. Additionally, it enhances scalability and agility, allowing organizations to adapt quickly to changing business needs.

Organizations increasingly adopt cloud services and embrace remote work as the digital landscape evolves. SASE is uniquely positioned to address these shifts by providing a flexible and scalable security framework. With its cloud-native approach, SASE enables organizations to adapt to changing network demands while maintaining a strong security posture.

Challenging Landscape

When you think about it, the challenges that surface must be solved by examining recent trends. Historically, most resources lived in the data center, and we could centralize our security stack. However, with users accessing the network from anywhere, we have public cloud apps with different connectivity metrics to understand.

In addition, we now have an internet/cloud-centric connectivity model, so we need to rethink our architecture to facilitate these new communication flows.

As a first step, you don’t need to throw out all your network and security appliances and jump to the SASE model. For an immediate design, you can augment your on-premises network security appliance with Umbrella SASE DNS-layer security. DNS-layer security is a good starting point with Cisco Umbrella.

Only slight changes are needed. This way, you don't have to make any significant architectural changes to get immediate benefits from SASE and its cloud-native approach to security.

SASE Technology with Zero Trust Identity

You can then further this SASE model to include Zero Trust Identity with, for example, Cisco Duo. With Cisco Duo, we are moving from inline security inspection on the network to securing users at the endpoint or the application layer. An actual Zero Trust Identity strategy changes the level of access or trust based on contextual data about the user or device requesting access.

Zero Trust Identity
Diagram: Zero Trust Identity. Identity is the new perimeter.

We are now heading toward identity as the new perimeter. Identity, in its variety of forms, is the new perimeter, and it needs to be protected with mechanisms beyond those you may already have in your existing environments.

We have identity sprawl with potentially unprecedented access, making any of the numerous identities a high-value target for bad actors to compromise. For example, in a multi-cloud environment, it’s common for identities to be given a dangerous mix of entitlements, further extending the attack surface area security teams need to protect.

Identity attacks are hard to detect

Nowadays, bad actors can use even more gaps and holes as entry points. With the surge of identities, including humans and non-humans, IT security administrators face the challenge of containing and securing the identity sprawl as the attack surface widens. 

What makes this worse is that identity-driven attacks are hard to detect, which is the primary issue facing security teams. How do you know whether it is a bad actor or a legitimate sysadmin using the privileged controls? 

Security teams must find a reliable way to monitor suspicious user behavior to determine the signs of compromised identities. For this, there needs to be some behavioral analysis happening in the background, looking for deviations from baselines. Once a variation has occurred, we can trigger automation, such as with a SOAR playbook that can, for example, perform threat hunting.
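
A simple way to picture this baselining is a z-score check against each user's historical activity, handing anything anomalous to automation. The metric, thresholds, and playbook trigger below are hypothetical.

```typescript
// Hypothetical behavioral baselining: flag deviations from a user's norm.
interface Baseline {
  mean: number;     // e.g. average privileged commands per day
  stdDev: number;
}

const baselines = new Map<string, Baseline>([
  ["alice", { mean: 12, stdDev: 4 }],
]);

function triggerSoarPlaybook(user: string, reason: string): void {
  // Stand-in for kicking off an automated threat-hunting playbook.
  console.log(`SOAR playbook started for ${user}: ${reason}`);
}

function checkForDeviation(user: string, todayCount: number): void {
  const base = baselines.get(user);
  if (!base) return;

  // Simple z-score: how many standard deviations from the user's norm?
  const z = (todayCount - base.mean) / base.stdDev;
  if (z > 3) {
    triggerSoarPlaybook(user, `privileged activity ${todayCount} vs baseline ${base.mean}`);
  }
}

checkForDeviation("alice", 40);   // well above baseline, triggers the playbook
```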

Example: Social-Engineering Toolkit. 

Credential harvester or phishing attacks aim to trick individuals into providing their sensitive login information through fraud. Attackers often create deceptive websites or emails resembling legitimate platforms or communication channels. These masquerading techniques exploit human vulnerabilities, such as curiosity or urgency, to deceive unsuspecting victims.

The Mechanics Behind the Attack

To execute a successful credential harvester attack, perpetrators typically utilize various methods. One common approach involves creating fake login pages that mimic popular websites or services. Unaware of the ruse, unsuspecting victims willingly enter their login credentials, unknowingly surrendering their sensitive information to the attacker. Another technique involves sending phishing emails that appear genuine, prompting recipients to click on malicious links and unknowingly disclose their login details.

Consequences of Credential Harvester Attacks

The consequences of falling victim to a credential harvester attack can be severe. From personal accounts to corporate networks, the compromised login information paves the way for unauthorized access, data theft, identity theft, and financial fraud. It is not uncommon for attackers to leverage their credentials to gain entry into other platforms, potentially compromising sensitive information and causing extensive damage to individuals or organizations.

Mitigating the Risks

Thankfully, several proactive measures can mitigate the risks associated with credential harvester attacks. First and foremost, user education plays a crucial role. Raising awareness about the existence of these attacks and providing guidance on identifying phishing attempts can empower individuals to make informed decisions. Implementing robust email filters, web filters, and antivirus software can also help detect and block suspicious activities.

Two-factor Authentication as a Defense Mechanism

One highly effective strategy to fortify defenses against credential harvester attacks is the implementation of two-factor authentication (2FA). By requiring an additional verification step, such as a unique code sent to a registered mobile device, 2FA adds an extra layer of security. Even if attackers obtain login credentials, they would still be unable to access the account without secondary verification.
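
Many 2FA codes are generated by an authenticator app using TOTP rather than sent over SMS. As a rough illustration of how such a code is derived (assuming Node.js and a raw, already-decoded shared secret rather than the usual base32 string), consider the following sketch.

```typescript
import { createHmac } from "node:crypto";

// Sketch of deriving a time-based one-time password (TOTP, RFC 6238).
function totp(secret: Buffer, timeStepSeconds = 30, digits = 6): string {
  // Counter = number of 30-second steps since the Unix epoch.
  const counter = Math.floor(Date.now() / 1000 / timeStepSeconds);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));

  // HMAC-SHA1 over the counter, then dynamic truncation (RFC 4226).
  const hmac = createHmac("sha1", secret).update(msg).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code =
    ((hmac[offset] & 0x7f) << 24) |
    (hmac[offset + 1] << 16) |
    (hmac[offset + 2] << 8) |
    hmac[offset + 3];

  return (code % 10 ** digits).toString().padStart(digits, "0");
}

// Example with a placeholder shared secret; both the server and the app
// compute the same code, so stolen passwords alone are not enough.
console.log(totp(Buffer.from("12345678901234567890")));
```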

The Changing Landscape: Evolution to a SASE Model

The Internet: New Enterprise Network

We are stating that there has been a substantial evolution. The Internet is the new network; users and apps are more distributed, and the Internet is used to deliver those services. As a result, we have a much greater dependency on the Internet, yet its reliability is not consistent around the globe. For example, BGP is unreliable, and we regularly see BGP incidents. We need to look at other tools and solutions to layer on top of what we have to improve Internet reliability.

BGP operates over TCP port 179. BGP TCP port 179 serves as the channel through which BGP routers establish connections and exchange routing information. It is the linchpin facilitating the dynamic routing decision-making process across diverse networks. However, due to its criticality, BGP port 179 has become an attractive target for malicious actors seeking to disrupt network operations or launch sophisticated attacks.

Common Threats Targeting BGP TCP Port 179

BGP TCP Port 179 faces various security threats as the backbone of internet routing. From route hijacking to Distributed Denial of Service (DDoS) attacks, the vulnerabilities within this port can have severe consequences on network stability and data integrity. Understanding these threats is essential in implementing effective countermeasures.
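
One common hardening step is to accept TCP connections on port 179 only from explicitly configured peers. The sketch below illustrates the idea with a placeholder peer list; real routers enforce this with ACLs, TTL security, and MD5/TCP-AO authentication rather than application code, and IPv6-mapped address normalization is omitted here.

```typescript
import net from "node:net";

// Hypothetical sketch: only accept BGP sessions from configured peers.
const configuredPeers = new Set<string>(["192.0.2.1", "192.0.2.2"]);

const server = net.createServer((socket) => {
  const peer = socket.remoteAddress ?? "";
  if (!configuredPeers.has(peer)) {
    console.log(`Rejected BGP session attempt from ${peer}`);
    socket.destroy();                 // drop anything not in the peer list
    return;
  }
  console.log(`Accepted BGP session from configured peer ${peer}`);
  // ... BGP OPEN/KEEPALIVE handling would follow here ...
});

server.listen(179);                    // BGP listens on TCP port 179
```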

Port 179
Diagram: Port 179 with BGP peerings.

Also, the cloud is the new data center. We no longer control and own the data and apps in the public cloud. Instead, these apps communicate with other public clouds and back to on-premises systems to access applications or databases that can't be moved to the cloud. This is a new paradigm to solve for. We are also reducing the types of applications on our enterprise network.

SASE Technology
Diagram: Challenging landscape. Need for SASE technology.

Most organizations are trying to minimize custom applications and streamline toward SaaS-based applications, and for most, a large share of applications can be SaaS-based. These applications are hosted in public and private clouds and accessed online, with a service model that is reachable only via the public Internet.

We also want the same experience at home as in the office: when users return to the office, all the network and security functions they had at home should stay the same.

How To Approach The SASE Model?

How do you do this? There are two ways. You can facilitate this with a bespoke, self-managed platform made up of many on-premise network and security stacks, stitching the products together and then building your own PoPs. Or you can get away from this and consume it as a service from a SASE provider, giving you a cloud consumption model for all network and security services. This is the essence of the SASE model: why not offload all the complexity to someone else?

Umbrella SASE and SASE Technology

Network Connectivity and Network Security

You want an any-to-any connectivity model, even though your users and applications are highly distributed. What types of technology do you need to have to support this? You need two essential things: network connectivity and security services. Network connectivity, such as SD-WAN for branch locations. With everything, you start with network connectivity, and then you can layer security services on top of this stack.

These services include BGP sinkhole, DNS protection, secure firewall, WAN encryption, web security, and Cisco Duo with zero trust access. We have many components that need to work together, and you will have a lot of infrastructure components used and managed.

End-to-End Visibility

We also need to have good visibility into the full end-to-end path. You can use your SASE technology with Cisco ThousandEyes for end-to-end visibility and tools to orchestrate all of this together. This has many challenges, such as building and operating these components together.

A better way is to have all these services available via one unified portal. For example, we can have network and security as a service, where you add the services you need on demand to each Umbrella SASE PoP, outsourced to a SASE provider. Some PoPs can filter at the DNS layer, while others run the entire security stack, turning functions on and off at will.

Policy Maintenance

This should be wrapped up with policy maintenance so you can implement policy at any point, along with good scalability and multi-tenancy. You also want to lower costs, and adopting SASE can help, not to mention the skills required. With the SASE model, you can offload this to the experts and consume it as a service.

The Issue of Provisioning

You can now bring users closer to the application with the Umbrella SASE PoP architecture. We also gain access to a more modern and diverse toolkit by employing SASE technology. Remember that a big issue with on-premise hardware appliances is that we always over-provision them to handle traffic spikes that may only happen occasionally, which drives up cost and management overhead.

With SASE, we have the agility of a software-based model where we can scale up and down, something you cannot easily do with a hardware-based model. If you need more scale, you or your Umbrella SASE provider can introduce another Virtual Network Function (VNF) and scale out in software instead of deploying a new hardware appliance.

Required SASE Technology: Encryption Traffic.

We have inline security services that inspect traffic and try to glean metadata about what is happening. Inspection was easy when we connected to a web page on port 80 and everything was in clear text; standard firewall monitoring could see what the user was doing. But now we have end-to-end encryption between the user device and the applications.

Traditional IDS/IPS and firewalls struggle to gain insights into encrypted traffic. We need visibility at the endpoint and the application layer to add context and to understand whether there is any malicious activity inside the encrypted traffic. For encrypted traffic, appropriate visibility matters even more than control.
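To make the idea of metadata-only inspection concrete, here is a minimal Python sketch showing the kind of handshake information that remains visible even when the payload is encrypted. The target host is just an example; this illustrates the concept, not how any particular SASE product performs encrypted traffic analysis.

```python
import socket
import ssl

def tls_metadata(host: str, port: int = 443) -> dict:
    """Complete a TLS handshake and report handshake metadata only.

    The payload stays encrypted; we only look at what the handshake
    itself exposes (protocol version, cipher suite, server certificate).
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "sni": host,                   # the name we asked for
                "tls_version": tls.version(),  # e.g. 'TLSv1.3'
                "cipher": tls.cipher()[0],     # negotiated cipher suite
                "issuer": dict(x[0] for x in cert.get("issuer", ())),
                "not_after": cert.get("notAfter"),
            }

if __name__ == "__main__":
    print(tls_metadata("www.example.com"))
```

Even this small amount of context (who issued the certificate, which TLS version and cipher were negotiated, which name was requested) is the raw material that encrypted traffic analysis builds on.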


Required SASE Technology: SIEM with Splunk and Machine Data

You are also going to need a SIEM tool. Splunk can be used as the primary SIEM, collecting logs from various data sources to provide insights into the traffic traversing the network. Remember that machine data is everywhere, flowing from every device we interact with and making up around 90% of today’s data, and harnessing it can give you powerful security insights.

The machine data can be in many formats, such as structured and unstructured. As a result, it can be challenging to predict and process. There are plenty of options for storing data. Collecting all security-relevant data and turning all that data into actionable intelligence, however, is a different story.

Example Solution: Splunk

This is where Splunk comes into play: it can take any data and create an intelligent, searchable index, adding structure to previously unstructured data. This allows you to extract all sorts of insights useful for security and user behavior monitoring. Splunk is a big data platform for machine data; it collects raw, unstructured data and converts it into searchable events, helping you get to know your data quickly.
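As a small illustration of feeding machine data into Splunk, the sketch below pushes a single event to Splunk's HTTP Event Collector (HEC). The host, port, token, and sourcetype are placeholders; it assumes HEC is enabled on your Splunk deployment and that the requests library is installed.

```python
import json
import requests  # third-party: pip install requests

# Placeholder values - adjust for your own Splunk deployment.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # hypothetical token

def send_event(event: dict, sourcetype: str = "sase:dns") -> None:
    """Push one machine-data event to Splunk's HTTP Event Collector."""
    payload = {"event": event, "sourcetype": sourcetype}
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=5,
        verify=True,  # point to an internal CA bundle if Splunk uses one
    )
    resp.raise_for_status()

if __name__ == "__main__":
    send_event({"user": "alice", "domain": "example.com", "action": "blocked"})
```

Once events like this are indexed, they become searchable alongside every other data source feeding the SIEM.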

Umbrella SASE – Starting

Start with DNS Protection

As a first step in the SASE model, we need DNS protection; this is typically the first SASE technology implemented in a SASE solution. Cisco Umbrella can be used here. Cisco Umbrella is a recursive DNS service, and because you can learn a lot from DNS requests, it is a great place to start with security. You can see attacks before they launch, gain the visibility to protect access anywhere, and block threats before the connection is ever made.

Below is a recap on DNS. DNS, by default, uses UDP and works with several records.

DNS and TTL 

DNS can be updated dynamically and often uses very short TTLs. If you can interact with that traffic at a base level, regardless of where the user is, you can see what they are doing; for example, you can see what record updates happen if a malware attack occurs. DNS is also very lightweight, so we can protect the endpoint and block malware before the connection is even attempted.

Suppose someone clicks on a phishing link or malware calls back to a C&C server for additional attack instructions. In that case, the connection simply does not happen, and you don’t need to process this traffic across a firewall or other security stack that can add latency.

Umbrella SASE
Diagram: Cisco Umbrella

Connecting to Umbrella SASE does not cause latency issues. The hardware that used to provide this protection can be offloaded to the cloud, so you no longer need additional appliances to absorb traffic spikes and growth at the DNS layer. Cisco Umbrella gives you accuracy at the DNS layer without that overhead: you can control this traffic, see what is going on, and see who is doing what and from where. All of the traffic can be identified through DNS.

So how would you implement Umbrella SASE?

Gaining Insight: DNS 

Point the existing DNS resolver to Cisco Umbrella, then connect users and gain insight into DNS requests for on-network and off-network traffic. Start with passive monitoring, then move on to blocking. You want to do this without re-architecting your network and with the ability to minimize false positives, so pointing your existing DNS to Umbrella, a passive change, is a good starting point. Then enable blocking internally based on policy.

On the enterprise network, endpoints typically point to internal DNS servers. You can modify those internal DNS servers to forward their traffic to Cisco Umbrella for screening: the DNS query for internet-bound traffic goes to Cisco Umbrella, and Umbrella then carries out the recursive DNS queries to the authoritative DNS servers.
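As a quick way to see this in action, the following Python sketch (using the third-party dnspython library) resolves a domain once against the system resolver and once against Umbrella's publicly documented anycast resolvers, so you can spot when Umbrella rewrites or blocks an answer. The test domain and the comparison logic are illustrative assumptions, not a prescribed Umbrella deployment step.

```python
import dns.resolver  # third-party: pip install dnspython

# Cisco Umbrella's (formerly OpenDNS) public anycast resolvers.
UMBRELLA_RESOLVERS = ["208.67.222.222", "208.67.220.220"]

def resolve_with(nameservers, domain: str):
    """Resolve a domain against a specific set of resolvers."""
    resolver = dns.resolver.Resolver(configure=(nameservers is None))
    if nameservers:
        resolver.nameservers = nameservers
    answer = resolver.resolve(domain, "A")
    return sorted(r.address for r in answer)

if __name__ == "__main__":
    domain = "example.com"  # swap in a known test domain to verify blocking
    local = resolve_with(None, domain)            # whatever the OS resolver says
    umbrella = resolve_with(UMBRELLA_RESOLVERS, domain)
    print(f"local    : {local}")
    print(f"umbrella : {umbrella}")
    if local != umbrella:
        print("Answers differ - Umbrella may be rewriting or blocking this domain.")
```

Running the same comparison from home and from the office is a simple way to confirm users are actually resolving through Umbrella in both places.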

The Role of Clients and Agents

You should also get an Umbrella client or agent onto your endpoints. An agent on the endpoint gives you additional visibility: what happens when users go home from the office? You want to maintain visibility, and an agent achieves that. What I like about SASE is that you can roll out an enterprise-wide policy in a few minutes. You can also improve DNS performance by leveraging the SASE PoPs, which should be well integrated with the authoritative DNS servers.

In summary, there are two phases. First, you can start with a network monitoring and blocking stage with DNS-layer filtering and then move to the endpoint, gaining visibility and lowering your attack surface. Now, we are heading into the zero-trust identity side of things.

Key SASE Technology: Zero Trust Identity

For additional security, we can look at Zero Trust Identity. This can be done with Cisco Duo, which provides Zero Trust Identity on the endpoint and ensures the device is healthy and secure. We need to trust the user, their endpoint, and the network they are on. In the past, we simply used the IP address as the anchor for trust. With zero trust, we can have adaptive policies and risk-based decisions, enforce least privilege with, for example, just-in-time access, and bring in far more context than IP addressing ever gave us.

zero trust identity
Diagram: Zero Trust Identity

Highlighting Cisco Duo Technologies for Umbrella SASE

Duo’s MFA (multi-factor authentication) and 2FA (two-factor authentication) app and access tools can make security resilience easy for your organization, with user-friendly features for secure access, strong authentication, and device monitoring. The following are some of the technologies used with Cisco Duo.

Multi-factor Authentication (MFA): Multi-factor Authentication (MFA) is an access security product used to verify a user’s identity at login. Using secure authentication tools adds two or more identity-checking steps to user logins.

Adaptive Access: With adaptive access, we have security policies for every situation. Now, we can gain granular information about who can access what and when. Cisco Duo lets you create custom access policies based on role, device, location, and other contextual factors. So we can take in a lot of contextual information to make decisions.

Device Verification: Also, verify any device’s trust, identify risky devices, enforce contextual access policies, and report on device health using an agentless approach or by integrating your device management tools.

Single Sign-On (SSO): Single sign-on from Duo provides users with an easy and consistent login experience for any application, whether on-premises or cloud-based. With SSO, we connect to one platform for access to all of our applications, not just SaaS-based applications but also custom applications. CyberArk is strong in this space, too.

  • Key Technology: Adaptive policies

First, adaptive policies. Cisco Duo has built a cloud platform where you can set up adaptive policies to check for anomalies and then give the user an additional check. This is like step-up authentication. Then, we move towards conditional access, a step beyond authentication.

Conditional access goes beyond authentication to examine the context and risk of each access attempt. Contextual factors may include consecutive login failures, geo-location, type of user account, or device IP, and are used to grant or deny access. Based on those factors, access may also be granted only to specific network segments.
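To make the decision logic tangible, here is a hedged, generic sketch of how contextual factors might feed a grant, step-up, or deny decision. It is not Duo's actual policy engine; the thresholds, network ranges, and country list are invented for illustration.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

CORPORATE_RANGES = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]
HIGH_RISK_COUNTRIES = {"XX"}  # placeholder ISO country codes

@dataclass
class AccessContext:
    user: str
    consecutive_failures: int
    country: str
    device_ip: str
    device_managed: bool

def access_decision(ctx: AccessContext) -> str:
    """Return 'deny', 'mfa' (step-up), or 'allow' based on context."""
    if ctx.consecutive_failures >= 5 or ctx.country in HIGH_RISK_COUNTRIES:
        return "deny"
    on_corporate_net = any(
        ip_address(ctx.device_ip) in net for net in CORPORATE_RANGES
    )
    if not on_corporate_net or not ctx.device_managed:
        return "mfa"   # unusual context: challenge with step-up authentication
    return "allow"

if __name__ == "__main__":
    ctx = AccessContext("alice", 0, "IE", "84.12.9.7", device_managed=True)
    print(access_decision(ctx))  # off the corporate network -> 'mfa'
```

A real product evaluates many more signals, but the shape of the decision (context in, allow/step-up/deny out) is the same.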

  • Key Technology: Risk-based decisions 

The identity solution should be configurable to allow SSO access, challenge the user with MFA, or block access based on predefined conditions set by policy. Look for a solution that supports a broad range of conditions, such as IP range, day of the week, time of day, time range, device OS, browser type, country, and user risk level.

These context-based access policies should be enforceable across users, applications, workstations, mobile devices, servers, network devices, and VPNs. A key question is whether the solution makes risk-based access decisions using a behavior profile calculated for each user.

  • Key Technology: Enforce Least Privilege and JIT Techniques

Secure privileged access and manage entitlements. For this reason, many enterprises employ a least privilege approach, where access is restricted to the resources necessary for the end-user to complete their job responsibilities with no extra permissions. 

A standard technique here is just-in-time (JIT) access, which dynamically elevates rights only when needed. Implementing JIT ensures that identities have only the appropriate privileges, when necessary, and for the least time required. The solution should allow JIT elevation and access on a “by request” basis for a predefined period, with a full audit of privileged activities; full administrative rights or application-level access can be granted, time-limited, and revoked.
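The following minimal sketch illustrates the JIT idea: a privilege is granted on request for a fixed window, checked on use, and automatically revoked and audited when the window lapses. It is a toy in-memory model, not a real PAM product.

```python
import time
import uuid

# In-memory stand-in for a privileged-access store; a real PAM product
# would persist grants and forward the audit trail to a SIEM.
GRANTS: dict = {}
AUDIT_LOG: list = []

def request_elevation(user: str, role: str, ttl_seconds: int = 900) -> str:
    """Grant a time-limited privilege 'by request' and audit it."""
    grant_id = str(uuid.uuid4())
    GRANTS[grant_id] = {
        "user": user,
        "role": role,
        "expires_at": time.time() + ttl_seconds,
    }
    AUDIT_LOG.append((time.time(), "grant", user, role, grant_id))
    return grant_id

def is_authorized(grant_id: str) -> bool:
    """Check the grant and revoke it automatically once it expires."""
    grant = GRANTS.get(grant_id)
    if grant is None:
        return False
    if time.time() > grant["expires_at"]:
        del GRANTS[grant_id]
        AUDIT_LOG.append((time.time(), "expired", grant["user"], grant["role"], grant_id))
        return False
    return True

if __name__ == "__main__":
    gid = request_elevation("alice", "db-admin", ttl_seconds=5)
    print(is_authorized(gid))   # True within the window
    time.sleep(6)
    print(is_authorized(gid))   # False - the privilege has lapsed
```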

A final note: Zero Trust Identity

The identity-centric focus of zero trust uses an approach to security to ensure that every person and every device granted access is who and what they say they are. It achieves this authentication by focusing on the following key components:

  1. The network is always assumed to be hostile. 
  2. External and internal threats always exist on the network. 
  3. Network locality alone is not sufficient for deciding trust in a network; other contextual factors, as discussed, must be taken into account.
  4. Every device, user, and network flow is authenticated and authorized. All of this must be logged.
  5. Security policies must be dynamic and calculated from as many data sources as possible.

Summary SASE Model and Identity

Organizations face numerous challenges in ensuring secure and efficient network connectivity in today’s rapidly evolving digital landscape. This blog post delved into the fascinating world of the Secure Access Service Edge (SASE) model and its intersection with the Zero Trust Identity framework. Organizations can fortify their networks and safeguard their critical assets by understanding the key concepts, benefits, and implementation considerations of these two security approaches.

Section 1: Understanding the SASE Model

The SASE Model, an innovative framework introduced by Gartner, combines network security and wide-area networking into a unified cloud-native service. This section explores the core principles and components of the SASE Model, such as secure web gateways, data loss prevention, and cloud access security brokers (CASB). By converging network and security functions, the SASE Model enables organizations to embrace a more streamlined and scalable approach to network security.

Section 2: Unpacking Zero Trust Identity

Zero Trust Identity is a security paradigm emphasizing continuous verification and granular access controls. This section delves into the fundamental principles of Zero Trust Identity, including the concepts of least privilege, multifactor authentication, and continuous monitoring. By adopting a zero-trust approach, organizations can mitigate the risk of unauthorized access and minimize the impact of potential security breaches.

Section 3: Synergies and Benefits

This section explores the synergies between the SASE Model and Zero Trust Identity. Organizations can establish a robust security posture by leveraging the SASE Model’s network-centric security capabilities alongside the granular access controls of Zero Trust Identity. The seamless integration of these approaches enhances visibility, minimizes complexity, and enables dynamic policy enforcement, thereby empowering organizations to protect their digital assets effectively.

Section 4: Implementation Considerations

Implementing the SASE Model and Zero Trust Identity requires careful planning and consideration. This section discusses key implementation considerations, such as organizational readiness, integration challenges, and scalability. By addressing these considerations, organizations can successfully deploy a comprehensive security framework that aligns with their unique requirements.

Conclusion:

In conclusion, the SASE Model and Zero Trust Identity are two powerful security approaches that, when combined, create a formidable defense against modern threats. Organizations can establish a robust, scalable, and future-ready security posture by adopting the SASE Model’s network-centric security architecture and integrating it with the granular access controls of Zero Trust Identity. Embracing these frameworks enables organizations to adapt to the evolving threat landscape, protect critical assets, and ensure secure and efficient network connectivity.


SASE Visibility with Cisco ThousandEyes


Introduction: In today's rapidly evolving digital landscape, the need for robust network visibility is paramount. As organizations embrace Secure Access Service Edge (SASE) frameworks, ensuring comprehensive visibility becomes even more crucial. In this blog post, we will explore how Cisco ThousandEyes empowers enterprises with enhanced SASE visibility, enabling them to optimize network performance, strengthen security, and elevate user experience.

The traditional network perimeter has become increasingly obsolete with the rise of cloud applications, mobile workforces, and the proliferation of edge computing. Secure Access Service Edge (SASE) architecture emerges as a holistic approach that combines network security and wide-area networking capabilities. This convergence enables organizations to streamline their network infrastructure while ensuring robust security measures.


Highlights: SASE and Visibility

Proactive Approach

The following post discusses SASE visibility for the Cisco SASE solution, known as Cisco Umbrella SASE, with Cisco ThousandEyes. Combining Cisco ThousandEyes with your SASE VPN gives you end-to-end visibility into the SASE security stacks and all network paths, including the intermediate nodes. All of this can be consumed from Cisco ThousandEyes, enabling a proactive approach to monitoring your SASE solution, which is really a bundle of components.

Network Visibility

This post aims to help you gain valuable insights and guide you in deploying the proper network visibility and observability for your Cisco SASE solution. Cisco ThousandEyes has several agent deployment models you can use, depending on whether you want visibility into remote workers, users at a branch site, or even agent-to-agent testing.

Remember that ThousandEyes is not just for a Cisco SASE solution; it has multiple monitoring use cases, of which Cisco Umbrella SASE is just one. ThousandEyes also has good integrations with Cisco AppDynamics for full-stack end-to-end observability. First, let’s do a quick recap on the SASE definition.

Related: Before you proceed, you may find the following posts helpful:

  1. Zero Trust SASE
  2. SD-WAN SASE
  3. SASE Solution
  4. Dropped Packet Test
  5. Secure Firewall
  6. SASE Definition

Cisco Umbrella SASE

Key SASE Visibility Discussion Points:


  • Introduction to SASE visibility and what is involved.

  • Highlighting the details of the challenging landscape. Nothing is in your control.

  • Technical details on the issues of Internet stability and cloud connectivity.

  • Scenario: Monitoring the SD-WAN underlay and overlay. Cisco SASE solution.

  • Details on monitoring remote workers and branch office locations. SASE VPN.

  • Discussion on Cisco ThousandEyes agent deployment model.

Back to Basics: SASE Visibility

♦ Key Features of SASE Visibility

SASE visibility offers many features that empower organizations to gain deep insights into their network infrastructure. These features include:

1. Real-time Monitoring: SASE visibility solutions continuously monitor network traffic, providing instant visibility into application performance, bandwidth utilization, and network latency.

2. Advanced Analytics: By leveraging sophisticated analytics algorithms, SASE visibility solutions can identify trends, anomalies, and potential threats, enabling proactive network management and security.

3. User Behavior Analysis: SASE visibility allows organizations to understand user behavior patterns, such as application usage, location, and device preferences, enabling personalized experiences and targeted security measures.

SASE Visibility

SASE and Cisco ThousandEyes


  • SASE visibility solutions continuously monitor network traffic.

  • SASE visibility solutions can identify trends, anomalies, and potential threats.

  • SASE visibility allows organizations to understand user behavior patterns.

  • Cisco ThousandEyes, a leading network intelligence platform, seamlessly integrates with SASE visibility.

Integration with Cisco ThousandEyes

Cisco ThousandEyes, a leading network intelligence platform, seamlessly integrates with SASE visibility, amplifying its capabilities. With this integration, organizations can leverage ThousandEyes’ comprehensive network monitoring and troubleshooting capabilities, combined with SASE visibility’s holistic approach. This collaboration empowers organizations to identify network issues, optimize performance, and ensure a secure and seamless user experience.

The integration of SASE visibility and Cisco ThousandEyes brings forth numerous benefits for organizations, including:

1. Enhanced Network Performance: By combining real-time monitoring and advanced analytics, organizations can identify and resolve performance bottlenecks, ensuring optimal network performance and application delivery.

2. Improved Security: SASE visibility, along with Cisco ThousandEyes, enables organizations to detect and mitigate potential security threats, ensuring robust network security and data protection.

3. Simplified Network Management: The unified approach of SASE visibility and Cisco ThousandEyes simplifies network management, providing a single pane of glass for monitoring, troubleshooting, and security operations.

Example Vendor: Cisco Umbrella SASE

Cisco Umbrella SASE provides recursive DNS services and helps organizations securely embrace direct internet access (DIA). We don’t need to backhaul all traffic to the enterprise data center when applications are hosted in the cloud. There will still be applications hosted in the data center, and for those we can use SD-WAN.

Cisco Umbrella started with DNS security and then grew to include the following features, all delivered from a single cloud security service: DNS-layer security and interactive threat intelligence, a secure web gateway, a firewall, cloud access security broker (CASB) functionality, and integration with Cisco SD-WAN.

The following diagram shows several Cisco SASE solution PoPs connecting to form a SASE fabric. At each PoP location, we have network and security functions. A viable way to connect PoPs over large distances is with MPLS or Segment Routing.

Understanding MPLS

MPLS, short for Multiprotocol Label Switching, revolutionized how network data packets are forwarded. By assigning labels to packets, MPLS enables routers to make forwarding decisions based on these labels rather than examining the packet’s entire header. This label-switching technique improves network efficiency, reduces processing overhead, and enables traffic engineering capabilities.

MPLS forms an overlay, and in the core everything is label-switched. The core, represented by the P node below, does not need customer routes to provide end-to-end reachability, so it can focus on what matters most to the core: speed. All BGP prefixes are held on the PE nodes.

MPLS overlay
Diagram: MPLS Overlay

Exploring Segment Routing

Segment Routing takes a different approach to packet forwarding by leveraging the concept of source routing. Instead of relying on intermediate routers to make forwarding decisions, Segment Routing allows the source node to specify the path the packet should take through the network. This flexibility simplifies network design and enables enhanced traffic engineering, faster convergence, and greater scalability.
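A toy model helps show the contrast with hop-by-hop forwarding: the head-end encodes the whole path as an ordered segment list, and each node simply pops the top segment and forwards, keeping no per-flow state in the core. The node names and topology below are made up; this is conceptual, not an SR-MPLS or SRv6 implementation.

```python
# Toy source-routing model: the segment list is the path.
TOPOLOGY = {
    "A": {"B": 1}, "B": {"C": 1, "D": 1}, "C": {"E": 1}, "D": {"E": 1}, "E": {},
}

def forward(packet: dict, node: str) -> None:
    """Process a source-routed packet at one node."""
    if not packet["segments"]:
        print(f"{node}: delivered payload {packet['payload']!r}")
        return
    next_hop = packet["segments"].pop(0)   # top segment tells us where to go
    if next_hop not in TOPOLOGY[node]:
        raise ValueError(f"{node} has no link to {next_hop}")
    print(f"{node} -> {next_hop} (remaining segments: {packet['segments']})")
    forward(packet, next_hop)

if __name__ == "__main__":
    # The head-end picks an explicit path A-B-D-E, e.g. to steer around a congested link via C.
    pkt = {"segments": ["B", "D", "E"], "payload": "hello"}
    forward(pkt, "A")
```

The key point is that only the head-end holds the path decision; every other node just executes the next instruction in the segment list.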

Enhanced Traffic Engineering

By leveraging MPLS and Segment Routing, network operators gain precise control over traffic paths, allowing for optimized utilization of network resources. This fine-grained traffic engineering capability enables better load balancing, improved Quality of Service (QoS), and efficient bandwidth allocation.

Scalability and Simplified Network Design

The label-switching nature of MPLS and the source-routing approach of Segment Routing contribute to simplified network architectures and improved scalability. These technologies provide the ability to efficiently handle increasing traffic demands while reducing complexity and operational costs.

Cisco SASE Solution
Diagram: Cisco SASE Solution.

Challenging Landscape: Out of your control

Now that workers are everywhere and cloud-based applications abound, the Internet is the new enterprise network. The perimeter has moved to the edges, with most devices and components out of the enterprise's control, and this has many consequences. So how do enterprises ensure a good digital experience when they no longer own the underlying transport, services, and applications their business relies on?

With these new complex and dynamic deployment models, we now have significant blind spots. Network paths are now much longer than they were in the past. Nothing is just one or two hops away. If you do a traceroute from your SASE VPN client, it may seem like one hop, but it’s much more.

Outside of SASE, VPNs come in many shapes and sizes. In the screenshot below, we have DMVPN over IPsec. DMVPN is an overlay network technique that is designed and deployed in phases; below, we have DMVPN phase 3, which enables spoke-to-spoke on-demand tunnels.

DMVPN over IPsec
Diagram: DMVPN over IPsec

Multiple Segments & Multiple Components

We also have a lot of complexity, with multiple segments and different types of components such as the Internet, security providers like Zscaler, and cloud providers. All of this is out of your control; as a rough estimate, on average 80% of the path could be out of your control. So you need to pay immediate attention to visibility into the underlay, the applications, and the service dependencies.

SASE VPN
Diagram: SASE VPN. Many components and blind spots.

The way forward: Understanding the SASE VPN end-to-end

Firstly, you need to gain visibility into the network underlay. If you do a traceroute, you may see only one hop, but you need insight into every Layer 3 hop across the underlay, as well as the Layer 2, firewalling, and load-balancing services in the path.

Secondly, you need to monitor business-critical applications efficiently and fully understand how users experience an application: full-page load times, the metrics that matter most to users, and multistep transactions beyond the application’s front door. This includes login availability along with the entire application workflow.

Thirdly, you need actionable visibility into service dependencies. This enables you to detect, for example, service disruptions in ISP networks and DNS providers, and to see how they impact application availability, response times, and page load performance.
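As a simple illustration of the first point, underlay hop visibility, the sketch below is a classic UDP/ICMP traceroute in Python. It needs root privileges for the raw ICMP socket, and the destination is a placeholder; dedicated agents do this continuously and with far richer data.

```python
import socket

def trace(dest: str, max_hops: int = 20, port: int = 33434, timeout: float = 2.0):
    """Reveal the Layer 3 underlay hop by hop (requires root for the raw socket)."""
    dest_ip = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        recv.settimeout(timeout)
        recv.bind(("", port))
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send.sendto(b"", (dest_ip, port))
        try:
            _, addr = recv.recvfrom(512)   # ICMP time-exceeded from the hop router
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_ip:
            break

if __name__ == "__main__":
    trace("example.com")
```

Even this basic probe makes the point: the "single hop" a VPN client reports is really a chain of routers, any one of which can be the source of loss or latency.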

 

Testing DNS

Always test the DNS servers, both internal and external; if DNS does not work, you will have problems with everything. So you need to test both internal and external domains against the DNS servers your users are using, because every transaction starts with a DNS lookup. DNS dates back to the early 1980s and manages the mapping between names and numbers.

A hierarchy of servers is involved in the DNS process, supporting its various steps: requesting website information, contacting the recursive DNS servers, querying the authoritative DNS servers, accessing the DNS record, and so on. You must consider the performance of your network’s DNS servers, resolvers, and records, which can span various vendors across the DNS hierarchy.
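A quick way to start testing is to time lookups against each resolver your users depend on. The sketch below uses the third-party dnspython library; the resolver addresses and test domains are placeholders to substitute with your own internal and external servers.

```python
import time
import dns.resolver  # third-party: pip install dnspython

# Placeholders: substitute the internal and external resolvers your users rely on.
RESOLVERS = {
    "internal": "10.1.1.53",
    "umbrella": "208.67.222.222",
    "public":   "8.8.8.8",
}

def timed_lookup(server: str, domain: str) -> float:
    """Return resolution time in milliseconds for one domain against one resolver."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 3
    start = time.perf_counter()
    resolver.resolve(domain, "A")
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    for name, server in RESOLVERS.items():
        domain = "intranet.example.com" if name == "internal" else "example.com"
        try:
            ms = timed_lookup(server, domain)
            print(f"{name:9s} {server:16s} {ms:6.1f} ms")
        except Exception as exc:
            print(f"{name:9s} {server:16s} FAILED ({exc})")
```

Run it regularly from both on-network and off-network locations and you have the beginnings of a DNS health baseline.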


The Internet is unstable.

The first issue is that the Internet is fragile. There are around 14,000 BGP routing incidents per year, covering a range of outages and attacks on the BGP protocol and its peering relationships over TCP port 179. Border Gateway Protocol (BGP) is the glue of the Internet backbone, so when attacks and outages happen, the effects ripple across different Autonomous Systems (AS). And BGP, in practice, is not stable.

As a result, cloud connectivity that rides over the Internet will not be stable either. Many cloud and security providers rely on the public Internet instead of a private backbone to carry traffic, so their network performance inherits the Internet's instability.

BGP has neighbor relationships that operate over TCP port 179. While BGP is essential for internet routing, it is not immune to security vulnerabilities. Attackers can exploit weaknesses in BGP implementations or misuse BGP messages to carry out malicious activities. Unsecured BGP TCP Port 179 can be an entry point for various attacks, including route hijacking, route leaks, and distributed denial-of-service (DDoS) attacks.

Introducing Cisco ThousandEyes

You lose control and visibility when WAN connectivity and business-critical applications migrate to shared infrastructure, the Internet, and public cloud locations. One way to gain back visibility and control is with Cisco ThousandEyes.

Cisco ThousandEyes allows you to monitor your user’s digital experience against software as a service and on-prem applications, regardless of where your users are, through the essential elements of your SASE architecture. SASE is not just one virtual machine (VM) or virtual network function and consists of various technologies or VNFs such as SD-WAN, SWG, VPN, and ZTNA. 

Introducing Cisco SASE Solution: Cisco Umbrella SASE

We know the SASE definition and the convergence of networking and security in cloud-native solutions with global PoP. Cisco SD-WAN is a great starting point for your Cisco SASE solution, especially SD-WAN security, which has been mainstream for a while now.

But what would you say about gaining the correct visibility into your SASE model? We have a lot of networking and security functionality now bundled into PoPs, along with different ways to connect to the PoP, whether you are at home or working from the branch office. 

So, if you are at home, you will have a VPN client and go directly to Cisco Umbrella SASE. If you are in the office, you will likely connect to the SASE PoP or an on-premises application via Cisco SD-WAN. The SD-WAN merges with the SASE PoP over redundant IPsec tunnels; you can have up to eight IPsec tunnels, with four active. Automated policy can be set up between Cisco vManage and Cisco Umbrella, so the two interact well.

Cisco Umbrella SASE is about providing secure connectivity to our users and employees. We need to know precisely what they are doing and not just blame the network all the time when there is an issue. Unfortunately, the network is easy to blame, even though it could be something else.

Scenario: Remote Worker: Creating a SASE VPN

Let’s say we have a secure remote worker. They need to access the business application that could be on-premises in the enterprise data center or served in the cloud. So, users will initiate their SASE VPN client to access a VPN gateway for on-premise applications and then land on the corporate LAN. Hopefully, the LAN is tied down with microsegmentation, and the SASE VPN users don’t get overly permissive broad access.

Suppose the applications are served over the Internet in a public cloud SaaS environment. In this case, the user must go through Cisco Umbrella, not to the enterprise data center but to the cloud. You know that Cisco Umbrella SASE will have a security stack such as DNS-layer filtering, CASB, and URL filtering. DNS-layer filtering is the first layer of defense.

SASE VPN: Identity Service

In both cases, working remotely or from the branch office, some identity services may fall under the zero trust network access (ZTNA) category. Identity services can be provided by Cisco Duo; CyberArk also offers complete identity services.

These identity providers offer identity services such as Single-Sign-On (SSO) and Multi-Factor Authentication (MFA) to ensure users are who they say they are—presenting them with multiple MFA challenges and a seamless experience with SSO via an identity portal.

Out of your control

In both use cases of creating a SASE VPN, we need visibility into several areas out of your control. For example, suppose the user works from home. In that case, we will need visibility into their WiFi network, the secure SASE VPN tunnel to the nearest Umbrella PoP, the transit ISP, and the SASE security functions.

We need visibility into numerous areas, and each segment is different, but one thing they share is that they are all out of our control. Therefore, we must be able to question, and gain complete visibility into, things we do not control.

We will have similar problems with edge use cases where workers work from branch sites. If these users go to the Internet, they will still use the Cisco Umbrella SASE security stack, but it will go through SD-WAN first.

Monitoring SD-WAN

However, with SD-WAN there is another part to monitor: SD-WAN adds another layer of needed visibility, into both the SD-WAN overlay and the underlay. The underlay will involve multiple ISPs with many components and, in places, decades-old equipment.

For the overlay network, different applications are mapped to different overlays, potentially changing on the fly based on performance metrics. The diagram below shows that some application types prefer different paths and network topologies based on metrics such as latency.


With SD-WAN, the network overlay is entirely virtualized, allowing an adaptive, customized network infrastructure that responds to an organization’s changing needs. However, when you move to a SASE environment, you become dependent on an increasing number of external networks and services that you do not own and that traditional tools cannot monitor. The result is blind spots that lead to gaps in security and many operational challenges on the way to SASE.

Challenges to the Cisco SASE Solution

When moving to a SASE environment, we face several challenges. Internet blindspots can be an Achilles’ heel to SASE deployments and performance. After all, network paths today consist of many more hops over longer and more complex segments (e.g., Internet, security, and cloud providers) that may be entirely out of the control of IT. 

Legacy network monitoring tools are no longer suitable for this Internet-centric environment because they primarily collect passive data from on-premises infrastructure. We also have a lot of complexity and moving parts. Modern applications have become increasingly complex, involving modular architectures distributed across multi-cloud platforms. Not to mention a complex web of interconnected API calls and third-party services.

As a result, understanding the application experience for an increasingly remote and distributed workforce is challenging—and siloed monitoring tools fail to provide a complete picture of the end-to-end experience.

Cisco ThousandEyes: Different Vantage Points.

Cisco ThousandEyes provides visibility end-to-end across your SASE environment. It allows you to be proactive and see problems before they happen, reducing your time to resolution. Remember that today, we have a complex environment with many new and unpredictable failure modes. Having the correct visibility lets you control the known and unknown failure modes.

Cisco ThousandEyes can also give you actionable data. For example, when service degradation occurs, you can quickly identify where the problem is. So, your visibility will need to be actionable. To gain actionable visibility, you need to monitor different things from different levels. One way to do this is with different types of agents.

Using a global collective of cloud, enterprise, and end-user vantage points, ThousandEyes enables organizations to see any network, including those belonging to Internet and cloud providers, as if it were their own—and to correlate this visibility with application performance and employee experience.

ThousandEyes’ different vantage points, which are based on where agents are deployed, let us see the Layer 3 hop-by-hop underlay from remote users and SD-WAN sites to the secure edge and from the secure edge to the application servers; SaaS application performance, including login availability and application workflows; and service dependencies, including the secure edge PoPs and DNS servers.
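If you want to pull this data programmatically, ThousandEyes exposes a REST API. The sketch below assumes the v6 endpoint with basic authentication (account e-mail plus API token); newer API versions use bearer tokens, so check the current ThousandEyes API documentation before relying on the exact URL, auth scheme, or response fields shown here.

```python
import requests  # third-party: pip install requests

# Assumptions: v6 REST endpoint and basic auth (account e-mail + API token).
API_BASE = "https://api.thousandeyes.com/v6"
EMAIL = "ops@example.com"          # placeholder
API_TOKEN = "your-api-token-here"  # placeholder

def list_tests():
    """Pull the configured tests so their results can be correlated elsewhere."""
    resp = requests.get(f"{API_BASE}/tests.json", auth=(EMAIL, API_TOKEN), timeout=10)
    resp.raise_for_status()
    return resp.json().get("test", [])

if __name__ == "__main__":
    for test in list_tests():
        print(test.get("testName"), "-", test.get("type"))
```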

Example of an Issue: HTTP Response Time

That’s quite a lot of areas to grasp. So, let’s say you are having performance issues with Office365, and the response time has increased. First, you would notice an increase in HTTP response time from a specific office. The next stage would be to examine the network layer and see an increase in latency. So, in this case, we have network problems.

Then you can go deeper using the packet visualization Cisco ThousandEyes offers to pinpoint precisely where the problem is happening. The visualization shows the exact path from the office to the Internet via Umbrella, covering every leg of the journey, and can pin the problem on a specific device. So now we have end-to-end visibility from this remote worker right through to the application.

Cisco ThousandEyes agents

Endpoint Agent

The secure remote worker could be on the move and working from anywhere. In this case, you need the ThousandEyes Endpoint agent. The Endpoint agent performs active application and network performance tests and passively collects performance data, such as WiFi and device-level metrics like CPU and memory.

It also detects and monitors any SASE VPN, other VPN gateways, and proxies. The most crucial point about the Endpoint agent is that it follows the user regardless of where they work, whether at the branch office or a remote location; the Endpoint agent is location agnostic. However, creating a baseline for users who move around this much is challenging.

The Endpoint agent, by default, does some passive monitoring. WiFi performance is continuously measured, including the percentage of retransmitted packets, which would indicate a problem occurring. If a user working from home reports that an application is not working, you can tell whether the WiFi is the culprit and ask them to carry out the necessary troubleshooting if the issue is at their end.

The Endpoint agent also automatically performs default gateway testing, which is synthetic network testing against the default gateway. A remote worker's home network can be more extensive than you expect, so you can map it out and help them troubleshoot.

The agent can also test the underlay network to the VPN termination points. On a VPN, everything looks like one hop, but to determine where packet loss and similar issues occur, you must see the exact underlay. Underlay testing can tell you whether the problem lies with the upstream ISP or with the VPN termination points.
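To give a feel for the device-level telemetry mentioned earlier, here is a small sketch using the third-party psutil library to sample CPU and memory the way an endpoint agent might; a real agent would also gather WiFi and gateway metrics and ship everything to a collector.

```python
import time
import psutil  # third-party: pip install psutil

def sample_device_metrics() -> dict:
    """Collect the sort of device-level metrics an endpoint agent reports."""
    mem = psutil.virtual_memory()
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": mem.percent,
        "memory_available_mb": round(mem.available / (1024 * 1024)),
    }

if __name__ == "__main__":
    # A real agent would ship these samples to a collector; here we just print them.
    for _ in range(3):
        print(sample_device_metrics())
```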

Enterprise Agent Testing

The Enterprise agent is a step up from the Endpoint agent: on top of the same capabilities, it adds complete application testing. Unlike the Endpoint agent, it can do page-load testing, and for Webex you can set up RTP tests against the agents running in the various Webex data centers.

Then we have the secure edge design, where users work from a branch office. This is where the Enterprise agent comes in: one agent serves all users and devices on the LAN. It can be installed on several device types, for example the Cisco Catalyst 8000 or ISR 4000 series; if you cannot install it on a Cisco device, you can run it as a Docker container, and in a smaller office you can even deploy it on a Raspberry Pi.

Network Performance Testing

It performs active application and network performance testing, similar to the Endpoint agent, but one main difference is that it can perform complex web application tests. The Enterprise agent has a fully-fledged browser on top, so it can open a web application, download the images needed to complete the page-load event, and log in to the application.

This is an essential test for the zero trust network access (ZTNA) category, as it supports complete web testing for applications beyond SSO. It can also test the VPN and the SD-WAN overlay and underlay. In addition, it provides a continuous, 24/7 baseline regardless of whether any users are active, so you know immediately when there are problems. The Endpoint agent, by contrast, cannot provide a baseline because its data is too intermittent.
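As a rough stand-in for browser-based testing, the sketch below drives a real browser with Selenium and reads the Navigation Timing numbers to get a full page-load time. It assumes Selenium and a local Chrome/ChromeDriver install, and the URL is a placeholder; it illustrates the technique, not how the Enterprise agent is implemented.

```python
from selenium import webdriver  # third-party: pip install selenium (Chrome/ChromeDriver required)

def page_load_ms(url: str) -> float:
    """Load a page in a real browser and read the Navigation Timing numbers."""
    driver = webdriver.Chrome()
    try:
        driver.get(url)  # blocks until the page's load event fires
        return driver.execute_script(
            "return performance.timing.loadEventEnd - performance.timing.navigationStart"
        )
    finally:
        driver.quit()

if __name__ == "__main__":
    print(f"Full page load: {page_load_ms('https://intranet.example.com/login')} ms")
```

Extending this with scripted login steps is how multistep transaction tests go beyond the application's front door.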

Cisco ThousandEyes also has a Cloud agent that augments the Enterprise agent. Cloud agents are installed in over 200 locations worldwide, including the Webex data centers. With them, you can do two-way, bidirectional testing, agent to agent.

  • SD-WAN Underlay Visibility

The enterprise agent can also test the SD-WAN underlay. In the underlay testing, you can configure some data policies and allow the network test to go into the underlay and even test the Umbrella IPsec Gateway or the SD-WAN router in the data center, which gives you hop-by-hop insights into the underlay.

  • Device Layer Visibility

We also have device-layer visibility. Here, we gain visibility into the performance of the internal devices at the secure edge by gathering network device topology. This shows you all the Layer 3 nodes in your network, as well as firewalls, load balancers, and other Layer 2 devices.

AppDynamics and ThousandEyes Integration

While most organizations cannot respond to third-party connectivity issues, ThousandEyes can give you proper observability into every application and service and track the network traffic hop by hop, inside and outside all of your environments. If you want or need to go one step further, you can integrate ThousandEyes with AppDynamics and see your business transactions in detail.

 

Summary: SASE and Visibility

In today’s digital landscape, the demand for secure and efficient network connectivity is higher than ever. With the rise of remote work and cloud adoption, organizations are turning to Secure Access Service Edge (SASE) solutions to streamline their network infrastructure. Cisco ThousandEyes emerges as a powerful tool in this realm, offering enhanced visibility and control. In this blog post, we explored the key features and benefits of Cisco ThousandEyes, shedding light on how it can revolutionize SASE visibility.

Section 1: Understanding SASE Visibility

To grasp the importance of Cisco ThousandEyes, it’s crucial to comprehend the concept of SASE visibility. SASE visibility refers to monitoring and analyzing network traffic, performance, and security across an organization’s network infrastructure. It provides valuable insights into user experience, application performance, and potential security threats.

Section 2: The Power of Cisco ThousandEyes

Cisco ThousandEyes empowers organizations with comprehensive SASE visibility that extends across the entire network. By leveraging its advanced monitoring capabilities, businesses gain real-time insights into network performance, application behavior, and end-user experience. With ThousandEyes, IT teams can identify and troubleshoot issues faster, ensuring optimal network performance and security.

Section 3: Key Features and Functionalities

In this section, we will delve into the key features and functionalities offered by Cisco ThousandEyes. These include:

1. Network Monitoring: ThousandEyes provides end-to-end visibility, allowing organizations to monitor their network infrastructure from a single platform. It tracks network performance metrics, such as latency, packet loss, and jitter, enabling proactive issue resolution.

2. Application Performance Monitoring: With ThousandEyes, businesses can gain deep insights into application performance across their network. IT teams can identify bottlenecks, optimize routing, and ensure consistent application delivery to enhance user experience.

3. Security Monitoring: Cisco ThousandEyes offers robust security monitoring capabilities, enabling organizations to detect and mitigate potential threats. It provides visibility into network traffic, identifies anomalies, and facilitates rapid incident response.

Section 4: Integration and Scalability

One of the significant advantages of Cisco ThousandEyes is its seamless integration with existing network infrastructure. It can integrate with various networking devices, cloud platforms, and security tools, ensuring a cohesive and scalable solution. This flexibility allows businesses to leverage their current investments while enhancing SASE visibility.

Conclusion:

In conclusion, Cisco ThousandEyes proves to be a game-changer in SASE visibility. Its comprehensive monitoring capabilities empower organizations to optimize network performance, ensure application reliability, and enhance security posture. By embracing Cisco ThousandEyes, businesses can journey toward a more efficient and secure network infrastructure.

WAN Security

SD WAN Security

In today's fast-paced digital landscape, businesses increasingly rely on Software-Defined Wide Area Networking (SD-WAN) to enhance network performance and flexibility. However, with the rise in cyber threats and data breaches, ensuring robust security measures within SD-WAN deployments has become paramount. This blog post will delve into the significance of SD-WAN security and its crucial role in protecting sensitive data and preserving network integrity.

SD-WAN, or Software-Defined Wide Area Networking, is a revolutionary technology that allows organizations to connect and manage their networks more efficiently and cost-effectively. By leveraging software-defined networking principles, SD-WAN provides centralized control and enables the dynamic routing of network traffic over multiple connections, including MPLS, broadband, LTE, and more.

This agility and flexibility of SD-WAN have brought numerous benefits to businesses, but it also introduces potential security risks.


Highlights: SD-WAN Security

Decrease the Attack Surface

SD-WAN allows end users to connect directly to cloud applications and resources without backhauling through a remote data center or hub. This lets organizations offload guest traffic to the Internet instead of using up WAN and data center resources. However, the DIA model, where Internet access is distributed across many branches, increases the network’s attack surface and makes security compliance a critical task for almost every organization.

A Layered Approach to Security 

The broad threat landscape includes cyber warfare, ransomware, and targeted attacks. Firewalling, intrusion prevention, URL filtering, and malware protection must be leveraged to prevent, detect, and protect the network from all threats. The branches can consume Cisco SD-WAN security through integrated security applications within powerful WAN Edge routers, cloud services, or regional hubs where VNF-based security chains may be leveraged or robust security stacks may already exist.

The Role of Cisco Umbrella

This post addresses Cisco SD-WAN security features for the control plane elements, data plane forwarding, and the integrated security features that can be used for Direct Internet Access (DIA). SD-WAN can also be combined with Cisco Umbrella via a series of redundant IPsec tunnels for additional security measures, increasing the robustness of your WAN security.

In addition, the WAN architecture can provide simplicity regarding application deployment and management. First, however, the mindset must shift from a network topology focus to an application services topology. This is what SD-WAN’s initial focus was on.

Related: For additional pre-information, you may find the following posts helpful:

  1. SD WAN Tutorial
  2. WAN Monitoring
  3. Virtual Firewalls

Cisco SD-WAN Security

Key SD-WAN Security Discussion Points:


  • Introduction to SD WAN Security and what is involved.

  • Highlighting the details of Direct Internet Access (DIA) and the need for branch security. 

  • Technical details on securing the control and data plane. 

  • Scenario: Branch DIA security functions. WAN security.

  • A final note on integrating SD-WAN with Cisco Umbrella.

 

Back to Basics: SD-WAN Security

Unveiling the Security Risks in SD-WAN Deployments

While SD-WAN offers enhanced network performance and agility, it also expands the attack surface for potential security breaches. The decentralized nature of SD-WAN introduces complexities in securing data transmission and protecting network endpoints. Threat actors constantly evolve tactics, targeting vulnerabilities within SD-WAN architectures to gain unauthorized access, intercept sensitive information, or disrupt network operations. Organizations must be aware of these risks and implement robust security measures.

Implementing Strong Authentication and Access Controls

Robust authentication mechanisms and access controls are essential to mitigate security risks in SD-WAN deployments. Multi-factor authentication (MFA) should be implemented to ensure that only authorized users can access the SD-WAN infrastructure.

Additionally, granular access controls should be enforced to restrict privileges and limit potential attack vectors. By implementing these measures, organizations can significantly enhance the overall security posture of their SD-WAN deployments.

Ensuring Encryption and Data Privacy

Protecting data privacy is a critical aspect of SD-WAN security. Encryption protocols should be employed to secure data in transit between SD-WAN nodes and across public networks. By leveraging robust encryption algorithms and key management practices, organizations can ensure the confidentiality and integrity of their data, even in the face of potential interception attempts. Data privacy regulations, such as GDPR, further emphasize the importance of encryption in safeguarding sensitive information.

Monitoring and Threat Detection

Continuous monitoring and threat detection mechanisms are pivotal in maintaining SD-WAN security. Intrusion Detection Systems (IDS) and Security Information and Event Management (SIEM) tools can provide real-time insights into network activities, identifying potential anomalies or suspicious behavior. Through proactive monitoring and threat detection, organizations can swiftly respond to security incidents and mitigate potential risks before they escalate.

SD-WAN Security 

SD-WAN Main Security Components


  • Robust authentication mechanisms and access controls are essential to mitigate security risks.

  • Encryption protocols should be employed to secure data in transit between SD-WAN nodes.

  • Continuous monitoring and threat detection mechanisms are pivotal in maintaining SD-WAN security.

  • Implement secure web gateways and integration with next-generation firewalls to inspect and filter traffic across the network.

WAN Technologies

All networks experience latency, the time between a data packet being sent and received, or the round-trip time. All networks also experience jitter, the variance in delay between data packets: basically, a “disruption” in the steady sending and receiving of packets.
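As a quick worked example of the two metrics, the sketch below computes average latency and jitter (taken here as the mean variation between consecutive probes) from a list of hypothetical round-trip times.

```python
def latency_and_jitter(rtt_samples_ms: list[float]) -> tuple[float, float]:
    """Average latency and jitter (mean delay variation between consecutive packets)."""
    avg_latency = sum(rtt_samples_ms) / len(rtt_samples_ms)
    deltas = [abs(b - a) for a, b in zip(rtt_samples_ms, rtt_samples_ms[1:])]
    jitter = sum(deltas) / len(deltas) if deltas else 0.0
    return avg_latency, jitter

if __name__ == "__main__":
    # Hypothetical round-trip times (ms) for ten probes.
    samples = [22.1, 23.4, 21.9, 30.2, 22.5, 22.8, 41.0, 23.1, 22.6, 22.9]
    latency, jitter = latency_and_jitter(samples)
    print(f"avg latency {latency:.1f} ms, jitter {jitter:.1f} ms")
```

Voice and video are sensitive to the second number far more than the first, which is why the QoS mechanisms discussed next prioritize them.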

However, there are ways we can help manage the experience for all. For example, we can implement quality of service (QoS), which we can utilize to prioritize traffic, such as voice and video, where fluctuations in the network due to these factors are noticeable. In addition, there are mechanisms to route traffic dynamically, such as Multiprotocol Label Switching Traffic Engineering (MPLS-TE).

MPLS Traffic Engineering

Today, we are plunging into cloud adoption, where almost everything can be offered “As a Service.” So, how do we match the needs of today’s cloud computing, the benefits of QoS, MPLS-TE, and the dynamism we need for modern networks? Hence SD-WAN.

MPLS TE is a technology that lets network operators control traffic flow through their networks by establishing specific paths for data packets. These traffic engineering capabilities ensure efficient utilization of network resources, improve network performance, and enhance Quality of Service (QoS).

♦ Benefits of MPLS Traffic Engineering

Efficient utilization of network resources: MPLS TE enables network operators to allocate bandwidth intelligently, ensuring optimal utilization of available resources. This prevents congestion and improves overall network performance.

Improved network reliability: By creating explicit paths for traffic, MPLS TE allows network operators to reroute traffic in the event of link failures or network congestion. This enhances network resiliency and minimizes service disruptions.

Enhanced QoS: MPLS TE enables operators to prioritize certain types of traffic, ensuring that critical applications receive the necessary bandwidth and low latency they require. This results in a better end-user experience and improved QoS.

MPLS TE
Diagram: MPLS TE

Video: SD-WAN Tutorial

In the following video, we will address the basics of SD-WAN and the challenges of the existing WAN. We will also go through popular features of SD-WAN and integration points with, for example, SASE.

SD WAN Tutorial

SD WAN’s initial focus

The initial SD-WAN deployment model was about carrying corporate communications over an organizational fabric built on the SD-WAN overlay. There was an immediate ROI, as you could bring cheap broadband links into the branch and connect to the organization’s network over that overlay.

For some time now, we have been gaining the baseline benefits of SD-WAN, such as site connectivity; we are now in a position to design and implement the application optimizations SD-WAN offers, such as integrated security, and so extract additional value from it.

From a security perspective, end-to-end segmentation and policy are critical. The control, data, and management planes must be separated throughout the environment and secured appropriately. In addition, the environment should support native encryption that is robust and scalable and offers lightweight key management.

SD WAN traffic steering
Diagram: SD WAN traffic steering. Source Cisco.

SD-WAN Security Features: DIA

With SD-WAN, we can now instead go directly from the branch through DIA to the applications hosted in the Cloud by leveraging DNS and geo-location services for the best possible performance. This, however, presents different types of attack surfaces that we need to deal with.

Moving the Internet edge to the branch has different security implications. In the DIA model, Internet access is distributed across many branches; for example, unsecured guest users are allowed direct Internet access. They may be guests, but we are still responsible for content filtering and ensuring compliance. So both internal and external attack vectors need to be considered with this new approach to the WAN.

SD WAN Security
Diagram: SD WAN Security and DIA.

You could group these threats into three main categories. Outside-in threats could consist of denial of service or unauthorized access. Inside-out threats could be malware infections or phishing attacks.

Then we have internal threats, where lateral movement is a real problem. With every attack vector, the bad actor must find high-value targets, which will likely not be the first host they land on.

To protect against these threats, we need a new security model with comprehensive, integrated security at the branch site. The branch must leverage the appropriate security mechanisms, such as application-aware firewalling, intrusion prevention, URL filtering, and malware protection, to prevent, detect, and protect the network and its various identities from all threats.

SD WAN Security Features
Diagram: SD WAN Security Features. The source is Cisco.

SD-WAN Deployment Models

SD-WAN can be designed in several ways. For example, you can have integrated security at the mentioned branch. We can also consume security through cloud services or regional hubs where VNF-based security chains may be leveraged. So, to enable or deploy SD-WAN security, you can choose from different types of security models.  

The first model would be cloud security, often considered a thin branch with security in the Cloud. This design or deployment model might not suit, for example, healthcare. Then, we have integrated protection with a single platform for routing and security at the branch. This deployment model is widespread, and we will examine a use case soon. 

A final deployment model would be the regional hub design. We have a co-location or carrier-neutral facility (CNF) where the security functions are virtual network functions (VNFs) at the regional collection hub. I have seen similar architecture with a SASE deployment and segment routing between the regional hubs.

SD WAN Deployment
Diagram: SD WAN Deployment. The source is Cisco

Recap: WAN Challenges

First, before we delve into these main areas, let me quickly recap the WAN challenges. We had many sites connected over MPLS without a single pane of glass. With that many locations you could not see or troubleshoot effectively, and it could easily be the case that one application was taking up all the bandwidth.

Visibility was a big problem, and any gaps in visibility affect your security. In addition, there was little application awareness, which resulted in complex operations, and a DIY approach to application optimization and WAN virtualization resulted in fragmented security.


Highlighting SD-WAN

SD-WAN addresses these challenges by giving you an approach to centrally provision, manage, monitor, and troubleshoot the WAN edges. SD-WAN is not a single VM; it is an array of technologies grouped under the SD-WAN umbrella. As a result, it increases application performance over the WAN while offering security and data integrity.

So, we have users, devices, and things, and we no longer have one type of host to deal with. We have many identities and identity types. One person may have several devices that need an IP connection and communicate to applications hosted in the primary data center, IaaS, or SaaS. 

IP connectivity must be delivered securely and at scale while gathering good telemetry. The network edges send a wealth of helpful telemetry, which helps monitor traffic patterns and make predictions, for example that specific paths need to be upgraded. Of course, all of this needs to operate over a secure infrastructure.

Introduction to SD-WAN Security Features

SD-WAN security is extensive and encompasses a variety of factors. However, it falls into two main categories. First, we have the security infrastructure category, which secures the control and data plane.

Then, we have the DIA side of things, where we need to deploy several security functions, such as intrusion prevention, URL filtering, and an application-aware firewall, to name a few. SD-WAN can be integrated with SASE for DNS-layer filtering. The Cisco version of SASE is Cisco Umbrella.

Cisco SD WAN Security
Diagram: Cisco SD WAN Security

Now, we need to have layers of security known as the defense-in-depth approach, and DNS-layer filtering is one of the most critical layers, if not the first layer of defense. Everything that wants IP connectivity has to perform a DNS request, so it’s an excellent place to start.

SD-WAN Security Features: Secure the SD-WAN Infrastructure

The SD-WAN infrastructure is what builds the SD-WAN fabric. Consider the fabric a mesh of connectivity that can take on different topologies. We have several SD-WAN components that can reside in the Cloud or on-premises: the Cisco vBond, vAnalytics, vManage, and vSmart controllers. Of course, whether these components sit in the Cloud or on-premises depends on whether you are cloud-ready.

SD-WAN vBond

The Cisco vBond is the orchestration plane and orchestrates the control and management planes. The Cisco vBond is the entry into the network and is the first point of authentication. If the WAN Edge device trying to come online passes authentication, the vBond tells it which controllers it needs to communicate with, in the Cloud or on-premises depending on the design, to build the control and data planes and join the fabric securely. 

Essentially, the vBond distributes connectivity information of the vManage/vSmarts to all WAN edge routers.

The Cisco vBond also acts as a STUN server, allowing you to get around different types of Network Address Translation (NAT). We need a device that is NAT-aware and can tell each WAN edge device its real public IP and port, so the control connections are built with the correct addresses. 
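To illustrate the STUN idea in isolation, here is a minimal RFC 5389 Binding Request client using only the Python standard library. It is a generic sketch of how a device behind NAT learns its public IP and port, not the vBond implementation; the server name is a placeholder.

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value defined in RFC 5389

def stun_public_endpoint(server, port=3478, timeout=2.0):
    """Send a STUN Binding Request and return the (public_ip, public_port) the server sees."""
    txn_id = os.urandom(12)
    # Header: type=Binding Request (0x0001), length=0, magic cookie, transaction ID
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, port))
        data, _ = sock.recvfrom(2048)

    # Walk attributes after the 20-byte header, looking for XOR-MAPPED-ADDRESS (0x0020)
    pos = 20
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:
            xport = struct.unpack_from("!H", data, pos + 6)[0] ^ (MAGIC_COOKIE >> 16)
            xaddr = struct.unpack_from("!I", data, pos + 8)[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", xaddr)), xport
        pos += 4 + attr_len + (-attr_len % 4)  # attribute values are padded to 4 bytes
    return None

# Usage (server name is a placeholder): print(stun_public_endpoint("stun.example.net"))
```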

WAN Security
Diagram: WAN Security with Cisco SD-WAN

The Cisco vSmart

The Cisco vSmart is the brain of the solution and facilitates fabric discovery. The Cisco vSmart performs the policy, routes, and key exchange. In addition, all the WAN edge devices, physical or virtual, will build connectivity to multiple vSmart controllers in different regions for redundancy.

So, the vSmart acts as a dissemination point that distributes data plane and application-aware routing policies to the WAN edge routers. It’s like an enhanced BGP route reflector (RR) but reflects much more than routes, such as policy, control, and security information. This drastically reduces complexity and offers a highly resilient architecture.

These devices secure their control plane connections with TLS or DTLS tunnels. You can choose which when setting up your SD-WAN, and all of this is configured via vManage.

Data Plane

Then we have the data plane, which can be physical or virtual—known as your WAN edge—and is responsible for moving packets. It no longer has to deal with the complexity of a control plane on the WAN side, such as BGP configurations and maintaining peering relationships. Of course, you still need a control plane on the LAN side, such as route learning via OSPF. But on the WAN side, all the complex peerings have been pushed into the vSmart controllers. 

The WAN edge device establishes DTLS or TLS tunnels to the SD-WAN control plane, which consists of the vSmart controllers. Inside these DTLS or TLS tunnels, the WAN edge builds a secure control plane with the vSmarts using Cisco’s purpose-built Overlay Management Protocol (OMP).

OMP is the enhanced routing protocol for SD-WAN. You can add a lot of extensions to OMP to enhance the SD-WAN fabric. It is a lot more intelligent than a standard routing protocol.

Cisco vManage

vManage is the UI you can use for Day 0, Day 1, and Day 2 operations. All policies—routing, QoS, and security—are configured in vManage. vManage then pushes the configuration either directly to the WAN edge or to the vSmart, depending on the type of change.

If you reconfigure a box, for example changing an IP address, this can be pushed directly to the device with NETCONF. However, a policy change for a remote site is not pushed down directly by vManage. In the case of such advanced configurations, the vSmart carries out the path calculation and pushes the resulting state down to the WAN Edge.
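As an illustration of a direct device-level push, the sketch below uses the open-source ncclient library to send a NETCONF edit-config to a router. The host, credentials, and the standard ietf-interfaces model are assumptions for the example; this is not the exact mechanism vManage uses internally.

```python
from ncclient import manager  # pip install ncclient

# Illustrative payload using the standard ietf-interfaces YANG model;
# the target device must support this model for the edit to succeed.
CONFIG = """
<config>
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet1</name>
      <description>Branch uplink - pushed via NETCONF</description>
    </interface>
  </interfaces>
</config>
"""

# Host and credentials are placeholders for the example.
with manager.connect(host="10.0.0.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    reply = m.edit_config(target="running", config=CONFIG)
    print("edit-config ok:", reply.ok)
```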

SD-WAN Security Features: Device Identity

So now we have started to secure the fabric, and everything is encrypted on the control plane side. But before we get into data plane security, we must look at physical security, which means addressing device and software authentication. How can you verify that a device is an authentic Cisco device and that genuine Cisco software is running on it? Unfortunately, many counterfeit devices are produced; with these protections in place, such devices will not even boot.

In the past, many vulnerabilities were found in classic IOS routers, for example runtime and static image infections. For these to succeed, someone had to access and modify the device. Some of this malware contacted command-and-control (C&C) servers when the router came online. Malware in IOS is a real threat; there have even been breaches that affected line cards. 

Now, however, Cisco authenticates both its hardware and its software. This is done with the Cisco Trust Anchor module. The OS itself is also secured, using Cisco Secure Boot. 


SD WAN Security Checklist

  • Change in perimeter now leads to a new attack surface.

  • SD-WAN has a number of deployment models that need protection.

  • WAN security encompasses branch sites and the SD-WAN infrastructure components.

  • Identity comes in many shapes and forms, requiring both user- and device-level security. 

SD-WAN Security Features: Secure Control Plane

We have taken the burden off the WAN edge router. The traditional WAN had an integrated control and data plane, which meant high complexity, limited scale, and limited path selection. Even with DMVPN, you still run the routing protocols, such as EIGRP or OSPF, as well as IKE, and IKE is hard to scale in large environments. 

DMVPN operates in phases, and below we have DMVPN Phase 3, which allows on-demand spoke-to-spoke tunnels. The hub router—in our case, R11—sends an NHRP Traffic Indication message to the spokes, telling them to override the routing table and go directly to the other spoke. Therefore, spoke-to-spoke traffic does not need to flow through the hub.

DMVPN Phase 3
Diagram: DMVPN Phase 3 configuration

With SD-WAN, we have a network-wide control plane different from that of DMVPN. Moreover, as the WAN edge has secure and authenticated connectivity to the vSmart controllers, we can use the vSmart controllers to remove the complexity, especially for central key rotation. So now, with SD-WAN, you can have an IKE-less architecture. 

So you only need a single peering to the vSmart, which allows you to scale horizontally. On top of this, we have OMP. It was designed from the ground up to be extensible and to carry values that mean something to SD-WAN. It is not just a replacement routing protocol; it does much more than carry IP prefixes. It can carry keys, policy information, service insertion, and multicast information.

The TLOC 

It is also distributed, allowing edge devices to provide their identity to the fabric. The TLOC (transport locator) is what enables you to build the fabric and create any network design you wish. A TLOC is a unique WAN fabric identity present on every box, composed of the system IP, a color (a label for the transport), and the encapsulation (IPsec or GRE). This lets you differentiate every box and gives you much more control. All the TLOC and related information is carried in the OMP peerings.


Once the TLOC is advertised to the vSmart controllers, the vSmart advertises it to the WAN edges. We have a full mesh in this case, but you can limit who can learn the TLOC or block it to build a hub-and-spoke topology.

You can change the next hop of a TLOC to change where a route points. In the past, changing BGP at a wide scale was challenging because it was box by box, but with SD-WAN, we can quickly reshape the topology.
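Conceptually, you can think of a TLOC as a small record and the controller's job as deciding which of those records each edge is allowed to learn. The sketch below is illustrative only; the addresses, colors, and policy logic are assumptions, not Cisco's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tloc:
    system_ip: str   # unique identity of the WAN edge
    color: str       # transport label, e.g. "mpls" or "biz-internet"
    encap: str       # "ipsec" or "gre"

def tlocs_for(receiver_role, all_tlocs, hub_system_ip):
    """Illustrative controller policy: the hub learns every TLOC, while spokes
    only learn the hub's TLOCs, which yields a hub-and-spoke topology."""
    if receiver_role == "hub":
        return list(all_tlocs)
    return [t for t in all_tlocs if t.system_ip == hub_system_ip]

fabric = [
    Tloc("10.1.1.1", "mpls", "ipsec"),          # hub, MPLS transport
    Tloc("10.1.1.1", "biz-internet", "ipsec"),  # hub, internet transport
    Tloc("10.2.2.2", "biz-internet", "ipsec"),  # spoke
]
print(tlocs_for("spoke", fabric, hub_system_ip="10.1.1.1"))
```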

SD-WAN Security Features: Secure Data Plane

So we have secure connectivity from the WAN edge to the vSmart, with OMP running inside secure DTLS/TLS tunnels, and all of this is dynamic. Over the OMP session between the vSmart and the WAN edge, the edge receives the required information, such as TLOCs and security keys. The WAN edge devices can then build IPsec tunnels to each other—not just standard IPsec but UDP-based IPsec. UDP-based IPsec tunnels between two boxes work over multiple types of transport, making the fabric transport-agnostic.

We still have route learning on the LAN side, and this route is placed into a VPN, just like a VRF. So this is new reachability information learned from the LAN and sent as an OMP update to the vSmart. The vSmart acts as a route reflector and reflects this information. The vSmart makes all the path decisions for the network.

If you want to manipulate the path information, you can do this in the vSmart controller. So you can drive preference for other transports or change the next hop from the controller without any box-by-box configuration.

SD-WAN Security Features: Direct Internet Access

Next, let us examine direct internet access. So, for direct access, we have several use cases that we need to meet. The primary use case is PCI compliance, so before the packet leaves the branch, it needs to be inspected with a stateful firewall and an IPS solution. The SD-WAN enterprise firewall is application-aware, and we have IPS integrated with SD-WAN that can solve this use case.

Then we have the guest access use case, where guests are working in a branch office and need content filtering too; SD-WAN's URL filtering can be used here. There is also the direct cloud access use case: we want optimal performance for employee traffic, so we select certain applications to send directly from the branch to the Cloud and send other applications to the HQ. Again, DNS-layer security is helpful here.

Cisco Umbrella
Diagram: Cisco Umbrella connectivity.

So the main features—enterprise firewall, URL filtering, and IPS—are on the box, while DNS-layer filtering is a cloud feature delivered by Cisco Umbrella. This provides complete edge security without a two-box solution; the only addition is Cisco Umbrella, a cloud-native service dispersed around the globe with security functions delivered from PoPs.

Example of a Cisco device or VNF

One way to consume Cisco SD-WAN security is by leveraging Cisco’s integrated security applications within a rich portfolio of powerful WAN Edge routers, such as the ISR4000 series. On top of the native application-aware stateful firewall, these WAN Edge routers can dedicate compute resources to application service containers running within IOS-XE to enable in-line IDS/IPS, URL filtering, and Advanced Malware Protection (AMP).

Remember, Cisco SD-WAN security can also be consumed through cloud services or regional hubs where VNF-based security chains may be leveraged, or robust security stacks may already exist.

SD-WAN Security Features

WAN Security: Enterprise Firewall

Traditional branch firewall design involves deploying the appliance in either in-line Layer 3 mode or transparent Layer 2 mode behind or even ahead of the WAN Edge router. Now, for stateful inspection, we have to have another device. This adds complexity to the enterprise branch and creates unnecessary administrative overhead in managing the added firewalls. 

A proper firewall protects stateful TCP sessions, enables logging, and ensures that a zero-trust domain is implemented between segments in the network. Cisco SD-WAN takes an integrated approach and implements a robust Application-Aware Enterprise Firewall directly into the SD-WAN code.

With this integrated approach, there is no need for a separate inspection device at the branch.

Cisco has integrated the stateful firewall with the NBAR2 engine, giving good application visibility and granularity. In addition, the enterprise firewall can detect applications from the very first packet. The Cisco SD-WAN firewall provides stateful inspection, zone-based policies, and segment awareness. It can also classify over 1,400 Layer 7 applications and apply granular policy control to them by category or on an individual basis.

Video: Stateful Packet Inspection

We know we have a set of well-defined protocols that are used to communicate over our networks. Let’s call these communication rules. You are probably familiar with the low-layer transport protocols, such as TCP and UDP, and higher application layer protocols, such as HTTP and FTP.

Generally, we interact directly with the application layer and have networking and security devices working at the lower layers. So when Host A wants to talk to Host B, it will go through several communication layers with devices working at each layer. A stateful firewall is a device that works at one of these layers.
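The core of stateful inspection can be sketched in a few lines: outbound sessions create state, and inbound packets are permitted only if they match the reverse of an existing entry. This is a toy illustration of the concept, not a real firewall.

```python
# Minimal connection-tracking sketch: just the core idea of stateful inspection.
established = set()  # entries keyed by (src_ip, src_port, dst_ip, dst_port, proto)

def outbound(src, sport, dst, dport, proto="tcp"):
    """Host on the trusted side opens a session; remember it."""
    established.add((src, sport, dst, dport, proto))

def allow_inbound(src, sport, dst, dport, proto="tcp"):
    """Permit an inbound packet only if it is the reverse of a known session."""
    return (dst, dport, src, sport, proto) in established

outbound("192.168.1.10", 51514, "203.0.113.7", 443)
print(allow_inbound("203.0.113.7", 443, "192.168.1.10", 51514))   # True: return traffic
print(allow_inbound("198.51.100.9", 443, "192.168.1.10", 51514))  # False: unsolicited
```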


 

WAN Security: Intrusion Prevention

An IDS/IPS can inspect traffic in real-time to detect and prevent attacks by comparing the application behavior against a known database of threat signatures. This is based on the Snort engine and runs as a container. So, Snort is the most widely deployed intrusion prevention system globally. The solution is combined with Cisco Talos, which puts out the signatures. The Cisco Talos Intelligence Group is one of the world’s largest commercial threat intelligence teams comprising researchers, analysts, and engineers.

Cisco vManage connects to the Talos signature database, downloads the signatures on a configurable periodic or on-demand basis, and pushes them down into the branch WAN Edge routers without user intervention. Signatures are rules that an IDS and an IPS use to detect typical intrusive activity. Also, you can use the allowlist approach if you see many false positives. It is better to start this in detect mode so the engine can learn before you start blocking.

Intrusion detection and prevention (IDS/IPS) can inspect traffic in real-time to detect and prevent cyberattacks and notify the network operator through Syslog events and dashboard alerts. IDS/IPS is enabled through IOS-XE application service container technology. KVM and LXC containers are used; they differ mainly in how tightly they are coupled to the Linux kernel used in most network operating systems, such as IOS XE.

The Cisco SD-WAN IDS/IPS runs Snort, the most widely deployed intrusion prevention engine globally, and leverages dynamic signature updates published by Cisco Talos. The signatures are updated via vManage or manually using CLI commands available on the WAN Edge device.

WAN Security: URL filtering

URL filtering is another Cisco SD-WAN security function that leverages the Snort engine to inspect HTTP and HTTPS payloads to provide web security at the branch. In addition, the URL filtering engine enforces acceptable use controls to block or allow websites. The engine downloads the URL database and blocks based on over 80 categories. It can also make decisions based on a web reputation score. This information comes from Webroot/BrightCloud. 

URL Filtering leverages the Snort engine to provide comprehensive web security at the branch. It can be configured to permit or deny websites based on 82 different categories, the site’s web reputation score, and a dynamically updated URL database. When an end user requests a particular website through their web browser, the URL Filtering engine inspects the web traffic, queries any custom URL lists, compares the URL to the blocked or allowed categories policy, and finally consults the URL Filtering database.
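The decision order described above can be sketched as a simple function: custom lists first, then category policy, then the reputation score. The category names and the reputation threshold here are illustrative assumptions, not the actual engine's values.

```python
def filter_url(host, category, reputation,
               allow_list, block_list, blocked_categories, min_reputation=40):
    """Return 'allow' or 'block' for a web request.
    Order mirrors the description above: custom lists, category policy,
    then the site's reputation score. Names and thresholds are examples only."""
    if host in allow_list:
        return "allow"
    if host in block_list or category in blocked_categories:
        return "block"
    if reputation < min_reputation:
        return "block"
    return "allow"

print(filter_url("games.example.test", "gambling", 72,
                 allow_list=set(), block_list=set(),
                 blocked_categories={"gambling", "malware"}))  # block
```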

WAN Security: Advanced Malware Protection and Threat Grid

Advanced Malware Protection (AMP) and Threat Grid are the newest additions to the SD-WAN security features. As with URL filtering, both AMP and Threat Grid leverage the Snort engine and Talos for the real-time inspection of file downloads and malware detection. AMP can block malware entering your network using antivirus detection engines, one-to-one signature matching, machine learning, and fuzzy fingerprinting.

WAN Security: DNS Web Layer Security

Finally, we have DNS layer security. Some countries have a rule that you cannot inspect HTTP or HTTPS packets to filter content. So, how can you filter content if you can’t inspect HTTP or HTTPS packets?

We can do this with DNS packets. Before the page is loaded in the browser, the client sends a DNS request to the DNS server for the website, asking for a name-to-IP mapping. Once registered with the Umbrella cloud, the WAN Edge router intercepts DNS requests from the LAN and redirects them to Umbrella resolvers. If the requested page is a known malicious site or is not allowed (based on the policies configured in the Umbrella portal), the DNS response will contain the IP address for an Umbrella-hosted block page. 

Cisco Umbrella DNS

DNS web layer security also supports DNSCrypt, EDNS, and TLS decryption. In the same way that SSL turns HTTP web traffic into HTTPS encrypted web traffic, DNSCrypt turns regular DNS traffic into encrypted DNS traffic that is secure from eavesdropping and man-in-the-middle attacks. It does not require changes to domain names or how they work; it simply provides a method for securely encrypting communication between the end user and the DNS servers in the Umbrella cloud located around the globe.

In some scenarios, it may be essential to avoid intercepting DNS requests for internal resources and passing them on to an internal or alternate DNS resolver. To meet this requirement, the WAN Edge router can leverage local domain bypass functionality, where a list of internal domains is defined and referenced during the DNS request interception process. 
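A minimal sketch of this interception logic, using the dnspython library: names matching the internal domain list are sent to an internal resolver, and everything else goes to the public Umbrella/OpenDNS resolvers. The domain list and internal resolver address are placeholders, not a recommended configuration.

```python
import dns.resolver  # pip install dnspython

INTERNAL_DOMAINS = ("corp.example.com",)            # local domain bypass list (illustrative)
INTERNAL_RESOLVER = ["10.10.10.53"]                 # internal DNS server (illustrative)
UMBRELLA_RESOLVERS = ["208.67.222.222", "208.67.220.220"]  # public Umbrella/OpenDNS resolvers

def resolve(name):
    """Send internal names to the internal resolver; everything else to Umbrella."""
    resolver = dns.resolver.Resolver(configure=False)
    if name.endswith(INTERNAL_DOMAINS):
        resolver.nameservers = INTERNAL_RESOLVER
    else:
        resolver.nameservers = UMBRELLA_RESOLVERS
    answer = resolver.resolve(name, "A")
    return [rr.address for rr in answer]

# Usage: print(resolve("www.example.com"))
```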

Summary: SD-WAN Security

In today’s digital landscape, organizations increasingly adopt Software-Defined Wide Area Network (SD-WAN) solutions to enhance their network connectivity and performance. However, with the growing reliance on SD-WAN, ensuring robust security measures becomes paramount. This blog post explored key considerations and best practices to ensure secure SD-WAN deployments.

Section 1: Understanding the Basics of SD-WAN

SD-WAN brings flexibility and efficiency to network management by leveraging software-defined networking principles. It allows organizations to establish secure and scalable connections across multiple locations, optimizing traffic flow and reducing costs.

Section 2: Recognizing the Security Challenges

While SD-WAN offers numerous benefits, it also introduces new security challenges. One key concern is the increased attack surface due to integrating public and private networks. Organizations must be aware of potential vulnerabilities and implement adequate security measures.

Section 3: Implementing Layered Security Measures

To fortify SD-WAN deployments, a layered security approach is crucial. This includes implementing next-generation firewalls, intrusion detection and prevention systems, secure web gateways, and robust encryption protocols. It is also important to regularly update and patch security devices to mitigate emerging threats.

Section 4: Strengthening Access Controls

Access control is a vital aspect of SD-WAN security. Organizations should enforce robust authentication mechanisms, such as multi-factor authentication, and implement granular access policies based on user roles and privileges. Additionally, implementing secure SD-WAN edge devices with built-in security features can enhance access control.

Section 5: Monitoring and Incident Response

Continuous monitoring of SD-WAN traffic is essential for detecting and responding promptly to security incidents. Deploying security information and event management (SIEM) solutions can provide real-time visibility into network activities, enabling rapid threat identification and response.

Conclusion:

In conclusion, securing SD-WAN deployments is a critical aspect of maintaining a resilient and protected network infrastructure. By understanding the basics of SD-WAN, recognizing security challenges, implementing layered security measures, strengthening access controls, and adopting proactive monitoring and incident response strategies, organizations can ensure a robust and secure SD-WAN environment.

Cisco Umbrella

SD-WAN SASE

SD WAN SASE

Traditional networking approaches are falling short in today's digital era, where businesses increasingly rely on cloud-based applications and remote workforces. This is where the combination of SD-WAN (Software-Defined Wide Area Network) and SASE (Secure Access Service Edge) steps in, revolutionizing network connectivity. This blog post will delve into SD-WAN and SASE, exploring their benefits, key features, and how they transform organizations' approaches to network architectures.

At its core, SD-WAN is a technology that simplifies managing and operating a wide area network. By leveraging software-defined networking principles, SD-WAN offers organizations enhanced performance, reliability, and flexibility. With the ability to prioritize critical applications and intelligently route traffic, SD-WAN empowers businesses to optimize their network resources and seamlessly adapt to changing demands.

On the other hand, SASE is a comprehensive framework combining network connectivity and security services into a single, cloud-native solution. By converging SD-WAN capabilities with integrated security functions, such as secure web gateways, firewall-as-a-service, and zero-trust network access, SASE offers organizations a unified approach to network and security.


Highlights: SD WAN SASE

 

Starting the SASE Journey

As more and more enterprises move workloads and applications to the cloud, we also need to consider secure and fast connections to the Internet, with minimal latency and packet loss, so that application performance is not affected.

The following post discusses SD-WAN SASE and how to start your security SASE journey. In particular, we will examine the SASE Cisco approach to deploying a SASE network. To gain all the benefits of security SASE, you need a strategy, and the best way is to start with SD-WAN. And you can label this journey as SD-WAN SASE.

 

Related: Before you proceed, you may find the following post helpful for pre-information:

  1. SASE Definition
  2. DNS Security Solutions
  3. Cisco Umbrella CASB
  4. SASE Model
  5. Secure Firewall
  6. SASE Visibility
  7. Zero Trust SASE

 



Security SASE

Key SD-WAN SASE Discussion Points:


  • Introduction to SD-WAN SASE and what is involved.

  • Highlighting the details of how to start a SASE network.

  • Critical points on integrating SD-WAN and SASE. Deploying SASE Cisco.

  • Technical details on the different ways you can connect SD-WAN to Cisco Umbrella.

  • Technical details on optimizing the connectivity from SD-WAN to Cisco Umbrella.

 

Back to Basics: SD-WAN SASE

♦ SASE Solutions

SASE solutions generally possess a networking component such as a software-defined wide area network (SD-WAN) plus a wide range of security components offered in cloud-native format.

These security components are added to secure the communication on the network from end to end, provide consistent policy management and enforcement, add security analytics, and enable an integrated administration capability to manage every connection from everything to every resource.

Some of these features commonly include Zero Trust Network Access (ZTNA), which means a Zero Trust approach to security is one of the security components that enables SASE. Therefore, SASE is dependent on Zero Trust.

♦ The Benefits of SD-WAN and SASE

The combination of SD-WAN and SASE brings forth many benefits for businesses. Firstly, it enhances network performance and agility, allowing organizations to deliver consistent and reliable connectivity across geographically dispersed locations.

Secondly, SD-WAN and SASE offer robust security features that safeguard critical data and applications from emerging cyber threats. Additionally, the cloud-native nature of SASE enables organizations to scale their networks effortlessly while reducing infrastructure costs.

SD-WAN and SASE are transforming the way businesses approach network architectures. Organizations can optimize their network costs without compromising performance by replacing traditional MPLS connections with cost-effective and flexible broadband options. Integrating security into the network fabric also eliminates the need for multiple standalone security appliances, simplifying network management and reducing complexity.

SD-WAN and SASE

SD-WAN and SASE Main Components

SASE Solution 

  • SASE solutions generally possess a networking component such as a software-defined wide area network (SD-WAN).

  • Some of these features commonly include Zero Trust Network Access (ZTNA).

  • Organizations can now deliver consistent and reliable connectivity across geographically dispersed locations.

  • Integrating security into the network fabric eliminates the need for multiple standalone security appliances.

SASE Network

We have a common goal: to move users closer to the cloud services they are accessing. However, traffic sent over the Internet is all best-effort and is prone to attacks from bad actors and unforeseen performance issues.

There were over 14,000 BGP incidents last year, so cloud access over the Internet varies if BGP is unstable. There is no one approach to solve everything, but deploying SASE ( secure access service edge ) will give you a solid posture. Secure Access Service Edge deployment is not something you take out of a box and plug in.

It needs a careful strategy, and a recommendation would be to start with SD-WAN. Specifically, SD-WAN security creates an SD-WAN SASE design. SD-WAN is now mainstream, and cloud security integration is becoming critical, enabling enterprises to evolve to a cloud-based SASE architecture. The SASE Cisco version is called Cisco Umbrella.

 

Cisco Umbrella
Diagram: Cisco Umbrella. Source is Cisco

 

Security SASE

As organizations have shifted how they connect their distributed workforce to distributed applications in any location, the convergence of networking and cloud security has never been more critical. And that is what security SASE is all about—bringing these two pillars together and enabling them from several cloud-based PoPs.

Designing, deploying, and managing end-to-end network security is essential in today’s constant attacks. Zero Trust SASE lays the foundation for customers to adopt a cloud-delivered policy-based network security service model.

 

Security SASE

SD-WAN SASE

Then, we have Cisco SD-WAN, a cornerstone of the SASE Solution. In particular, Cisco SD-WAN integration with Cisco Umbrella enables networks to access cloud workloads and SaaS applications securely with one-touch provisioning, deployment flexibility, and optimized performance.

We have several flexible options to journey to the SASE Cisco with Cisco SD-WAN. Cisco has a good solution that can combine the Cisco SD-WAN and cloud-native security, which is Cisco Umbrella, into a single offering that delivers complete protection. We will get to how this integrates in just a moment.

However, to reach this integration point, you must first understand your stage in your SASE journey. Everyone will be at different stages of the SASE journey, with unique networking and security requirements. For example, you may still be at the SD-WAN with on-premises security.

Then, others may be further down the SASE line with SD-WAN and Umbrella SIG integration or even partially at a complete SASE architecture. As a result, there will be a mixture of thick and thin branch site designs.

SASE Network: First steps 

A mix of SASE journey types will be expected, but you need a consistent, unique policy over this SASE deployment mix. Therefore, we must strive for a compatible network and security function anywhere for continuous service. 

As a second stage to consider, most organizations are looking for multiple security services, not just a CASB or a firewall. A large number of organizations want multi-function cloud security services, and once you move to the cloud, you will increase efficiency and gain the benefits of multi-function cloud-delivered security services.

 


 

SASE Network: Combined all security functions

So, the other initial step to SASE is to combine security services into a cloud-delivered service. All security functions are now delivered from one place, dispersed globally with PoPs. This can be done with Cisco Umbrella. Cisco Umbrella is a multi-function security SASE solution.

Cisco Umbrella integrates multiple services to manage protection and has all of this on one platform. You can then deploy it to the locations where it is needed. For example, some sites only need DNS-layer filtering; for others, you may need full CASB and SWG capabilities.

SASE Network: Combine security with networking 

So, once we have combined all security functions, we need to bring networking into security, which requires a flexible approach to meet multi-cloud at scale. This is where we can introduce SD-WAN as a starting point of convergence. The benefits of SD-WAN are clear: dynamic segmentation, application optimization, cloud networking, and integrated analytics and assurance. So, we are covering both the technology stacks and how the operations team consumes the virtual overlay features.

There are Cisco SD-WAN use cases that can help you transform your WAN edge with deeper cloud integration and rapid access to SASE Cisco. You can have Cisco Umbrella cloud security available from the SD-WAN controller and vice versa, which is a good starting point.

 

Secure Access Service Edge

New connectivity structures

Let us rewind for a moment. The concept of Secure Access Service Edge is based on a few reasons. Several products can be put together to form a SASE offering. The main reason for SASE is the major shift in the IT landscape.

We have different types of people connecting to the network, using our network to get to the cloud, or there can be direct cloud access. This has driven the requirements for a new security architecture to match these new connectivity structures. Nothing can be trusted, so you need to evolve your connectivity requirements. 

Shifting workloads to the cloud

There has been a shift of workloads moving to the cloud. Therefore, there are better approaches than backhauling users through a data center to reach cloud applications; backhauling to a central data center to access cloud applications is a waste of resources.

Backhauling should only be used for applications that can’t be placed in the cloud; otherwise, it results in increased application latency and an unpredictable user experience. The cloud drives a significant network architecture shift, and you should take advantage of it.

 

SASE Network: New SASE design

Initially, we had a hub-and-spoke architecture with traditional appliances; this has moved to a design where network and security capabilities are delivered from the cloud. This puts the Internet at the center, creating a cloud edge around the globe where it makes sense for users to access services, rather than going to a central data center simply because it’s there. 

This is the paradigm shift we are seeing with the new SASE architecture. So users connect directly to this new cloud edge, the main headquarters joins the cloud edge, and branch offices can connect via SD-WAN to the cloud edge.

So, this new cloud edge contains all data and applications. You can then turn on the security and network functions that each cloud edge PoP needs as a suite for the branch site or remote user connecting.

 


 

Secure Access Service Edge Consideration

The need for DIA

Firstly, most customers want to leverage Direct Internet Access (DIA) circuits because they no longer want the data center to be the aggregation path for most of the traffic going to the cloud. Then we have additional requirements for some applications, for example, Office 365.

In this case, there is a specific requirement from Microsoft: such an application cannot be subject to a proxy. Office 365 demands DIA and should be provided with, for example, Azure ExpressRoute.

Identity Security

Then, we have the considerations around identity and identity security. We have new endpoints and identities to consider, and we need to weigh multiple contextual factors when determining the risk level of the identity requesting access. Now that the perimeter has shifted, how do I have complete visibility of the traffic flow and drive consistent identity-driven policy? And not just for the user but for the devices, too.

Also, segmentation. How do you extend your segmentation strategy to the cloud and open up new connectivity models? For segmentation, you want to isolate all your endpoints, and this may include IoT, CCTV, and other devices. 

Identity Security Technologies

Multi-factor authentication (MFA) can be used here, combining multiple authentication factors to grant access, and this needs to be a continuous process. I’m also a big fan of Just-in-Time (JIT) access. Here, we give access to only a particular segment for a specific time; once that time is up, access is revoked. This certainly reduces the risk of malware spreading. In addition, you can isolate privileged sessions and use step-up authentication to access critical assets.
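Just-in-Time access is easy to reason about as a table of time-boxed grants: access exists only while the grant is valid and is revoked automatically when it expires. The sketch below is a conceptual illustration with made-up user and segment names.

```python
import time

grants = {}  # (user, segment) -> expiry time in epoch seconds

def grant_jit_access(user, segment, duration_s=3600):
    """Grant access to one segment for a limited time (illustrative)."""
    grants[(user, segment)] = time.time() + duration_s

def has_access(user, segment):
    """Access is valid only until the grant expires; expired grants are dropped."""
    expiry = grants.get((user, segment))
    if expiry is None:
        return False
    if time.time() >= expiry:
        del grants[(user, segment)]   # automatic revocation
        return False
    return True

grant_jit_access("alice", "pci-segment", duration_s=900)  # 15-minute window
print(has_access("alice", "pci-segment"))   # True within the window
print(has_access("alice", "iot-segment"))   # False, never granted
```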

 

SASE Cisco
Diagram: SASE Cisco and Enhanced Identity Security.

 

Security SASE 

SASE Cisco takes the network, the connectivity, and security and converges them into a user service. SASE is an alternative to the traditional on-premises approach to protection.

And instead of having separate silos for network and security, SASE unifies networking and security services and delivers edge-to-edge protection. SASE is more of a journey to reach than an all-in-one box you can buy and turn on. We know SASE entails Zero Trust Network Access (ZTNA), SD-WAN, CASB, FWaaS, RBI, and SWG, to name a few. 

SASE effectively consolidates security and threat protection through a single vendor with a global presence and peering relationships. 

 


SASE connectivity: SD-WAN SASE

Connectivity is where we need to connect users anywhere to applications everywhere. This is where the capabilities of SD-WAN SASE come into play. SD-WAN brings advanced technologies such as application-aware routing, WAN optimization, per-segment topologies, and dynamic tunnels.

Now, we have SD-WAN to handle the connectivity side of things. Then we need control on the security side: control is required for end-to-end threat visibility and security. So, even though the perimeter has shifted, you still need to follow the zero trust model outside the traditional boundary. 

Multiple forms of security drive SASE that can bring this control; the main ones are secure web gateways, cloud-delivered firewalls, cloud access security brokers, DNS layer security, and remote browser isolation. So, we need these network and security central pillars to converge into a unified model. So, it can be provided as a software-as-a-service model.

Building the SASE architecture 

There can be several approaches to forming this architecture. We can have a Virtual Machine (VM) for each of the above services, place it in the cloud, and then call this SASE. However, too many hops between network and security services in the VM design will introduce latency. As a result, we need to have a SASE approach that is born for the cloud. A bunch of VMs for each network and security service is not a scalable approach.

Therefore, a better approach would be to have a microservices, multi-tenancy container architecture with the flexibility to optimize and scale. Consider the SASE architecture to be a cloud-native architecture.

A multitenant cloud-native approach to WAN infrastructure enables SASE to service any edge endpoint, including the mobile workforce, without sacrificing performance or security. It also means the complexities of upgrades, patches, and maintenance are handled by the SASE vendor and abstracted away from the enterprise.

 

  • A key point: Cisco Umbrella

Cisco Umbrella is built on a cloud-native microservices architecture. However, Umbrella alone does not provide SASE; it must be integrated with other Cisco products to form the SASE architecture. Let’s start with Cisco SD-WAN.

 

Cisco SD-WAN: creating SD-WAN SASE

SD-WAN grew in popularity as a more agile and cloud-friendly approach to WAN connectivity. With large workloads shifting to the cloud, SD-WAN gave enterprises a more reliable alternative to Internet-based VPN and a more agile, affordable alternative to MPLS for several use cases.

In addition, by abstracting away underlying network transports and enabling a software-defined approach to the WAN, SD-WAN helped enterprises improve network performance and address challenges such as the high costs of MPLS bandwidth and the trombone-routing problem. 

SD-WAN is essential for SASE success and a crucial building block. SASE cannot deliver ubiquitous security without the safeguards SD-WAN provides, including:

  • Enabling Network Address Translation (NAT)
  • Segmenting the network into multiple subnetworks
  • Firewalling unwanted incoming and VLAN-to-VLAN traffic
  • Securing site-to-site/in-tunnel VPN

So, SD-WAN can ride on top of any transport, whether MPLS or internet breakout, and onboard any user and consumption model. This makes SD-WAN’s embedded security a good starting point for SASE.  

 

SD-WAN SASE
Diagram: SD-WAN SASE – Connecting to Cisco Umbrella

 

SD-WAN Security Stack: SD-WAN SASE

The SD-WAN security stack is entirely consistent on-premises and in the cloud. SD-WAN supports a Layer 7-aware enterprise firewall, an intrusion prevention system built on Snort, URL filtering, advanced malware protection, and an SSL proxy.

Everything except the enterprise firewall is enabled through a container architecture, and automated security templates exist. Based on the intent, vManage pushes the configuration to the WAN edge so that the security services can be turned on.

All of this can be done with automated templates from the SD-WAN controller, which can even configure Cisco Umbrella from Cisco SD-WAN. What I find helpful is the excellent integration with vManage—essentially streamlining security. There are automated templates in vManage that you can leverage for this Cisco Umbrella functionality.

Cisco Umbrella: Enabling Security SASE

The next level of the SASE journey would be with Cisco Umbrella. So, we still have the SD-WAN network and security capabilities enabled. An SD-WAN fabric provides a secure connection to connect to Cisco Umbrella, gaining all the benefits of the SD-WAN connecting model, such as auto tunnel and intelligent traffic steering.

Now, this can be mixed with the cloud security capabilities of Cisco Umbrella. With these two products combined, we begin to fill out our defense-in-depth layers of security functions, with multiple security features working together to strengthen your security posture.

The first layer of defense

I always consider DNS-layer security the first layer. Every transaction needs a DNS request, so it’s an excellent place to start your security. If the customer needs additional measures of defense, you can introduce the other security functions that Cisco Umbrella offers, turning security functions on and off (they run as containers) as you see fit.

 

SD-WAN SASE: Connecting the SASE Network 

We use a secure IPsec tunnel for SD-WAN to connect to Cisco Umbrella. An IPsec tunnel is set up to Cisco Umbrella by pushing the SIG feature template, so there is no need to configure a tunnel manually for each WAN edge at the branch. The IPsec tunnels at the branch are auto-created to the Cisco Umbrella headend. This provides deep integration and automation capabilities between Cisco SD-WAN and Cisco Umbrella. You don’t need to design this; it is done for you.

IPsec Tunnel Capabilities

What type of IPsec capabilities do you have? Remember that each IPsec tunnel can support 250 Mbps and burst higher if needed. In larger deployments, multiple tunnels can be deployed to support higher capacity, so active-active tunnels can be created for more throughput. This design also provides excellent high availability: you have an IPsec tunnel established to a primary Cisco Umbrella PoP.

If this Cisco Umbrella PoP goes down, all the services can be mapped to a secondary Umbrella data center in the same or a different region if needed. It is unlikely that two SASE PoPs in the same region will go down at once.

Hybrid Anycast handles the failover to the secondary SASE PoP or DR site. You don’t need to design this; it is done automatically for you. With this design, Cisco has what is known as a unified deployment template called the “Secure Internet Gateway Template.” 

 

Cisco Umbrella
Diagram: Cisco Umbrella connectivity.

 

Active-active tunnels

The Cisco SD-WAN vManage auto-template allows up to four active tunnels, operating at 250 Mbps each from a single device. Cisco SD-WAN can then ECMP load-balance traffic across these tunnels. Eight tunnels can be supported, but only four can be active.

These tunnels are established from a single Public IP address using NAT-T, and NAT-T opens up various design options for you. Now, you can have active-active tunnels, weighted load balancing, and flexible traffic engineering with a unique template.

We know that each tunnel supports 250 Mbps, and we now support four tunnels with ECMP for increased throughput. These four tunnels can give you 1 Gbps from the branch to the Cisco Umbrella headend. So, as a network admin, you can pass 1 Gbps of traffic to the Umbrella SIG while maintaining performance. 

IPsec Tunnel configuration 

For weighted load balancing, we can have, say, two tunnels to Cisco Umbrella with the same weight, over two DIA circuits with the same bandwidth. When the weight is configured the same for the different ISPs, the traffic will be equally load balanced. Cisco uses per-flow load balancing, not per-packet load balancing. The load balancing is done by flow pinning, where a flow's tunnel is determined by hashing the 4-tuple. 

So, for example, there will be a static route pairing to both tunnels, and the metric will be the same; you can also have an unequal-cost multi-path use case. You may have small branch sites with dual DIA circuits with different bandwidths and entitlements.

To optimize the WAN, you can have traffic steered at 80:20 over the DIA circuits. If you had a static route statement, you could see that there would be different metrics. 
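A rough sketch of per-flow pinning with weights: the 4-tuple is hashed, and the hash selects a tunnel in proportion to its configured weight, so the same flow always lands on the same tunnel. Tunnel names, weights, and addresses are illustrative, not the actual WAN edge algorithm.

```python
import hashlib

def pick_tunnel(src_ip, src_port, dst_ip, dst_port, tunnels):
    """Pin a flow to one tunnel by hashing its 4-tuple.

    `tunnels` is a list of (name, weight) pairs; the weights approximate the
    80:20 style steering described above. Names are illustrative only.
    """
    # Expand tunnels by weight so the hash lands on each one proportionally.
    slots = [name for name, weight in tunnels for _ in range(weight)]
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return slots[digest % len(slots)]

tunnels = [("sig-tunnel-isp1", 80), ("sig-tunnel-isp2", 20)]
print(pick_tunnel("192.168.1.10", 51514, "203.0.113.50", 443, tunnels))
# The same 4-tuple always maps to the same tunnel (per-flow, not per-packet).
```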

 

Policy-Based Routing to Cisco Umbrella

You can also have policy-based routing to Cisco Umbrella, which allows you to configure flexible traffic engineering so that only specific application traffic from your branch goes to Umbrella. For example, at one branch site you might send only Office 365 or GitHub traffic to Cisco Umbrella, while at Branch 2 you send all traffic, including all cloud and internet-bound traffic. So we can adapt the design to each requirement. 

Policy-based routing to Cisco Umbrella allows you to select which applications are sent to Umbrella, limiting what types of traffic are routed there in accordance with policy. Here, we are leveraging Deep Packet Inspection (DPI) for application classification within the data policy. All of this is based on an app-aware data policy. 

Layer 7 Health check 

You will also want to monitor IPsec tunnel health during brownouts, which could stem from an underlying transport issue, and dynamically influence traffic forwarding toward high-performing tunnels. Here, Cisco has an L7 tracker with a custom SLA that can monitor tunnel health. The L7 tracker sends an HTTP ping request to the Umbrella service API (service.sig.umbrella.com) to measure RTT latency and then compares this to the user’s configured SLA. If tunnels do not meet the required SLA, they are marked down based on the tracker status, and traffic goes through the remaining available tunnels.  
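The tracker logic can be approximated with a simple HTTP probe that measures round-trip time and compares it to an SLA threshold. This is a hedged sketch using only the Python standard library; the probe URL and SLA value are placeholders, not the exact tracker behavior.

```python
import time
import urllib.request

def tunnel_meets_sla(probe_url, sla_rtt_ms=300, timeout_s=2.0):
    """Probe a tunnel with an HTTP request and compare round-trip time to an SLA.

    probe_url and sla_rtt_ms are illustrative; the real tracker probes the
    Umbrella service endpoint from inside each tunnel.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(probe_url, timeout=timeout_s):
            pass
    except OSError:
        return False  # unreachable tunnels are marked down
    rtt_ms = (time.monotonic() - start) * 1000
    return rtt_ms <= sla_rtt_ms

# Usage sketch:
# for name, url in {"tunnel1": "https://service.sig.umbrella.com"}.items():
#     print(name, "up" if tunnel_meets_sla(url) else "down")
```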

 

Summary: SD WAN SASE

In today’s increasingly digital world, businesses constantly seek innovative solutions to enhance network connectivity and security. SD-WAN SASE (Software-Defined Wide Area Network Secure Access Service Edge) is a groundbreaking technology. In this blog post, we delved into the intricacies of SD-WAN SASE, its benefits, and how it is revolutionizing network connectivity.

Section 1: Understanding SD-WAN

SD-WAN, or Software-Defined Wide Area Network, is a virtualized approach to connecting and managing networks. It allows organizations to efficiently connect multiple locations, whether branch offices, data centers, or cloud-based applications. By leveraging software-defined networking principles, SD-WAN offers enhanced agility, performance, and cost savings compared to traditional WAN solutions.

Section 2: Unveiling SASE

SASE, which stands for Secure Access Service Edge, is a transformative concept that combines network security and WAN capabilities into a unified cloud-based architecture. It enables organizations to consolidate networking and security functions, delivering comprehensive protection and improved performance. SASE replaces the traditional hub-and-spoke network model with a more agile and secure architecture.

Section 3: The Synergy of SD-WAN and SASE

When SD-WAN and SASE are combined, the result is a powerful solution that brings together the benefits of both technologies. SD-WAN provides network agility and scalability, while SASE ensures robust security measures are seamlessly integrated into the network. This synergy enables organizations to optimize their network performance while safeguarding against evolving cybersecurity threats.

Section 4: Benefits of SD-WAN SASE

4.1 Enhanced Performance and User Experience: SD-WAN SASE optimizes traffic routing, ensuring applications and data take the most efficient path. It prioritizes critical applications, resulting in improved performance and user experience.

4.2 Simplified Network Management: The unified architecture of SD-WAN SASE simplifies network management by consolidating various functions into a single platform. This streamlines operations and reduces complexity.

4.3 Enhanced Security: With SASE, security functions are natively integrated into the network. This ensures consistent and comprehensive protection across all locations, devices, and users, regardless of their physical location.

4.4 Cost Savings: SD-WAN SASE reduces the reliance on expensive hardware and dedicated security appliances, resulting in cost savings for organizations.

Conclusion:

In conclusion, SD-WAN SASE is transforming the landscape of network connectivity and security. By combining the agility of SD-WAN and the robustness of SASE, organizations can achieve optimal performance, enhanced security, simplified management, and cost savings. Embracing this innovative technology can empower businesses to stay ahead in the ever-evolving digital world.

SASE Cisco

SASE | SASE Solution

SASE Solution

In the rapidly evolving landscape of technology and connectivity, organizations are constantly seeking innovative solutions to enhance network security and streamline operations. Enter SASE, the game-changing concept that combines network and security capabilities into a single cloud-based architecture. This blog post will delve into SASE (Secure Access Service Edge) and explore its transformative potential for businesses.

SASE, pronounced "sassy," represents a paradigm shift in network security. It encompasses a comprehensive framework that converges wide-area networking (WAN) and network security services into a unified cloud-native solution. By integrating software-defined wide-area networking (SD-WAN) and security functions, SASE offers organizations a simplified, scalable, and agile approach to network security.


Highlights: SASE Solution

 

The Role of SASE Security

In this post, we will decompose the Zero Trust SASE, considering the SASE fabric and what a SASE solution entails. The SASE security consists of global PoPs. With network and security functions built into each PoP, they are operated with a single management plane. This post will examine the fabric components while discussing the generic networking and security challenges that SASE overcomes, focusing on Cisco SASE.

Cisco Approach with Umbrella

The Cisco SASE definition is often deemed just Cisco Umbrella; however, that is just part of the solution. Cisco SASE includes the Umbrella but entails an entirely new architecture based on the CSP 5000 and Network Function Virtualization (NFV) and a series of Virtual Network Functions (VNFs) such as virtual firewalls. We will touch on Cisco SASE soon.

As the SASE solution has a lot of dependencies, you, as an enterprise, need to know how far along you are in your cloud adoption. Whether you are on a public-cloud-first, hybrid, multi-cloud, or private cloud path affects the design of where you have your DMZ. SASE security is all about optimizing the DMZ to enable secure access.

Secure Access Service Edge

Related: For pre-information, you may find the following posts helpful:

  1. SD-WAN SASE
  2. SASE Model
  3. Cisco Secure Firewall
  4. Ebook on SASE Capabilities

 



SASE Security.

Key SASE Architecture Discussion points:


  • Introduction to old DMZ and its drawbacks.

  • The role of perfecting the DMZ and a SASE solution.

  • SASE solution components. 

  • The old data center design and issues.

  • Challenges and how SASE overcomes these.

  • Example SASE Solution: Cisco SASE.

 

Back to Basics: SASE Solution

SASE refers to a concept incorporating cloud-based software-defined wide area networking (SD-WAN) with a range of security services and unified management functionality, delivering security and SD-WAN capabilities to any edge computing location. A prime use case for SASE is addressing the performance bottlenecks of traditional networks that rely on traffic backhauling. Further, by integrating identity, business context, and real-time risk assessment into every connection, SASE architectures promise to prevent a variety of cyber-attacks.

SASE explained
Diagram: SASE explained. Source Fortinet.

♦ The Benefits of SASE

By adopting a SASE solution, businesses can unlock a plethora of benefits. Firstly, it provides secure access to applications and data from any location, enabling seamless remote work capabilities. Additionally, SASE eliminates the need for traditional hardware-based security appliances, reducing costs and complexity. The centralized management and policy enforcement offered by SASE ensures consistent security across the entire network, regardless of the user’s location or device.

To fully grasp the power of SASE, it is essential to understand its key components. These include secure web gateways (SWG), cloud access security brokers (CASB), firewall-as-a-service (FWaaS), data loss prevention (DLP), and zero-trust network access (ZTNA). Each element is crucial in fortifying network security while enabling seamless user connectivity.

While the benefits of SASE are enticing, organizations must approach its implementation strategically. Assessing the existing network infrastructure, defining security requirements, and selecting a reliable SASE provider are crucial. A phased approach to deployment, starting with pilot projects and gradually scaling up, can help organizations ensure a smooth transition and maximize the potential of SASE.

 

The DMZ: Calling a SASE Solution

First, the SASE architecture updates the DMZ, which has remained unchanged since the mid-90s. The DMZ, often called the perimeter network, is a physical or logical subnetwork whose sole purpose is to expose an organization’s external-facing services to untrusted networks.

The DMZ adds a layer of security so that external networks, potentially insecure, can only access what is exposed in the DMZ. At the same time, the rest of the organization’s network is protected by a security stack.

As a result, the DMZ is considered a small, isolated network portion and, if configured correctly, will give you extra time to detect and address breaches, malware, and other types of attacks before they further penetrate the internal networks. 

The critical factor here is that it’s a layer that, at best, gives you additional time before the breach to the internal network. The central pain point with the current DMZ architecture is that the bad actor knows it’s there unless you opt for zero trust single packet authentication or some other zero-trust technology. This post will examine how SASE can secure and update the DMZ to align with the current trends we will discuss in this post.

SASE Architecture
Diagram: The old DMZ and the need for a SASE architecture.

 

  • A key point: SASE security and SD-WAN

This is similar to updating the WAN edge with SD-WAN to optimize per-application performance with SD-WAN overlays. Both SASE and SD-WAN are updating, let’s say, the last hardware bastions in your infrastructure: SD-WAN with the WAN edge and SASE with the DMZ. 

The DMZ is a vital section, but it needs to be secured not just with a perimeter firewall opening ports; it also needs an understanding of the traffic flows we have, along with good visibility, the ability to detect an attack, and the ability to respond appropriately. Reaction time needs to be quick, at speeds only achievable with secure automation.

 

A perfect DMZ: SASE Solution

These new DMZ designs need to be open: they must support APIs and open, standard modeling languages like XML and YANG. This will allow you to support various network and security devices—physical, virtual, and hybrid—via secure APIs. Not only does the design need to be open, but it also needs to be extensible and repeatable, so new functionality can be added and removed as the architecture evolves and reacts to dynamic business objectives.

SASE Solution.
Diagram: SASE Solution. The requirements.

 

SASE also needs to scale up and down, out and in, with little or no disruption to existing services. It should be able to scale without adding physical appliances. You can only scale so far with physical devices. The SASE solution needs Network Function Virtualization ( NFV ) with a series of Virtual Network Functions (VNFs) chained together. Cisco CSP 5000 can be used here, and we will discuss it briefly.

You want to avoid dealing with the CLI of the device. The new SASE fabric needs to have good programmability. All functional elements of the architecture are fully programmable via API.

The APIs must not just read data but also change behavior, such as network device configurations. You will need an orchestrator for this. For example, Ansible Tower can automate and manage configuration drift among the virtual network functions. Ansible Tower provides end-to-end automation with features such as workflow templates and integration into CI/CD pipelines.

 

SASE Security and SDN

Network segmentation is essential to separate data plane and control plane traffic: the control plane configures the devices, and the data plane forwards the traffic. This separation is essential for scalability and performance. To manage SASE security, you will need to employ software-defined networking principles. The SDN controller is not in the forwarding path; it just sets up the data plane. The data plane should keep operating even if the control plane fails, though the control plane can be clustered to avoid failure.

 

Standard Data Center Design

First, there are the consumers of services: customers, remote users, partners, and branch sites. These consumers need to access applications hosted either in the network or in the cloud domain. For applications hosted in the network, consumers typically connect through a WAN edge. For cloud-based applications, they can go directly to IaaS or, more commonly, SaaS, because access to cloud-based applications does not go via the WAN edge.

SASE Security.
Diagram: Standard DMZ design and need for SASE security.

 

As discussed, consumers accessing applications that are not hosted in the cloud are met by the WAN edge. Traffic must traverse the WAN edge to reach the application, passing through another layer of network and security functionality deeper in the network.

At the edge of the network, we have many different types of network and security functionality: standard routers, WAN optimization controllers, firewalls, email gateways, flow collectors, and other probes that collect traffic.

The network will also have a switching fabric. The old days of the three-tier data center architecture are gone: any switching fabric that needs IP forwarding to scale is based on a spine-leaf architecture, for example Cisco ACI. Cisco ACI has good Multi-Pod and Multi-Site capabilities.

 

Going deeper, we reach the application tier. Some applications are Internet-facing, while others are accessed only by internal users. Each will have its own security, forwarding and proxy devices, and load balancers, all physically wired into the fabric with limited capacity.

For a global data center design, these sites commonly connect over MPLS, which provides the global WAN. Each data center connects to the MPLS network and is usually grouped by region, such as EMEA or AMERICAS, so we have distributed networks with MPLS label switching between them. Segment Routing can also provide this global WAN and improves traffic engineering.

Several common trends have challenged parts of this design. Many of them call for the introduction of a new network area, the SASE fabric, commonly hosted in a CNF or colocation facility that already has the physical connectivity and circuits laid out for you.

 

Common Trends: SASE Architecture

In a cloud-centric world, users and devices require access to services everywhere. These services are now commonly migrated to SaaS and IaaS-based clouds, so we have an application migration from "dedicated" private infrastructure to the "shared" public cloud. These applications became easy to change thanks to microservices designs. Growth was rapid, and now you must secure workloads in a multi-tenant environment.

 

Identity is the new perimeter

As a result, the focal point has changed considerably. It is now the identity of the user and device, along with other context around the connection, as opposed to the traditional model that focused solely on the data center. Identity then becomes the new perimeter.

Another major trend is that capacity and bandwidth requirements to the public clouds have doubled. Now that applications are hosted in the cloud, we also need to make changes on the fly to match the agility of the cloud.

When migrating these applications, we must, for example, rapidly upgrade internet-facing firewalling to meet remote-user access demands, while security teams demand IPS/AMP appliance insertions. In a cloud environment, it is up to you to secure your workloads, and you need the same security levels in the cloud as you would on-premises.

SASE
Diagram: The common trends. The need for SASE.

 

These apps are no longer in our data center, so we need to ensure that the migrated applications housed in the AWS or Azure clouds have the same security policy they had on-premises. That means more services in the current infrastructure, and therefore more wiring and configuration; what is the impact on an extensive global network? If you have an application distributed across several areas and want to open a port, the change must be made and monitored in many places and by many teams.

The internal data center application is becoming less important than what is running in the public clouds. With more apps in the cloud, the data center loses its place as the prime focal point. The data center will always be retained, but its connectivity and design will change with the introduction of a SASE solution.

 

SASE Security

Many common problems challenge the new landscape. With separate appliances deployed for the networking and security technology stacks, not to mention the failover requirements between them, we are saddled with high complexity and overhead.

Legacy network and security functions located in the DMZ increase latency. Even with service chaining, that latency grows and becomes more challenging to troubleshoot. In addition, the world is encrypted, and this traffic needs to be inspected without degrading application performance.

These challenges are compelling reasons to leverage a cloud-delivered SASE solution. The SASE architecture is a global fabric with a network tailored to the application types typically hosted in the cloud. SASE optimizes where it makes the most sense for the user, device, and application: at geographically dispersed PoPs. Many consumers will connect directly to a colocation facility that hosts the SASE architecture.

SASE Architecture.
Diagram: SASE Architecture.

 

The significant architectural change from past designs is that the consumers (remote users, customers, branches, and partners) connect to the WAN edge, Internet, or IaaS via a colocation facility. Circuits migrate from the data center to selected "central hub" connectivity and colocation sites.

The old data center becomes just another application provider connecting to the colocation facility. Before addressing what this colocation looks like, we will address the benefits of redefining the network and security architecture. Yes, adopting SASE reduces complexity and overhead, improves security, and increases application performance, but what does that mean practically?

 

Challenges: Complexity and Overhead

Problems with complexity/overhead/processing/hardware-based solutions

Traditional mechanisms are limited by the hardware capacity of the physical appliances at the customer's site and by the lag created by the hardware refresh cycles needed to add new functionality. Hardware-based network and security solutions build the differentiator of the offering into the hardware; only with different hardware can you accelerate the services and add new features.

Some features are available only on specific hardware, not the hardware you already have onsite, and in that case heavy lifting by the customer will be required. In addition, as the environment evolves, we should not have to depend on the next appliance generation for new network and security features. This inefficient and complex model creates high operational overhead and management complexity.

Device upgrades for new features require significant management. From experience, swapping out a line card, because it ran out of ports or because additional features exist only in a new generation, would involve multiple teams.

It would involve project planning, onsite engineers, design guides, hopefully line-card testing, and out-of-hours work. For critical sites, team members may need backup to ensure a successful refresh. Many touches need to be managed.

 

SASE Security Response:

SASE architecture overcomes tight coupling and hardware-based solutions.

The cloud-based SASE enables updates for new features and functionality without requiring new deployments of physical appliances. There will still need to be a physical appliance, but that appliance can host many virtual network and security functions. This has an immediate effect on ease of management.

The network and security deployment can now occur without ever touching the enterprise network. This allows enterprises to adopt new capabilities quickly. Once the tight coupling between the features and the customer appliance is removed, we have increased agility and simplicity for deploying network and security services.

 

Cisco SASE: Virtualization of Network Functions

With a Cisco SASE platform, when we create an object, such as a virtualized network function, the policy defined in the networking domain is then available in other domains, such as security. Network function virtualization, where we decouple software from hardware, is a familiar concept.

This is often linked to automation and orchestration, where we focus on simplifying the architecture, particularly for Layer 4 to Layer 7 services. Virtual machine hosting enabled the evolution and variety of virtualized workloads. The virtualization of network and security functions lets you scale up, down, out, and in at speed and scale without the service being embedded in hardware.

 

Cisco SASE: Network Functions Examples

Let's say you have an ASAv5 as a virtual appliance with, for example, one core. If you want more cores, you can scale up to an ASAv50, which supports eight cores. So we can scale up and down. However, what if you want to scale out?

Here, we can add more Cloud Services Platform nodes and ASAv instances, scaling out virtual firewalls with equal-cost multipath (ECMP) load balancing. You do not want to buy a physical appliance that will only ever do one function. The days of multiple physical point solutions are ending as SASE gains momentum. Instead, you want your data center to scale when capacity demands it, without physical limitations.
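As a small sketch of how ECMP spreads flows across a pool of scaled-out virtual firewalls, the example below hashes the 5-tuple so every packet of a flow lands on the same instance. The instance names and hash choice are illustrative of the general ECMP idea, not a Cisco-specific mechanism.

```python
# Illustrative ECMP-style flow distribution across scaled-out virtual firewalls.
import hashlib

FIREWALL_POOL = ["asav-1", "asav-2", "asav-3", "asav-4"]   # scale out by appending

def pick_firewall(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the 5-tuple so all packets of a flow map to the same instance."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return FIREWALL_POOL[digest % len(FIREWALL_POOL)]

print(pick_firewall("192.0.2.10", "203.0.113.5", 51514, 443))
```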

 

  • For example, Cisco SASE architecture.

NFV network services can be deployed and managed much more flexibly because they can be implemented in a virtualized environment using x86 computing resources instead of purpose-built dedicated hardware appliances. The CSP 5000 Series can help you make this technology transition.

In addition, with NFV, the Cisco SASE open approach allows other vendors to submit their Virtual Network Functions (VNF) for certifications to help ensure compatibility with Cisco NFV platforms.

This central location is a PoP where a Cloud Services Platform provides the virtualized host. For example, the Cloud Services Platform CSP-5000 could host CSR, FTD, F5, AVI Networks, ASAv, or other KVM-based services. These network and security functions are the virtual network appliances, each consisting of virtual machines.

 

Cisco SASE and the CSP 5000

Within the Cisco SASE design, the CSP 5000 Series can be deployed within data centers, regional hubs, colocation centers, the WAN edge, the DMZ, and even at a service provider's Point of Presence (PoP), hosting various Cisco and third-party VNFs. We want to install the CSP at a PoP, specifically in a colocation facility. If you examine the CSP-5000 block diagram, you will see that Cisco SASE has taken a very open ecosystem approach to NFV, using components such as Open vSwitch.

It uses Single Root I/O Virtualization (SR-IOV) and an Open vSwitch Data Plane Development Kit (OVS-DPDK). The optimized data plane provides near-line rates for SR-IOV-enabled VNFs and high throughput with OVS DPDK interfaces.

The CSP has a few networking options. First, Open vSwitch (OVS) is the software Layer 2 switch for the CSP-5000; the CSP's internal software switch bridges the virtual firewall to the load balancer and on to the ToR switches. Alternatively, you can use SR-IOV in Virtual Ethernet Bridge (VEB) mode, which gives better performance. As a third option, there is SR-IOV in Virtual Ethernet Port Aggregator (VEPA) mode.
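To ground the OVS option, here is a hedged sketch of what bridging a virtual firewall, a load balancer, and a ToR uplink through Open vSwitch might look like at the command level. The bridge and port names are invented, and on a CSP the platform itself handles this wiring, so treat this purely as a conceptual illustration.

```python
# Conceptual illustration of wiring VNF interfaces through Open vSwitch.
# Requires OVS installed and root privileges; names are hypothetical.
import subprocess

def sh(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd.split(), check=True)

sh("ovs-vsctl add-br svc-br0")                 # software L2 switch inside the host
sh("ovs-vsctl add-port svc-br0 asav-inside")   # virtual firewall's inside interface
sh("ovs-vsctl add-port svc-br0 lb-outside")    # load balancer facing the firewall
sh("ovs-vsctl add-port svc-br0 uplink-tor")    # physical uplink towards the ToR
```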

Cisco SASE Security Policies 

With the flexible design Cisco SASE offers, any policy assigned to a user stays tied to that user regardless of network location. This removes the complexity of managing network and security policies across multiple locations, users, and devices, and all of it can be done from one platform.

 

SASE Security Response:

SASE architecture overcomes complexity, heavy lifting, and scaling constraints.

I remember, from a previous consultancy, planning the following year's security budget. The network was packed with numerous security solutions. All these point solutions are expensive and there is never a fixed price, so how do you plan for that? Some of the new solutions we were considering charged on usage-based models, and we did not know the quantities at the time. So the costs keep adding up and up.

SASE removes these types of headaches. In addition, consolidating services with a single provider reduces the number of vendors and the number of agents/clients on the end-user device. We can still have different vendors operating within a SASE fabric, but they are now VNFs on a single appliance.

Overall, substantial complexity savings come from consolidating vendors and technology stacks and pushing them to the cloud, away from the on-premises enterprise network. The SASE fabric abstracts the complexity and reduces costs. In addition, from a hardware point of view, the cloud-based SASE can add capacity to existing PoPs, known as vertical scaling, and spin up PoPs in new locations, known as horizontal scaling.

SASE overcomes intensive processing.

Additionally, the SASE-based cloud takes care of intensive processing. For example, since much internet traffic is now encrypted, malware can use encryption to evade and hide from detection.

 

Here, each PoP can perform deep packet dynamics on TLS-encrypted traffic. You may not need to decrypt the payload fully; a lot can be understood through partial decryption and by examining payload patterns for malicious activity. The SASE vendor needs some form of deep packet dynamics technology.

Traditional firewalls are not capable of inspecting encrypted traffic, so performing DPI on TLS-encrypted traffic would require extra modules or a new appliance. A SASE solution ensures the decryption and inspection are done at the PoP, so no performance hit or new devices are needed at the customer sites. This can be done with deep packet dynamics technologies.
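As a deliberately simplistic illustration of the deep packet dynamics idea, the sketch below infers suspicious behavior from flow metadata, record sizes, and timing, without decrypting the TLS payload. The heuristic and thresholds are invented for the example and are not any vendor's detection engine.

```python
# Illustrative heuristic: flag encrypted flows that look like beaconing from
# metadata alone (small, uniform records sent at very regular intervals).
from statistics import mean

def looks_like_beaconing(record_sizes, inter_arrival_secs):
    """Return True for flows with small, uniform records and regular timing."""
    if len(record_sizes) < 10:
        return False
    uniform_size = max(record_sizes) - min(record_sizes) < 32        # bytes
    small_payload = mean(record_sizes) < 200                         # bytes
    regular_timing = max(inter_arrival_secs) - min(inter_arrival_secs) < 0.5
    return uniform_size and small_payload and regular_timing

sizes = [148, 150, 149, 151, 148, 150, 149, 150, 148, 151]
gaps = [30.0, 30.1, 29.9, 30.0, 30.2, 29.8, 30.0, 30.1, 29.9]
print(looks_like_beaconing(sizes, gaps))   # True: candidate for closer inspection
```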

 

Challenges: PoP Optimizations: Performance

Problems with packet drops/latency

Network congestion that results in dropped and out-of-order packets is bad for applications. Latency-sensitive applications such as collaboration, video, VoIP, and web conferencing are hit hardest by packet drops. Luckily, there are options to minimize latency and the effects of packet loss.

SD-WAN solutions have WAN optimization features that can be applied on an application-by-application or site-by-site basis. Along with WAN optimization features, there are protocol and application acceleration techniques.

Dropped Packet Test

On top of existing techniques to reduce packet loss and latency, we can privatize the WAN as much as possible. The way to control the adverse and varying effects the last mile and middle mile have on applications is to privatize the path with a global backbone consisting of a fabric of PoPs.

Once privatized, we have more control over traffic paths, packet loss, and latency. A private network fabric is a crucial benefit of SASE, as it drives application performance and allows us to inspect both east-west and north-south traffic.

Now that we have a centralized fabric consisting of many hubs and spokes, it is easy to perform traffic engineering and improve performance. When you centralize part of the architecture into a shared fabric, it is easier to make traffic adjustments globally. The central hub will probably be a colocation facility and can be only one hop away, so the architecture will be simpler and easier to implement.

 

SASE Security Response:

Here we discuss PoP optimizations: routing algorithms and the TCP proxy.

Each PoP in the cloud-based SASE solution optimizes where it makes the most sense, not just at the WAN edge. For example, within the SASE fabric we have global route optimization to determine which path is best, and the path can be changed for all traffic or for specific applications.

These routing algorithms factor in performance metrics such as latency, packet loss, and jitter, selecting the optimal route for every network packet. Unlike internet routing, which favors cost over performance, the WAN backbone constantly analyzes and tries to improve performance.
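A hedged sketch of this kind of performance-aware path selection is shown below: candidate backbone paths are scored on latency, loss, and jitter, with different weights per application class. The paths, metric values, and weights are made up purely for illustration.

```python
# Illustrative performance-aware path selection (lower score is better).
PATHS = {
    "pop-lon -> pop-nyc (cable)": {"latency_ms": 78, "loss_pct": 0.4, "jitter_ms": 6},
    "pop-lon -> pop-nyc (fiber)": {"latency_ms": 71, "loss_pct": 0.1, "jitter_ms": 2},
}

WEIGHTS = {
    "voip": {"latency_ms": 1.0, "loss_pct": 50.0, "jitter_ms": 5.0},   # loss/jitter sensitive
    "bulk": {"latency_ms": 0.2, "loss_pct": 10.0, "jitter_ms": 0.5},   # mostly throughput
}

def best_path(app_class: str) -> str:
    w = WEIGHTS[app_class]
    def score(metrics):
        return sum(w[k] * metrics[k] for k in w)
    return min(PATHS, key=lambda p: score(PATHS[p]))

print(best_path("voip"))   # the low-loss, low-jitter fiber path wins
```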

 

  • A key point: Increasing the TCP Window size

Because everything is privatized, we have all the information needed to use the largest packet size and to favor rate-based congestion algorithms over traditional loss-based ones. As a result, the path does not have to be relearned, and throughput can be maintained end-to-end. Since each PoP acts as a TCP proxy server, techniques are employed so that the TCP client and server believe they are closer to each other. A larger TCP window can therefore be set, allowing more data to be sent before waiting for an acknowledgment.
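A quick worked example shows why the window size matters: the bandwidth-delay product (BDP) is the amount of data that must be in flight to keep a path full, and shrinking the effective RTT with a per-PoP TCP proxy shrinks the window each segment needs. The numbers below are illustrative only.

```python
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    return bandwidth_bps / 8 * rtt_seconds

link = 500e6                                   # 500 Mbps path
print(bdp_bytes(link, 0.120) / 1024)           # ~7324 KiB needed end-to-end (120 ms RTT)
print(bdp_bytes(link, 0.020) / 1024)           # ~1221 KiB per proxied segment (20 ms RTT)
```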

 

Challenge: SASE Security, Zero Trust

SASE converges the networking and security pillars into a single platform. It consolidates multiple security solutions into a cloud service that enforces a unified policy across all corporate locations, users, and data. SASE recommends that you employ zero-trust principles.

The path to zero trust starts with identity: network access is based on the identity of the user, the device, and the application, not on the IP address or physical location of the device. And this is for good reason; contextual information is required.

The identity of the user/device must reflect the business context instead of being associated with binary constructs utterly disjointed from the upper layers. This binds an identity to the networking world and is the best way forward for policy enforcement.

Therefore, the dependency on IP addresses or applications as identifiers is removed. The policy is now applied consistently regardless of where the user or device is located, while the identity of the user, device, or service can be factored into the policy. The SASE stack is dynamically applied based on identity and context, serving zero trust at strategic points in the cloud and enforcing an identity-centric perimeter.
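To illustrate the contrast with IP- or location-based rules, here is a minimal sketch of an identity- and context-based access decision. The attributes and the policy are invented for the example; a real zero-trust engine would pull them from identity providers, device-posture services, and analytics.

```python
# Minimal identity/context-based access evaluation (no IP or site in the decision).
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_group: str         # from the identity provider
    device_compliant: bool  # from device posture / MDM
    application: str
    risk_score: int         # 0 (low) to 100 (high), from analytics

def allow(req: AccessRequest) -> bool:
    if req.application == "finance-erp":
        return req.user_group == "finance" and req.device_compliant and req.risk_score < 40
    return req.device_compliant and req.risk_score < 70

print(allow(AccessRequest("finance", True, "finance-erp", 20)))   # True
print(allow(AccessRequest("finance", True, "finance-erp", 55)))   # False: risky context
```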

Highlights: SASE Solution

In today's rapidly evolving technological landscape, traditional networking approaches are struggling to keep up with the demands of modern connectivity. Enter SASE (Secure Access Service Edge), a revolutionary solution that combines network and security capabilities into a unified cloud-based architecture. In this blog post, we explore the key features and benefits of SASE and delve into how it is shaping the future of networking.

Section 1: Understanding SASE

SASE, pronounced “sassy,” represents a paradigm shift in networking. It converges wide-area networking (WAN) and network security services into a single, cloud-native solution. By integrating these traditionally disparate functions, organizations can simplify network management, improve security, and enhance overall performance. SASE embodies the principles of simplicity, scalability, and flexibility, all while delivering a superior user experience.

Section 2: The Power of Cloud-native Architecture

At the core of SASE lies its cloud-native architecture. By leveraging the scalability and agility of the cloud, organizations can dynamically scale their network and security resources based on demand. This elasticity eliminates the need for costly infrastructure investments and allows businesses to adapt quickly to changing network requirements. With SASE, organizations can embrace the benefits of a cloud-first approach without compromising on security or performance.

Section 3: Enhanced Security and Zero Trust

One of the key advantages of SASE is its inherent security capabilities. SASE leverages a Zero Trust model, which means that every user and device is treated as potentially untrusted, regardless of their location or network connection. By enforcing granular access controls, strong authentication mechanisms, and comprehensive threat detection, SASE ensures that only authorized users can access critical resources. This approach significantly reduces the attack surface, mitigates data breaches, and enhances overall security posture.

Section 4: Simplified Network Management

Traditional networking architectures often involve complex configurations and multiple point solutions, leading to a fragmented and challenging management experience. SASE streamlines network management by centralizing control and policy enforcement through a unified console. This centralized approach simplifies troubleshooting, reduces administrative overhead, and enables organizations to maintain a consistent network experience across their distributed environments.

Conclusion:

As the digital landscape continues to evolve, embracing innovative networking solutions like SASE becomes imperative for organizations seeking to stay ahead of the curve. By consolidating network and security functions into a unified cloud-native architecture, SASE provides simplicity, scalability, and enhanced security. As businesses continue to adopt cloud-based applications and remote work becomes the norm, SASE is poised to revolutionize the way we connect, collaborate, and secure our networks.