Computer Networks

Computer Networking: Building a Strong Foundation for Success

Computer Networking

Computer networking has revolutionized how we communicate and share information in today's digital age. Computer networking offers many possibilities and opportunities, from the Internet to local area networks. This blog post will delve into the fascinating world of computer networking and discover its key components, benefits, and prospects.

Computer networking is essentially the practice of connecting multiple devices to share resources and information. It involves using protocols, hardware, and software to establish and maintain these connections. Understanding networking fundamentals, such as IP addresses, routers, and switches, is crucial for anyone venturing into this field.

The Birth of Networking: In the early days of computer networking, it was primarily used for military and scientific purposes. The advent of ARPANET in the late 1960s laid the foundation for what would eventually become the internet. This pioneering effort allowed multiple computers to communicate with each other, setting the stage for the interconnected world we know today.

The Internet Era Begins: The 1990s marked a significant turning point in computer networking with the emergence of the World Wide Web. Tim Berners-Lee's creation of the HTTP protocol and the first web browser fueled the rapid growth and accessibility of the internet. Suddenly, information could be shared and accessed with just a few clicks, transforming the way we gather knowledge, conduct business, and connect with others.

From Dial-Up to Broadband: Remember the days of screeching dial-up modems? As technology progressed, so did our means of connecting to the internet. The widespread adoption of broadband internet brought about faster speeds and more reliable connections. With the introduction of DSL, cable, and fiber-optic networks, users could enjoy seamless online experiences, paving the way for streaming media, online gaming, and the rise of cloud computing.

Wireless Networking and Mobility: Gone are the days of being tethered to a desktop computer. The advent of wireless networking technologies such as Wi-Fi and Bluetooth opened up a world of mobility and convenience. Whether it's connecting to the internet on our smartphones, laptops, or IoT devices, wireless networks have become an indispensable part of our daily lives, enabling us to stay connected wherever we go.

Highlights: Computer Networking

Network Components

Creating a computer network requires careful preparation and knowledge of the right components. One of the first steps in computer networking is identifying which components to use and where to place them. This includes selecting the proper hardware, such as Layer 3 routers, Layer 2 switches, and, on older networks, Layer 1 hubs, along with the right software, such as operating systems, applications, and network services. You should also determine whether advanced computer networking techniques, such as virtualization and firewalling, are required.

Diagram: Cloud Application Firewall.

Network Structure

Once the network components are identified, it’s time to plan the network’s structure. This involves deciding where each piece will be placed and how the pieces will be connected. The majority of networks you will see today are Ethernet-based. Larger networks call for a formal design process, but for smaller networks, such as your home network, you are ready once the devices are physically connected, as the network services are already set up on the WAN router by the local service provider.

Network Design

To embark on our journey into network design, it’s crucial to grasp the fundamental concepts. This section will cover topics such as network topologies, protocols, and the different layers of the OSI model. By establishing a solid foundation, you’ll be better equipped to make informed decisions in your network design endeavors.

Assessing Requirements and Goals

Before exploring the technical aspects of network design, it’s essential to identify your specific requirements and goals. This section will explore the importance of conducting a thorough needs analysis, considering factors such as scalability, security, and bandwidth. By aligning your network design with your objectives, you can build a robust and future-proof infrastructure.

Choosing the Right Equipment and Technologies

With a clear understanding of your requirements, it’s time to select the appropriate equipment and technologies for your network. We’ll delve into the world of routers, switches, firewalls, and wireless access points, discussing the criteria for evaluating different options. Additionally, we’ll explore emerging technologies like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) that can revolutionize network design.

Designing for Efficiency and Redundancy

Efficiency and redundancy are vital aspects of network design that ensure reliable and optimized performance. This section will cover load balancing, fault tolerance, and network segmentation strategies. We’ll explore techniques like VLANs (Virtual Local Area Networks), link aggregation, and the implementation of redundant paths to minimize downtime and enhance network resilience.

Securing Your Network

Network security is paramount in an era of increasing cyber threats. This section will address best practices for securing your network, including firewalls, intrusion detection systems, and encryption protocols. We’ll also touch upon network access control mechanisms and the importance of regular updates and patches to safeguard against vulnerabilities.

Firewall types
Diagram: Displaying the different firewall types.

Related: Additional links to internal content for pre-information:

  1. Data Center Topologies
  2. Distributed Firewalls
  3. Internet of Things Access Technologies
  4. LISP Protocol and VM Mobility.
  5. Port 179
  6. IP Forwarding
  7. Forwarding Routing Protocols
  8. Technology Insight for Microsegmentation
  9. Network Security Components
  10. Network Connectivity

Computer Networks

Key Computer Networking Discussion Points:


  • Introduction to computer networks and what is involved.

  • Highlighting the details of how networks are connected.

  • Technical details on approaching computer networking and the importance of security.

  • Scenario: The main network devices are Layer 2 switches and Layer 3 routers.

  • The different types of protocols used in computer networks.

Back to Basics: Computer Networks

A network is a collection of interconnected systems that share resources. Networks connect IoT (Internet of Things) devices, desktop computers, laptops, and mobile phones. A computer network will consist of standard devices such as APs, switches, and routers, the essential network components.

Network services

You can connect your network’s devices to other computer networks and the Internet, a global system of interconnected networks. When we connect to the Internet, we connect the Local Area Network (LAN) to the Wide Area Network (WAN), and as we move between computer networks, we must consider security.

You will need a security device between these segments, such as a stateful inspection firewall. You are probably running IPv4, so you will also need a network service called Network Address Translation (NAT). IPv6, the latest version of the IP protocol, does not need NAT but may need a translation service to communicate with IPv4-only networks.

Network Address Translation
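To make the NAT requirement concrete, here is a minimal Python sketch (assuming Python 3 and the standard ipaddress module) that checks whether an address sits in private RFC 1918 space and would therefore need translation before reaching the public Internet. It illustrates the address classification only, not how a NAT device actually rewrites packets.

```python
import ipaddress

def needs_nat(addr: str) -> bool:
    """Return True if the address is in private (RFC 1918) space and
    would need NAT to reach the public Internet."""
    return ipaddress.ip_address(addr).is_private

for host in ["192.168.1.10", "10.0.0.5", "8.8.8.8"]:
    print(f"{host:<14} NAT required: {needs_nat(host)}")
```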

♦Types of Networks

There are various types of computer networks, each serving different purposes. Local Area Networks (LANs) connect devices within a limited geographical area, such as homes or offices. Wide Area Networks (WANs) span larger areas, connecting multiple LANs. The internet itself can be considered the most extensive WAN, connecting countless networks across the globe.

Computer networking brings numerous benefits to individuals and businesses. It enables seamless communication, file sharing, and resource access among connected devices. In industry, networking enhances productivity and collaboration, allowing employees to work together efficiently regardless of physical location. Moreover, networking facilitates company growth and expansion by providing access to global markets.

Computer Networking

Computer Networking Main Components


  •  A network is a collection of interconnected systems that share resources. The earliest use case of a network was sharing printers.

  • A network must offer a range of network services such as NAT.

  • There are various types of computer networks, each serving different purposes, such as LANs and WANs.

  • Protecting sensitive data, preventing unauthorized access, and mitigating potential threats are constant challenges.

Security and Challenges

With the ever-increasing reliance on computer networks, security becomes a critical concern. Protecting sensitive data, preventing unauthorized access, and mitigating potential threats are constant challenges. Network administrators employ various security measures such as firewalls, encryption, and intrusion detection systems to safeguard networks from malicious activities.

As technology continues to evolve, so does computer networking. Emerging trends such as cloud computing, the Internet of Things (IoT), and software-defined networking (SDN) are shaping the future of networking. The ability to connect more devices, handle massive amounts of data, and provide faster and more reliable connections opens up new possibilities for innovation and advancement.

Local Area Network

A Local Area Network (LAN) is a computer network that connects computers and other devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Ethernet cables typically connect LANs but may also be connected through wireless connections. LANs are usually used within a single organization or business but may connect multiple locations. The equipment in your LAN is in your control.


Wide Area Network

Then, we have the Wide Area Network (WAN). In contrast to the LAN, a WAN is a computer network covering a wide geographical area, typically connecting multiple locations. Your LAN may only consist of Ethernet and a few network services.

However, a WAN may consist of various communications equipment, protocols, and media that provide access to multiple sites and users. WANs usually use private leased lines, such as T-carrier lines, to connect geographically dispersed locations. The equipment in the WAN is out of your control.

Computer Networks
Diagram: Computer Networks with LAN and WAN.

LAN

  • LAN means local area network.

  • It connects users and applications in close geographical proximity (the same building).

  • LANs use OSI Layer 1 and Layer 2 equipment for transmission.

  • LANs use local connections such as Ethernet cables and wireless access points.

  • LANs are faster because they span shorter distances and have less congestion.

  • LANs are well suited to private IoT networks and small business networks.

WAN

  • WAN means wide area network.

  • It connects users and applications in geographically dispersed locations (across the globe).

  • WANs use Layer 1, 2, and 3 network devices for data transmission.

  • WANs use wide area connections such as MPLS, VPNs, leased lines, and the cloud.

  • WANs are slightly slower, although your users may not perceive the difference.

  • WANs are well suited to disaster recovery, applications with global users, and large corporate networks.

Virtual Private Network ( VPN )

We use a VPN to connect LAN networks over a WAN. A virtual private network (VPN) is a secure and private connection between two or more devices over a public network such as the Internet. Its purpose is to provide fast, encrypted communication over an untrusted network.

VPNs are commonly used by businesses and individuals to protect sensitive data from prying eyes. One of the primary benefits of using a VPN is that it can protect your online privacy by masking your IP address and encrypting your internet traffic. This means that your online activities are hidden from your internet service provider (ISP), hackers, and other third parties who may be trying to eavesdrop on your internet connection.

Example: VPN Technology

An example of a VPN technology is Cisco DMVPN. DMVPN operates in phases, from Phase 1 to Phase 3. For a true hub-and-spoke design, you would implement Phase 1; today, however, Phase 3 is the most popular, offering spoke-to-spoke tunnels. The screenshot below is an example of DMVPN Phase 1 running an OSPF network type of broadcast.

Diagram: DMVPN Phase 1 with an OSPF network type of broadcast.

Computer Networking

Once the network’s components and structure have been determined, the next step is configuring computer networking. This involves setting up network parameters, such as IP addresses and subnets, and configuring routing tables.

Remember that security is paramount, especially when connecting to the Internet, an untrusted network with a lot of malicious activity. Firewalls help you create boundaries and secure zones for your networks. Different firewall types exist for the other network parts, making a layered approach to security.

Once the computer networking is complete, the next step is to test the network. This can be done using tools such as network analyzers, which can detect any errors or issues present. You can also conduct manual tests with Internet Control Message Protocol (ICMP) tools such as ping and traceroute. Testing for performance is only half of the picture; it’s also imperative to regularly monitor the network for potential security vulnerabilities. So you must have antivirus software, a computer firewall, and other endpoint security controls.
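As a simple illustration of manual ICMP testing, the following Python sketch shells out to the system ping command and reports the packet-loss percentage. It assumes a Linux or macOS style ping (the -c flag and a "% packet loss" summary line); Windows uses different options, so treat this as a sketch rather than a portable tool.

```python
import re
import subprocess

def packet_loss(host: str, count: int = 5) -> float:
    """Ping a host and return the packet loss percentage reported by ping."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=False,
    )
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else 100.0

print(f"Packet loss to 8.8.8.8: {packet_loss('8.8.8.8')}%")
```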

Finally, it’s critical to keep the network updated. This includes updating the operating system and applications and patching any security vulnerabilities as soon as possible. It’s also crucial to watch for upcoming or emerging technologies that may benefit the network.

packet loss testing
Diagram: Packet loss testing.

Lab Guide: Endpoint Networking and Security

Address Resolution Protocol (ARP)

The first command you will want to become familiar with is arp.

At its core, ARP is a protocol that maps an IP address to a corresponding MAC address. It enables devices within a local network to communicate with each other by resolving the destination MAC address for a given IP address. Devices store these mappings in an ARP table for efficient and quick communication.

Analysis: What you see are five column headers, explained as follows:

  • Address: The IP address of a device on the network, learned through the ARP protocol and, by default, resolved to a hostname where possible.

  • HWtype: The type of hardware facilitating the network connection; in this case, an Ethernet interface rather than a Wi-Fi interface.

  • HW address: The MAC address assigned to the hardware interface responding to ARP requests.

  • Flags Mask: Flags that describe how the entry was created and its state, for example dynamically learned versus manually set.

  • Iface: The name of the interface associated with the hardware and IP address.


Analysis: The output contains the same columns and information, with additional details about the contents of the cache. The -v flag is for verbose mode and provides additional information about the entries in the cache. Focus on the Address column: the -n flag tells the command not to resolve addresses to hostnames, so the Address is shown as an IP.

Note: The IP and MAC address returned belong to an additional VM running Linux in this network. This is significant because if a device is within the same subnet, or Layer 2 broadcast domain, as a device identified in its local ARP cache, it will simply address traffic to the designated MAC address. In this way, if you can change the ARP cache, you can change where the device sends traffic within its subnet.

Locally, you can change the ARP cache directly by adding entries yourself.  See the screenshot above:

Analysis: Now you see the original entry and the entry you just set within the local ARP cache. When your device attempts to send traffic to the address 192.168.18.135, the packets will be addressed at Layer 2 to the corresponding MAC address from this table. Generally, MAC-to-IP mappings are learned dynamically through the ARP protocol, indicated by the “C” under the Flags Mask column. “CM” reflects that the entry was manually added.
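On a Linux host you can also read the same cache programmatically. The sketch below (Linux-specific, and only an illustration of the arp output discussed above) parses /proc/net/arp; flag value 0x2 roughly corresponds to a complete, dynamically learned entry and 0x6 to a complete, manually added permanent one.

```python
# Read the kernel ARP cache directly (Linux only). Columns in /proc/net/arp:
# IP address, HW type, Flags, HW address, Mask, Device.
with open("/proc/net/arp") as arp_cache:
    next(arp_cache)                         # skip the column header line
    for line in arp_cache:
        ip, hw_type, flags, mac, mask, iface = line.split()
        # 0x2 ~ complete (dynamic), 0x6 ~ complete + permanent (manual)
        print(f"{ip:<16} {mac:<18} flags={flags} iface={iface}")
```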

Note: Additional Information on ARP

  • ARP Request and Response

When a device needs to communicate with another device on the same network, it initiates an ARP request. The requesting device broadcasts an ARP request packet containing the target IP address for which it seeks the MAC address. The device with the matching IP address responds with an ARP reply packet, providing its MAC address. This exchange allows the requesting device to update its ARP table and establish a direct communication path.

  • ARP Cache Poisoning

While ARP serves a critical purpose in networking, it is vulnerable to attacks like ARP cache poisoning. In this type of attack, a malicious entity spoofs its MAC address, tricking devices on the network into associating an incorrect MAC address with an IP address. This can lead to various security issues, including interception of network traffic, data manipulation, and unauthorized access.

  • Address Resolution Protocol in IPv6

While ARP is predominantly used in IPv4 networks, IPv6 networks utilize a similar protocol called Neighbor Discovery Protocol (NDP). NDP performs functions identical to ARP but with additional features such as stateless address autoconfiguration and duplicate address detection. Although NDP differs from ARP in several ways, its purpose of mapping IP addresses to link-layer addresses remains the same.

Computer Networking & Data Traffic

Computer networking aims to carry data traffic so we can share resources. The first use case of computer networks was sharing printers; now we have a variety of use cases that revolve around data traffic. Data traffic can be generated by online activities such as streaming videos, downloading files, surfing the web, and playing online games. It is also generated by behind-the-scenes activities such as system updates and background software downloads.

The Importance of Data Traffic

Data traffic is the amount of data transmitted over a network or the Internet. It is typically measured in bits, bytes, or packets per second. Data traffic can be both inbound and outbound: inbound traffic is data coming into a network or computer, and outbound traffic is data leaving it. Inbound traffic should be inspected by a security device, such as a firewall, which can sit either at the network’s perimeter or on your computing device, while outbound traffic is generally left unfiltered.

To keep up with the increasing demand, companies must monitor data traffic to ensure the highest quality of service and prevent network congestion. With the right data traffic monitoring tools and strategies, organizations can improve network performance and ensure their data is secure.

The Issues of Best Efforts or FIFO

Network devices don’t care what kind of traffic they have to forward. Ethernet frames are received by your switch, which looks up the destination MAC address before forwarding them. Your router does the same thing: it receives an IP packet, checks the routing table for the destination, and forwards the packet.

Whether the frame or packet contains data from a user downloading the latest songs from Spotify or delay-sensitive speech from a VoIP phone, it doesn’t matter to the switch or router. This forwarding logic is called best effort or FIFO (First In, First Out). This can become an issue when applications are hungry for bandwidth.

Example: Congestion

When the host and IP phone on one side transmit data and voice packets to the host and IP phone on the other side, the serial link is likely to become congested. The router cannot hold packets queued for transmission indefinitely.

When the queue is full, how should the router proceed? Should data packets be dropped? Voice packets? If voice packets are dropped, there will be complaints about poor voice quality on the other end. If data packets are dropped, users may complain about slow transfer speeds.

You can change how the router or switch handles packets using QoS tools. For example, the router can prioritize voice traffic over data traffic.

The Role of QoS

Quality of Service (QoS) is a popular technique used in computer networking. QoS can segment applications so that different types have different priority levels. For example, voice traffic is often considered more critical than web surfing traffic, especially as it is sensitive to packet loss. So, when there is congestion on the network, QoS allows administrators to prioritize traffic so users have the best experience.

Quality of Service (QoS) refers to techniques and protocols prioritizing and managing network traffic. By allocating resources effectively, QoS ensures that critical applications and services receive the necessary bandwidth, low latency, and minimal packet loss while maintaining a stable network connection. This optimization process considers factors such as data type, network congestion, and the specific requirements of different applications.

Expedited Forwarding (EF)

Expedited Forwarding (EF) is a network traffic management model that provides preferential treatment to certain types of traffic. The EF model prioritizes traffic, specifically real-time traffic such as voice, video, and streaming media, over other types of traffic, such as email and web browsing. This allows these real-time applications to function more reliably and efficiently by reducing latency and jitter.

The EF model works by assigning a traffic class to each data packet based on the type of data it contains. The assigned class dictates how the network treats the packet. The EF model has two categories: EF for real-time traffic and Best Effort (BE) for other traffic. EF traffic is given preferential treatment, meaning it is prioritized over BE traffic, resulting in a higher quality of service for the EF traffic.

The EF model is an effective and efficient way to manage computer network traffic. By prioritizing real-time traffic, the EF model allows these applications to function more reliably, with fewer delays and a higher quality of service. Additionally, the EF model is more efficient, reducing the amount of traffic that needs to be managed by the network.
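To make the EF idea concrete, here is a small host-side sketch in Python: it marks a UDP socket’s traffic with DSCP EF (decimal 46) by setting the IP TOS byte, since the DSCP value occupies the upper six bits of that byte. This is application-side marking, not the router-based marking discussed in this article, and whether the marking is honoured depends entirely on the QoS policy of the devices along the path. The destination address and port are placeholders.

```python
import socket

EF_DSCP = 46
EF_TOS = EF_DSCP << 2          # DSCP sits in the top six bits of the TOS byte (0xB8)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)   # mark outgoing packets as EF
sock.sendto(b"voice-like payload", ("192.0.2.10", 5004))    # placeholder destination
```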

Lab Guide: QoS and Marking Traffic

TOS ( Type of Service )

In this lab, we’ll take a look at marking packets. Marking means we set the TOS (Type of Service) byte with an IP Precedence or DSCP value.

Marking and classification take place on R2. R1 is the source of the ICMP and HTTP traffic, and R3 has an HTTP server installed. As traffic (telnet and HTTP packets) is sent from R1 and traverses R2, classification takes place.

Note:

To ensure each application gets the treatment it requires, we must implement QoS (Quality of Service). The first step when implementing QoS is classification.

Diagram: QoS classification.

We will mark the traffic and apply a QoS policy once it has been classified. Marking and configuring QoS policies are a whole different story, so we’ll stick to classification in this lesson.

On IOS routers, there are a couple of methods we can use for classification:

  • Header inspection
  • Payload inspection

We can use some fields in our headers to classify applications. For example, telnet uses TCP port 23, and HTTP uses TCP port 80. Using header inspection, you can look for the following fields (a small classification sketch follows this list):

  • Layer 2: MAC addresses
  • Layer 3: source and destination IP addresses
  • Layer 4: source and destination port numbers and protocol
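The sketch below is a toy Python classifier that mirrors this header-inspection idea: it maps the Layer 4 protocol and destination port to a traffic class. The port-to-class mapping (telnet on TCP/23, HTTP on TCP/80, and an assumed RTP voice range of UDP 16384-32767) is illustrative only; real devices use configured class maps.

```python
def classify(protocol: str, dst_port: int) -> str:
    """Classify traffic by Layer 4 header fields (toy example)."""
    if protocol == "tcp" and dst_port == 23:
        return "telnet"
    if protocol == "tcp" and dst_port == 80:
        return "http"
    if protocol == "udp" and 16384 <= dst_port <= 32767:   # assumed RTP voice range
        return "voice"
    return "best-effort"

print(classify("tcp", 80))      # http
print(classify("udp", 20000))   # voice
```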


♦Benefits of Quality of Service

A) Bandwidth Optimization:

One of the primary advantages of implementing QoS is the optimized utilization of available bandwidth. By classifying and prioritizing traffic, QoS ensures that bandwidth is allocated efficiently, preventing congestion and bottlenecks. This translates into smoother and uninterrupted network experiences, especially when multiple users or devices access the network simultaneously.

B) Enhanced User Experience:

With QoS, users can enjoy a seamless experience across various applications and services. Whether streaming high-quality video content, engaging in real-time online gaming, or participating in video conferences, QoS helps maintain low latency and minimal jitter, resulting in a smooth and immersive user experience.

♦Implementing Quality of Service

To implement QoS effectively, network administrators need to understand the specific requirements of their network and its users. This involves:

A) Traffic Classification:

Different types of network traffic require different levels of priority. Administrators can allocate resources by classifying traffic based on its nature and importance.

B) Traffic Shaping and Prioritization:

Once traffic is classified, administrators can prioritize it using various QoS mechanisms such as traffic shaping, packet queuing, and traffic policing. These techniques ensure critical applications receive the necessary resources while preventing high-bandwidth applications from monopolizing the network.

C) Monitoring and Fine-Tuning:

Regular monitoring and fine-tuning of QoS parameters are essential to maintain optimal network performance. By analyzing network traffic patterns and adjusting QoS settings accordingly, administrators can adapt to changing demands and ensure a consistently high level of service.

Computer Networking Components – Devices:

First, the devices. The media that interconnect devices provide the channel over which data travels from source to destination. Many devices are virtualized today, meaning they no longer exist as separate hardware units.

One physical device can emulate multiple end devices. An emulated computer system has its own operating system and required software and operates as if it were a separate physical unit. Devices can be further divided into endpoints and intermediary devices.

Endpoint: 

An endpoint is a device that is part of a computer network, including PCs, laptops, tablets, smartphones, video game consoles, and televisions. Endpoints can be physical hardware units, such as file servers, printers, sensors, cameras, manufacturing robots, and smart home components. Nowadays, we also have virtualized endpoints.

Computer Networking Components – Intermediate Devices

Layer 2 Switches:

These devices enable multiple endpoints, such as PCs, file servers, printers, sensors, cameras, and manufacturing robots, to connect to the network. Switches allow devices to communicate on the same network. A switch forwards a message from the sender only toward the destination, unlike a hub, which floods traffic out of all ports. The switch operates with MAC addresses and works at Layer 2 of the OSI model.

Usually, all the devices that connect to a single switch, or to a group of interconnected switches, belong to a common network and can therefore exchange information directly with each other. If an end device wants to communicate with a device on a different network, it requires the “services” of a device known as a router. Routers connect different networks and work higher up in the OSI model, at Layer 3, using the IP protocol.
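The forwarding behaviour described above can be sketched in a few lines of Python. This is a simplified model of a learning switch, not real switch firmware: the table maps a source MAC address to the port it was last seen on, known destinations are forwarded out a single port, and unknown destinations are flooded the way a hub would.

```python
mac_table: dict[str, int] = {}          # MAC address -> port it was learned on

def handle_frame(in_port: int, src_mac: str, dst_mac: str, ports: list[int]) -> list[int]:
    """Return the list of ports the frame should be sent out of."""
    mac_table[src_mac] = in_port        # learn where the sender lives
    if dst_mac in mac_table:            # known destination: forward out one port
        return [mac_table[dst_mac]]
    return [p for p in ports if p != in_port]   # unknown destination: flood

print(handle_frame(1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", [1, 2, 3, 4]))  # flood
```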

Routers

A router’s primary function is to route traffic between computer networks. For example, you need a router to connect your office network to the Internet. Routers connect computer networks, intelligently select the best paths between them, and hold destinations in what is known as a routing table. There are different routing protocols for different-sized networks, and each has different routing convergence times.

routing convergence
The well-known steps in routing convergence.

More recently, Layer 2 and Layer 3 functionality have been combined. We can have a Layer 3 router with a Layer 2 switch module inserted, or a multilayer switch that combines Layer 3 routing and Layer 2 switching functions in a single device.

Computer Networks
Diagram: Computer Networks with Switch and Routers.

Wi-Fi access points

These devices allow wireless devices to connect to the network. They usually connect to switches but can also be integrated into routers. My WAN router has everything in one box: Wi-Fi, Ethernet LAN, and network services such as NAT and WAN connectivity. Wi-Fi access points provide wireless internet access within a specified area.

In public settings, Wi-Fi access points are typically found in coffee shops, restaurants, libraries, and airports. These access points allow anyone with a Wi-Fi-enabled device to access the Internet without needing additional hardware.

WLAN controllers: 

WLAN controllers are devices used to automate the configuration of wireless access points. They provide centralized management of wireless networks and act as a gateway between wireless and wired networks. Administrators can monitor and manage the entire WLAN, set up security policies, and configure access points through the controller. WLAN controllers also authenticate users, allowing them to access the wireless network.

In addition, the WLAN controller can also detect and protect against malicious activities such as unauthorized access, denial-of-service attacks, and interference from other wireless networks. By using the controller, administrators can also monitor the usage of the wireless network and make sure that the network is secure.

Network firewalls:

Then, we have firewalls, which are the cornerstone of security. Depending on your requirements, there will be different firewall types. Firewalls range from basic packet filtering to advanced next-generation firewalls and come in virtual and physical forms.

Generally, a firewall monitors and controls incoming and outgoing traffic according to predefined security rules. A firewall has a default rule set in which some interfaces are treated as more trusted than others, broadly restricting traffic from outside to inside, but you still need to configure a policy for the firewall to do useful work.

A firewall typically establishes a barrier between a trusted, secure internal network and another outside network, such as the Internet, which is assumed not to be secure or trusted. Firewalls are typically deployed in a layered approach, meaning multiple security measures are used to protect the network. Firewalls provide application, protocol, and network layer protection.

data center firewall
Diagram: The data center firewall.
  • Application layer protection:

The application layer is designed to protect the network from malicious applications, such as viruses and malware. This layer also includes software, such as firewalls, to detect and block malicious traffic.

  • Protocol layer protection: 

The protocol layer focuses on ensuring that the data traveling over the network is encrypted and cannot be modified or corrupted in any way. This layer also includes authentication protocols that prevent unauthorized users from accessing the network.

  • Network Layer protection

Finally, network layer protection focuses on controlling access to the network and ensuring that users cannot access resources or applications they are not authorized to use.

A network intrusion protection system (IPS): 

An IPS or IDS analyzes network traffic to search for signs that a particular behavior is suspicious or malicious. If the IPS detects such behavior, it can take protective action immediately. In addition, the IPS and firewall can work together to protect a network. So, if an IPS detects suspicious behavior, it can trigger a policy or rule for the firewall to implement.

An intrusion protection system can alert administrators of suspicious activity, such as attempts to gain unauthorized access to confidential files or data. Additionally, it can block malicious activity if necessary; it provides a layer of defense against malicious actors and cyber attacks. Intrusion protection systems are essential to any organization’s security plan.

Cisco IPS
Diagram: Traditional intrusion detection with Cisco IPS.

Computer Networking Components – Media

Next, we have the media. The media connects network devices. Different media have different characteristics, and selecting the most appropriate medium depends on the circumstances, including the environment in which the media is used and the distances that need to be covered.

The media will need some connectors. A connector is a plug attached to each end of the cable that makes it much easier to connect wired media to network devices. The RJ-45 connector is the most common type of connector on an Ethernet LAN.

Ethernet: Wired LAN technology.

The term Ethernet refers to an entire family of standards. Some standards define how to send data over a particular type of cabling and at a specific speed. Other standards define protocols or rules that the Ethernet nodes must follow to be a part of an Ethernet LAN. All these Ethernet standards come from the IEEE and include 802.3 as the beginning of the standard name.

Introducing Copper and Fiber

Ethernet LANs use cables for the links between nodes on a computer network. Because many types of cables use copper wires, Ethernet LANs are often called wired LANs. Ethernet LANs also use fiber-optic cabling, which includes a fiberglass core that devices use to send data using light. 

Materials inside the cable: UTP and Fiber

The most fundamental cabling choice concerns the materials used inside the cable to transmit bits physically: either copper wires or glass fibers. 

  • Unshielded twisted pair (UTP) cabling devices transmit data over electrical circuits via the copper wires inside the cable.
  • Fiber-optic cabling, the more expensive alternative, allows Ethernet nodes to send light over glass fibers in the cable’s center. 

Although more expensive, optical cables typically allow longer cabling distances between nodes, so you will typically find UTP cabling in the LAN and fiber-optic cabling over the WAN.

UTP and Fiber

The most common copper cabling for Ethernet is UTP. Unshielded twisted pair (UTP) is cheaper than fiber and easier to install and troubleshoot. Many UTP-based Ethernet standards allow a cable length of up to 100 meters, which is why most Ethernet cabling in an enterprise uses UTP cables.

The distance from an Ethernet switch to every endpoint on a building’s floor will likely be less than 100 m. In some cases, however, an engineer might prefer to use fiber cabling for some links in an Ethernet LAN to reach greater distances.

Fiber Cabling

Then we have fiber-optic cabling, which has a glass core that carries light pulses and is immune to electrical interference. Fiber-optic cabling is typically used as a backbone between buildings. Fiber cables are high-speed transmission media that contain tiny glass or plastic filaments through which light passes.

Cabling types: Multimode and Single Mode

There are two main types of fiber-optic cable: single-mode fiber (SMF) and multimode fiber (MMF). MMF is used for shorter distances and SMF for longer distances. Multimode improves the maximum distances over UTP and uses less expensive transmitters than single-mode. Standards vary; for instance, the criteria for 10 Gigabit Ethernet over fiber allow for distances up to 400 m, often enough to connect devices in different buildings in the same office park.

Network Services and Protocols

We need to follow these standards and the rules of the game. We also need protocols so we have the means to communicate. If you use your web browser, you use the HTTP protocol. If you send an email, you use other protocols, such as IMAP and SMTP.

A protocol establishes a set of rules that determine how data is transmitted between devices in the network. Both ends must speak the same protocol, such as HTTP at one end and HTTP at the other.

Think of a protocol the same way you would think about speaking the same language: we need to communicate in a common language. We also have standards that we must follow in computer networking, such as the TCP/IP suite.

Types of protocols

We have different types of protocols. The following are the main types of protocols used in computer networking.

  • Communication Protocols

For example, we have routing protocols on our routers that help forward traffic. A routing protocol is an example of a communication protocol, allowing different devices to communicate with each other. Another example of a communication protocol is instant messaging.

Instant messaging is instantaneous, text-based communication you have probably used on your smartphone, and several network protocols support it. For example, Short Message Service (SMS) is a communications protocol created to send and receive text messages over cellular networks.

  • Network Management

Network management protocols define and describe the various operating procedures of a computer network. These protocols affect multiple devices on a single network—including computers, routers, and servers—to ensure that each one and the network as a whole perform optimally.

  • Security Protocols

Security protocols, also called cryptographic protocols, ensure that the network and the data sent over it are protected from unauthorized users. Security protocols are implemented everywhere, not just on your network security devices. A standard function of security protocols is encryption: encryption protocols protect data and secure areas by requiring users to input a secret key or password to access that information.

The following screenshot is an example of an IPsec tunnel offering end-to-end encryption. Notice that the first packet in the ping (ICMP request) was lost because ARP was working in the background: five pings are sent, but only four are encapsulated/decapsulated.

Diagram: Site-to-site VPN with IPsec encryption.

Characteristics of a network

Network Topology:

In a carefully designed network, data flows are optimized, and the network performs as intended based on the network topology. Network topology is the arrangement of a computer network’s elements (links, nodes, etc.). It can be used to illustrate a network’s physical and logical layout and how it functions. 

Diagram: Spine and leaf architecture.

Bitrate or Bandwidth:

Bitrate is often referred to as bandwidth or speed in device configurations. It measures the data rate, in bits per second (bps), of a given link in the network. What matters is the number of bits transmitted in a second rather than the speed at which a single bit travels over the link, which is determined by the physical properties of the medium that propagates the signal. Many link bit rates are commonly encountered today, including 1 and 10 gigabits per second (1 and 10 billion bits per second), and some links can reach 100 or even 400 gigabits per second.
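As a quick worked example of what these bit rates mean in practice, the snippet below computes how long a 2 GB file would take to transfer at 1, 10, and 100 gigabits per second, ignoring protocol overhead and congestion.

```python
file_bits = 2 * 8 * 10**9                 # a 2 gigabyte file expressed in bits
for rate_bps in (1e9, 10e9, 100e9):       # 1, 10 and 100 gigabits per second
    print(f"{rate_bps / 1e9:>5.0f} Gbps -> {file_bits / rate_bps:.2f} seconds")
```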

Network Availability: 

Network availability is determined by several factors, including the type of network being used, the number of users, the complexity of the network, the physical environment, and the availability of network resources. Network availability should also be addressed in terms of redundancy and backup plans. Redundancy helps to ensure that the system is still operational even if one or more system components fail. Backup plans should also be in place in the event of a system failure.

A network’s availability is calculated as the percentage of time it is accessible and operational. To calculate this percentage, divide the number of minutes the network was available by the total number of minutes in an agreed period, then multiply by 100. In other words, availability is the ratio of uptime to total time, expressed as a percentage.
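A short worked example of the availability formula, using an assumed 30-day month (43,200 minutes) and 43 minutes of downtime, which lands at roughly the familiar "three nines":

```python
def availability(uptime_minutes: float, period_minutes: float) -> float:
    """Availability = uptime / total time, expressed as a percentage."""
    return uptime_minutes / period_minutes * 100

PERIOD = 30 * 24 * 60                                 # minutes in a 30-day month
print(f"{availability(PERIOD - 43, PERIOD):.3f}%")    # ~99.900%
```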

Diagram: Gateway Load Balancer Protocol.

Network High Availability: 

High availability is a critical component of a successful IT infrastructure. It ensures that systems and services remain available and accessible to users and customers. High availability is achieved by using redundancies, such as multiple servers, systems, and networks, to ensure that if one component fails, a backup component is available.

High availability is also achieved through fault tolerance, which involves designing systems that respond to failures without losing data or becoming unavailable. Various strategies, such as clustering, virtualization, and replication, can achieve high availability.

Network Reliability:

Network reliability can be achieved by implementing a variety of measures, most often redundancy. Redundancy is a crucial factor in ensuring a reliable network: it means having multiple components so that a backup is available in case of failure. Redundancy can include having multiple servers, routers, switches, and other hardware devices. It can also involve having multiple power sources, such as separate power supplies or batteries, and multiple paths for data to travel through the network.

For adequate network reliability, you also need to consider network monitoring. Network monitoring involves using software and hardware tools to continuously monitor the network’s performance and alert administrators to potential performance issues or failures. A newer term, observability, better reflects how today’s environments are tracked.

Network Characteristics
Diagram: Network Characteristics

Network Scalability:

A network’s scalability indicates how easily it can accommodate more users and data transmission requirements without affecting performance. Designing and optimizing a network only for the current conditions can make it costly and challenging to meet new needs when the network grows.

Several factors must be taken into account in terms of network scalability. First and foremost, the network must be designed with the expectation that the number of devices or users will increase over time. This includes hardware and software components, as the network must support the increased traffic. Additionally, the network must be designed to be flexible so that it can easily accommodate changes in traffic or user count. 

Network Security: 

Network security is protecting the integrity and accessibility of networks and data. It involves a range of protective measures designed to prevent unauthorized access, misuse, modification, or denial of a computer network and its processing data. These measures include physical security, technical security, and administrative security. A network’s security tells you how well it protects itself against potential threats.

The subject of security is essential, and defense techniques and practices are constantly evolving. The network infrastructure and the information transmitted over it should also be protected. Whenever you take actions to affect the network, you should consider security. An excellent way to view network security is to take a zero-trust approach.

Diagram: Software Defined Perimeter and Zero Trust.

Virtualization: 

Virtualization can be done at the hardware, operating system, and application level. At the hardware level, physical hardware can be divided into multiple virtual machines, each running its operating system and applications.

At the operating system level, virtualization can run multiple operating systems on the same physical server, allowing for more efficient resource use. At the application level, multiple applications can run on the same operating system, allowing for better resource utilization and scalability. 

Diagram: Container-based virtualization.

Overall, virtualization can provide several benefits, including improved efficiency, utilization, flexibility, security, and scalability. It can consolidate and manage hardware or simplify moving applications between different environments. Virtualization can also make those environments easier to manage and provide better security by isolating applications from one another.

Computer Networking

Characteristics of a Network



  • Network Topology– It is the arrangement of a computer network’s elements (links, nodes, etc.)

  • Bitrate or Bandwidth– Bitrate measures the data rate in bits per second (bps) of a given link in the network.

  • Network Availability– It is calculated as the percentage of time the network is accessible and operational.

  • High Availability– It ensures that systems and services remain available and accessible to users and customers.

  • Reliability– It can be achieved by implementing a variety of measures, often through redundancy.

  • Scalability– Indicates how easily the network can accommodate more users and data transmission needs without affecting performance.

  • Security– It protects the integrity and accessibility of networks and data and tells you how well the network defends against potential threats.

  • Virtualization– It improves efficiency, utilization, and flexibility, as well as security and scalability.

Computer Networking and Network Topologies

Physical and logical topologies exist in networks. The physical topology describes the physical layout of the devices and cables. A physical topology may be the same in two networks but may differ in distances between nodes, physical connections, transmission rates, or signal types.

There are various types of physical topologies you may encounter in wired networks. Identifying the kind of cabling used is essential when describing the physical topology. Physical topology can be categorized into the following categories:

Bus Topology:

In a bus topology, every workstation is connected to a common transmission medium, a single cable called a backbone or bus. In older bus topologies, computers and other network devices were connected to a central coaxial cable via connectors, so every device was directly connected to the shared cable.

Ring Topology:

In a ring topology, computers and other network devices are cabled in succession, with the last device connected to the first to form a circle or ring. Every device has exactly two neighbors, and there are no direct connections between non-adjacent devices. When one node sends data to another, it passes through each node between them until it reaches its destination.

Star Topology:

A star topology is the most common physical topology: network devices are connected to a central device through point-to-point connections. It is also known as the hub-and-spoke topology, and a spoke device does not have a direct physical connection to any other spoke. This topology can be extended into the extended star topology, in which one or more spoke devices are replaced by a device with its own spokes.

Mesh Topology:

In a mesh topology, one device can be connected to more than one other device, so multiple paths are available for one node to reach another. Redundant links enhance reliability and enable self-healing. In a full mesh topology, all nodes are connected to each other; in a partial mesh, some nodes do not connect to all other nodes.
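The cost of a full mesh grows quickly, which is why partial meshes are common. The number of point-to-point links needed to fully mesh n nodes is n(n-1)/2, as the quick calculation below shows.

```python
def full_mesh_links(n: int) -> int:
    """Links required to connect every node directly to every other node."""
    return n * (n - 1) // 2

for nodes in (4, 8, 16):
    print(f"{nodes} nodes -> {full_mesh_links(nodes)} links")   # 6, 28, 120
```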

Introducing Switching Technologies

Devices connect to switches to communicate with one another at Layer 2. Switches work at layer two of the Open Systems Interconnection (OSI) model, the data link layer. Switches are ready to use right out of the box; in contrast to a router, a switch doesn’t require configuration settings by default. When you unbox a switch, it does not need to be configured to perform its role, which is to provide connectivity for all devices on your network. After powering on the switch and connecting your systems, the switch will forward traffic to each connected device as needed.

Switch vs. Hubs

Switches have replaced hubs because they provide more advanced capabilities and are better suited to today’s computer networks. This advanced functionality includes filtering traffic by sending data only to the destination port, whereas a hub always sends data to all ports.

Full Duplex vs. Half Duplex

With full duplex, both parties can talk and listen at the same time, making it more efficient than half-duplex communication, where only one party can speak at a time. Full-duplex transmission is also more reliable since it is less likely to experience interference or distortion. Until switches became available, hub-based networks only supported half duplex. A half-duplex device can send and receive, but not both at the same time.

VLAN: Logical LANs

Virtual Local Area Networks (VLANs) are computer networks that divide a single physical local area network (LAN) into multiple logical networks. This partitioning allows for the segmentation of broadcast traffic, which helps to improve network performance and security.

VLANs enable administrators to set up multiple networks within a single physical LAN without needing separate cables or ports. This benefits businesses that need to separate data and applications between various teams, departments, or customers.

In a VLAN, each segment is identified by a unique identifier or VLAN ID. The VLAN ID is used to associate traffic with a particular VLAN segment. For example, if a user needs to access an application on a different VLAN, the packet must be tagged with the VLAN ID of the destination segment to be routed correctly.

In the screenshot below, we have an overlay with VXLAN. VXLAN, short for Virtual Extensible LAN, is an overlay network technology that enables the creation of virtual Layer 2 networks over an existing Layer 3 infrastructure. It addresses the limitations of traditional VLANs by extending the scalability and flexibility of network virtualization. By encapsulating Layer 2 frames within UDP packets, VXLAN allows the creation of up to 16 million logical networks, overcoming the limit imposed by the 12-bit VLAN identifier field.

VXLAN
Diagram: Changing the VNI
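The scale difference comes straight from the size of the identifier fields: a 12-bit VLAN ID versus a 24-bit VXLAN Network Identifier (VNI). A two-line calculation makes the point.

```python
vlan_ids = 2 ** 12      # 4,096 possible VLAN IDs (a few values are reserved)
vxlan_vnis = 2 ** 24    # 16,777,216 possible VNIs -- the "16 million" segments above

print(f"VLAN IDs: {vlan_ids:,}")
print(f"VXLAN VNIs: {vxlan_vnis:,}")
```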

VLANs also provide security benefits. A VLAN can help prevent malicious traffic from entering a segment by segmenting traffic into logical networks. This helps prevent attackers from gaining access to the entire network. Additionally, VLANs can isolate critical or confidential data from other users on the same network. VLANs can be implemented on almost any network, including wired and wireless networks. They can also be combined with other network technologies, such as routing and firewalls, to improve security further.

Overall, VLANs are powerful tools for improving performance and security in a local area network. With the right implementation and configuration, businesses can enjoy improved performance and better protection.

Switching Technologies

Switching Technologies


  •  Switch vs. Hubs- Switches replaced hubs since they provide more advanced capabilities and are better suited to today’s computer networks.

  • Full Duplex vs. Half Duplex- In half-duplex mode, a device can send and receive data, but only in one direction at a time. In full-duplex mode, a device can send and receive data simultaneously.

  •  VLAN: Logical LANs- VLANs are a powerful tool to help improve performance and security in a local area network.

IP Routing Process

IP routing works by examining the IP address of each packet and determining where it should be sent. Routers are responsible for this task and use routing protocols such as RIP, OSPF, EIGRP, and BGP to decide the best route for each packet. In addition, each router contains a routing table, which includes information on the best path to a given destination.

When a router receives a packet, it looks up the destination in its routing table. If the destination is known, the router makes a forwarding decision based on the matching route. If the destination is unknown, the router forwards the packet to its default gateway, if one is configured.
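The lookup itself follows a longest-prefix-match rule: the most specific matching route wins, and the default route matches anything else. The Python sketch below models that behaviour with a hypothetical three-entry routing table; real routers use far more efficient data structures, so treat this purely as an illustration of the logic.

```python
import ipaddress

# Hypothetical routing table: (prefix, next hop). "0.0.0.0/0" is the default route.
routes = [
    ("10.0.0.0/8",  "192.168.1.254"),
    ("10.1.1.0/24", "192.168.1.253"),
    ("0.0.0.0/0",   "192.168.1.1"),
]

def lookup(destination: str) -> str:
    """Return the next hop for the longest (most specific) matching prefix."""
    dst = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), next_hop)
               for prefix, next_hop in routes
               if dst in ipaddress.ip_network(prefix)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.1.7"))   # 192.168.1.253 -- the /24 beats the /8
print(lookup("8.8.8.8"))    # 192.168.1.1   -- falls through to the default route
```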

Routing Protocol
Diagram: Routing protocol example with IS-IS.

To route packets successfully, routers must be configured appropriately and able to communicate with one another. They must also be able to detect any changes to the network, such as link failures or changes in network topology.

IP routing is essential to any network, ensuring packets are routed as efficiently as possible. Therefore, it is crucial to ensure that routers are correctly configured and maintained.

IP Forwarding Example
Diagram: IP Forwarding Example.

Routing Table

A routing table is a data table stored in a router or a networked computer that lists the possible routes a packet of data can take when traversing a network. The routing table contains information about the network’s topology and decides which route a packet should take when leaving the router or computer. Therefore, the routing table must be updated to ensure data packets are routed correctly.

The routing table usually contains entries that specify which interface to use when forwarding a packet. Each entry may have network destination addresses and associated metrics, such as the route’s cost or hop count. In addition to the destination address, each entry can include a subnet mask, a gateway address, and a list of interface addresses.

Routers use the routing table to determine which interface to use when forwarding packets. When a router receives a packet, it looks at the packet’s destination address and compares it to the entries in the routing table. Once it finds a match, it forwards the packet to the corresponding interface.

Lab Guide: Networking and Security

Routing Tables and Netstat

Routing tables are essentially databases stored within networking devices, such as routers. These tables contain valuable information about the available paths and destinations within a network. Each entry in a routing table consists of various fields, including the destination network address, next-hop address, and interface through which the data packet should be forwarded.

One of the fundamental features of Netstat is its ability to display active connections. Using the appropriate flags, you can view the list of established connections, their local and remote IP addresses, ports, and the protocol being used. This information is invaluable for identifying suspicious or unauthorized connections.

Get started by running the route command.

Analysis: Seem familiar? Yet another table with the following column headers:

    • Destination: This refers to the destination of traffic from this device. The default entry matches anything not covered by a more specific route.

    • Gateway: The next hop for traffic headed to the specific destination.

    • Genmask: The netmask of the destination.

      Note: For more detailed explanations of all the columns and results, run man route.

Run netstat to get a stream of information relating to network socket connections and UNIX domain sockets.

Note: UNIX domain sockets are a mechanism that allows processes local to the devices to exchange data.

  1. To clean this up, you can view just the network traffic using netstat -at.

    • -a displays all sockets, both IPv4 and IPv6

    • -t displays only TCP sockets

Analysis: When routes are created in different ways, they display differently. In the most recent rule, you can see that no metric is listed, and the scope is different from the other automatic routes. That is the kind of information we can use for detection.

The route table will send traffic to the designated gateway regardless of the route’s validity. Threat actors can use this to intercept traffic destined for another location, making it a crucial place to look for indicators of compromise.
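If you want to inspect the same table programmatically rather than through route or netstat, the Linux kernel exposes it in /proc/net/route, with destinations and gateways stored as little-endian hexadecimal. The sketch below (Linux-only, shown purely as an illustration) decodes those fields.

```python
import socket
import struct

def hex_to_ip(hex_value: str) -> str:
    """/proc/net/route stores IPv4 addresses as little-endian hex strings."""
    return socket.inet_ntoa(struct.pack("<L", int(hex_value, 16)))

with open("/proc/net/route") as route_table:
    next(route_table)                      # skip the header line
    for line in route_table:
        fields = line.split()
        iface, destination, gateway = fields[0], fields[1], fields[2]
        print(f"{hex_to_ip(destination):<15} via {hex_to_ip(gateway):<15} dev {iface}")
```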

How Routing Tables Work:

Routing tables utilize various routing protocols, such as OSPF (Open Shortest Path First) or BGP (Border Gateway Protocol), to gather information about network topology and make informed decisions about the best paths for data packets. These protocols exchange routing information between routers, ensuring that each device has an up-to-date understanding of the network’s structure.

Routing Table Entries and Metrics:

Each entry in a routing table contains specific metrics that determine the best path for forwarding packets. Metrics can include hop count, bandwidth, delay, or reliability. By evaluating these metrics, routers can select the most optimal route based on network conditions and requirements.

Summary: Computer Networking

It’s the backbone of modern communication, from browsing the internet to sharing files across devices. In this blog post, we delved into the fascinating world of computer networking, exploring its key concepts, benefits, and future prospects.

Section 1: What is Computer Networking?

Computer networking refers to connecting multiple computers and devices to facilitate data sharing and communication. It involves hardware components such as routers, switches, cables, and software protocols that enable seamless data transmission.

Section 2: The Importance of Computer Networking

Computer networking has revolutionized how we work, communicate, and access information. It enables efficient collaboration, allowing individuals and organizations to share resources, communicate in real-time, and access data from anywhere in the world. Whether a small local network or a global internet connection, networking plays a pivotal role in our digital lives.

Section 3: Types of Computer Networks

There are various types of computer networks, each serving different purposes. Local Area Networks (LANs) connect devices within a limited area, such as a home, office, or school. Wide Area Networks (WANs) span larger geographical areas, connecting multiple LANs together. Additionally, there are Metropolitan Area Networks (MANs), Wireless Networks, and the vast Internet itself.

Section 4: Key Concepts in Computer Networking

To understand computer networking, you must familiarize yourself with key concepts like IP addresses, protocols (such as TCP/IP), routing, and network security. These concepts form the foundation of how data is transmitted, received, and protected within a network.

Section 5: The Future of Computer Networking

As technology advances, so does the world of computer networking. Emerging trends such as the Internet of Things (IoT), 5G networks, and cloud computing are reshaping the networking landscape. These developments promise faster speeds, increased connectivity, and enhanced security, paving the way for a more interconnected future.

Conclusion:

In conclusion, computer networking is a fascinating field that underpins our digital world. Its importance cannot be overstated, as it enables seamless communication, resource sharing, and global connectivity. Understanding the key concepts and staying updated with the latest trends in computer networking will empower individuals and organizations to make the most of this ever-evolving technology.

Diagram: Cloud Application Firewall.

Cisco CloudLock


In today's digital age, data security is of utmost importance. With the increasing number of cloud-based applications and the growing risk of data breaches, organizations need robust solutions to protect their sensitive information. One such solution is Cisco Cloudlock, a powerful cloud security platform. In this blog post, we will explore the key features and benefits of Cisco Cloudlock and how it can help safeguard your data.

Cisco Cloudlock is a comprehensive cloud security platform that provides real-time visibility and control over your organization's cloud applications. With its advanced threat intelligence and data protection capabilities, Cloudlock offers a holistic approach to cloud security. Whether you use popular cloud platforms like Google Workspace or Microsoft 365, Cloudlock can seamlessly integrate, providing enhanced security across your entire cloud environment.

A: Threat Protection: Cisco Cloudlock employs advanced machine learning algorithms to detect and prevent various types of cyber threats, including malware, phishing attacks, and data leakage. It continuously monitors user behavior and analyzes cloud data to identify any suspicious activities, allowing you to take proactive measures to mitigate risks.

B: Data Loss Prevention: Protecting sensitive data is crucial for every organization. Cloudlock offers robust data loss prevention (DLP) capabilities that allow you to define policies and enforce compliance across your cloud applications. It can detect, classify, and protect sensitive data such as personally identifiable information (PII) and intellectual property, ensuring it doesn't fall into the wrong hands.

C: Enhanced Visibility: With Cisco Cloudlock, you gain real-time visibility into your cloud environment, including user activities, application usage, and potential security threats. This increased visibility empowers you to make informed decisions and take proactive measures to safeguard your data.

D: Seamless Integration: Cloudlock seamlessly integrates with popular cloud platforms, making it easy to deploy and manage. It works across multiple cloud applications and provides a unified view of your entire cloud environment, simplifying security management and reducing operational complexity.

Highlights: Cisco CloudLock

Cloud-Native Security

Cloud-native security platforms such as Cisco Cloudlock are designed to provide comprehensive protection for cloud-based applications, particularly for enterprises. By integrating with leading SaaS providers like Google Workspace, Microsoft 365, and Salesforce, Cloudlock offers a unified approach to data security, ensuring safe collaboration and preventing data breaches.

Its API-first architecture allows easy deployment and scalability, providing a smooth transition to a cloud-centric security model. Cisco Cloudlock provides user security, app security, and data security.

Cloud-Native Security Features:

a. Data Loss Prevention (DLP): Cloudlock’s advanced DLP capabilities allow organizations to define and enforce policies to prevent the leakage of sensitive data. With real-time monitoring and automated remediation, Cloudlock ensures that your critical information remains secure within the cloud environment. Countless out-of-the-box policies are available, along with highly customizable policies you can define yourself.

b. Threat Protection: Leveraging machine learning algorithms and threat intelligence, Cloudlock identifies and mitigates risks posed by malicious insiders, compromised accounts, and external threats. By continuously analyzing user behavior and detecting anomalies, Cloudlock provides proactive threat detection and response. Anomalies are detected based on various factors using advanced machine learning algorithms. For example, it flags activity originating outside of allowlisted countries and actions that appear to occur at impossible travel speeds across long distances.

c. Compliance and Governance: Cloudlock offers robust compliance and governance features for businesses operating in regulated industries. It helps organizations meet industry-specific regulations and standards by providing visibility, control, and audit capabilities across cloud applications. Cloudlock Apps Firewall discovers and controls cloud apps connected to your corporate network. Each app has a crowd-sourced Community Trust Rating, and you can ban or allowlist it based on its risk.

“The Road To Cisco Cloudlock Or Multiple Point Products”

**Microservices-based Security**

Security microservices such as UEBA, DLP, and an application firewall can be enabled to protect your SaaS environment by deploying a separate product for each capability and then integrating each of them with the different SaaS vendors and offerings. This approach adds capabilities, but at the cost of managing multiple products per environment and application.

Adding more security products to the cloud environment increases security capabilities. Still, there comes a point where the additional capabilities become unmanageable due to time, financial cost, and architectural limitations.

Cisco can help customers reduce the complexity of multiple point products and introduce additional security services for your SaaS environments under one security solution: a Cisco CASB offering such as Cisco Cloudlock. It includes UEBA, an application firewall, DLP, and CASB functionality, and it has been extended to secure access service edge (SASE) with Cisco Umbrella.

**Challenge: Lack of Visibility**

Cloud computing is becoming more popular due to its cost savings, scalability, and accessibility. However, there is a drawback when it comes to security posture. You no longer have as much visibility or control as you had with on-premises application access: the more of the environment the cloud provider manages for you, the more risk it assumes and the less visibility you have into it.

A critical security concern is that you often do not know what is being done in the cloud, or when. In addition, the cloud now hosts your data, which raises questions about what information is there, who can access it, where it goes, and whether it is being stolen. Cloud platforms’ security challenges are unique, and Cisco has several solutions that can help alleviate them.

Examples: Cloud Security Solutions.

  1. Cisco CloudLock
  2. Cisco Umbrella 
  3. Cisco Secure Cloud Analytics
  4. Cisco Duo Security

**Challenge: Direct to the Cloud**

Cloud computing offers cost savings, scalability, and accessibility to applications, data, and identities. With SaaS applications, businesses give their employees greater control over the applications they use and how information is shared inside and outside the office.

Users no longer need a VPN to get work done since sensitive data and applications are no longer restricted behind a firewall. Due to an increased reliance on the cloud, more branch offices opt for direct internet access instead of backhauling traffic over the corporate network.

The traditional security stack was not designed to protect cloud-native paradigms with cloud-enabled users. Since users connect directly to the internet, they are more likely to get infected with malware because IT security professionals cannot protect what they cannot see. Organizations face an increased risk of exposing sensitive data inadvertently or maliciously as employees have greater flexibility in installing and self-enabling applications.

Cisco Umbrella and Cisco Cloudlock

Umbrella was built with a bidirectional API to easily integrate with security appliances, threat intelligence platforms or feeds, and custom, in-house tools. With Umbrella’s pre-built integrations with over 10 security providers, including Splunk, FireEye, and Anomali, you can easily extend protection beyond the perimeter and amplify existing investments.

Cloudlock uses a 100 percent cloud-native, API-based approach. It is the most open platform and connects to your most commonly used SaaS services, including Okta, OneLogin, and Splunk. It aggregates data feeds across existing IT infrastructure to enrich security intelligence and ensure data protection across on-premise and cloud environments.

With both Cisco Umbrella and Cisco Cloudlock, you can securely access the Internet and use cloud apps securely. With Umbrella, users can stay protected wherever they are on the Internet as a cloud-delivered service. Organizations can defend internet access by providing visibility across all network devices, office locations, and roaming users. It identifies infected devices faster, prevents data exfiltration, and prevents malware infections earlier.

Highlighting Cisco Cloudlock

Cisco Cloudlock is a Cloud Access Security Broker (CASB) that helps organizations protect their cloud-based identities, data, and applications. As a result, organizations can monitor what is happening in their cloud applications, guarding against compromised credentials, insider threats, and malware. Cloudlock also helps organizations identify data leakages and privacy violations and respond to them.

Data Loss Prevention:

A Cisco Cloudlock Data Loss Prevention (DLP) engine continuously monitors cloud environments to identify sensitive information stored there in violation of policy. In addition to out-of-the-box policies focused on PCI-DSS and HIPAA compliance, Cisco Cloudlock supports custom policies to identify proprietary information, such as intellectual property. Advanced capabilities such as regular expression (RegEx) input, threshold settings, and proximity controls ensure a high true positive rate and a low false positive rate.
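To illustrate what RegEx-based detection with a threshold looks like in principle, here is a simplified, hypothetical Python sketch. It is not CloudLock's engine; the pattern, function name, and threshold are invented for illustration:

```python
import re

# Crude pattern for 16-digit, card-like numbers separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def violates_policy(document: str, threshold: int = 3) -> bool:
    """Flag the document only when the number of matches reaches the threshold,
    which helps keep the false-positive rate down."""
    matches = CARD_PATTERN.findall(document)
    return len(matches) >= threshold

sample = "Cards: 4111 1111 1111 1111 and 5500-0000-0000-0004"
print(violates_policy(sample, threshold=2))   # True: two card-like values found
```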

Automated Responses:

By offering configurable cross-platform automated responses, Cisco Cloudlock goes beyond cloud DLP discovery. As part of Cisco Cloudlock’s API-driven Cloud Access Security Broker (CASB) architecture, deep, integrated response workflows are enabled that leverage the native capabilities of the monitored application, including automatic field-level encryption in Salesforce.com and automated file quarantining in Box. By combining Cisco Cloudlock and many other data protection tools, Cisco Cloudlock reduces risk efficiently without requiring resource-intensive operations.

Understanding CASBs  – Security Control Point

CASB, the acronym for Cloud Access Security Broker, acts as a vital intermediary between cloud service providers and users. It is a security control point, offering visibility, compliance, and data protection for cloud-based applications. By enforcing security policies, CASBs enable organizations to have a unified view of their cloud environment, ensuring secure and compliant usage. CASBs come equipped with powerful features designed to fortify cloud security. These include:

**1. Threat Detection and Prevention**

CASBs leverage advanced threat intelligence and machine learning algorithms to detect and prevent malicious activities, ensuring proactive security. Organizations need proactive measures to combat advanced threats in the ever-evolving threat landscape. Advanced threat protection features enable businesses to detect and respond to security incidents effectively.

Through continuous analysis of cloud activity, CASBs leverage machine learning algorithms to identify abnormal behavior, detecting potential threats such as account compromises or malicious insider activity.

Threat detection involves actively monitoring systems, networks, and applications to identify potential malicious activities or security breaches. It employs advanced algorithms and machine learning techniques to analyze patterns, anomalies, and known attack signatures. On the other hand, threat prevention aims to proactively mitigate risks by blocking or neutralizing threats before they can cause harm.

**2. Data Loss Prevention (DLP)**

With sensitive data being stored and accessed in the cloud, CASBs provide robust DLP capabilities, preventing unauthorized disclosure and ensuring compliance with data protection regulations. Data Loss Prevention, commonly known as DLP, refers to tools and practices designed to prevent the unauthorized disclosure or leakage of sensitive data. It encompasses various techniques and technologies for identifying, monitoring, and protecting sensitive information across multiple channels and endpoints.

Example DLP Technology: Suricata IPS IDS

Understanding Suricata

Suricata IPS/IDS is an open-source solution that analyzes network traffic and detects potential threats in real-time. Its robust capabilities include signature-based detection, protocol analysis, and behavioral anomaly detection. Suricata can identify malicious activities by inspecting network packets and responding swiftly to prevent security breaches.

One of Suricata’s key strengths is its ability to detect threats in real-time. Suricata can identify known malicious behavior patterns by leveraging its signature-based detection engine. Additionally, Suricata employs protocol analysis to detect abnormal network activities and behavioral anomalies, enabling it to identify zero-day attacks and emerging threats.

**3. Access Control and Identity Management**

CASBs facilitate granular access controls, ensuring only authorized users can access specific cloud resources. They integrate with identity management systems to provide seamless and secure access.

Access control refers to granting or denying authorization to individuals based on their identity and privileges. It involves defining user roles, permissions, and restrictions to ensure that only authorized personnel can access specific resources or perform certain actions. This helps prevent unauthorized access and potential data breaches.

Identity management plays a vital role in cloud-native access security, ensuring individuals are correctly identified and authenticated before access is granted. It involves verifying user identities through various means, such as passwords, biometrics, or two-factor authentication. Identity management solutions also facilitate user provisioning, de-provisioning, and lifecycle management to maintain the accuracy and integrity of user information.

Example: **Identity Management in Linux**

Linux, renowned for its robust security features, is the foundation for many privacy-conscious individuals and organizations. However, to fortify your defenses effectively, it is crucial to grasp the nuances of Linux identity security. User authentication is the first defense in securing your Linux identity. From strong passwords to two-factor authentication, employing multiple layers of authentication mechanisms helps safeguard against unauthorized access.

**What is Identity-Aware Proxy?**

Identity-Aware Proxy (IAP) is a Google Cloud service that provides context-aware access control to your applications. Unlike traditional VPNs or firewalls that grant access based on network location, IAP determines user access based on identity and context, such as user identity, location, and request attributes. This means that even if a user is within the network, they won’t gain access unless they meet specific identity criteria. IAP ensures that only the right people, under the right conditions, can access your cloud applications.

**Benefits of Using Identity-Aware Proxy**

One of the primary benefits of using IAP is enhanced security. By adding an additional layer of identity verification, IAP significantly reduces the risk of unauthorized access. It also eliminates the need for a VPN, which can be cumbersome and less secure, especially for remote workers. Additionally, IAP provides a seamless user experience, as it integrates with existing identity providers, allowing for single sign-on (SSO) capabilities. This means users can access multiple applications with just one set of credentials, enhancing productivity and reducing password fatigue.

**Integrating IAP with Google Cloud**

Integrating Identity-Aware Proxy with Google Cloud is straightforward, thanks to Google’s robust cloud infrastructure. IAP works seamlessly with Google Cloud Platform (GCP) services, including App Engine, Compute Engine, and Kubernetes Engine. By leveraging Google Cloud’s identity management services, such as Cloud Identity and Access Management (IAM), you can define fine-grained access policies that align with your organization’s security requirements. This integration ensures that your applications are not only secure but also scalable and easy to manage.

Diagram: Identity-Aware Proxy.

**4. Encryption and Tokenization**

Finally, CASBs offer encryption and tokenization techniques to protect data at rest and in transit, safeguarding it from unauthorized access.

Encryption converts plain text or data into an unreadable format known as ciphertext. It involves using encryption algorithms and keys to scramble the data, making it inaccessible to unauthorized individuals. Encryption ensures that the data remains secure and protected even if it is intercepted or stolen. Advanced encryption standards, such as AES-256, provide robust security and are widely adopted by organizations to protect sensitive data.

Tokenization, on the other hand, is a technique that replaces sensitive data with unique identification symbols called tokens. These tokens are randomly generated and have no relation to the original data, making it virtually impossible to reverse-engineer or retrieve the original information. Tokenization is particularly useful in scenarios where data needs to be processed or stored, but the actual sensitive information is not required. By utilizing tokens, organizations can minimize the risk of data exposure and mitigate the impact of potential breaches.

Multi-Layer Approach To Data Protection

While encryption and tokenization are potent techniques, combining them can provide even more robust data security. Organizations can achieve a multi-layered approach to data protection by encrypting data first and then tokenizing it. This dual-layer approach ensures that even if attackers bypass one security measure, they face another barrier before accessing the original data. This combination of encryption and tokenization significantly enhances data security and reduces the risk of unauthorized access.
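The sketch below illustrates the encrypt-then-tokenize idea in Python using the third-party cryptography package. The in-memory dictionary and the protect/reveal helpers are purely illustrative; a real deployment would use a hardened token vault and managed keys:

```python
import secrets
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # symmetric key (AES-based under the hood)
cipher = Fernet(key)
token_vault = {}              # token -> ciphertext mapping (stand-in for a vault)

def protect(sensitive_value: str) -> str:
    ciphertext = cipher.encrypt(sensitive_value.encode())  # layer 1: encryption
    token = "tok_" + secrets.token_hex(8)                  # layer 2: random token
    token_vault[token] = ciphertext
    return token                                           # safe to store or process downstream

def reveal(token: str) -> str:
    return cipher.decrypt(token_vault[token]).decode()

t = protect("4111 1111 1111 1111")
print(t)          # e.g. tok_9f2c... with no relation to the original value
print(reveal(t))  # original value, recoverable only with both the key and the vault
```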

Common Cloud Threats

1. Data Breaches: One of the most significant concerns in the cloud landscape is the potential for unauthorized access to sensitive information. Hackers and cybercriminals may exploit vulnerabilities in cloud infrastructure or use social engineering techniques to gain entry, leading to data breaches that can have severe consequences.

2. Account Hijacking: Weak passwords or compromised credentials can enable attackers to gain unauthorized access to cloud accounts. Once inside, they can manipulate data, disrupt services, or launch attacks. Vigilance and robust authentication mechanisms are crucial to combat account hijacking.

3. Malware and Ransomware: The cloud is not immune to malware and ransomware attacks. Malicious software can infiltrate cloud environments, infecting files and spreading across connected systems. Organizations and individuals must implement robust antivirus measures and regularly update their security software to mitigate these risks.

4. Insider Threats: While external threats often grab the spotlight, insider threats should not be underestimated. Malicious insiders or employees with compromised credentials can intentionally or unintentionally harm cloud systems. Organizations must implement proper access controls, monitor user activities, and educate employees about the risks associated with their actions.

5. DDoS Attacks: Distributed Denial of Service (DDoS) attacks can disrupt cloud services by overwhelming them with incoming traffic. These attacks aim to exhaust system resources, rendering the cloud infrastructure inaccessible to legitimate users. Mitigation strategies such as traffic filtering, rate limiting, and advanced monitoring systems are crucial in defending against DDoS attacks.

6. Phishing Attacks: Phishing attacks are fraudulent attempts to obtain sensitive information such as usernames, passwords, and credit card details by disguising as a trustworthy entity in electronic communications. These attacks often come in the form of emails, messages, or websites that mimic legitimate sources. The attackers exploit human trust, playing on emotions such as fear and urgency to prompt immediate action, often leading to dire consequences.

Example: Cloud Security Threat: Phishing Attack

Below, we have an example of a phishing attack. I’m using the Social Engineering Toolkit to perform a phishing attack for a web template. Follow the screenshots and notice we have a hit at the end.

Note: Understanding Social Engineering

Social engineering is a technique cybercriminals use to manipulate individuals and exploit human psychology to gain unauthorized access to sensitive information. By understanding the fundamentals of social engineering, security professionals can better anticipate and defend against potential threats.

The Social Engineering Toolkit, developed by trusted security expert David Kennedy, is an open-source tool that facilitates simulated social engineering attacks. It offers many attack vectors, including spear-phishing, website cloning, malicious USB drops, and more. SET provides a controlled environment for security professionals to test and assess an organization’s vulnerability to social engineering attacks.

Nmap is a tool that both defenders and bad actors can use. Notice below that stealth scans can slip under the radar of some firewalls and basic logging.

Useful Preventive Technologies:

A: Network Scanning

Network scanning systematically explores a computer network to gather information about connected devices, open ports, and potential security weaknesses. By employing specialized tools and techniques, security professionals can gain valuable insights into the network’s architecture and identify possible entry points for malicious actors.

a) Port Scanning: Port scanning involves probing a network’s connected devices to discover open ports and services. This technique helps security experts understand which services are running, identify potential vulnerabilities, and strengthen the network’s defenses accordingly.

b) Vulnerability Scanning: Vulnerability scanning identifies weaknesses and flaws within network devices and systems. By utilizing automated tools, security teams can quickly pinpoint vulnerabilities and take proactive measures to patch or mitigate them.
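As a simple illustration of the port-scanning technique described in (a) above, here is a minimal TCP connect scan in Python. The host and port list are arbitrary examples; only scan systems you are authorized to test:

```python
import socket

def scan_ports(host: str, ports):
    """Return the ports that accept a TCP connection on the given host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)                    # keep the scan quick
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP handshake succeeded
                open_ports.append(port)
    return open_ports

print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```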

B: Mapping and Identifying Networks

TCP Dump is a command-line packet analyzer that allows network administrators to capture and analyze network packets in real time. It offers a plethora of functionalities, including the ability to filter packets based on various criteria, dissect protocols, and save packet captures for later analysis. Whether you’re a network engineer, a security analyst, or a curious enthusiast, TCP Dump has something valuable to offer.

Wireshark is an open-source network protocol analyzer that allows users to capture and examine network traffic in real-time. It provides detailed insights into network packets, helping network administrators, security experts, and developers troubleshoot issues, analyze performance, and detect anomalies.
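For a feel of what these tools do, the tiny sketch below uses the third-party scapy library to capture a handful of packets and print one-line summaries, similar in spirit to a short tcpdump run. Packet capture generally requires root or administrator privileges:

```python
from scapy.all import sniff

# Capture five packets and print a one-line summary of each.
sniff(count=5, prn=lambda pkt: print(pkt.summary()))
```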

Related: Before you proceed, you may find the following posts helpful for pre-information:

  1. Cisco Secure Firewall
  2. Dropped Packet Test
  3. Network Security Components
  4. Cisco Umbrella CASB
  5. CASB Tools
  6. SASE Definition
  7. Open Networking
  8. Distributed Firewalls
  9. Kubernetes Security Best Practice

Cloud Security Concepts

Before we proceed, let's brush up on some critical security concepts. The principle of least privilege states that people or automated tools should be able to access only the information they need to do their jobs. When the principle of least privilege is applied in practice, access is typically denied by default: users are granted no privileges until they request them and the request is approved.

The concept of defense in depth acknowledges that almost any security control can fail, either because a bad actor is sufficiently determined or because the security control is implemented incorrectly. By overlapping security controls, defense in depth prevents bad actors from gaining access to sensitive information if one fails. In addition, you should remember who will most likely cause you trouble. These are your potential "threat actors," as cybersecurity professionals call them.

Examples: Threat actors.
- Organized crime or independent criminals interested in making money.
- Hacktivists, interested primarily in discrediting you by releasing stolen data, committing acts of vandalism, or disrupting your business.
- Inside attackers, usually interested in harming your organization or making money.
- State actors who may steal secrets or disrupt your business.

Authentication and group-based access control policies defined in the application are part of the security the SaaS environment provides. However, SaaS providers significantly differ regarding security features, functionality, and capabilities. It is far from one size fits all regarding security across the different SaaS providers. For example, behavioral analytics, data loss prevention, and application firewalling are not among most SaaS providers' main offerings - or capabilities. We will discuss these cloud security features in just a moment.

Organizations generally cannot deploy custom firewalls or other security mechanisms directly into SaaS environments, because SaaS providers do not expose the infrastructure below the application layer. Most SaaS platforms, though not all, only let users control security through the tools the provider exposes.

Cloud Security Technologies

A. Data Loss Prevention (DLP)

Let us start with DLP. Data loss prevention (DLP) aims to prevent critical data from leaving your business without authorization. This presents a significant challenge for security because the landscape and scope are complex, particularly when multiple cloud environments are involved.

Generally, people think of firewalls, load balancers, email security systems, and host-based antimalware solutions as protecting their internal users. However, organizations use data loss prevention (DLP) to prevent internal threats, whether deliberate or unintentional.

DLP solutions are specifically designed to address “inside-out” threats, which firewalls and other security solutions are not positioned to detect. Data loss prevention solutions stop authorized users from performing unauthorized actions on approved devices; in other words, they address the challenge of preventing authorized users from moving data outside authorized realms. Such data breaches, whether intentional or accidental, are not uncommon.

Example: Google Cloud Sensitive Data Protection

Sensitive data protection DLP in Action – Example of Threat:

– Let us examine a typical threat. A financial credit services company user could possess legitimate access to unlimited credit card numbers and personally identifiable information (PII) through an intentional insider breach. The insider is likely to have access to email, so attachments can also be sent this way.

– Even firewalls and email security solutions can’t prevent this insider from emailing an Excel spreadsheet with credit card numbers and other personal information from their corporate email account to a personal email address.

– Those tools are not inspecting for that type of content. A DLP solution, however, is aligned with precisely this type of threat. With properly configured data loss prevention, unacceptable data transfers can be mitigated, prevented, and alerted on.

– Remember that disaster recovery and data loss prevention go hand in hand. Data you cannot access is effectively lost until you can reach it again. In other words, preventing data loss is a worthwhile goal, but recovering from data loss and from disasters that block access to your data (whether caused by malware or something more mundane, such as a forgotten domain renewal) requires planning.
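To make the credit-card scenario above concrete, here is a hedged sketch of scanning a snippet of text with Google Cloud Sensitive Data Protection (the DLP API) via the google-cloud-dlp client library. PROJECT_ID is a placeholder, and the info types shown are only examples:

```python
from google.cloud import dlp_v2

PROJECT_ID = "my-project"   # placeholder project ID
client = dlp_v2.DlpServiceClient()

response = client.inspect_content(
    request={
        "parent": f"projects/{PROJECT_ID}/locations/global",
        "inspect_config": {
            "info_types": [{"name": "CREDIT_CARD_NUMBER"}, {"name": "EMAIL_ADDRESS"}],
            "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
        },
        "item": {"value": "Send card 4111-1111-1111-1111 to user@example.com"},
    }
)

# Print each finding's info type and how likely the match is.
for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood)
```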

“Generally speaking, it boils down to a lack of visibility”

In on-premises DLP systems, visibility is limited to network traffic and does not extend to cloud environments, such as SaaS-bound traffic. Additionally, given the ease with which users can distribute information in cloud environments and their highly collaborative nature, distributing sensitive information to external parties is easy for employees.

However, this activity is difficult for security analysts to detect with traditional mechanisms. Data loss prevention technology continuously monitors cloud environments to detect and secure sensitive information stored there. CASBs, for instance, can see whether files stored in an application are shared outside specific organizational groups or outside the organization entirely.

B. Application Firewalls

Next, we have application firewalls. How does an application firewall differ from a “traditional firewall”? What is its difference from a “next-generation firewall”? First, an application firewall focuses on the application, not the user or the network. Its logic differs entirely from that of a non-application firewall, and it builds policies around different objects; writing policy based on traditional constructs such as source IP addresses and ports is of little use in cloud environments.

Application Firewall vs. Traditional Firewall.

Many traditional firewall approaches will not work for protecting cloud applications. Because your cloud application needs to be accessible from anywhere, it is not feasible to configure rules for “Source IP.” You might be able to geo-fence using IP blocks assigned by IANA, but what about a traveling user or someone on vacation who needs remote assistance? Source IP addresses cannot be used to write security policies for cloud applications.

Your toolkit just became ineffective when it came to Layer 3 and Layer 4 security controls. In addition, the attack could originate from anywhere in the world using IPv4 or IPv6. So, how you secure your cloud applications and data must change from a traditional firewall to an application firewall focusing directly on the application and nothing below.

Example: Distributed Firewalling


The Issue of Static Firewall Rules

In addition, writing firewall policies based on user IDs can be challenging. If your cloud application must be accessible to anyone from anywhere, firewall rules based on directory services such as LDAP or Active Directory are of limited use.

Compared with an on-premise solution, you have fewer options for filtering traffic between clients and the cloud application. In an application firewall, data is exchanged, and access is controlled to (or from) an application. Application firewalls focus not on the security of IP networks and Layer 4 ports but on protecting applications and services.

A firewall at the application layer cares little about how data is received and connected to the application or how it is formatted or encrypted. And this is what a traditional firewall would focus on. Instead, an application firewall monitors data exchanges between applications and other entities. Data exchange methods rather than location are examined when determining if policy violations have occurred.

Diagram: Firewall tags.

Example Stateful Firewalling:

Zone-based Firewall

Cisco Zone-Based Firewall, also known as ZBF, is a stateful firewall technology that operates in different network zones. These zones define security boundaries and allow administrators to enforce specific security policies based on traffic types, sources, and destinations. Unlike traditional access control lists (ACLs), ZBF provides a more flexible and intuitive approach to network security.

Traffic Inspection and Control: ZBF enables deep packet inspection, allowing administrators to scrutinize traffic at multiple layers. By analyzing packets’ content and context, ZBF can make informed decisions about permitting or denying traffic based on predefined policies.

Application-Aware Filtering: With ZBF, administrators can implement application-aware filtering, which assesses traffic based on the specific application protocols being used. This level of granularity allows for more targeted and adequate security measures.

Simplified Configuration: ZBF offers a simplified configuration process that utilizes zone pairs and policy maps. This modular approach enables administrators to define policies for specific traffic flows, reducing complexity and enhancing manageability.

C. Cloud Access Security Broker (CASB)

Cloud access security brokers (CASBs) sit between users and the cloud services they consume, such as SaaS applications and IaaS and PaaS environments. They help you define security policies and enforce them, which means policy can now be enforced in settings you do not control. CASBs safeguard cloud data, applications, and user accounts, regardless of where the user is or how they access the cloud application. Where other security mechanisms focus on protecting the endpoint or the network, CASB solutions focus on protecting the cloud environment. They are purpose-built for the job of cloud protection.

CASB solutions negotiate access security between the user and the cloud application on their behalf. They go beyond merely “permitting” or “denying” access. A CASB solution can enable users to access cloud applications, monitor user behavior, and protect organizations from risky cloud applications by providing visibility into user behavior.  The cloud application continues to be accessible to end users in the same way as before CASB deployment.

Applications are still advertised and served by cloud application service providers in the same manner as before the CASB was implemented; cloud applications and the user environment do not change. Additionally, because you lack direct control in the cloud, you need more visibility: many SaaS environments provide no native mechanism for tracking and controlling user behavior (although most cloud providers have their own UEBA systems).

CASB Categories:

Out-of-band CASBs, which live outside the path between users and cloud applications, can be categorized into API-based and log-based CASBs. Compared to a log-based CASB, an API-based CASB exchanges API calls with the cloud application environment rather than consuming log data. Log data is typically ingested by a SIEM or other reporting tool, whereas API calls allow the CASB solution to control cloud applications directly. API-based CASBs are integrated with cloud applications but remain external to their environments and do not depend on them.

CASB solutions based on logs are limited because they can only take action once logs have been parsed by a SIEM or other tool. CASBs based on APIs monitor cloud usage whether users are on or off the corporate network and whether they use managed or unmanaged devices. Cloud-to-cloud applications, whose communications never reach the corporate network, can also be protected using an API-based CASB.

Cloudlock is an API-based CASB. Therefore, unlike proxy-based CASBs, it doesn’t need to be in the user traffic path to provide security. As a result, there is no need to worry about undersizing or oversizing a proxy. Also, you don’t have to maintain proxy rulesets, cloud application traffic doesn’t have to be routed through another security layer, and traffic doesn’t have to circumvent the proxy, which is a significant value-add to cloud application security.

Summary: Cisco CloudLock

In today’s digital age, businesses increasingly rely on cloud-based platforms to store and manage their data. However, with this convenience comes the need for robust security measures to protect sensitive information from potential threats. One such solution that stands out in the market is Cisco Cloudlock. In this blog post, we delved into the features, benefits, and implementation of Cisco Cloudlock, empowering you to safeguard your cloud environment effectively.

Understanding Cisco Cloudlock

Cisco Cloudlock is a comprehensive cloud access security broker (CASB) solution that provides visibility, control, and security for cloud-based applications like Google Workspace, Microsoft 365, and Salesforce. By integrating seamlessly with these platforms, Cloudlock enables organizations to monitor and protect their data, ensuring compliance with industry regulations and mitigating the risk of data breaches.

Key Features and Benefits

a) Data Loss Prevention (DLP): Cloudlock’s DLP capabilities allow businesses to define and enforce policies to prevent sensitive data from being shared or leaked outside of approved channels. With customizable policies and real-time scanning, Cloudlock ensures your critical information remains secure.

b) Threat Protection: Recognizing the evolving threat landscape, Cloudlock employs advanced threat intelligence and machine learning algorithms to detect and block malicious activities in real-time. From identifying compromised accounts to detecting anomalous behavior, Cloudlock is a proactive shield against cyber threats.

c) Compliance and Governance: Maintaining regulatory compliance is a top priority for organizations across various industries. Cloudlock assists in achieving compliance by providing granular visibility into data usage, generating comprehensive audit reports, and enforcing data governance policies, thereby avoiding potential penalties and reputational damage.

Implementing Cisco Cloudlock

Implementing Cisco Cloudlock is a straightforward process that involves a few key steps. First, organizations need to integrate Cloudlock with their chosen cloud platforms. Once integrated, Cloudlock scans and indexes data to gain visibility into the cloud environment. Organizations can then define policies, configure alerts, and set up automated responses based on specific security requirements. Regular monitoring and fine-tuning of policies ensure optimal protection.

Conclusion: Cisco Cloudlock is a powerful solution for safeguarding your cloud environment. With its robust features, including data loss prevention, threat protection, and compliance capabilities, Cloudlock empowers organizations to embrace the cloud securely. By implementing Cisco Cloudlock, businesses can unlock the full potential of cloud-based platforms while ensuring the confidentiality, integrity, and availability of their valuable data.


Network Connectivity


Network connectivity has become integral to our lives in today's digital age. A reliable and efficient network is crucial, from staying connected with loved ones to conducting business operations. In this blog post, we will explore the significance of network connectivity and how it has shaped our world.

Over the years, network connectivity has evolved significantly. Gone are the days of dial-up connections and limited bandwidth. Today, we have access to high-speed internet connections, enabling us to connect with people around the globe instantly. This advancement has revolutionized communication, work, learning, and entertainment.

Network connectivity is the ability of devices or systems to connect and communicate with each other. It allows data to flow seamlessly, enabling us to access information, engage in online activities, and collaborate across vast distances. Whether through wired connections like Ethernet or wireless technologies such as Wi-Fi and cellular networks, network connectivity keeps us interconnected like never before.

Router - The Navigators of Networks: Routers are the heart of any network, directing traffic and ensuring data packets reach their intended destinations. They analyze network addresses, make decisions, and establish connections across different networks. With their advanced routing protocols, routers enable efficient and secure data transmission.

Switches - The Traffic Managers: While routers handle traffic between different networks, switches manage the flow of data within a network. They create multiple paths for data to travel, ensuring efficient data transfer between devices. Switches also enable the segmentation of networks, enhancing security and network performance.

Cabling - The Lifelines of Connectivity: Behind the scenes, network cables provide the physical connections that transmit data between devices. Ethernet cables, such as Cat5e or Cat6, are commonly used for wired connections, offering high-speed and reliable data transmission. Fiber optic cables, on the other hand, provide incredibly fast data transfer over long distances.

Wireless Access Points - Unleashing the Power of Mobility: In an era of increasing wireless connectivity, wireless access points (WAPs) are vital components. WAPs enable wireless devices to connect to a network, providing flexibility and mobility. They use wireless communication protocols like Wi-Fi to transmit and receive data, allowing users to access the network without physical connections.

Highlights: Network Connectivity

Interconnecting Devices

Interconnecting the various components of a network is an extensive and comprehensive process. At a base level, network components can be connected via switches, gateways, and routers. Efficient and reliable network connectivity reflects how well these components interact, whether on-premises or in a cloud-based network. As a result of network connectivity, a range of devices, including IoT devices and computers, can communicate with one another via protocols and other methods.

To understand network connectivity, we will break networking down into layers. Then, we can fit the different networking and security components that make up a network into each layer. This is the starting point for understanding how networks work and carrying out the advanced stages of network design, and troubleshooting.

Networking does not just magically happen; we need to follow protocols and rules so that two endpoints can communicate and share information. These rules and protocols don’t just exist on the endpoint, such as your laptop; they also need to exist on the network and security components in the path between the two endpoints. 

**Example: TCP/IP Suite and OSI Model**

We have networking models, such as the TCP/IP suite and the OSI model, to help you understand what rules and protocols are needed across all components. These networking models are like blueprints for building a house: they lay out a pattern to follow, and just as each stage of a build is handled by a specific trade, each layer of a network is handled by specific protocols.

**Example: Address Resolution Protocol (ARP)**

For example, when you know the destination’s IP address, you use the Address Resolution Protocol (ARP) to find the MAC address. So, we have rules and standards to follow. By learning these rules, you can install, configure, and troubleshoot the main networking components of routers, switches, and security devices.
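As an illustration of that ARP exchange, the sketch below uses the third-party scapy library to broadcast a who-has request for a hypothetical address on the local segment. The operating system normally does this for you automatically, and running raw-packet tools typically requires root privileges:

```python
from scapy.all import ARP, Ether, srp

target_ip = "192.168.1.1"   # hypothetical address on the local segment

answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=target_ip),  # broadcast: "who has target_ip?"
    timeout=2,
    verbose=False,
)

for _, reply in answered:
    print(f"{reply.psrc} is at {reply.hwsrc}")   # IP-to-MAC mapping learned from the reply
```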

Networking Components

Routers are vital in directing network traffic and ensuring data packets reach their destinations. They act as intermediaries between different networks, using routing tables to determine the best path for data transmission. With their advanced features, such as Quality of Service (QoS) and firewall capabilities, routers provide a secure and efficient network connection. Routers are responsible for directing traffic between different networks and keeping them isolated.

Diagram: Google Cloud routes.

Switches enable the interconnection of devices within a local network. They operate at the data link layer of the OSI model, using MAC addresses to forward data packets to the intended recipient. By creating virtual LANs (VLANs) and managing network traffic effectively, switches enhance network performance and provide seamless communication between devices.

Firewalls act as the first line of defense against unauthorized access to a network. These security devices monitor incoming and outgoing network traffic based on predetermined rules, allowing or blocking data packets accordingly. By implementing firewalls, organizations can prevent potential threats and maintain control over their network’s security posture.

Diagram: Firewall tags.

Intrusion Detection Systems (IDS) are designed to identify and respond to potential security breaches. They monitor network traffic, looking for signs of malicious activity or unauthorized access attempts. IDS can be host- or network-based, providing real-time alerts and helping network administrators promptly mitigate potential threats.

Virtual Private Networks (VPNs) establish encrypted tunnels over public networks, allowing remote users to access company resources securely. By encrypting data, VPNs provide confidentiality and integrity, ensuring that sensitive information remains protected during transmission.

Access Control Systems ensure only authorized personnel can access sensitive data and resources. This includes various authentication mechanisms such as passwords, biometrics, or smart cards. Organizations can significantly reduce the risk of unauthorized access to critical systems or information by implementing access control systems.

Creating Boundaries for Secure Network Connectivity

One way to create the boundary between the external and internal networks is with a firewall. An example would be a Cisco ASA firewall configured with zones, where the zones create the border. For example, Gig0/0 could be the inside zone with a security level of 100, while the outside zone has a security level of 0. By default, traffic from a lower-security zone, such as the outside, cannot initiate communication with a higher-security zone.

Security zones are virtual boundaries created within your network infrastructure to control and monitor traffic flow. These zones provide an added layer of defense, segregating different network segments based on their trust levels. Administrators can apply specific security policies and access controls by classifying traffic into zones, reducing the risk of unauthorized access or malicious activities.

So, as I said, computer networks enable connected hosts (computers) to share and access resources. When you think of a network, think of an area that exists for sharing. One of the first purposes of network connectivity was sharing printers; it has since been extended to many other kinds of devices, but sharing remains the primary use case.

You need to know how all the connections happen and all the hardware and software that enables that exchange of resources. We do this using a networking model. So, we can use network models to conceptualize the many parts of a network, relying primarily on the Open Systems Interconnection (OSI) seven-layer model to help you understand networking. 

Remember that we don’t implement the OSI; we implement the TCP/IP suite. However, the OSI is a great place to start learning, as everything is divided into individual layers. You can place the network and security components at each layer to help you understand how networks work. Let us start with the OSI model before we move to the TCP/IP suite.

IPv4 and IPv6 Connectivity

Understanding IPv4 & IPv6 Connectivity:

IPv4, or Internet Protocol version 4, is the fourth iteration of the IP protocol. It uses a 32-bit address space, allowing for approximately 4.3 billion unique addresses. This version has been the foundation of internet connectivity for several decades and has served us well. However, with the rapid growth of internet-connected devices, the limitations of IPv4 have become apparent.

Enter IPv6, or Internet Protocol version 6, the next generation of IP addressing. IPv6 was designed to address the limitations of IPv4 by utilizing a 128-bit address space, resulting in a staggering number of unique addresses – approximately 340 undecillion! This vast expansion of address space ensures that we will not run out of addresses anytime soon, even with the increasing number of internet-connected devices.

IPv6 Advantages:

IPv6 offers several advantages over its predecessor. First, its larger address space allows for efficient and scalable allocation of IP addresses, ensuring that every device can have a unique identifier. Second, IPv6 incorporates built-in security features, enhancing the integrity and confidentiality of data transmitted over the network. Third, IPv6 supports auto-configuration, simplifying the process of connecting devices to a network.

Challenge- Transitioning to IPv6:

While IPv6 brings numerous benefits, transitioning from IPv4 to IPv6 has challenges. One of the main obstacles lies in the coexistence of the two protocols during the transition phase. However, various transition mechanisms, such as dual-stack, tunneling, and translation, have been developed to enable interoperability between IPv4 and IPv6 networks. These mechanisms facilitate a smooth transition, ensuring that devices and networks communicate seamlessly.

IPv6 Connectivity & Solicited Node Address

The Purpose of Solicited Node Multicast Address

Now that we have a basic understanding of multicast communication, let’s explore the purpose of the IPv6 Solicited Node Multicast Address. Its primary function is to enable efficient address resolution for devices in an IPv6 network. When a device wants to resolve the Layer 2 address (MAC address) of another device with a known IPv6 address, it sends the request to the solicited-node multicast group derived from that address, which is shared only by devices whose addresses end in the same low-order 24 bits.

Structure & Format

The IPv6 Solicited Node Multicast Address has a unique, structured format. It is formed by appending the least significant 24 bits of the device’s unicast IPv6 address to the well-known prefix FF02::1:FF00:0/104. This ensures that the resulting multicast address is effectively unique to the device while still belonging to a well-defined multicast group.
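A short Python sketch, using only the standard ipaddress module, shows how the address is derived. The sample unicast address is illustrative:

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Well-known prefix ff02::1:ff00:0/104 plus the low-order 24 bits of the unicast address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)

print(solicited_node("2001:db8::4aa:72ff:fe3c:8c6c"))   # -> ff02::1:ff3c:8c6c
```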

Neighbor Discovery Protocol (NDP) is a crucial aspect of IPv6 network operations, and the Solicited Node Multicast Address plays a vital role within this protocol. When a device needs to resolve a neighbor, it sends a Neighbor Solicitation message to the solicited-node group derived from the target’s IPv6 address. This message requests the MAC address associated with that IPv6 address, allowing for efficient communication and address resolution; a device also sends a solicitation to its own solicited-node group when it joins the network, as part of duplicate address detection.

IPv6 Neighbor Discovery

Understanding IPv6 Neighbor Discovery

IPv6 Neighbor Discovery Protocol, often abbreviated as NDP, serves as a key mechanism in the IPv6 network for various tasks such as address autoconfiguration, duplicate address detection, and router discovery. It replaces the functions performed by the Address Resolution Protocol (ARP) in IPv4. Using ICMPv6 messages and multicast communication, NDP enables efficient network operations and seamless communication between neighboring devices.

A) Neighbor Solicitation and Neighbor Advertisement:

These ICMPv6 message types form the backbone of IPv6 Neighbor Discovery. Neighbor Solicitation messages are used to determine the link-layer address of a neighboring node. In contrast, Neighbor Advertisement messages provide the necessary information in response to a solicitation or as part of periodic updates.

B) Router Solicitation and Router Advertisement:

Router Solicitation and Router Advertisement messages play a vital role in facilitating the discovery of routers on the network. Nodes send Router Solicitation messages to request router configuration information, while routers periodically broadcast Router Advertisement messages to announce their presence and important network details.

The adoption of IPv6 Neighbor Discovery brings forth several advantages. Firstly, it simplifies the configuration process by allowing devices to assign IPv6 addresses automatically without manual intervention. This enables efficient scalability and reduces administrative overhead. NDP’s neighbor caching mechanism also enhances network performance by storing and managing neighbor information, reducing network congestion and enabling faster communication.

While IPv6 Neighbor Discovery offers numerous benefits, it has challenges. One of the primary concerns is the potential for malicious activities, such as Neighbor Spoofing or Neighbor Advertisement Spoofing attacks. To mitigate these risks, network administrators should implement secure network designs, leverage features like Secure Neighbor Discovery (SEND), and employ intrusion detection and prevention systems to safeguard against potential threats.

IPv6 Stateless Autoconfiguration

Stateless autoconfiguration is a mechanism in IPv6 that allows devices to assign themselves an IP address, configure their default gateway, and perform other necessary network settings without manual configuration or DHCP servers. It is based on the Neighbor Discovery Protocol (NDP) and Router Advertisement (RA) messages.

Efficiency and Scalability: Stateless autoconfiguration simplifies network setup, especially in large-scale deployments. It eliminates the need for manual IP address assignment, reducing the chances of human error and streamlining the process of connecting devices to the network.

Reduced Dependency on DHCP: Unlike IPv4, where DHCP is commonly used for IP address assignment, stateless autoconfiguration reduces reliance on DHCP servers. This reduces network complexity and eliminates single points of failure, leading to increased network stability.

Seamless Network Roaming: Stateless autoconfiguration enables devices to connect seamlessly to different networks without requiring reconfiguration. This is particularly useful for mobile devices that frequently switch between networks, such as smartphones and laptops.

Router Advertisement (RA) Messages: Routers periodically send RA messages to announce their presence on the network and provide network configuration information. These messages contain essential details like prefixes, default gateways, and other network-related parameters.

Neighbor Discovery Protocol (NDP): The NDP is responsible for various functions, including address resolution, duplicate address detection, and router discovery. It plays a crucial role in stateless autoconfiguration by facilitating the assignment of IP addresses and other network settings.
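
As a small worked example of the address assignment described above, the Python sketch below combines an advertised /64 prefix with an EUI-64 interface identifier derived from a MAC address. The prefix and MAC are illustrative values, and modern hosts often prefer randomized or privacy identifiers over EUI-64.

    # Derive a SLAAC-style address: advertised /64 prefix + EUI-64 interface ID.
    import ipaddress

    def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                                  # flip the universal/local bit
        eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert FF:FE in the middle
        iid = int.from_bytes(bytes(eui64), "big")
        net = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    print(eui64_address("2001:db8:1::/64", "52:54:00:12:34:56"))
    # -> 2001:db8:1::5054:ff:fe12:3456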

Network Connectivity with Google Cloud

Understanding VPC Networking

VPC Networking serves as the backbone of a cloud infrastructure, allowing users to create and manage their own virtual network environments. It provides isolation, security, and flexibility, enabling seamless connectivity between various resources within the cloud environment. With Google Cloud’s VPC networking, organizations can have full control over their network settings, subnets, IP addresses, and routing.

Google Cloud’s VPC networking offers a robust set of features that empower users to customize and optimize their network infrastructure. Some notable features include:

1. Subnetting: Users can divide their network into subnets, enabling better organization and control over IP address allocation (a short sketch follows this list).

2. Firewall Rules: VPC networking allows users to define and enforce firewall rules, ensuring secure access to resources and protecting against unauthorized access.

3. Network Peering: This feature enables the connection of multiple VPC networks, allowing seamless communication between resources in different VPCs.

4. VPN Connectivity: Google Cloud’s VPC networking offers secure VPN connections, facilitating remote access and secure communication between on-premises networks and the cloud.
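
To make the subnetting feature concrete, the short sketch below uses Python's ipaddress module to carve an example VPC-style address range into smaller subnets. The CIDR blocks and tier names are illustrative and not tied to any particular Google Cloud configuration.

    # Carve an illustrative 10.10.0.0/16 range into /24 subnets for different tiers.
    import ipaddress

    vpc_range = ipaddress.ip_network("10.10.0.0/16")
    subnets = vpc_range.subnets(new_prefix=24)

    for name, subnet in zip(["web", "app", "db"], subnets):
        print(f"{name:>3}: {subnet}  (usable hosts: {subnet.num_addresses - 2})")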

Understanding VPC Peering

VPC peering enables communication between VPC networks in a single region or across different regions. It establishes a direct private connection between VPCs, eliminating the need for external gateways or VPN tunnels. This direct connection ensures low-latency and high-bandwidth communication, making it ideal for inter-VPC communication.

VPC peering can be leveraged in different scenarios to meet specific requirements. One common use case is multi-tier application architectures, where different tiers of an application are deployed in separate VPCs.

VPC peering allows these tiers to communicate securely while maintaining isolation. Another use case is disaster recovery, where VPC peering facilitates data replication and synchronization between primary and secondary VPCs. Furthermore, VPC peering supports hybrid cloud deployments by enabling connectivity between on-premises networks and Google Cloud VPCs.

Network Connectivity Center (NCC)

At the heart of Network Connectivity Center is the formidable Google Cloud. With its vast global infrastructure and advanced technologies, Google Cloud enables NCC to offer unparalleled connectivity solutions. By leveraging Google’s extensive network, businesses can connect their on-premises environments, branch offices, and cloud resources effortlessly. The integration with Google Cloud ensures not only high performance but also robust security and compliance, making it a trusted choice for organizations worldwide.

### Key Features of Network Connectivity Center

Network Connectivity Center is packed with features designed to optimize your networking experience. One of its standout features is the centralized management console, which provides a unified view of your entire network. This makes it easier to monitor connectivity, troubleshoot issues, and implement changes across your network infrastructure. Additionally, NCC supports hybrid and multi-cloud environments, allowing businesses to connect and manage resources across different cloud providers seamlessly.

### Benefits for Businesses

Adopting Network Connectivity Center can yield numerous benefits for businesses. Firstly, it reduces complexity by centralizing network operations, which can lead to significant cost savings. Secondly, it improves network reliability and performance, ensuring that your applications and services run smoothly without interruptions. Furthermore, the enhanced security features of NCC help protect sensitive data, giving businesses peace of mind in an era where cyber threats are ever-present.

### How to Get Started

Getting started with Google’s Network Connectivity Center is straightforward. Businesses can begin by exploring Google’s comprehensive documentation and tutorials, which provide step-by-step guidance on setting up and configuring NCC. Additionally, Google offers support services to assist businesses in migrating their existing networks and optimizing their infrastructure for the cloud. With the right resources and support, transitioning to a more efficient network management system becomes a seamless process.

Network Connectivity Center

Network Connectivity Center: Hub and Spoke Model

In the hub-and-spoke model, the hub signifies a central or lead organization that serves as the coordinating entity. The spokes, on the other hand, represent partner organizations that are directly linked to the hub. Each spoke interacts directly with the hub but not necessarily with each other.

**Improving Network Connectivity**

**Network Monitoring**

Network monitoring is the practice of observing and analyzing the performance and availability of computer networks. It involves tracking various network components, such as routers, switches, servers, and applications, to identify potential issues. Network monitoring allows for proactive problem identification, enabling IT teams to detect and resolve issues before they escalate. This prevents costly downtime and minimizes the impact on business operations.

**Understanding Network Scanning**

Network scanning is the systematic process of identifying, mapping, and analyzing network devices, services, and vulnerabilities. By conducting network scans, organizations gain valuable insights into their network infrastructure, identifying potential weak points and areas for improvement.

Several techniques are employed for network scanning, each with strengths and applications. Some common methods include port scanning, vulnerability scanning, and network mapping. Port scanning involves probing network ports to determine which ones are open and potentially vulnerable. Vulnerability scanning focuses on detecting and assessing vulnerabilities within the network infrastructure. Network mapping aims to create a comprehensive map of devices and their connections within the network.
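
As a simple illustration of port scanning, the sketch below performs a basic TCP connect check against a handful of common ports on a single host using Python's socket module. The target address and port list are placeholders; only scan hosts you are authorized to test.

    # Minimal TCP connect scan against an illustrative target address.
    import socket

    target = "192.168.0.10"            # placeholder: a host you are authorized to scan
    ports = [22, 53, 80, 443, 3389]

    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1)
            state = "open" if s.connect_ex((target, port)) == 0 else "closed/filtered"
            print(f"Port {port}: {state}")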

**Identifying and Mapping Networks**

To troubleshoot the network effectively, you can use a range of tools. Some are built into the operating system, while others must be downloaded and run. Depending on your experience, you may choose a top-down or a bottom-up approach.

Wireshark, formerly known as Ethereal, is a free and open-source packet analyzer that allows you to capture, analyze, and interpret network traffic. It supports various operating systems and provides an intuitive graphical user interface (GUI) for ease of use. With Wireshark, you can scrutinize network protocols, identify anomalies, troubleshoot network issues, and enhance security.

Tcpdump is a command-line packet analyzer tool for capturing and analyzing network traffic. It provides information about packets traversing your network, including source and destination addresses, protocols used, and packet payloads. With tcpdump, you can gain valuable insights into network behavior, diagnose network issues, and uncover potential security threats.

Container Network Connectivity

The Basics of Docker Network Connectivity

Before diving into advanced networking concepts, let’s start with the basics. Docker provides a default bridge network that allows containers on the same host to communicate. This bridge network assigns IP addresses to containers and provides a simple way for them to interact. However, other networking options, such as host and overlay networks, are available.

Docker Default Networking

Docker default networking, also known as the bridge network, is the default networking mode when you create a new Docker container. This mode enables containers to communicate with each other using IP addresses within the same bridge network. By default, Docker creates a “bridge” network on the host machine.

One critical advantage of Docker bridge networking is its simplicity and ease of use. On a user-defined bridge network, containers can reach each other using their container names as hostnames because Docker provides automatic DNS resolution for container names; on the default bridge, containers communicate using their assigned IP addresses. Either way, establishing connections between containers on the same bridge is straightforward.

Exploring Docker Default Networking Configuration

Understanding Docker’s configuration options is essential to grasp its default networking fully. Docker allows you to customize the bridge network by modifying bridge options, such as IP address ranges, subnet masks, and gateway settings. This flexibility enables you to tailor the networking environment to suit your requirements.

While Docker default networking offers many benefits, knowing potential challenges and limitations is essential. One limitation is that containers within different bridge networks cannot communicate directly with each other. To establish communication between containers in separate bridge networks, you may need to configure additional networking solutions, such as Docker overlay networks or custom network bridges.
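
As a hedged sketch of that idea, the following Python snippet uses the Docker SDK for Python (the docker package) to create a user-defined bridge network with an explicit subnet and gateway. The network name and addresses are illustrative, and the same result is commonly achieved with the docker network create command.

    # Create a user-defined bridge network with a custom subnet and gateway.
    import docker

    client = docker.from_env()

    ipam = docker.types.IPAMConfig(
        pool_configs=[docker.types.IPAMPool(subnet="172.28.0.0/16",
                                            gateway="172.28.0.1")])

    net = client.networks.create("demo_bridge",   # illustrative network name
                                 driver="bridge",
                                 ipam=ipam)
    print(net.name, net.id)

Containers attached to this user-defined bridge can then resolve each other by name, which is one common way around the default-bridge limitation described above.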

Docker Orchestration: What is Docker Swarm?

Docker Swarm is a native clustering and orchestration solution provided by Docker. It allows you to create and manage a swarm of Docker nodes, turning them into a single virtual Docker host. This means you can deploy and scale your applications across multiple machines, distributing the workload efficiently.

To effectively use Docker Swarm, it’s essential to grasp its key concepts and architecture. At the heart of Docker Swarm lies the swarm manager, which acts as the control plane for the entire swarm. It handles tasks such as service discovery, load balancing, and scheduling. Worker nodes, on the other hand, are responsible for running the actual containers.

Deploying Services with Docker Swarm

One of Docker Swarm’s main benefits is its seamless deployment of services. With a simple command, you can define your services’ desired state, including the number of replicas, resource constraints, and network configurations. Docker Swarm distributes the tasks across the available nodes and ensures high availability.

Scalability and load balancing are crucial in a production environment. Docker Swarm makes it easy to scale your services horizontally by adding or removing replicas. It also provides built-in load-balancing mechanisms that distribute incoming traffic evenly across the containers, ensuring optimal performance.

Docker Swarm offers robust mechanisms for high availability and fault tolerance. Replicating services across multiple nodes ensures that even if a node fails, the containers will be rescheduled on other available nodes. This provides resilience and minimizes service downtime.
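
For illustration, the sketch below uses the Docker SDK for Python to create a replicated service, roughly equivalent to a docker service create --replicas 3 command. The image, service name, and published port are placeholders, and the code must run against a node that is already a swarm manager.

    # Create a replicated service on an existing swarm manager node.
    import docker

    client = docker.from_env()

    service = client.services.create(
        "nginx:latest",                                          # illustrative image
        name="web",                                              # illustrative service name
        mode=docker.types.ServiceMode("replicated", replicas=3),
        endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),
    )
    print(service.name, service.id)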

Network Connectivity for Virtual Switching

Understanding Open vSwitch

Open vSwitch, often called OVS, is an open-source software switch designed to enable network virtualization. It operates at the data link layer of the networking stack, providing a flexible and scalable solution for virtualized environments. Its robust features make Open vSwitch a go-to option for implementing virtual switches in small-scale and large-scale deployments.

Open vSwitch offers a wide range of features that contribute to its versatility. From standard switch functionality to advanced capabilities like support for tunneling protocols, Open vSwitch has it all. It provides flexible port configurations and VLAN support, and it even implements the OpenFlow protocol for enhanced control and management. With its modular architecture and support for multiple virtualization platforms, Open vSwitch is ideal for network administrators seeking a reliable and scalable solution.

Use Cases of Open vSwitch:

Open vSwitch finds applications in various networking scenarios. It can be used in virtualized data centers to create and manage virtual networks, enabling efficient resource utilization and dynamic network provisioning. Open vSwitch is also commonly utilized in software-defined networking (SDN) environments, which bridges physical and virtual networks. Additionally, Open vSwitch can be integrated with orchestration frameworks like OpenStack to provide seamless network connectivity for virtual machines.

Testing Network Connectivity  

What is ICMP?

ICMP, or Internet Control Message Protocol, is integral to the TCP/IP suite. It operates at the network layer and facilitates communication between network devices. Network devices generate and send ICMP messages to convey information about network errors, troubleshooting, and other important notifications.

ICMP serves various functions contributing to a network’s proper functioning and maintenance. Some of its essential functions include:

1. Error Reporting: ICMP allows network devices to report errors encountered while transmitting IP packets. These errors can range from unreachable hosts to time exceeded during packet fragmentation.

2. Network Troubleshooting: ICMP provides essential tools for network troubleshooting, such as the famous “ping” command. By sending ICMP Echo Request messages, devices can check a destination host’s reachability and round-trip time (a short scripted example follows this list).

3. Path MTU Discovery: ICMP assists in determining the Maximum Transmission Unit (MTU) of a path between two hosts. This allows for efficient packet transmission without fragmentation.
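
The ping behavior described in this list can easily be scripted; the sketch below simply wraps the system ping command (Linux syntax) from Python to check the reachability of a couple of illustrative addresses.

    # Check reachability of a few illustrative hosts with the system ping command.
    import subprocess

    hosts = ["192.168.0.1", "192.168.0.2"]    # placeholder addresses

    for host in hosts:
        result = subprocess.run(["ping", "-c", "3", host], capture_output=True, text=True)
        status = "reachable" if result.returncode == 0 else "unreachable"
        print(f"{host}: {status}")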

Understanding IP SLAs ICMP Echo Operations

IP SLAs, short for Internet Protocol Service Level Agreements, is a Cisco feature that enables network administrators to measure network performance, verify service guarantees, and proactively monitor network devices. ICMP Echo Operations, or ping, is a widely used and crucial component within IP SLAs. ICMP Echo Operations sends Internet Control Message Protocol (ICMP) echo requests to measure network connectivity and response times.

IP SLAs ICMP Echo Operations offer several benefits for network administrators. First, they allow them to proactively monitor network connectivity and detect potential issues before they impact end-users. Administrators can ensure that network devices are reachable and respond within acceptable time frames by periodically sending ICMP echo requests.

Additionally, IP SLAs ICMP Echo Operations provide valuable data for troubleshooting network performance problems, allowing administrators to pinpoint bottlenecks and latency issues.

Restricting Network Connectivity

IPv6 Standard ACL 

Standard access lists are one of the two types of access lists in Cisco IOS, the other being extended access lists. Standard access lists evaluate only packets’ source IP addresses, unlike extended access lists. They are commonly used for basic filtering and can be powerful tools in network security.

Standard access lists find various applications in network configurations. One everyday use case is restricting access to network resources based on the source IP address. For example, an administrator can create a standard access list to allow or deny specific IP addresses from accessing a particular server or network segment. Standard access lists can also filter traffic for network management purposes, such as limiting Telnet or SSH access to specific hosts.
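
To show the first-match logic that access lists apply, here is a small Python sketch that evaluates a packet's source address against an ordered list of permit/deny rules. The rule set and addresses are illustrative and are not actual device configuration.

    # Simulate first-match evaluation of a source-IP access list.
    import ipaddress

    # Ordered rules: (action, network). An implicit "deny any" sits at the end.
    rules = [
        ("permit", ipaddress.ip_network("192.168.10.0/24")),
        ("deny",   ipaddress.ip_network("192.168.0.0/16")),
        ("permit", ipaddress.ip_network("0.0.0.0/0")),
    ]

    def evaluate(source_ip: str) -> str:
        addr = ipaddress.ip_address(source_ip)
        for action, network in rules:
            if addr in network:
                return action        # the first matching rule wins
        return "deny"                # implicit deny at the end of every ACL

    print(evaluate("192.168.10.5"))  # permit
    print(evaluate("192.168.20.5"))  # deny
    print(evaluate("8.8.8.8"))       # permit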

Example: IPv6 Access Lists

Understanding IPv6 Access-lists

IPv6 access lists serve as a set of rules or filters that control traffic flow in a network. They allow administrators to permit or deny specific types of traffic based on various criteria, such as source and destination IP addresses, ports, and protocols. Unlike their IPv4 counterparts, IPv6 access lists are designed to handle the unique characteristics of IPv6 addresses, ensuring compatibility and efficient network management.

Implementing IPv6 access lists brings many benefits to network security. First, they enable administrators to control network traffic precisely, allowing only authorized connections and blocking potential threats. Second, IPv6 access lists facilitate network segmentation, allowing administrators to create separate security zones and enforce stricter policies. Additionally, these access lists provide the flexibility to prioritize specific types of traffic, ensuring optimal network performance.

Network Connectivity & Enterprise Routing Protocols

**IPv4 and IPv6 Routing Protocols**

Routing protocols for IPv4 networks play a crucial role in determining the path data packets take as they traverse the internet. One of the most widely used protocols is the Routing Information Protocol (RIP), which uses hop count as a metric to determine the best route. Another popular protocol is the Open Shortest Path First (OSPF), known for its scalability and fast convergence. Additionally, the Border Gateway Protocol (BGP) is essential for routing between autonomous systems, making it a vital component of the global internet.

With the depletion of IPv4 addresses, IPv6 has emerged as the next generation of IP addressing. IPv6 routing protocols have been developed to accommodate the larger address space and improve the features of this new protocol. One of the prominent routing protocols for IPv6 is OSPFv3, an extension of OSPF for IPv4. It allows for the routing of IPv6 packets and provides efficient intra-domain routing in IPv6 networks. Another protocol, the Enhanced Interior Gateway Routing Protocol for IPv6 (EIGRPv6), offers advanced capabilities such as route summarization and load balancing.

**Example: OSPFv3 IPv6 Routing**

IPv6 OSPFv3, which stands for Open Shortest Path First version 3, is a routing protocol designed explicitly for IPv6 networks. It is an enhanced version of OSPFv2, the routing protocol used for IPv4 networks. OSPFv3 operates at the network layer and determines the most efficient paths for data packets to travel in an IPv6 network.

One of the significant benefits of IPv6 OSPFv3 is its scalability. As organizations expand their networks and connect more devices, scalability becomes critical. IPv6 OSPFv3 is designed to handle large networks efficiently, ensuring optimal routing performance even in complex environments.

Another advantage of IPv6 OSPFv3 is its support for multiple IPv6 address families. It can handle IPv6 addresses and provide routing capabilities for various network services. This flexibility allows for seamless integration of different network protocols and services within an IPv6 infrastructure.

**Network Address Translation (NAT)**

NAT, or Network Address Translation, is a technique for modifying network address information in IP packet headers while in transit across a routing device. Its primary purpose is to enable sharing of limited IP addresses among multiple devices within a private network. NAT acts as a mediator, providing a layer of security and managing the distribution of public IP addresses.

NAT is a technique employed by routers to allow multiple devices within a private network to share a single public IPv4 address. Through NAT, private IP addresses are translated into a single public IP address, enabling internet connectivity for all devices within the network.

There are various types of NAT, including Static NAT, Dynamic NAT, and Port Address Translation (PAT). Each type serves a specific purpose, whether it’s mapping a single private IP to a single public IP or allowing multiple private IPs to share a single public IP.
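
To illustrate how Port Address Translation shares one public address, the sketch below keeps a toy translation table that maps each private (IP, port) pair to a unique source port on a single public address. Addresses and ports are illustrative, and real NAT devices track far more state than this.

    # Toy PAT table: map private (ip, port) pairs to ports on one public address.
    import itertools

    PUBLIC_IP = "203.0.113.10"              # illustrative public address
    next_port = itertools.count(20000)      # next available public source port
    translations = {}                       # (private_ip, private_port) -> public_port

    def translate(private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in translations:
            translations[key] = next(next_port)
        return (PUBLIC_IP, translations[key])

    print(translate("192.168.1.10", 51000))   # ('203.0.113.10', 20000)
    print(translate("192.168.1.11", 51000))   # ('203.0.113.10', 20001)
    print(translate("192.168.1.10", 51000))   # reuses the existing mapping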

Scarcity of IPv4 Addresses: As mentioned earlier, the limited pool of available IPv4 addresses has led to the adoption of NAT to conserve address space. However, NAT introduces its own challenges, including limitations in peer-to-peer applications, complex configurations, and potential performance bottlenecks.

**IPv6 and NAT**

Contrary to popular belief, NAT has a role in the IPv6 ecosystem. Although IPv6 provides a vast address space, the widespread use of NAT in IPv4 has led to a reliance on its functionalities. NAT in IPv6 operates differently, focusing more on protocol translation and maintaining compatibility with IPv4 networks.

Despite the abundant address space in IPv6, NAT still offers several benefits. First, it enhances network security by acting as a barrier between public and private networks, making it more difficult for malicious entities to exploit vulnerabilities. Additionally, NAT can aid network troubleshooting, allowing administrators to monitor and control traffic flow more effectively.

While NAT brings advantages, it also presents specific challenges and considerations. One notable concern is the potential impact on end-to-end communication and the introduction of additional processing overhead. Furthermore, the complex configurations and compatibility issues between IPv6 and NAT devices can pose challenges during implementation.

**IPv6 Transition Mechanisms**

IPv6 transition mechanisms enable coexistence and smooth migration from IPv4 to IPv6.

Dual Stack: One prominent transition mechanism is dual stack. With dual stack, devices are configured to simultaneously support IPv4 and IPv6 protocols. This allows for a seamless transition period during which both address families can coexist. Dual stack provides compatibility and flexibility, ensuring smooth communication between IPv4 and IPv6 networks.

Tunneling: Tunneling is another critical mechanism used during the IPv6 transition. It encapsulates IPv6 packets within IPv4 packets, allowing them to traverse IPv4-only networks. Tunneling provides a way to connect IPv6 islands over an IPv4 infrastructure. Different tunneling techniques, such as 6to4, Teredo, and ISATAP, offer varying approaches to encapsulating and transmitting IPv6 traffic over IPv4 networks.

Translation: IPv6 translation mechanisms bridge the gap between IPv4 and IPv6 networks by facilitating communication between devices using different protocols. NAT64 (Network Address Translation from IPv6 to IPv4) and IVI (a stateless IPv6/IPv4 translation scheme) enable data exchange between IPv6 and IPv4 networks. These translation techniques are crucial in ensuring interoperability during the transition phase.

Address Resolution: Address resolution mechanisms aid in seamlessly integrating IPv6 and IPv4 networks. On the IPv4 side, the Address Resolution Protocol (ARP) resolves IPv4 addresses to Ethernet MAC addresses, while the Neighbor Discovery Protocol (NDP) performs the equivalent role for IPv6. These protocols enable devices to discover and communicate with each other on both IPv4 and IPv6 networks.

Example Technology: NPTv6 (Network Prefix Translation for IPv6)

Understanding NPTv6

NPTv6, also known as Network Prefix Translation for IPv6, is a stateless mechanism (defined in RFC 6296) that translates one IPv6 prefix into another as traffic crosses a network boundary. Rather than translating between IPv6 and IPv4, it lets an organization use a stable internal IPv6 prefix while presenting a provider-assigned prefix externally, easing the adoption of IPv6 alongside existing addressing plans.

One key advantage of NPTv6 is that it is stateless. Because the translation is a simple, algorithmic prefix rewrite, there is no per-flow state to maintain, which keeps operational overhead low compared with stateful NAT and simplifies redundant deployments.

Moreover, NPTv6 improves scalability and address management. Organizations can number their internal networks from a prefix they control and translate it to whatever prefix their provider assigns, avoiding disruptive address renumbering when that prefix changes. This saves time and effort and makes more efficient use of the available address space.

Example Technology: NAT64

Understanding NAT64

NAT64 bridges IPv6 and IPv4 networks, facilitating communication by translating IPv6 packets into IPv4 packets and vice versa. This allows devices using IPv6 to communicate with devices using IPv4, overcoming the incompatibility barrier. By utilizing NAT64, network operators can seamlessly transition from IPv4 to IPv6 without the need for dual-stack deployment.
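
One concrete piece of NAT64 is address synthesis: an IPv4 address is embedded into an IPv6 address, typically under the well-known 64:ff9b::/96 prefix from RFC 6052. The short sketch below shows that mapping in Python; the IPv4 address is an illustrative documentation address.

    # Embed an IPv4 address into the NAT64 well-known prefix 64:ff9b::/96.
    import ipaddress

    prefix = ipaddress.ip_network("64:ff9b::/96")
    ipv4 = ipaddress.ip_address("192.0.2.1")      # illustrative IPv4 address

    nat64_addr = ipaddress.ip_address(int(prefix.network_address) | int(ipv4))
    print(nat64_addr)    # 64:ff9b::c000:201, i.e. 64:ff9b::192.0.2.1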

One of NAT64’s key advantages is extending connectivity and reachability. With IPv4 exhaustion becoming a reality, NAT64 provides a solution to connect IPv6-only networks with the vast IPv4 Internet. This enables organizations to adopt IPv6 without losing connectivity to legacy IPv4 infrastructure. Additionally, NAT64 simplifies network management by reducing the complexity of maintaining dual-stack networks.

While NAT64 offers numerous benefits, being aware of its potential challenges is essential. One significant consideration is the impact on end-to-end communication. NAT64 introduces an additional layer of translation, which can affect specific applications and protocols that rely on direct IP communication. Network administrators must carefully evaluate the compatibility of their network infrastructure and applications before implementing NAT64.

Organizations have several strategies for successfully deploying NAT64. One approach is stateless NAT64, where the translation is performed on the fly without storing session-specific information. Another strategy is stateful NAT64, which maintains a translation state to enable advanced features such as inbound connection initiation. Each deployment strategy has its benefits and considerations depending on the network’s specific requirements.

Example: IPv6 over IPv4 GRE

Understanding IPv6 over IPv4 GRE

IPv6 over IPv4 GRE is a tunneling mechanism that encapsulates IPv6 packets within IPv4 packets. This enables communication between IPv6 networks over an IPv4 infrastructure. The Generic Routing Encapsulation (GRE) protocol provides the framework for encapsulating and decapsulating the packets, allowing them to traverse across different network domains.

In addition, IPsec provides a secure framework for the encapsulated packets transmitted over the GRE tunnel. It offers authentication, integrity, and confidentiality services, ensuring the data remains protected from unauthorized access or tampering during transmission.

To establish an IPv6 over IPv4 GRE tunnel with IPSec, both ends of the tunnel must be properly configured. This includes configuring the tunnel interfaces, enabling GRE and IPSec protocols, defining the tunnel endpoints, and specifying the IPSec encryption and authentication algorithms.

Example: IPv6 Automatic 6to4 Tunneling

Understanding IPv6 Automatic 6to4 Tunneling

IPv6 automatic 6to4 tunneling is a technique that allows IPv6 packets to be transmitted over an IPv4 network. It enables communication between IPv6-enabled hosts over an IPv4 infrastructure. This method utilizes IPv6 addresses that are automatically assigned to facilitate tunneling.

Automatic 6to4 tunneling utilizes a unique addressing scheme that enables communication between IPv6 hosts over an IPv4 network. It relies on encapsulating IPv6 packets within IPv4 packets, allowing them to traverse IPv4-only networks. This process involves using 6to4 relay routers that facilitate traffic exchange between IPv6 and IPv4 networks.
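
The 6to4 addressing scheme embeds the site's public IPv4 address into the 2002::/16 prefix, giving each site a /48 of IPv6 space. The sketch below derives that prefix from an illustrative IPv4 address.

    # Derive the 6to4 prefix 2002:V4ADDR::/48 from a public IPv4 address.
    import ipaddress

    ipv4 = ipaddress.ip_address("192.0.2.1")            # illustrative public IPv4 address
    prefix_int = (0x2002 << 112) | (int(ipv4) << 80)    # 2002: followed by the 32-bit IPv4 address
    prefix = ipaddress.ip_network((prefix_int, 48))

    print(prefix)    # 2002:c000:201::/48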

One key advantage of IPv6 automatic 6to4 tunneling is its ability to enable IPv6 connectivity without requiring extensive upgrades to existing IPv4 infrastructure. It provides a cost-effective and efficient solution for organizations transitioning to IPv6. Additionally, automatic 6to4 tunneling allows for the coexistence of IPv4 and IPv6 networks, ensuring seamless communication between both protocols.

Implementing IPv6 automatic 6to4 tunneling requires configuring 6to4 relay routers and ensuring proper routing between IPv4 and IPv6 networks. Network security must be considered, and the 6to4 relay routers must be protected from potential threats. Organizations must also monitor and manage their IPv6 addressing scheme to utilize available resources efficiently.

Related: Useful links to pre-information

  1. Network Security Components
  2. IP Forwarding
  3. Cisco Secure Firewall
  4. Distributed Firewalls
  5. Virtual Firewalls
  6. IPv6 Attacks
  7. Layer 3 Data Center
  8. SD WAN SASE

Scanning Network Connectivity

Network Scanning

PowerShell and TNC

There are multiple ways to scan a network to determine live hosts and open ports. PowerShell supports variables and can perform advanced scripting. Below, I am using TNC to monitor my own Ubuntu VM and the WAN gateway.

Note:

The tnc command is shorthand for Test-NetConnection. It displays a summary of the request; if the probe times out, the PingSucceeded value will be False. That result can indicate that ICMP is being filtered or that the target machine is powered off. The exact statuses can vary between operating systems even when the underlying result is the same.

You can scan for the presence of multiple systems on the network with the following command: 1..2 | % {tnc 192.168.0.$_}

Analysis:

    • This command attempts to scan two IP addresses, 192.168.0.1 and 192.168.0.2. The range 1..2 can be extended, for example to 1..200, although a larger range takes longer to complete.
    • RDP is a prevalent protocol for administrative purposes on machines within a corporate network. Testing port 3389 displays a summary of the request; if TcpTestSucceeded equals True, the system is up and a service is listening on port 3389, which is typically used for administration and remote desktop access. A value of False indicates the port is closed or filtered.

In the following example, we use PowerShell to create a variable called $ports by typing $ports = 22,53,80,445,3389 and pressing the Return key. This variable stores several standard ports found on the target system.

Then scan the machine using the new variable $ports with the command $ports | ForEach-Object {$port = $_; if ((tnc 192.168.0.2 -Port $port).TcpTestSucceeded) {"$port is open"} else {"$port is closed"} }.

Analysis:

    • This code scans the IP address 192.168.0.2 and tests each port number stored in the previously created $ports variable. Each open port is reported with an “is open” message.
    • Ports that do not respond are reported as closed. According to the output, several ports should be open on the machine.

Connectivity with MAC address 

Data link layer and MAC addresses

The following lab guide explores Media Access Control (MAC) addresses. The MAC address works at the data link layer of the OSI model. It may also be called the physical address since it is the identifier assigned to a Network Interface Card (NIC).

While a NIC is typically a physical card or controller that you plug an Ethernet or fiber cable into, MAC addresses are also used as pseudo-physical addresses for logical interfaces. This example shows the MAC addresses seen on virtual machines and Docker containers.

Note:

We have a Docker container running a web service, with port 80 on the container mapped to port 8000 on the Docker host, which is an Ubuntu VM. Also, notice the assigned MAC addresses; we will change these shortly. I’m also running tcpdump to start a packet capture on docker0.

Docker networking

Analysis:

    • For this challenge, we will focus on the virtual network between your local endpoint and a web application running locally inside a docker container. The docker0 interface is your endpoint’s interface for communication with docker containers. The “veth…” interfaces are the virtual interfaces for web applications.
    • Even though the MAC address is supposed to be a statically assigned identifier for a specific NIC, it is straightforward to change. In the following screenshots, we changed the MAC address and bounced the docker0 interface.

Note:

Typically, attackers will spoof a MAC to mimic a desired type of device or use randomization software to mask their endpoint.

MAC addresses

Now that you have seen how MAC addresses work, we can look at the ARP process.

Note:

When endpoints communicate across networks, they use logical IP addresses to track where the requests come from and the intended destination. Once a packet arrives internal to an environment, networking devices must convert that IP address to the more specific “physical” location the packets are destined for. That “physical” location is the MAC address you analyzed in the last challenge. The Address Resolution Protocol (ARP) is the protocol that makes that translation.

Analysis:

Let’s take this analysis step by step. When you send the curl request, or any traffic, the first thing that must happen is determining the intended destination. We know the destination IP address, but we don’t know the corresponding Layer 2 MAC address. ARP is the process of finding it.

Where did the initial ARP request come from?

    • It looks like the first packet has a destination MAC of “ff:ff:ff:ff:ff:ff.”  Since your endpoint doesn’t know the destination MAC address, the first ARP packet is broadcast. Although this works, it is a bit of a security concern.
    • A broadcast packet will be sent to every host within the local network. Unfortunately, the ARP protocol was not developed with security in mind, so in most configurations, the first host to respond to the ARP request will be the “winner.” This makes it very simple for an attacker who controls a host within an environment to spoof their own MAC, respond faster, and effectively perform a Man-in-the-Middle (MITM) attack. Notice the “Request: who-has” entry above.
    • The requester’s IP and MAC addresses are carried inside the ARP packet’s own payload rather than in an IPv4 header, and the reply is sent to whatever sender information appears there. This is an important distinction, and it allows adversaries to use attacks such as ARP spoofing and MAC flooding, since the original requester doesn’t have to be the intended destination. Notice we have a “Reply” at the end of the ARP process.

Understanding ARP:

ARP bridges the OSI model’s Network Layer (Layer 3) and Data Link Layer (Layer 2). Its primary function is to map an IP address to a corresponding MAC address, allowing devices to exchange data efficiently.

Address Resolution Protocol

The ARP Process (a short Scapy sketch follows these steps):

1. ARP Request: When a device wants to communicate with another on the same network, it sends an ARP request broadcast packet. This packet contains the target device’s IP address and the requesting device’s MAC address.

2. ARP Reply: Upon receiving the ARP request, the device with the matching IP address sends an ARP reply containing its MAC address. This reply is unicast to the requesting device.

3. ARP Cache: Devices store the ARP mappings in an ARP cache to optimize future communications. This cache contains IP-to-MAC address mappings, eliminating the need for ARP requests for frequently accessed devices.

4. Gratuitous ARP: In specific scenarios, a device may send a Gratuitous ARP packet to announce its presence or update its ARP cache. This packet contains the device’s IP and MAC address, allowing other devices to update their ARP caches accordingly.
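
The request/reply exchange above can be reproduced with the Scapy library; the sketch below broadcasts ARP who-has requests for a placeholder local subnet and prints whoever answers. The subnet is an assumption for illustration, root privileges are required, and you should only run this on networks you manage.

    # Broadcast ARP who-has requests for a placeholder subnet and print the replies.
    from scapy.all import ARP, Ether, srp

    subnet = "192.168.0.0/24"      # placeholder: your local subnet
    request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)

    answered, _ = srp(request, timeout=2, verbose=False)
    for _, reply in answered:
        print(f"{reply.psrc} is at {reply.hwsrc}")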

**Host Enumeration**

Linux Host Enumeration

In a Linux environment, it is common practice to identify the host’s network details. A standalone, isolated machine is rare these days; most systems are interconnected with other systems in some way. Run the following command to display IP information, saving the output to a text file instead of simply displaying it on the screen.

Note:

1. Below, you can see that the output usually contains a lot of helpful network information. The screenshot shows the network device ens33, and its MAC address is also listed.

2. Hping3 is a command-line tool for crafting and sending customized network packets. It offers various options and functionalities, making it invaluable for network discovery, port scanning, and firewall testing tasks.

3. One of hping3’s critical strengths lies in its advanced features. From TCP/IP stack fingerprinting to traceroute mode, hping3 goes beyond basic packet crafting and provides robust network analysis and troubleshooting techniques.

Analysis:

    • The w command will show who, what, and where. In the above screenshot, a user is connecting from a remote location, which highlights how interconnected we are today; the connection could be anywhere in the world. Other helpful information here shows that the user has an open terminal bash and is running the w command.
    • Use the hping command to probe your machine using: sudo hping3 127.0.0.1 -c 57. The -c option sets the number of packets to send.
    • The sudo is needed as elevated privileges are required to run hping3. The IP address 127.0.0.1 is the loopback address, meaning this is your machine. We work in a secure lab environment and cannot ping systems online.
    • The screenshot will display errors if there are any connection issues on the network. Generally, ping helps identify interconnected systems on the network. Hping is a much more advanced tool with many features beyond this challenge. It can also perform advanced techniques such as firewall testing and port scanning, helping penetration testers look for weaknesses. It is a potent tool!

Network Connectivity & Network Security

So, we have just looked at generic connectivity. However, these networking and security devices will have two main functions. First, there is the network connectivity side of things. 

So, we will have network devices that need to forward your traffic so it can reach its destination. Traffic is delivered based on IP. Keep in mind that IP is not guaranteed. Enabling reliable network connectivity is handled further up the stack. The primary version of IP used on the Internet today is Internet Protocol Version 4 (IPv4).

Due to size constraints with the total number of possible addresses in IPv4, a newer protocol was developed. The latest protocol is called IPv6. It makes many more addresses available and is increasing in adoption.

**Network Security and TCPdump**

Secondly, we will need to have network security devices. These devices allow traffic to pass through their interfaces if they deem it safe, and policy permits the traffic to pass through that zone in the network. The threat landscape is dynamic, and bad actors have many tools to disguise their intentions. Therefore, we have many different types of network security devices to consider.

Tcpdump is a powerful command-line packet analyzer that allows users to capture and examine network traffic in real-time. It captures packets from a network interface and displays their content, offering a detailed glimpse into the intricacies of data transmission.

**Getting Started with TCPdump**

Understanding TCPdump’s primary usage and command syntax is crucial to its effective use. By employing a combination of command-line options, filters, and expressions, users can tailor their packet-capturing experience to suit their specific needs. We will explore various TCPdump commands and parameters, including filtering by source or destination IP, port numbers, or protocol types.

**Analyzing Captured Packets**

Once network packets are captured using TCPdump, the next step is to analyze them effectively. This section will explore techniques for examining packet headers and payload data and extracting relevant information. We will also explore how to interpret and decode different protocols, such as TCP, UDP, ICMP, and more, to understand network traffic behavior better.

**Capturing Traffic with TCPdump**

Note:

Remember that starting tcpdump requires elevated permissions and initiates a continuous traffic capture by default, resulting in an ongoing display of network packets scrolling across your screen. To save the output of tcpdump to a file, use the following command:

sudo tcpdump -vw test.pcap

Tip: Learn tcpdump arguments

  • sudo Run tcpdump with elevated permissions

  • -v Use verbose output

  • -w Write the output to the specified file

tcpdump

Analysis:

    • Running TCPdump is an invaluable tool for network analysis and troubleshooting. It lets you capture and view the live traffic flowing through your network interfaces. This real-time insight can be crucial for identifying issues, understanding network behavior, and detecting security threats.

Next, to capture traffic from a specific IP address, at the terminal prompt, enter:

sudo tcpdump ip host 192.168.18.131

Tip: Learn tcpdump arguments

  • ip the protocol to capture

  • host <ip address> limit the capture to a single host’s IP address

To capture a set number of packets, type the following command:

sudo tcpdump -c20

tcpdump

Analysis:

    • Filtering tcpdump on a specific IP address streamlines the analysis by focusing only on the traffic involving that address. This targeted approach can reveal patterns, potential security threats, or performance issues related to that host.
    • Limiting the packet count in a tcpdump capture, such as 20 packets, creates a more focused and manageable dataset for analysis. This can be particularly useful in isolating incidents or behaviors without being overwhelmed by continuous information.
    • Tcpdump finds practical applications in various scenarios. Whether troubleshooting network connectivity issues, detecting network intrusions, or performing forensic analysis, tcpdump is an indispensable tool.

Networking Scanning with Python

Python and NMAP

In this lab guide, I am scanning my local network, looking for targets and potential weaknesses. Knowing these weaknesses will help strengthen the overall security posture. I am scanning and attempting to gain access to services with Python.

Network scanning involves identifying and mapping the devices and resources within a network. It helps identify potential vulnerabilities, misconfigurations, and security loopholes. Python, a versatile scripting language, provides several modules and libraries for network scanning tasks.

Note:

  1. Python offers various libraries and modules that can be used with Nmap for network scanning. One such library is “python-nmap,” which provides a Pythonic way to interact with Nmap. By leveraging this library, we can easily automate scanning tasks, customize scan parameters, and retrieve results for further analysis.
  2. The code will import the Nmap library used to provide Nmap functionality. Then, the most basic default scan will be performed against the Target 1 virtual machine.

Steps:

  1. Using the nano editor, create a new text file called scannetwork.py by typing nano scannetwork.py. This is where the Python script will be made.
  2. With nano open, enter the following Python code to perform a basic default port scan using Nmap with Python. Add your IP address for the Target 1 virtual machine to the script.
    # Requires the python-nmap package and the Nmap binary to be installed.
    import nmap

    nm = nmap.PortScanner()
    print('Perform default port scan')
    nm.scan('add.ip.address.here')   # replace with the Target 1 IP address
    print(nm.scaninfo())


Analysis:

    • Scan results may vary. The output lists a series of numbers signifying the ports that were probed, and the scan completes quickly. With a full default scan like this, run without any tuning arguments, the results can occasionally contain errors or incomplete data.
    • Remember that you need to have Nmap installed first.
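
If you want the script to report results in a more readable way than the raw scaninfo() dictionary, the PortScanner object can be iterated, broadly as sketched below. The attribute names follow the python-nmap documentation, and the exact output depends on your target.

    # Walk the scan results stored in the PortScanner object from the script above.
    for host in nm.all_hosts():
        print(f"Host {host} is {nm[host].state()}")
        for proto in nm[host].all_protocols():
            for port in sorted(nm[host][proto].keys()):
                print(f"  {proto}/{port}: {nm[host][proto][port]['state']}")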

Summary: Network Connectivity

Network connectivity is crucial in our daily lives in today’s digital age. From smartphones to home devices, staying connected and communicating seamlessly is essential. In this blog post, we delved into the fascinating world of network connectivity, exploring its different types, the challenges it faces, and the future it holds.

Understanding Network Connectivity

Network connectivity refers to the ability of devices to connect and communicate with each other, either locally or over long distances. It forms the backbone of modern communication systems, enabling data transfer, internet access, and various other services. To comprehend network connectivity better, it is essential to explore its different types.

Wired Connectivity

As the name suggests, wired connectivity involves physical connections between devices using cables or wires. This traditional method provides a reliable and stable network connection. Ethernet, coaxial, and fiber optic cables are commonly used for wired connectivity. They offer high-speed data transfer and are often preferred when stability is crucial, such as in offices and data centers.

Wireless Connectivity

Wireless connectivity has revolutionized the way we connect and communicate. It eliminates physical cables and allows devices to connect over the airwaves. Wi-Fi, Bluetooth, and cellular networks are well-known examples of wireless connectivity. They offer convenience, mobility, and flexibility, enabling us to stay connected on the go. However, wireless networks can face challenges such as signal interference and limited range.

Challenges in Network Connectivity

While network connectivity has come a long way, it still faces particular challenges. One of the significant issues is network congestion, where increased data traffic leads to slower speeds and reduced performance. Security concerns also arise, with the need to protect data from unauthorized access and cyber threats. Additionally, the digital divide remains a challenge, with disparities in access to network connectivity across different regions and communities.

The Future of Network Connectivity

As technology continues to evolve, so does network connectivity. The future holds exciting prospects, such as the widespread adoption of 5G networks, which promise faster speeds and lower latency. The Internet of Things (IoT) will also play a significant role, with interconnected devices transforming various industries. Moreover, satellite communication and mesh network advancements aim to bring connectivity to remote areas, bridging the digital divide.

Conclusion

In conclusion, network connectivity is an integral part of our modern world. Whether wired or wireless, it enables us to stay connected, access information, and communicate effortlessly. While challenges persist, the future looks promising with advancements like 5G and IoT. As we embrace the ever-evolving world of network connectivity, we must strive for inclusivity, accessibility, and security to create a connected future for all.

Cisco Snort

Cisco Firewall with Cisco IPS

Cisco Firewall with IPS

In today's digital landscape, the need for robust network security has never been more critical. With the increasing prevalence of cyber threats, businesses must invest in reliable firewall solutions to safeguard their sensitive data and systems. One such solution that stands out is the Cisco Firewall. In this blog post, we will explore the key features, benefits, and best practices of Cisco Firewall to help you harness its full potential in protecting your network.

Cisco Firewall is an advanced network security device designed to monitor and control incoming and outgoing traffic based on predetermined security rules. It is a barrier between your internal network and external threats, preventing unauthorized access and potential attacks. With its stateful packet inspection capabilities, the Cisco Firewall analyzes traffic at the network, transport, and application layers, providing comprehensive protection against various threats.

Cisco Firewall with IPS functions offers a plethora of features designed to fortify network security. These include:

1. Signature-based detection: Cisco's extensive signature database enables the identification of known threats, allowing for proactive defense.
2. Anomaly-based detection: By monitoring network behavior, Cisco Firewall with IPS functions can detect anomalies and flag potential security breaches.
3. Real-time threat intelligence: Integration with Cisco's threat intelligence ecosystem provides up-to-date information and protection against emerging threats.

The combination of Cisco Firewall with IPS functions offers several enhanced security measures, such as:

1. Intrusion Prevention: Proactively identifies and blocks intrusion attempts, preventing potential network breaches.
2. Application Awareness: Deep packet inspection allows for granular control over application-level traffic, ensuring secure usage of critical applications.
3. Virtual Private Network (VPN) Protection: Cisco Firewall with IPS functions offers robust VPN capabilities, securing remote connections and data transmission.

Highlights: Cisco Firewall with IPS

**Protecting The Internet Edge**

The Internet edge is the point at which the organization’s network connects to the Internet. This is the boundary between the public Internet and the private resources within an organization’s network. Worms, viruses, and botnet intrusions threaten data security, performance, and availability.

Additional problems include employee productivity loss and data leakage due to an organization’s Internet connection. A company’s network infrastructure and data resources are at risk from internet-based attackers. Worms, viruses, and targeted attacks constantly attack Internet-connected networks.

Firewalling is a fundamental aspect of network security. It is a barrier between a trusted internal network and an untrusted external network, monitoring and controlling incoming and outgoing network traffic. By implementing firewalling features, organizations can protect their sensitive data and network resources from unauthorized access and potential threats.

Example Technology: Linux Firewalling

What is a UFW Firewall?

UFW, short for Uncomplicated Firewall, is a user-friendly front end for managing firewall rules on Linux-based systems. It is built upon the robust infrastructure of iptables but provides a simplified and intuitive interface, making it accessible even to users without an in-depth understanding of networking and firewall configurations. With UFW, you can easily define and manage rules to control incoming and outgoing network traffic, safeguarding your system from unauthorized access and potential threats.

**Numerous Attack Vectors**

We have Malware, social engineering, supply chain attacks, advanced persistent threats, denial of service, and various man-in-the-middle attacks. And nothing inside the network should be considered safe. So, we must look beyond Layer 3 and incorporate multiple security technologies into firewalling.

We have the standard firewall that can prevent some of these attacks, but we need to add additional capabilities to its baseline so we have a better chance of detection and prevention. Some of these layered technologies are provided by Cisco Snort, which powers the Cisco intrusion prevention system (Cisco IPS) included in the Cisco firewall solution that we will discuss in this post.

Cisco Firewall Types:

1. Cisco ASA Firewalls:

Cisco ASA (Adaptive Security Appliance) firewalls are among the most widely used firewalls in the industry. They provide advanced threat protection, application visibility and control, and integrated security services. With features such as stateful packet inspection, VPN support, and network address translation, Cisco ASA firewalls are suitable for small to large enterprises.

2. Cisco Firepower Threat Defense (FTD):

Cisco Firepower Threat Defense (FTD) is a unified software image that combines the functionality of Cisco ASA with advanced threat detection and prevention capabilities. FTD offers next-generation firewall features like intrusion prevention system (IPS), malware protection, and URL filtering. It provides enhanced visibility into network traffic and enables organizations to combat modern-day threats effectively.

3. Cisco Meraki MX Firewalls:

Cisco Meraki MX firewalls are cloud-managed security appliances designed for simplicity and ease of use. They offer robust security features, including stateful firewalling, content filtering, and advanced malware protection. Meraki MX firewalls are particularly suitable for distributed networks, remote sites, and small to medium-sized businesses.

4. Cisco IOS Zone-Based Firewall:

Cisco IOS Zone-Based Firewall is a software-based firewall solution integrated into Cisco routers. It provides secure network segmentation by grouping interfaces into security zones and applying firewall policies between zones. The IOS Zone-Based Firewall is ideal for branch offices and enterprise edge deployments with its flexible configuration options and support for various protocols.

**Zone-Based Firewall in Transparent Mode**

Understanding Zone-Based Firewalls

Zone-based firewalls are a form of network security that operates based on zones rather than individual IP addresses. This approach allows for simplified policy management and enhanced security. By classifying network segments into zones, administrators can define specific security policies for each zone, controlling traffic flow between them.

One key advantage of zone-based firewalls is their ability to provide granular control over network traffic. Administrators can define policies based on the specific requirements of different zones, allowing for customized security measures. Additionally, zone-based firewalls enable simplified troubleshooting and monitoring, as traffic can be inspected and logged at the zone level.

Zone-based firewalls can integrate seamlessly with existing network infrastructure to achieve transparent network security. By placing the firewall at the perimeter of each zone, traffic can be inspected and filtered without disrupting the network’s regular operation. This transparency ensures that network performance is not compromised while robust security measures are maintained.

Firewalling Features:

Several types of firewalling features serve different purposes. Let’s explore a few of them:

**Packet Filtering**

1. Packet Filtering: Packet filtering firewalls examine individual data packets and decide based on predefined rules. They analyze the packet’s header information, such as source and destination IP addresses, ports, and protocol type, to determine whether to allow or block the packet. There are two main types of packet-filtering firewalls: stateless and stateful. Stateless firewalls examine individual packets without considering their context, which can be more efficient but less secure.

On the other hand, stateful firewalls maintain information about the connection state, allowing for more advanced inspection and increased security. Each type has advantages and considerations, and the choice depends on the network’s specific needs and requirements.

Packet Filtering – IPv4 & IPv6 Standard Access Lists

Understanding Standard Access Lists

Standard access lists, commonly known as ACLs, are an essential tool in network security. They filter IP traffic based only on source IP addresses. Network administrators can control data flow into or out of a network by specifying which source IP addresses are allowed or denied.

Specific syntax and configuration steps must be followed to create a standard access list. Typically, this involves defining the access list number, specifying the permit or deny actions, and defining the source IP addresses to be filtered. Network administrators can implement these access lists on routers or switches to regulate traffic flow effectively.

Standard access lists have applications in various network scenarios. For example, they can restrict access to specific network resources based on source IP addresses. Additionally, they can be utilized for traffic filtering, allowing or denying certain types of traffic based on predefined criteria. Practical examples will demonstrate how standard access lists can enhance network security and optimize performance.

ACL Type: IPv6 Access Lists

What are IPv6 access lists?

IPv6 access lists are firewall mechanisms that filter IPv6 traffic based on defined rules. They permit or deny packets based on various criteria, such as source and destination IP addresses, protocol types, and port numbers. Network administrators can define granular traffic policies by implementing access lists and enhancing network security and performance.

IPv6 access lists follow a specific syntax and structure. They consist of sequential lines, each containing a permit or deny statement, followed by the criteria for matching packets. The requirements can include source and destination IPv6 addresses, port numbers, and protocol types. Additionally, access lists can be configured with specific logic, such as allowing or denying packets based on a particular sequence of rules.

**Stateful Inspection**

2. Stateful Inspection: Stateful inspection firewalls go beyond packet filtering by keeping track of the state of network connections. They maintain information about the context and status of each connection, allowing them to make more informed decisions and provide better protection against sophisticated attacks.

Stateful inspection firewalls operate at Layers 3 and 4 of the OSI model, examining packet headers and payload data to make intelligent decisions. They analyze packets based on protocols, source and destination IP addresses, port numbers, and the connection’s state. This comprehensive analysis empowers stateful inspection firewalls to differentiate between legitimate traffic and malicious attempts, providing robust protection.
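Here is a minimal sketch of the stateful idea, assuming a simple 5-tuple state table: outbound connections are recorded, and inbound packets are admitted only if they match a tracked connection. This is a conceptual model, not how any particular firewall implements its state table.

```python
# Minimal sketch of stateful inspection: the firewall records the 5-tuple of
# outbound connections and only admits inbound packets that match an existing
# connection state.

state_table = set()   # entries: (proto, src_ip, src_port, dst_ip, dst_port)

def outbound(proto, src_ip, src_port, dst_ip, dst_port):
    """Client-initiated traffic: record the connection and allow it."""
    state_table.add((proto, src_ip, src_port, dst_ip, dst_port))
    return "permit"

def inbound(proto, src_ip, src_port, dst_ip, dst_port):
    """Return traffic is allowed only if it matches a tracked connection."""
    reverse = (proto, dst_ip, dst_port, src_ip, src_port)
    return "permit" if reverse in state_table else "deny"

if __name__ == "__main__":
    outbound("tcp", "10.1.1.5", 51000, "203.0.113.7", 443)
    print(inbound("tcp", "203.0.113.7", 443, "10.1.1.5", 51000))   # permit
    print(inbound("tcp", "198.51.100.9", 443, "10.1.1.5", 51000))  # deny
```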

Example: Stateful Inspection 

Understanding CBAC Firewall

The CBAC firewall operates by examining the state of network connections and making access control decisions accordingly. It analyzes the traffic flow in real-time, ensuring that only legitimate packets are allowed through while blocking malicious attempts. By incorporating layer 4 and layer 7 inspection, the CBAC firewall provides enhanced security measures compared to packet-filtering firewalls.

1. Stateful Packet Inspection: The CBAC firewall maintains a state table that tracks the context of network connections, allowing it to differentiate between legitimate and illegitimate traffic.

2. Application Layer Gateway: By inspecting the application layer data, the CBAC firewall can identify and control specific protocols, preventing unauthorized access and ensuring data integrity.

3. Protocol Inspection: CBAC firewall can scrutinize the protocol headers, ensuring that they comply with predefined policies and preventing protocol-level attacks.


**Application Layer Filtering**

3. Application Layer Filtering: Application layer firewalls operate at the application layer of the network stack. They can inspect the content of network traffic, including application-specific data. This allows for more granular control and protection against application-level attacks. Application-level filtering, also known as deep packet inspection (DPI), is a sophisticated security mechanism that scrutinizes data packets beyond traditional network-layer parameters.

Analyzing the packets’ content at the application layer provides granular control over network traffic based on various parameters, such as application type, protocol, and user-defined rules. We will look at DPI in just a moment.
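The following minimal Python sketch illustrates the application-layer idea: besides the header fields, the payload itself is inspected, here against a blocked HTTP method and a couple of content signatures. The methods and signatures are invented for illustration and are not drawn from any real DPI engine.

```python
# Minimal sketch of application-layer filtering / DPI: in addition to the
# header 5-tuple, the payload itself is inspected (here, the HTTP request
# line and a simple signature list).

BLOCKED_METHODS = {"TRACE", "CONNECT"}
PAYLOAD_SIGNATURES = [b"union select", b"../../"]   # illustrative patterns

def inspect_http(payload: bytes) -> str:
    request_line = payload.split(b"\r\n", 1)[0]
    method = request_line.split(b" ", 1)[0].decode(errors="ignore").upper()
    if method in BLOCKED_METHODS:
        return "deny: method not allowed"
    lowered = payload.lower()
    for sig in PAYLOAD_SIGNATURES:
        if sig in lowered:
            return f"deny: matched signature {sig!r}"
    return "permit"

if __name__ == "__main__":
    print(inspect_http(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))
    print(inspect_http(b"GET /?q=1 union select passwd HTTP/1.1\r\n\r\n"))
```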

Network Monitoring and Traffic Analysis

Understanding IDPS

IDPS, an acronym for Intrusion Detection and Prevention Systems, refers to a broad range of security solutions designed to detect and mitigate potential threats within a network infrastructure. These systems analyze network traffic, monitor suspicious activities, and alert administrators in real time. Furthermore, IDPS can take proactive measures to prevent intrusions, such as blocking malicious traffic or executing predefined security policies.

An IDPS typically consists of several interconnected components that work together to perform its function effectively. These include:

1. Sensors: Sensors gather data from various network sources, such as network devices, servers, or individual endpoints. They continuously monitor network traffic and collect valuable information for analysis.

2. Analyzers: Analyzers are the brains behind the IDPS. They receive sensor data, analyze it using sophisticated algorithms, and determine whether suspicious or malicious activity is occurring. Analyzers utilize both signature-based and anomaly-based detection techniques to identify potential threats.

3. User Interface: The user interface provides a centralized platform for administrators to manage and configure the IDPS. It allows them to customize detection rules, view alerts, and generate reports for further analysis.

Example: Google Cloud DPI & Sensitive Data Protection

Example Technology: IPS/IDS

Understanding Suricata

Suricata is an open-source intrusion detection and prevention system designed to monitor network traffic and detect potential security threats. It combines the best features of signature-based and behavior-based detection, making it a versatile tool for network security professionals. Let’s delve into its core functionalities.

Suricata boasts various features that distinguish it from other IPS/IDS solutions. Its multi-threaded design efficiently utilizes system resources, ensuring optimal performance even under heavy network traffic. Suricata supports many protocols and can detect and prevent attacks, including malware infections, DDoS attacks, and suspicious network activities.
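To show what a signature looks like in practice, here is an illustrative Suricata-style rule together with a naive Python check of its content keyword against a payload. The rule was written for this post and is not from Suricata's shipped rule sets, and a real engine does far more than a simple substring match.

```python
# Illustrative Suricata-style signature (example only, not from any shipped
# rule set) and a naive check of its content keyword against a payload.
import re

RULE = ('alert tcp any any -> $HOME_NET 80 '
        '(msg:"EXAMPLE suspicious user agent"; content:"BadBot"; nocase; '
        'sid:1000001; rev:1;)')

def rule_options(rule: str) -> dict:
    """Pull the simple key:"value" options out of the rule body."""
    return dict(re.findall(r'(\w+):"([^"]*)"', rule))

def matches(rule: str, payload: bytes) -> bool:
    content = rule_options(rule)["content"].lower().encode()
    return content in payload.lower()      # 'nocase' -> case-insensitive

if __name__ == "__main__":
    print(rule_options(RULE)["msg"])
    print(matches(RULE, b"GET / HTTP/1.1\r\nUser-Agent: badbot/1.0\r\n\r\n"))
```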

Detection Techniques

IDPS employs a variety of detection techniques to identify potential intrusions:

1. Signature-Based Detection: This technique compares network traffic against a database of known attack signatures. When a match is found, the IDPS flags the traffic as potentially malicious and acts immediately, blocking the packet or generating an alert. Signature-based filters are very effective against well-known attacks but have limitations when facing new or evolving threats.

2. Anomaly-Based Detection: Anomaly-based detection identifies deviations from normal network behavior. Rather than relying on known attack patterns, the IDPS builds a baseline of regular activity and raises alerts when unusual patterns or behaviors are observed. Because it focuses on deviations from established norms, this approach can detect novel and sophisticated attacks that signature-based methods might miss.

To implement Anomaly-Based Detection, IDPS leverages techniques such as statistical analysis, machine learning, and behavior modeling. These methods enable the system to learn and adapt to evolving threats, making it an invaluable asset for cybersecurity professionals. By continuously monitoring network traffic and system behavior, Anomaly-Based Detection can identify unusual patterns and flag potential threats in real-time.
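Here is a minimal sketch of the baseline idea, assuming connections per minute as the monitored metric: values more than three standard deviations from the learned mean are flagged. The training data and threshold are purely illustrative.

```python
# Minimal sketch of anomaly-based detection: learn a baseline of a simple
# metric (connections per minute) and flag values that deviate by more than
# three standard deviations from the mean.
import statistics

baseline = [120, 118, 131, 125, 122, 119, 127, 124, 130, 121]  # training data

mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_anomalous(connections_per_minute: float, threshold: float = 3.0) -> bool:
    z = abs(connections_per_minute - mean) / stdev
    return z > threshold

if __name__ == "__main__":
    for observed in (126, 180, 950):
        flag = "ALERT" if is_anomalous(observed) else "ok"
        print(f"{observed:>4} conn/min -> {flag}")
```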

3. Heuristic Detection: Heuristic detection uses predefined rules and algorithms to identify potential threats. The rules are derived from known attack patterns, but because they describe behavior rather than exact signatures, they can also catch previously unseen attacks. By analyzing the behavior of network traffic, files, and system activities, heuristics can detect unknown and emerging threats, making them a vital component of modern IDPS solutions.

Heuristic detection operates on the principles of anomaly detection and behavior analysis. It establishes a baseline of normal behavior and then identifies deviations from this baseline. By leveraging machine learning algorithms and statistical models, heuristics can identify suspicious activities that may indicate the presence of malicious intent. These algorithms continually evolve and adapt to new threats, enhancing the effectiveness of IDPS solutions.

IDPS: Prevention Mechanisms

In addition to detection, IDPS systems can also take preventive measures to mitigate threats:

1. Intrusion Prevention: IDPS can be configured to actively block or prevent suspicious or malicious traffic from entering the network. This can include blocking specific IP addresses, applying access control policies, or terminating connections.

2. Incident Response: IDPS can trigger automated incident response actions when an intrusion is detected. These actions may include isolating affected systems, initiating forensic data collection, or notifying security personnel.

Related: Before you proceed, you may find the following posts helpful for pre-information:

  1. Cisco Secure Firewall
  2. WAN Design Considerations
  3. Routing Convergence
  4. Distributed Firewalls
  5. IDS IPS Azure
  6. Stateful Inspection Firewall
  7. Cisco Umbrella CASB

The Security Landscape

Firewalling & Attack Vectors

We are constantly under pressure to ensure mission-critical systems are thoroughly safe from bad actors who will try to penetrate the network and attack critical services with a range of attack vectors. So, we must create a reliable way to detect and prevent intruders. Adopting a threat-centric network security approach with the Cisco intrusion prevention system is a viable option. The Cisco IPS is an engine based on Cisco Snort and is an integral part of the Cisco Firewall, specifically the Cisco Secure Firewall.

Firewalls have been around for decades and come in various sizes and flavors. The most typical idea of a firewall is a dedicated system or appliance that sits in the network and segments an “internal” network from the “external” Internet.

The traditional Layer 3 firewall has baseline capabilities that generally revolve around the inside being good and the outside being bad. However, we must move from just meeting our internal requirements to meeting the dynamic threat landscape in which the bad actors are evolving. 

Firewall Security Zones

There are various firewall security zones, each serving a specific purpose and catering to different security requirements. Let’s explore some common types:

1. DMZ (Demilitarized Zone):

The DMZ is a neutral zone between the internal and untrusted external networks, usually the Internet. It acts as a buffer zone, hosting public-facing services such as web servers, email servers, or FTP servers. By placing these services in the DMZ, organizations can mitigate the risk of exposing their internal network to potential threats.

2. Internal Zone:

The internal zone is the trusted network segment where critical resources, such as workstations, servers, and databases, reside. This zone is typically protected with strict access controls and security measures to safeguard sensitive data and prevent unauthorized access.

3. External Zone:

The external zone represents the untrusted network, which is usually the Internet. It serves as the gateway through which traffic from the external network is filtered and monitored before reaching the internal network. By maintaining a secure boundary between the internal and external zones, organizations can defend against external threats and potential attacks.

Enhancing Network Security

The Importance of Network Scanning

Network scanning is crucial in identifying potential security risks and vulnerabilities within a network infrastructure. Administrators can gain valuable insights into potential weak points that malicious actors may exploit by actively probing and analyzing network devices. It is a proactive approach to fortifying network defenses.

Various techniques are employed for network scanning, each with strengths and purposes. Port scanning allows administrators to identify open ports and services on network devices. Vulnerability scanning focuses on identifying known vulnerabilities in software and firmware versions. Network mapping helps create a comprehensive map of the network infrastructure, aiding in visualization and understanding of the environment.

Identifying and Mapping Networks

Tcpdump is a widely used tool for capturing and analyzing network packets. It captures packets traversing a network interface, providing detailed information about each packet. With tcpdump, you can gain insights into the source and destination IP addresses, protocol types, packet size, and more.

To capture packets using tcpdump, you need to specify the network interface to monitor. Once tcpdump runs, it captures real-time packets and displays them on the terminal. You can apply filters to capture specific types of packets or focus on traffic from a particular source or destination. Tcpdump’s flexibility allows for complex filtering options, making it a powerful tool for network analysis.
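As a quick illustration, the sketch below drives tcpdump from Python with a BPF filter. It assumes tcpdump is installed, that the interface is named eth0, and that the script runs with sufficient privileges; adjust these for your environment.

```python
# Minimal sketch: capture ten HTTPS packets with tcpdump from Python.
# Assumes tcpdump is installed, the interface is eth0, and the script runs
# with sufficient privileges.
import subprocess

cmd = [
    "tcpdump",
    "-i", "eth0",          # interface to monitor
    "-nn",                 # do not resolve hostnames or port names
    "-c", "10",            # stop after 10 packets
    "tcp port 443",        # BPF filter: HTTPS traffic only
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
```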

Wireshark provides extensive protocol dissectors, allowing us to analyze many network protocols. From the ubiquitous HTTP and TCP/IP to more specialized protocols like SIP or DNS, Wireshark unravels the inner workings of each protocol, providing valuable insights into how data is exchanged and potential bottlenecks. We can identify anomalies, detect performance issues, and optimize network configurations by analyzing traffic patterns.

Cisco Security Technologies

A: Cisco Firewall

The Cisco Firewall is a next-generation firewall that adds several compelling threat detection and prevention technologies to the security professional’s toolbox. The Cisco Firewall solution is more than just Firewall Threat Defence (FTD); several components make up the security solution. Firstly, we have the Firewall Management Center (FMC), which provides the GUI and configures the policy and operational activities for the FTD. The solution also includes several services.

B: Cisco Secure Endpoint

We have two critical pieces around malware. First, the Cisco Secure Endpoint cloud is a database of known good and bad files that maintains a file hash for each entry. As files pass through the firewall, it can make a decision on known files. These hashes can be calculated at line rate, and the Cisco firewall can do quick lookups. This allows it to hold the last packet of a file and determine whether the file is good, bad, or unknown.

C: Cisco Secure Malware Analytics

So, we can build a policy simply by checking the hash. However, if a file has not been seen before, it can be extracted and submitted to Cisco Secure Malware Analytics, a sandbox technology. The potentially bad file is detonated in a VM-type environment, and a report with a score is sent back. This is a detection phase rather than prevention, as it can take around 15 minutes for the score to come back.

These results can then be fed back into the Cisco Secure Endpoint cloud. Now everyone, including other organizations that have signed up to the Cisco Secure Endpoint cloud, can block a file that was seen in just one place. No file data is shared; it’s just the hash. Also worth mentioning is Talos intelligence, Cisco’s threat research organization and its secret sauce, with over 250 highly skilled researchers. It provides intelligence such as Indicators of Compromise (IoCs), bad domains, and signatures looking for exploits, and it feeds all Cisco security products.

D: Cisco IPS

We need several network security technologies that can work together. First, we need a Cisco IPS that provides protocol-aware deep packet inspection and detection, which Cisco Snort can offer and which we will discuss soon. You also need lists of bad IPs, domains, and file hashes so you can tune your policy based on them. For example, for networks that are a source of spam, you want a different response than for networks known to host bad actors’ command-and-control (C&C) servers.

Example: Bad Domains with Google

Also, for URL filtering, we think about content filtering in the sense that users should not access specific sites from work. However, the URL is valuable from a security and threat perspective. Often, transport is only over HTTP, DNS is constantly changing, and bad actors rely only on a URL to connect to, for example, a C&C. So this is a threat intelligence area that can’t be overlooked.

We also need file hashing and engines running on the firewall that can identify malware without sending it to the cloud for checking. Finally, you also need real-time network awareness and indicators of compromise. The Cisco Firewall can watch all traffic; you tell it which networks it protects and who the top talkers are, so it can notice any abnormal behavior.

**Adopting Cisco Snort**

This is where Cisco Snort comes into play. Snort can carry out more or less all of the above with its pluggable architecture, more specifically Snort 3. Cisco now develops and maintains Snort, known as Cisco Snort. Snort is an open-source network intrusion prevention system. In its most straightforward terms, Snort monitors network traffic, examining each packet closely to detect a harmful payload or suspicious anomalies.

Required: Traffic Analysis & Packet Logging

Cisco Snort can perform real-time traffic analysis and packet logging as an open-source prevention system. So, the same engine runs in commercial products as in open-source development. The open-source core engine has over 5 million downloads and 500,000 registered users. Snort is a leader in its field.

Snort was released in 1998, long before the Cisco IPS team got their hands on it, and the program was originally meant to be a packet logger. You can still download the first version. It has come a long way since then, and Snort is now much more than a Cisco IPS.

In reality, Snort is a flexible, high-performance packet processing engine. The latest version, Snort 3, is pluggable, so you can add modules to adapt it to different security needs. The evolution from Snort 2 to Snort 3 took two years.

With the release of version 7.0, Cisco Secure Firewall Threat Defence introduced Snort 3 on FMC-managed devices. Now we can have Snort 3 filtering on the Cisco Firewall, along with rule groups and rule recommendations. Combined, these will help you use the Cisco firewall better and improve your security posture.

**Highlighting Snort 2**

So let’s start with Snort 2, even though Snort 3 has been out for a few years. Snort 2 has four primary, or essential, components:

  1. It starts with the decoder, which performs minor decoding once the packets are pulled off the wire. This is what you might see with tcpdump.
  2. Then we have the preprocessor, Snort 2’s secret sauce. It is responsible for normalization and reassembly. Its primary role is to present data to the next component, the detection engine.
  3. The detection engine is where the Snort rules live, and this is where the rules are processed against the observed traffic.
  4. The log module. If a rule matches the observed traffic, the log module enables you to create a unified alert.

**Snort Rule tree**

When Snort loads a rule set, it doesn’t run every packet through the rules from top to bottom; it breaks the rule set up into what are known as rule trees based on, for example, source port or destination port. So when a packet is evaluated, it only goes through a few rules. Cisco Snort, which provides the Cisco IPS for the Cisco Firewall, is efficient because it only runs packets through the rules that might apply to them.
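A minimal Python sketch of the rule-tree idea, assuming destination port as the bucketing key: rules are grouped once at load time, and a packet is only checked against the bucket that could apply to it. The rules themselves are invented for illustration.

```python
# Minimal sketch of the rule-tree idea: rules are bucketed by destination
# port so a packet is only evaluated against the few rules that could apply.
from collections import defaultdict

RULES = [
    {"sid": 1, "dst_port": 80,  "content": b"cmd.exe"},
    {"sid": 2, "dst_port": 80,  "content": b"/etc/passwd"},
    {"sid": 3, "dst_port": 25,  "content": b"EXPN root"},
    {"sid": 4, "dst_port": 443, "content": b"heartbeat"},
]

rule_tree = defaultdict(list)
for rule in RULES:
    rule_tree[rule["dst_port"]].append(rule)     # build once at load time

def evaluate(dst_port: int, payload: bytes):
    """Only the bucket for this destination port is inspected."""
    return [r["sid"] for r in rule_tree.get(dst_port, []) if r["content"] in payload]

if __name__ == "__main__":
    print(evaluate(80, b"GET /../../etc/passwd HTTP/1.1"))  # -> [2]
    print(evaluate(25, b"HELO mail"))                       # -> []
```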

Knowledge check for Packet Sniffing

Capturing network traffic is often a task during a penetration testing engagement or while participating in a bug bounty. One of the most popular packet capture tools (sniffers) is Wireshark. If you are familiar with Linux, you will know another lightweight but powerful packet-capturing tool called tcpdump. The packet sniffing process involves a cooperative effort between software and hardware and can be broken down into the three steps below (a short sketch follows the list):

1. Collection: The packet sniffer collects raw binary data from the wire. Generally, this is accomplished by switching the selected network interface into promiscuous mode. In this mode, the network card listens to all traffic on a network segment, not only the traffic addressed to it.

2. Conversion: The captured binary data is converted into readable form. This is as far as most command-line packet sniffers go; at this point, the network data can be interpreted only at a fundamental level, leaving most of the analysis to the end user.

3. Analysis: Finally, the packet sniffer analyzes the captured and converted data. Based on the information extracted, the sniffer verifies the protocol of the captured network data and begins analyzing that protocol’s distinguishing features.
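Here is a minimal, Linux-only sketch of those three steps using a raw AF_PACKET socket: frames are collected, the Ethernet header is converted into readable fields, and a trivial analysis counts frames per EtherType. It requires root privileges, and enabling true promiscuous mode would additionally need a PACKET_MR_PROMISC socket option, which is omitted here.

```python
# Minimal sketch of the three sniffing steps on Linux (requires root).
import socket
import struct
from collections import Counter

ETH_P_ALL = 0x0003
counts = Counter()

with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL)) as s:
    for _ in range(20):                                              # collection
        frame, _addr = s.recvfrom(65535)
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])    # conversion
        counts[hex(ethertype)] += 1                                  # analysis
        print(f"{src.hex(':')} -> {dst.hex(':')} type={hex(ethertype)}")

print(counts)
```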


**Highlighting Snort 3**

Then we have the new edition of the Cisco IPS. Snort 3.0 is an updated version with a new design and a superset of Snort 2’s functionality. Snort 3 includes additional functionality that improves efficacy, performance, scalability, usability, and extensibility. In addition, Snort 3 aimed to address some of the limitations of Snort 2.

For example, Snort 2 is packet-based: it inspects traffic one packet at a time. Statefulness, awareness of fragments, and the fact that an HTTP GET can spread over multiple packets (its boundaries are not packet boundaries) all had to be built on top.

Snort 3 Features:

HTTP Protocol Analyzer

Snort 3 has a good HTTP protocol analyzer that can detect HTTP running over any port. Many IPS products only look at ports 80, 8080, and 443; HTTP on any other port is simply treated as generic TCP. The Cisco IPS, based on Cisco Snort, can detect HTTP over any port. Once it knows the traffic is HTTP, Snort can set up pointers into the different parts of the message. So when you get to the IPS rules looking for patterns, you don’t need to do the lookup and calculation again, which is essential when you are running at line rate.

Snort is pluggable

Also, within the Cisco firewall, Cisco Snort is pluggable and does much more than protocol analysis. It can perform additional security functions, such as network discovery, a type of passive detection, along with advanced malware protection and application identification, not by ports and protocols but by deep packet inspection. Now you can have a policy based on the application. An identity engine can also map users to IP addresses, allowing identity-based firewalling. So, Cisco Snort does much of the heavy lifting for the Cisco Firewall.

Snort 2 architecture: The issues

Snort 3 has a modern architecture for handling all of the Snort 2 packet-based evasions. It also supports HTTP/2, whereas Snort 2 only supports HTTP/1. The process architecture is the most meaningful difference between Snort 2 and Snort 3. To go faster in Snort 2, you run more Snort processes on the box. Depending on the product, a connection arrives and is hashed based on a 5-tuple or a 6-tuple (I believe the 5-tuple is used in the open-source product and the 6-tuple in commercial products).

Connections on the same hash go to the same CPU. To improve Snort 2 performance, if you had a single CPU on a box, you add another Snort CPU and get double the performance with no overhead. Snort 2 works with multiple Snort processes, each affiliated with an individual CPU core, and within each Snort process, there is a separate thread for management and data handling.

But we are loading Snort over and over again. So we get linear scalability, which is good, but duplicated memory structures, which is bad. Every time we load Cisco Snort, we load the rules again, and each process runs in its own isolated world.

**Snort 3 architecture: Resolving the issues**

On the other hand, Snort 3 is multi-threaded, unlike Snort 2. This means we have one control thread and multiple packet threads. The packet arrives at the control thread, and we have the same connection hashing with 5-tuple or 6-tuple. Snort 3 only runs on one process, with each thread affiliated with individual CPU cores, backed by one control thread that handles data for all packet-processing threads. The connections are still pinned to the core, but they are packet threads, and each one of these packet threads is running on its CPU, but they share the control thread, and this shares the rules. 

The new Snort 3 architecture eliminates the need for a control thread per process and facilitates configuration/data sharing among all threads. As a result, less overhead is required to orchestrate the collaboration among packet-processing threads. We get better memory utilization and reloads are much faster.
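As an analogy for that threading model (not the Snort 3 implementation itself), the sketch below shares one rule table across several worker threads and pins each flow to a thread by hashing its tuple. The flows and rules are hypothetical.

```python
# Minimal sketch: one shared rule table read by several packet-processing
# threads, with connections pinned to a thread by hashing the flow tuple.
import queue
import threading

SHARED_RULES = {80: [b"/etc/passwd"], 25: [b"EXPN"]}   # loaded once, shared
NUM_THREADS = 4
queues = [queue.Queue() for _ in range(NUM_THREADS)]

def worker(q):
    while True:
        item = q.get()
        if item is None:                       # shutdown signal
            break
        flow, payload = item
        hits = [s for s in SHARED_RULES.get(flow[3], []) if s in payload]
        if hits:
            print(f"{threading.current_thread().name}: alert on {flow} {hits}")

threads = [threading.Thread(target=worker, args=(q,), name=f"pkt-{i}")
           for i, q in enumerate(queues)]
for t in threads:
    t.start()

flows = [("10.1.1.5", 51000, "203.0.113.7", 80, b"GET /../../etc/passwd"),
         ("10.1.1.6", 51001, "203.0.113.8", 25, b"EXPN root")]
for src, sport, dst, dport, payload in flows:
    idx = hash((src, sport, dst, dport)) % NUM_THREADS   # pin flow to a thread
    queues[idx].put(((src, sport, dst, dport), payload))

for q in queues:
    q.put(None)          # stop the workers
for t in threads:
    t.join()
```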

A) Snort 3 inspectors

Snort 3 now has inspectors where Snort 2 had preprocessors; for example, there is an HTTP inspector instead of an HTTP preprocessor. Packets are also processed differently in Snort 3 than in Snort 2. In Snort 2, the packet moves linearly through specific steps, starting with a preprocessing stage.

In Snort 2, every field of the packet is decoded up front in case a rule needs that data. If the traffic is HTTP, the GET, the body, and the header, for example, are all decoded. In the case of RPC, there are many fields in an RPC packet, so fields may be decoded that no rule ever needs, and that decoding time is wasted.

B) Parallel resource utilization

Snort 3, on the other hand, uses what is known as parallel resource utilization, with plugins and a publish-and-subscribe model in the packet inspection process. When a packet reaches a rule, the rule might say that it only needs the body and no other fields; in that case, only the body is decoded. This is referred to as just-in-time rather than just-in-case decoding: no time is wasted decoding fields in the packet that no rule needs.

C) Rule Group Security Levels

With Snort 2, regarding rule sets, you have only a few options. For example, you can pick a policy with no rules active, which is not recommended. There is also a connectivity-over-security rule set, a balanced security and connectivity rule set, and a security-over-connectivity rule set. With Snort 3, you get more than just these policy sets: we have rule groups that can be used to set security levels individually. This new Rule Groups feature makes it easier to adjust your policy.

With rule groups, we can assign security levels to each sub-group. You can adjust based on your usage, for example a more aggressive rule set for Chrome and a less aggressive one for Internet Explorer. The security level can be set on a per-group basis, whereas Snort 2 offers this only in the base policy.

  • Level 1 – Connectivity over Security 
  • Level 2 – Balanced Security and Connectivity 
  • Level 3 – Security over connectivity 
  • Level 4 – Maximum Detection

Now there is no need to set individual rule states; we have levels that equate to policy. With Snort 2, you would have to change the entire base policy, but with Snort 3, we can change the groups related to the rule set. What I like about this is the trade-off: you can, for example, keep rules for a browser that is not common on your network but still exists.

Summary: Cisco Firewall and IPS

In today’s rapidly evolving digital landscape, cybersecurity is of paramount importance. With increasing cyber threats, organizations must employ robust security measures to safeguard their networks and sensitive data. One such solution that has gained immense popularity is the Cisco Firewall and IPS (Intrusion Prevention System). This blog post dived deep into Cisco Firewall and IPS, exploring their capabilities, benefits, and how they work together to fortify your network defenses.

Understanding Cisco Firewall

Cisco Firewall is a formidable defense mechanism that acts as a barrier between your internal network and external threats. It carefully inspects incoming and outgoing network traffic, enforcing security policies to prevent unauthorized access and potential attacks. By leveraging advanced technologies such as stateful packet inspection, network address translation, and application-level filtering, Cisco Firewall provides granular control over network traffic, allowing only legitimate and trusted communication.

Exploring Cisco IPS

On the other hand, Cisco IPS takes network security to the next level by actively monitoring network traffic for potential threats and malicious activities. It uses a combination of signature-based detection, anomaly detection, and behavior analysis to identify and mitigate various types of attacks, including malware, DDoS attacks, and unauthorized access attempts. Cisco IPS works in real-time, providing instant alerts and automated responses to ensure a proactive defense strategy.

The Power of Integration

While Cisco Firewall and IPS are powerful, their true potential is unleashed when they work together synchronously. Integration between the two enables seamless communication and sharing of threat intelligence. When an IPS identifies a threat, it can communicate this information to the Firewall, immediately blocking the malicious traffic at the network perimeter. This collaborative approach enhances the overall security posture of the network, reducing response time and minimizing the impact of potential attacks.

Benefits of Cisco Firewall and IPS

The combined deployment of Cisco Firewall and IPS offers numerous benefits to organizations. Firstly, it provides comprehensive visibility into network traffic, allowing security teams to identify and respond to threats effectively. Secondly, it offers advanced threat detection and prevention capabilities, reducing the risk of successful attacks. Thirdly, integrating Firewall and IPS streamlines security operations, enabling a proactive and efficient response to potential threats. Lastly, Cisco’s continuous research and updates ensure that Firewalls and IPS remain up-to-date with the latest vulnerabilities and attack vectors, maximizing network security.

Conclusion:

In conclusion, the Cisco Firewall and IPS duo are formidable forces in network security. By combining the robust defenses of a Firewall with the proactive threat detection of an IPS, organizations can fortify their networks against a wide range of cyber threats. With enhanced visibility, advanced threat prevention, and seamless integration, Cisco Firewall and IPS empower organizations to stay one step ahead in the ever-evolving cybersecurity landscape.


Data Center Security

Data centers are crucial in storing and managing vast information in today's digital age. However, with increasing cyber threats, ensuring robust security measures within data centers has become more critical. This blog post will explore how Cisco Application Centric Infrastructure (ACI) can enhance data center security, providing a reliable and comprehensive solution for safeguarding valuable data.

Cisco ACI segmentation is a cutting-edge approach that divides a network into distinct segments, enabling granular control and segmentation of network traffic. Unlike traditional network architectures, which rely on VLANs (Virtual Local Area Networks), ACI segmentation leverages the power of software-defined networking (SDN) to provide a more flexible and efficient solution. By utilizing the Application Policy Infrastructure Controller (APIC), administrators can define and enforce policies to govern communication between different segments.

Micro-segmentation has become a buzzword in the networking industry. Leaving the term and marketing aside, it is easy to understand why customers want its benefits. Micro-segmentation's primary advantage is reducing the attack surface by minimizing lateral movement in the event of a security breach.

With traditional networking technologies, this is very difficult to accomplish. However, SDN technologies enable an innovative approach by allowing degrees of flexibility and automation impossible with traditional network management and operations. This makes micro-segmentation possible.

Highlights: Data Center Security

Data Center Security Techniques

Data center network security encompasses a set of protocols, technologies, and practices to safeguard the infrastructure and data within data centers. It involves multiple layers of protection, including physical security, network segmentation, access controls, and threat detection mechanisms. By deploying comprehensive security measures, organizations can fortify their digital fortresses against potential breaches and unauthorized access.

A. Physical Security Measures: Physical security forms the first line of defense for data centers. This includes biometric access controls, surveillance cameras, and restricted entry points. By implementing these measures, organizations can limit physical access to critical infrastructure and prevent unauthorized tampering or theft.

B. Network Segmentation: Segmenting a data center network into isolated zones helps contain potential breaches and limit the lateral movement of threats. By dividing the network into distinct segments based on user roles, applications, or sensitivity levels, organizations can minimize the impact of an attack, ensuring that compromised areas can be contained without affecting the entire network.

C. Access Controls: Strong access controls are crucial for data center network security. These controls involve robust authentication mechanisms, such as multi-factor authentication and role-based access control (RBAC), to ensure that only authorized personnel can access critical resources. Regularly reviewing and updating access privileges further strengthens the security posture.

D. Threat Detection and Prevention: Data center networks should employ advanced threat detection and prevention mechanisms. This includes intrusion detection systems (IDS) and intrusion prevention systems (IPS) that monitor network traffic for suspicious activities and proactively mitigate potential threats. Additionally, deploying firewalls, antivirus software, and regular security patches helps protect against known vulnerabilities.

E: Data Encryption and Protection: Data encryption is a critical measure in safeguarding data both at rest and in transit. By encoding data, encryption ensures that even if it is intercepted, it remains unreadable without the proper decryption keys. Cisco’s encryption solutions offer comprehensive protection for data exchange within and outside the data center. Additionally, implementing data loss prevention (DLP) strategies helps in identifying, monitoring, and protecting sensitive data from unauthorized access or leakages.
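As a generic illustration of encryption at rest (not a Cisco product feature), the sketch below uses the third-party Python cryptography package: the record written to disk is unreadable without the symmetric key, which in practice would be stored in a KMS or HSM.

```python
# Generic illustration of encryption at rest: data encrypted with a symmetric
# key is unreadable without that key.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, keep this in a KMS/HSM
cipher = Fernet(key)

record = b"customer: alice, card: 4111-1111-1111-1111"
token = cipher.encrypt(record)       # what would land on disk
print(token[:40], b"...")            # ciphertext: unreadable without the key

print(cipher.decrypt(token))         # only holders of the key can recover it
```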

Data Center Security – SCCs

A: **Understanding Security Command Center**

Security Command Center (SCC) is a comprehensive security management tool that provides visibility into your Google Cloud assets and their security status. It acts as a centralized hub, enabling you to identify potential vulnerabilities and threats before they escalate into serious issues. By leveraging SCC, businesses can ensure their data centers remain secure, compliant, and efficient.

**Detecting Threats with Precision**

One of the standout features of Security Command Center is its ability to detect threats with precision. Utilizing advanced threat detection capabilities, SCC continuously monitors your cloud environment for signs of suspicious activity. It leverages machine learning algorithms and Google’s vast threat intelligence to identify anomalies, ensuring that potential threats are flagged before they can cause harm. This proactive approach to security allows organizations to respond swiftly, minimizing potential damage.

**Investigating Threats with Confidence**

Once a threat is detected, it’s crucial to have the tools necessary to investigate it thoroughly. Security Command Center provides detailed insights into security incidents, offering a clear view of what happened, when, and how. This level of transparency empowers security teams to conduct comprehensive investigations, trace the root cause of incidents, and implement effective remediation strategies. With SCC, businesses can maintain control over their security landscape, ensuring continuous protection against cyber threats.

**Enhancing Data Center Security on Google Cloud**

Integrating Security Command Center into your Google Cloud infrastructure significantly enhances your data center’s security framework. SCC provides a holistic view of your security posture, enabling you to assess risks, prioritize security initiatives, and ensure compliance with industry standards. By adopting SCC, organizations can bolster their defenses, safeguarding their critical data assets and maintaining customer trust.

Example: Event Threat Detection & Security Health Analytics

Data Center Security – NEGs

B: **Understanding Network Endpoint Groups**

Network endpoint groups are collections of network endpoints, such as virtual machine instances or internet protocol addresses, that you can use to manage and direct traffic within Google Cloud. NEGs are particularly useful for deploying applications across multiple environments, providing the flexibility to choose between different types of endpoints. This feature is pivotal when dealing with a hybrid architecture, ensuring that traffic is efficiently directed to the most appropriate resource, whether it resides in your cloud infrastructure or on-premises.

**The Role of NEGs in Data Center Security**

One of the standout benefits of using network endpoint groups is their contribution to enhancing data center security. By enabling precise traffic management, NEGs allow for better segmentation and isolation of network traffic, reducing the risk of unauthorized access. With the ability to direct traffic to specific endpoints, NEGs provide an additional layer of security, ensuring that only authorized users can access sensitive data and applications. This capability is crucial in today’s cybersecurity landscape, where threats are becoming increasingly sophisticated.

**Integrating NEGs with Google Cloud Services**

Network endpoint groups seamlessly integrate with various Google Cloud services, making them a versatile tool for optimizing your cloud environment. For instance, NEGs can be used in conjunction with Google Cloud’s load balancing services to distribute traffic across multiple endpoints, enhancing the availability and reliability of your applications. Additionally, NEGs can work with Google Cloud’s Kubernetes Engine, allowing for more granular control over how traffic is routed to your containerized applications. This integration ensures that your applications can scale efficiently while maintaining high performance.

**Best Practices for Implementing NEGs**

When implementing network endpoint groups, it’s essential to follow best practices to maximize their effectiveness. Start by clearly defining your endpoint groups based on your application architecture and traffic patterns. Ensure that endpoints are regularly monitored and maintained to prevent potential bottlenecks. Additionally, leverage Google’s monitoring and logging tools to gain insights into traffic patterns and potential security threats. By adhering to these best practices, you can harness the full potential of NEGs and ensure a robust and secure cloud infrastructure.


Data Center Security – VPC Service Control

C: **How VPC Service Controls Work**

VPC Service Controls work by creating virtual perimeters around the Google Cloud resources you want to protect. These perimeters restrict unauthorized access and data transfer, both accidental and intentional. When a service perimeter is set up, it enforces policies that prevent data from leaving the defined boundary without proper authorization. This means that even if credentials are compromised, sensitive data cannot be moved outside the specified perimeter, thus providing an additional security layer over Google Cloud’s existing IAM roles and permissions.

**Integrating VPC Service Controls with Your Cloud Strategy**

Integrating VPC Service Controls into your cloud strategy can significantly bolster your security framework. Begin by identifying the critical services and data that require the most protection. Next, define the service perimeters to encompass these resources. It’s essential to regularly review and update these perimeters to adapt to changes in your cloud environment. Additionally, leverage Google Cloud’s monitoring tools to gain insights and alerts on any unauthorized access attempts. This proactive approach ensures that your cloud infrastructure remains resilient against evolving threats.


**Best Practices for Implementing VPC Service Controls**

To maximize the effectiveness of VPC Service Controls, organizations should follow best practices. First, ensure that your team is well-versed in both Google Cloud services and the specifics of VPC Service Controls. Regular training sessions can help keep everyone up to date with the latest features and security measures. Secondly, implement the principle of least privilege by granting the minimal level of access necessary for users and services. Lastly, continuously monitor and audit your cloud environment to detect and respond to any anomalies swiftly.

Data Center Security – Cloud Armor

D: **Understanding Cloud Armor**

Cloud Armor is a cloud-based security service that leverages Google’s global infrastructure to provide advanced protection for your applications. It offers a range of security features, including DDoS protection, WAF (Web Application Firewall) capabilities, and threat intelligence. By utilizing Cloud Armor, businesses can defend against various cyber threats, such as SQL injection, cross-site scripting, and other web vulnerabilities.

**The Power of Edge Security Policies**

One of the standout features of Cloud Armor is its edge security policies. These policies enable businesses to enforce security measures at the network edge, closer to the source of potential threats. By doing so, Cloud Armor can effectively mitigate attacks before they reach your applications, reducing the risk of downtime and data breaches. Edge security policies can be customized to suit your specific needs, allowing you to create tailored rules that address the unique security challenges faced by your organization.

**Implementing Cloud Armor in Your Security Strategy**

Integrating Cloud Armor into your existing security strategy is a straightforward process. Begin by assessing your current security posture and identifying any potential vulnerabilities. Next, configure Cloud Armor’s edge security policies to address these vulnerabilities and provide an additional layer of protection. Regularly monitor and update your policies to ensure they remain effective against emerging threats. By incorporating Cloud Armor into your security strategy, you can enhance your overall security posture and protect your digital assets more effectively.

**Benefits of Using Cloud Armor**

There are numerous benefits to using Cloud Armor for your security needs. Firstly, its global infrastructure ensures low latency and high availability, providing a seamless experience for your users. Secondly, the customizable edge security policies allow for granular control over your security measures, ensuring that you can address specific threats as they arise. Additionally, Cloud Armor’s integration with other Google Cloud services enables a unified security approach, streamlining your security management and monitoring efforts.

### The Role of Cloud Armor in Cyber Defense

Google Cloud Armor serves as a robust defense mechanism against DDoS attacks, providing enterprises with scalable and adaptive security solutions. Built on Google Cloud’s global network, Cloud Armor leverages the same infrastructure that protects Google’s services, offering unparalleled protection against high-volume attacks. By dynamically filtering malicious traffic, it ensures that legitimate requests reach their destination without disruption, maintaining the availability and performance of online services.

### Enhancing Data Center Security

Data centers, the backbone of modern business operations, face unique security challenges. Cloud Armor enhances data center security by providing a first line of defense against DDoS threats. Its customizable security policies allow organizations to tailor their defenses to specific needs, ensuring that only legitimate traffic flows into data centers. Coupled with advanced threat intelligence, Cloud Armor adapts to emerging threats, keeping data centers secure and operational even during sophisticated attack attempts.

### Key Features of Cloud Armor

Cloud Armor offers a range of features designed to shield enterprises from DDoS attacks, including:

– **Adaptive Protection**: Continuously analyzes traffic patterns to identify and block malicious activities in real-time.

– **Global Load Balancing**: Distributes traffic across multiple servers, preventing any single point from becoming overwhelmed.

– **Customizable Security Policies**: Allows businesses to define rules and policies that match their specific security requirements.

– **Threat Intelligence**: Utilizes Google’s vast threat database to stay ahead of emerging threats and enhance protection measures.

Data Center Security – FortiGate

E: FortiGate and Google Cloud

Cloud security has become a top concern for organizations worldwide. The dynamic nature of cloud environments necessitates a proactive approach to protect sensitive data, prevent unauthorized access, and mitigate potential threats. Google Compute Engine offers a reliable and scalable infrastructure, but it is essential to implement additional security measures to fortify your cloud resources.

FortiGate, a leading network security solution, seamlessly integrates with Google Compute Engine to enhance the security posture of your cloud environment. With its advanced features, including firewall, VPN, intrusion prevention system (IPS), and more, FortiGate provides comprehensive protection for your compute resources.

Firewall Protection: FortiGate offers a robust firewall solution, allowing you to define and enforce granular access policies for inbound and outbound network traffic. This helps prevent unauthorized access attempts and safeguards your cloud infrastructure from external threats.

VPN Connectivity: With FortiGate, you can establish secure VPN connections between your on-premises network and Google Compute Engine instances. This ensures encrypted communication channels, protecting data in transit and enabling secure remote access.

Intrusion Prevention System (IPS): FortiGate’s IPS capabilities enable real-time detection and prevention of potential security breaches. It actively monitors network traffic, identifies malicious activities, and takes immediate action to block threats, ensuring the integrity of your compute resources.

Data Center Security – PSC

What is Private Service Connect?

Private Service Connect is a Google Cloud feature that allows you to securely connect services from different Virtual Private Clouds (VPCs) without exposing them to the public internet. By using internal IP addresses, Private Service Connect ensures that your data remains within the confines of Google’s secure network, protecting it from external threats and unauthorized access.

### Enhancing Security with Google Cloud

Google Cloud’s infrastructure is built with security at its core, and Private Service Connect is no exception. By routing traffic through Google’s private network, this feature reduces the attack surface, making it significantly harder for malicious entities to intercept or breach sensitive data. Furthermore, it supports encryption, ensuring that data in transit is protected against eavesdropping and tampering.

### Seamless Integration and Flexibility

One of the standout benefits of Private Service Connect is its seamless integration with existing Google Cloud services. Whether you’re running applications on Compute Engine, using Cloud Storage, or leveraging BigQuery, Private Service Connect allows you to connect these services effortlessly, without the need for complex configurations. This flexibility ensures that businesses can tailor their cloud infrastructure to meet their specific security and connectivity needs.


**Cisco ACI and Segmentation**

Network segmentation involves dividing a network into multiple smaller segments or subnetworks, isolating different types of traffic, and enhancing security. Cisco ACI offers an advanced network segmentation framework beyond traditional VLAN-based segmentation. It enables the creation of logical network segments based on business policies, applications, and user requirements.

Cisco ACI is one of many data center topologies that must be secured. ACI does not include a full data center firewall, but it does follow a zero-trust model. However, more is required: the policy must state what is allowed to happen. Firstly, we must create a policy. You have Endpoint Groups (EPGs) and a contract. These are the initial security measures. Think of a contract as the policy statement and an Endpoint Group as a container, or holder, for applications of the same security level.

**Cisco ACI & Micro-segmentation**

Micro-segmentation has become a buzzword in the networking industry. Leaving the term and marketing aside, it is easy to understand why customers want its benefits. Micro-segmentation’s primary advantage is reducing the attack surface by minimizing lateral movement in the event of a security breach. With traditional networking technologies, this isn’t easy to accomplish. However, SDN technologies enable an innovative approach by allowing degrees of flexibility and automation that are impossible with traditional network management and operations. This makes micro-segmentation possible.

For those who haven’t explored this topic yet, Cisco ACI has ESGs (Endpoint Security Groups). ESGs are an alternative approach to segmentation that decouples it from the forwarding and security concepts originally associated with Endpoint Groups. Segmentation and forwarding are thus handled separately by ESGs, allowing for greater flexibility and possibilities.

**Cisco ACI ESGs**

Cisco ACI ESGs are virtual appliances that provide advanced network services within the Cisco ACI fabric. They offer various functionalities, including firewalling, load balancing, and network address translation, all seamlessly integrated into the ACI architecture. By utilizing ESGs, organizations can achieve centralized network management while maintaining granular control over their network policies.

One key advantage of Cisco ACI ESGs is their ability to streamline network management. With ESGs, administrators can easily define and enforce network policies across the entire ACI fabric, eliminating the need for complex and time-consuming manual configurations. The centralized management provided by ESGs enhances operational efficiency and reduces the risk of human errors.

Security is a top priority for any organization, and Cisco ACI ESGs deliver robust security features. With built-in firewall capabilities and advanced threat detection mechanisms, ESGs ensure only authorized traffic flows through the network. Furthermore, ESGs support micro-segmentation, allowing organizations to create isolated security zones within their network, preventing any lateral movement of threats.

**Cisco ACI and ACI Service Graph**

The ACI service graph is how Layer 4 to Layer 7 functions or devices can be integrated into ACI. It lets ACI redirect traffic between different security zones through a firewall or load balancer. The ACI L4-L7 services can be anything from load balancing and firewalling to advanced security services. Then we have ACI segmentation, which reduces the attack surface to an absolute minimum.

You can then add an ACI service graph to insert your security function, which consists of ACI L4-L7 services. Now we are heading into the second stage of security. What we like about this is the ease of use: if your application is removed, all the associated objects, such as the contract, EPG, ACI service graph, and firewall rules, are released. Cisco calls this security embedded in the application, and it allows automatic remediation, a tremendous advantage for security function insertion.

Cisco Data Center Security Technologies

Example Technology: NEXUS MAC ACLs

MAC ACLs, or Media Access Control Access Control Lists, are essential for controlling network traffic based on MAC addresses. Unlike traditional IP-based ACLs, MAC ACLs operate at Layer 2 of the OSI model, granting granular control over individual devices within a network. Network administrators can enforce security policies and mitigate unauthorized access by filtering traffic at the MAC address level.

MAC ACL Key Advantages

The utilization of MAC ACLs brings forth several noteworthy advantages. Firstly, they provide an additional layer of security by complementing IP-based ACLs. This dual-layered approach ensures comprehensive protection against potential threats. Moreover, MAC ACLs enable the isolation of specific devices or groups, allowing for enhanced segmentation and network organization. Additionally, their ability to filter traffic at Layer 2 minimizes the strain on network resources, resulting in optimized performance.

Understanding VLAN ACLs

Before diving into the configuration details, let’s understand VLAN ACLs clearly. VLAN ACLs are rules that control traffic flow between VLANs in a network. They act as filters, allowing or denying specific types of traffic based on defined criteria such as source/destination IP addresses, protocol types, or port numbers. By effectively implementing VLAN ACLs, network administrators can control and restrict resource access, mitigate security threats, and optimize network performance.

ACLs – Virtual Security Fence

ACLs (Access Control Lists) are rules that determine whether to permit or deny network traffic. They act as a virtual security fence, controlling data flow between network segments. ACLs can be applied at various points in the network, including routers, switches, and firewalls. Traditionally, ACLs were used to control traffic between different subnets. Still, with the advent of VLANs, ACLs can now be applied at the VLAN level, offering granular control over network traffic.

What is a MAC Move Policy?

In the context of Cisco NX-OS devices, a MAC move policy dictates how MAC addresses are handled when they move from one port to another within the network. It defines the device’s actions, such as flooding, forwarding, or blocking the MAC address. This policy ensures efficient delivery of data packets and prevents unnecessary flooding, reducing network congestion.

Types of MAC Move Policies

Cisco NX-OS devices offer different MAC move policies to cater to diverse network requirements. The most commonly used policies include the following (a short sketch follows the list):

1. Forward: In this policy, the device updates its MAC address table and forwards data packets to the new destination port.

2. Flood: When a MAC address moves, the device floods the data packets to all ports within the VLAN, allowing the destination device to learn the new location of the MAC address.

3. Drop: This policy drops data packets destined for the moved MAC address, effectively isolating it from the network.
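Here is the promised sketch of how the three policies differ, using a plain Python MAC table. The ports, addresses, and behavior are simplified illustrations, not NX-OS behavior verbatim.

```python
# Minimal sketch of MAC move handling: when a MAC shows up on a new port,
# the configured policy decides whether to forward, flood, or drop.

mac_table = {"aa:bb:cc:dd:ee:01": "Eth1/1"}

def handle_move(mac: str, new_port: str, policy: str) -> str:
    old_port = mac_table.get(mac)
    if old_port == new_port:
        return f"no move: {mac} stays on {new_port}"
    if policy == "forward":
        mac_table[mac] = new_port        # update the table, keep forwarding
        return f"forward: {mac} relearned on {new_port}"
    if policy == "flood":
        mac_table.pop(mac, None)         # unknown again -> flood in the VLAN
        return f"flood: frames for {mac} flooded until relearned"
    if policy == "drop":
        return f"drop: traffic for {mac} discarded after the move"
    raise ValueError(f"unknown policy {policy!r}")

if __name__ == "__main__":
    for p in ("forward", "flood", "drop"):
        mac_table["aa:bb:cc:dd:ee:01"] = "Eth1/1"   # reset for each example
        print(handle_move("aa:bb:cc:dd:ee:01", "Eth1/2", p))
```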

Data Center Visibility Technologies

**Understanding sFlow**

sFlow is a sampling-based network monitoring technology that allows network administrators to gain real-time visibility into their network traffic. By continuously sampling packets at wire speed, sFlow provides a comprehensive view of network behavior without imposing significant overhead on the network devices.

sFlow Key Advantages

On Cisco NX-OS, sFlow brings a host of benefits for network administrators. Firstly, it enables proactive network monitoring by providing real-time visibility into network traffic patterns, allowing administrators to identify and address potential issues quickly. Secondly, sFlow on Cisco NX-OS facilitates capacity planning by providing detailed insights into traffic utilization, enabling administrators to optimize network resources effectively.
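A minimal sketch of the sampling idea behind sFlow: roughly one packet in N is exported, so visibility is gained without inspecting every packet. The sampling rate and exported fields are illustrative; a real sFlow agent also exports interface counters to a collector.

```python
# Minimal sketch of sFlow-style packet sampling: roughly 1 in N packets is
# sampled and exported, so visibility is gained without inspecting them all.
import random

SAMPLING_RATE = 512          # export roughly 1 of every 512 packets

def maybe_sample(packet_headers):
    if random.randrange(SAMPLING_RATE) == 0:
        return {"sample": packet_headers, "rate": SAMPLING_RATE}
    return None

if __name__ == "__main__":
    exported = 0
    for i in range(100_000):                       # simulate traffic
        record = maybe_sample({"seq": i, "src": "10.1.1.5", "dst": "203.0.113.7"})
        if record:
            exported += 1
    print(f"exported {exported} samples for 100000 packets "
          f"(expected ~{100_000 // SAMPLING_RATE})")
```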

Understanding Nexus Switch Profiles

To begin our exploration, we must grasp the fundamentals of Nexus Switch Profiles. These profiles are essentially templates that define the configuration settings for Nexus switches. Network administrators can easily apply consistent configurations across multiple switches by creating profiles, reducing manual effort and potential errors. These profiles include VLAN configurations, interface settings, access control lists, and more.

Nexus Switch Profiles Key Advantages

Nexus Switch Profiles offer numerous benefits for network administrators and organizations. First, they streamline the configuration process by providing a centralized and standardized approach. This not only saves time but also ensures consistency across the network infrastructure. Additionally, profiles allow for easy scalability, enabling the swift deployment of new switches with pre-defined configurations. Moreover, these profiles enhance security by enforcing consistent access control policies and reducing the risk of misconfigurations.

Related: For pre-information, you may find the following posts helpful:

  1. Cisco ACI 
  2. ACI Cisco
  3. ACI Networks
  4. Stateful Inspection Firewall
  5. Cisco Secure Firewall
  6. Segment Routing

Security with Cisco ACI

Data Center Security with Cisco ACI

Cisco ACI includes many tools to implement and enhance security and segmentation from day 0. We already mentioned tenant objects like EPGs, and then for policy, we have contracts permitting traffic between them. We also have micro-segmentation with Cisco ACI. Even though the ACI fabric can deploy zoning rules with filters and act as a distributed data center firewall, the result is comparable to a stateless set of access control lists (ACLs).

As a result, they can provide coarse security for traffic flowing through the fabric.  However, for better security, we can introduce deep traffic inspection capabilities like application firewalls, intrusion detection (prevention) systems (IDS/IPS), or load balancers, which often secure application workloads. 

Cisco ACI – Application-centric security 

ACI security addresses security concerns with several application-centric infrastructure security options. You may have heard of the allowlist policy model. This is the ACI security starting point: nothing is allowed to communicate unless a policy explicitly permits it. This might prompt you to think that a data center firewall is involved. Still, although the ACI allowlist model does change the paradigm and improves how you apply security, it is only analogous to access control lists within a switch or router.

Cisco Secure Firewall Integration

We need additional protection. So, further protocol inspection and monitoring are still required, which data center firewalls and intrusion prevention systems (IPSs) do very well and can be easily integrated into your ACI network. Here, we can introduce Cisco Firepower Threat Defence (FTD) to improve security with Cisco ACI.

**Starting ACI Security**

**ACI Contracts**

In network terminology, contracts are a mechanism for creating access lists between two groups of devices. This function was initially implemented on network devices using access lists and was later handled by firewalls of various types, depending on the need for deeper packet inspection. As the data center evolved, access-list complexity increased.

Adding devices to the network that require new access-list modifications could become increasingly complex. While contracts satisfy the security requirements handled by access control lists (ACLs) in conventional network settings, they are a more flexible, manageable, and comprehensive ACI security solution.

Contracts control traffic flow within the ACI fabric between EPGs and are configured between EPGs or between EPGs and L3out. Contracts are assigned a scope of Global, Tenant, VRF, or Application Profile, which limits their accessibility.
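
A simplified Python model of the contract idea is shown below. The EPG, contract, and filter names are illustrative rather than APIC object syntax, but the logic reflects the allowlist behavior: traffic is denied unless a contract between the consumer and provider EPGs carries a matching filter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Filter:
    protocol: str   # e.g. "tcp"
    dst_port: int   # e.g. 443

@dataclass
class Contract:
    consumer_epg: str
    provider_epg: str
    filters: list

contracts = [
    Contract("web-epg", "app-epg", [Filter("tcp", 8080)]),
    Contract("app-epg", "db-epg", [Filter("tcp", 5432)]),
]

def allowed(src_epg: str, dst_epg: str, protocol: str, dst_port: int) -> bool:
    """Allowlist model: deny unless a contract between the two EPGs matches."""
    for c in contracts:
        if c.consumer_epg == src_epg and c.provider_epg == dst_epg:
            if any(f.protocol == protocol and f.dst_port == dst_port for f in c.filters):
                return True
    return False

print(allowed("web-epg", "app-epg", "tcp", 8080))  # True  - a contract permits it
print(allowed("web-epg", "db-epg", "tcp", 5432))   # False - no contract between these EPGs
```

Notice that nothing in the policy references an interface, VLAN, or subnet; endpoints are mapped to EPGs separately, which is what gives the model its topology independence.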

**Challenge: Static ACLs**

With traditional data center security design, we have standard access control lists (ACLs) with several limitations the ACI fabric security model addresses and overcomes. First, the conventional ACL is very tightly coupled with the network topology. They are typically configured per router or switch ingress and egress interface and are customized to that interface and the expected traffic flow through those interfaces. 

**Management Complexity**

Due to this customization, they often cannot be reused across interfaces, much less across routers or switches. In addition, traditional ACLs can be very complicated because they contain lists of specific IP addresses, subnets, and protocols that are allowed and many that are not authorized. This complexity means they are challenging to maintain and often grow as administrators are reluctant to remove any ACL rules for fear of creating a problem.

**ACI Fabric Security – Contracts, Filters & Labels**

The ACI fabric security model addresses these ACL issues. Cisco ACI administrators use contract, filter, and label managed objects to specify how groups of endpoints are allowed to communicate. 

The critical point is that these managed objects are not tied to the network’s topology because they are not applied to a specific interface. Instead, they are rules that the network must enforce irrespective of where these endpoints are connected.  So, security follows the workloads, allowing topology independence.

Furthermore, this topology independence means these managed objects can easily be deployed and reused throughout the data center, not just at specific demarcation points. The ACI fabric security model uses the endpoint grouping construct directly, so allowing groups of servers to communicate with one another is simple. With a single rule in a contract, we can allow an arbitrary number of sources to speak with an equally arbitrary number of destinations.

Micro-segmentation in ACI

We know that perimeter security is insufficient these days. Once breached, lateral movement can allow bad actors to move within large segments to compromise more assets. Traditional segmentation based on large zones gives bad actors a large surface to play with. Keep in mind that identity attacks are hard to detect.

How can you tell if a bad actor moves laterally through the network with compromised credentials or if an IT administrator is carrying out day-to-day activities?  Micro-segmentation can improve the security posture inside the data center. Now, we can perform segmentation to minimize segment size and provide lesser exposure for lateral movement due to a reduction in the attack surface.

**ACI Segments**

ACI microsegmentation refers to segmenting an application-centric infrastructure into smaller, more granular units. This segmentation allows for better control and management of network traffic, improved security measures, and better performance. Organizations implementing an ACI microsegmentation solution can isolate different applications and workloads within their network. This allows them to reduce their network’s attack surface and improve their applications’ performance.

**Creating ACI Segments**

Creating ACI segments based on ACI microsegmentation works by segmenting the network infrastructure into multiple subnets. This allows for fine-grained control over network traffic and security policies. Furthermore, it will enable organizations to quickly identify and isolate different applications and workloads within the network.

**Microsegmentation Advantages**

The benefits of ACI microsegmentation are numerous. Organizations can create a robust security solution that reduces their network’s attack surface by segmenting the network infrastructure into multiple subnets. Additionally, by isolating different applications and workloads, organizations can improve their application performance and reduce the potential for malicious traffic.

ACI Segments with Cisco ACI ESG

We also have the ESG, which is different from an EPG. The EPG is mandatory and is how you attach workloads to the fabric. The ESG, by contrast, is an abstraction layer associated with a VRF rather than a bridge domain, which gives us more flexibility.

As of ACI 5.0, Endpoint Security Groups (ESGs) are Cisco ACI’s new network security component. Although Endpoint Groups (EPGs) have been providing network security in Cisco ACI, they must be associated with a single bridge domain (BD) and used to define security zones within that BD. 

This is because the EPGs define both forwarding and security segmentation simultaneously. The direct relationship between the BD and an EPG limits the possibility of an EPG spanning more than one BD. The new ESG constructs resolve this limitation of EPGs.

Diagram: Endpoint Security Groups. The source is Cisco.

Standard Endpoint Groups and Policy Control

As discussed in ACI security, devices are grouped into Endpoint groups, creating ACI segments. This grouping allows the creation of various types of policy enforcement, including access control. Once we have our EPGs defined, we need to create policies to determine how they communicate with each other.

For example, a contract typically refers to one or more ‘filters’ to describe the specific protocols and ports allowed between EPGs. We also have ESGs that provide additional security flexibility with more fine-grained ACI segments. Let’s dig a little into the world of contracts in ACI and how they relate to the access control lists of the past.

Microsegmentation with Cisco ACI adds the ability to group endpoints in existing application EPGs into new microsegment (uSeg) EPGs and configure the network or VM-based attributes for those uSeg EPGs. This enables you to filter with those attributes and apply more dynamic policies. 

We can use various attributes to classify endpoints into a microsegment EPG (µEPG). Network-based attributes: IP and MAC address. VM-based attributes: guest OS, VM name, VM ID, vNIC, DVS, and data center.

Example: Microsegmentation for Endpoint Quarantine 

Let us look at a use case. You might have separate EPGs for web and database servers, each containing both Windows and Linux VMs. Suppose a virus affecting only Windows threatens your network, not the Linux environment.

In that case, you can isolate Windows VMs across all EPGs by creating a new EPG called, for example, “Windows-Quarantine” and applying the VM-based operating systems attribute to filter out all Windows-based endpoints. 

This quarantined EPG could have more restrictive communication policies, such as limiting allowed protocols or preventing communication with other EPGs by not having any contract. A microsegment EPG may or may not have a contract attached.
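
As a rough sketch of the attribute-based classification in this quarantine use case (the attribute names, values, and EPG names are invented for illustration, not APIC syntax), endpoints whose VM attributes match the uSeg rule are pulled out of their base EPGs:

```python
endpoints = [
    {"name": "web-01", "base_epg": "web-epg", "guest_os": "Windows Server 2019"},
    {"name": "web-02", "base_epg": "web-epg", "guest_os": "Ubuntu 22.04"},
    {"name": "db-01", "base_epg": "db-epg", "guest_os": "Windows Server 2022"},
]

def effective_epg(endpoint: dict) -> str:
    """uSeg rule: any endpoint whose guest OS attribute contains 'Windows'
    is reclassified into the quarantine EPG, regardless of its base EPG."""
    if "Windows" in endpoint["guest_os"]:
        return "Windows-Quarantine"
    return endpoint["base_epg"]

for ep in endpoints:
    print(ep["name"], "->", effective_epg(ep))
# web-01 -> Windows-Quarantine
# web-02 -> web-epg
# db-01  -> Windows-Quarantine
```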

ACI Service Graph

ACI and Policy-based redirect: ACI L4-L7 Services

The ACI L4–L7 policy-based redirect (PBR) concept is similar to policy-based routing in traditional networking. In conventional networking, policy-based routing classifies traffic and steers desired traffic from its path to a network device as the next-hop route (NHR). This feature was used in networking for decades to redirect traffic to service devices such as firewalls, load balancers, IPSs/IDSs, and Wide-Area Application Services (WAAS).

In ACI, the PBR concept is similar: You classify specific traffic to steer to a service node by using a subject in a contract. Then, other traffic follows the regular forwarding path, using another subject in the same contract without the PBR policy applied.

Diagram: ACI PBR. Source is Cisco
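
The redirect decision itself can be sketched simply. The subject match and firewall next hop below are placeholders; the point is that only traffic matching the PBR subject is steered to the service node, while everything else follows the normal forwarding path.

```python
def next_hop(protocol: str, dst_port: int, routed_next_hop: str) -> str:
    """Return where the leaf sends the packet: the service node for traffic that
    matches the redirect subject, or the normal routed next hop otherwise."""
    FIREWALL_NODE = "10.99.0.1"                       # PBR destination (service node)
    redirect_subject = {("tcp", 80), ("tcp", 443)}    # traffic we want inspected

    if (protocol, dst_port) in redirect_subject:
        return FIREWALL_NODE          # steered through the firewall first
    return routed_next_hop            # regular forwarding, no inspection

print(next_hop("tcp", 443, routed_next_hop="10.1.20.8"))  # 10.99.0.1 (redirected)
print(next_hop("udp", 53, routed_next_hop="10.1.20.8"))   # 10.1.20.8 (normal path)
```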

Deploying PBR for ACI L4-L7 services

With ACI policy-based redirect (ACI L4-L7 services), firewalls and load balancers can be provisioned as managed or unmanaged nodes without requiring Layer 4 to Layer 7 packages. Typical use cases include appliances that can be pooled, tailored to application profiles, scaled quickly, and made less prone to service outages.

In addition, by enabling consumer and provider endpoints to be located in the same virtual routing and forwarding instance (VRF), PBR simplifies the deployment of service appliances. To deploy PBR, you must create an ACI service graph template that uses the route and cluster redirect policies. 

After deploying the ACI service graph template, the service appliance enables endpoint groups to consume the service graph endpoint group. This can be further simplified and automated by using vzAny. Dedicated service appliances may be required for performance reasons, but PBR can also be used to deploy virtual service appliances quickly.

Diagram: ACI Policy-based redirect. Source is Cisco

ACI’s service graph and policy-based redirect (PBR) objects bring advanced traffic steering capabilities that let you universally utilize any Layer 4 to Layer 7 security device connected to the fabric, without needing it to be the default gateway for endpoints or part of a complicated VRF sandwich design with VLAN stitching. It has therefore become much easier to implement Layer 4 to Layer 7 inspection.

You won’t be limited to a single L4-L7 appliance; ACI can chain many of them together or even load balance between multiple active nodes according to your needs. The critical point is that this works universally: the security functions can sit in their own pod, connected to a leaf switch or to a pair of leaf switches dedicated to security appliances, rather than at fixed strategic points in the network.

An ACI service graph represents the network using the following elements (a small data-model sketch follows the diagram below):

  • Function node—A function node represents a function that is applied to the traffic, such as a transform (SSL termination, VPN gateway), filter (firewalls), or terminal (intrusion detection systems). A function within the ACI service graph might require one or more parameters and have one or more connectors.
  • Terminal node—A terminal node enables input and output from the service graph.
  • Connector—A connector enables input and output from a node.
  • Connection—A connection determines how traffic is forwarded through the network.
Diagram: ACI Service Graph. Source is Cisco
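
Here is the small data-model sketch referenced above. The class and node names are invented for illustration; the intent is only to show that a service graph is an ordered chain of function nodes between two terminal nodes, and that rendering it yields the sequence of hops traffic is pushed through.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionNode:
    name: str
    function: str   # e.g. "firewall", "load-balancer", "ips"
    connectors: list = field(default_factory=lambda: ["consumer", "provider"])

@dataclass
class ServiceGraph:
    name: str
    nodes: list     # ordered function nodes between the two terminal nodes

    def render(self) -> list:
        """Terminal -> each function node in order -> terminal."""
        hops = [f"{n.function}:{n.name}" for n in self.nodes]
        return ["terminal:consumer"] + hops + ["terminal:provider"]

graph = ServiceGraph(
    "web-to-app",
    nodes=[FunctionNode("ftd-1", "firewall"), FunctionNode("alb-1", "load-balancer")],
)
print(graph.render())
# ['terminal:consumer', 'firewall:ftd-1', 'load-balancer:alb-1', 'terminal:provider']
```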

Summary: Data Center Security

In today’s digital landscape, network security is of utmost importance. Organizations constantly seek ways to protect their data and infrastructure from cyber threats. One solution that has gained significant attention is Cisco Application Centric Infrastructure (ACI). In this blog post, we explored the various aspects of Cisco ACI Security and how it can enhance network security.

Understanding Cisco ACI

Cisco ACI is a policy-based automation solution that provides a centralized network management approach. It offers a flexible and scalable network infrastructure that combines software-defined networking (SDN) and network virtualization.

Key Security Features of Cisco ACI

Micro-Segmentation: One of Cisco ACI’s standout features is micro-segmentation. It allows organizations to divide their network into smaller segments, providing granular control over security policies. This helps limit threats’ lateral movement and contain potential breaches.

Integrated Security Services: Cisco ACI integrates seamlessly with various security services, such as firewalls, intrusion prevention systems (IPS), and threat intelligence platforms. This integration ensures a holistic security approach and enables real-time detection and prevention.

Policy-Based Security

Policy Enforcement: With Cisco ACI, security policies can be defined and enforced at the application level. This means that security rules can follow applications as they move across the network, providing consistent protection. Policies can be defined based on application requirements, user roles, or other criteria.

Automation and Orchestration: Cisco ACI simplifies security management through automation and orchestration. Security policies can be applied dynamically based on predefined rules, reducing the manual effort required to configure and maintain security settings. This agility helps organizations respond quickly to emerging threats.

Threat Intelligence and Analytics

Real-Time Monitoring: Cisco ACI provides comprehensive monitoring capabilities, allowing organizations to gain real-time visibility into their network traffic. This includes traffic behavior analysis, anomaly detection, and threat intelligence integration. Proactively monitoring the network can identify and mitigate potential security incidents promptly.

Centralized Security Management: Cisco ACI offers a centralized management console for easily managing security policies and configurations. This streamlines security operations, simplifies troubleshooting, and ensures consistent policy enforcement across the network.

Conclusion: Cisco ACI is a powerful solution for enhancing network security. Its micro-segmentation capabilities, integration with security services, policy-based security enforcement, and advanced threat intelligence and analytics make it a robust choice for organizations looking to protect their network infrastructure. By adopting Cisco ACI, businesses can strengthen their security posture and mitigate the ever-evolving cyber threats.

identity security

Identity Security

Identity Security

In today's interconnected world, protecting our personal information has become more crucial than ever. With the rise of cybercrime and data breaches, ensuring identity security has become a paramount concern for individuals and organizations alike. In this blog post, we will explore the importance of identity security, common threats to our identities, and practical steps to safeguard our personal information.

Identity security refers to the protection of our personal information from unauthorized access, use, or theft. It encompasses various aspects such as safeguarding our Social Security numbers, bank account details, credit card information, and online credentials. By maintaining robust identity security, we can mitigate the risks of identity theft, financial fraud, and other malicious activities that can have severe consequences for our personal and financial well-being.

There are a number of common threats that jeopardize our identity security. Cybercriminals employ various tactics such as phishing, malware, and social engineering to gain unauthorized access to our personal information. They exploit vulnerabilities in our online behavior, weak passwords, and outdated security measures. It is essential to be aware of these threats and take proactive measures to protect ourselves.

Now that we understand the importance of identity security and the threats we face, let's explore practical steps to fortify our defenses. This section will provide actionable tips, including:

1. Strong Passwords and Two-Factor Authentication: Creating unique, complex passwords and enabling two-factor authentication adds an extra layer of security to our online accounts.

2. Secure Internet Connections: Avoiding public Wi-Fi networks and using VPNs (Virtual Private Networks) when accessing sensitive information can help prevent unauthorized access to our data.

3. Regular Software Updates: Keeping our operating systems, applications, and antivirus software up to date is crucial to patch security vulnerabilities.

4. Practicing Safe Online Behavior: Being cautious while clicking on links or downloading attachments, avoiding suspicious websites, and being mindful of sharing personal information online are essential habits to develop.

Highlights: Identity Security

The Importance of Identity Security

Identity security safeguards your personal information from unauthorized access, fraud, and identity theft. With the increasing prevalence of data breaches and online scams, it is essential to comprehend the significance of protecting your digital identity. Doing so can mitigate potential risks and maintain control over your sensitive data. Identity theft is a pervasive issue that can have devastating consequences.

Cybercriminals employ various techniques to obtain personal information, such as phishing, hacking, and data breaches. Once they access your identity, they can wreak havoc on your financial and personal life. It is essential to understand the gravity of this threat and take necessary precautions.

Required – Identity Security:

– Strong Passwords and Two-Factor Authentication: One fundamental aspect of identity security is creating strong, unique passwords for all your online accounts. Avoid using common passwords or personal information that can be easily guessed. Implementing two-factor authentication adds an extra layer of protection by requiring a verification code or biometric confirmation in addition to your password.

– Regularly Update and Secure Your Devices: Keeping your devices updated with the latest software and security patches is vital for identity security. Manufacturers periodically release updates to address vulnerabilities and strengthen defenses against potential threats. Additionally, consider installing reputable antivirus software and firewalls to protect against malware and other malicious attacks.

– Be Mindful of Phishing Attempts: Phishing is a common tactic used by cybercriminals to trick individuals into revealing their personal information. Be cautious when clicking on links or providing sensitive data, especially in emails or messages from unknown sources. Verify the legitimacy of websites and communicate directly with trusted organizations to avoid falling victim to phishing scams.

Zero-Trust Identity Management 

Zero-trust identity management involves continuously verifying users and devices to ensure access and privileges are granted only when needed. The backbone of zero-trust identity security starts by assuming that any human or machine identity with access to your applications and systems may have been compromised.

The “assume breach” mentality requires vigilance and a Zero Trust approach to security centered on securing identities. With identity security as the backbone of a zero-trust process, teams can focus on identifying, isolating, and stopping threats from compromising identities and gaining privilege before they can harm.

Zero Trust Authentication

Zero trust authentication takes an identity-centric approach to security, ensuring that every person and every device granted access is who and what they claim to be. It achieves this by focusing on the following key components:

  1. The network is always assumed to be hostile.
  2. External and internal threats always exist on the network.
  3. Network locality is not sufficient for deciding trust in a network. As discussed, other contextual factors must also be taken into account.
  4. Every device, user, and network flow is authenticated and authorized. All of this must be logged.
  5. Security policies must be dynamic and calculated from as many data sources as possible.

Zero Trust Identity: Validate Every Device

Not just the user: validate every device. While user verification adds a level of security, it is not enough on its own. We must ensure that the devices themselves are authenticated and associated with verified users, not just the users.

Risk-based access: Risk-based access intelligence should reduce the attack surface after a device has been validated and verified as belonging to an authorized user. This allows aspects of the security posture of endpoints, like device location, a device certificate, OS, browser, and time, to be used for further access validation. 

Device Validation: Reduce the attack surface

While device validation helps limit the attack surface, it is only as reliable as the endpoint’s security. Antivirus software to secure endpoint devices will only get you so far. We need additional tools and mechanisms to tighten security even further.

Identity Security – Google Cloud

### What is Identity-Aware Proxy?

Identity-Aware Proxy is a security feature that ensures only authenticated users can access your applications and resources. Unlike traditional security models that rely on network-based access controls, IAP uses user identity and contextual information to allow or deny access. This approach allows organizations to implement a zero-trust security model, where the focus is on verifying the user and their context rather than the network they are connecting from.

### Benefits of Using Identity-Aware Proxy

Implementing IAP comes with a range of benefits that can significantly enhance an organization’s security posture:

1. **Improved Security**: By enforcing access based on user identity and context, IAP reduces the risk of unauthorized access. It ensures that only legitimate users can access sensitive applications and data.

2. **Simplified Access Management**: IAP centralizes access control management, allowing administrators to easily define and enforce access policies that are consistent across all applications and services.

3. **Scalability**: As organizations grow, so do their security needs. IAP scales effortlessly with your infrastructure, making it suitable for businesses of all sizes.

4. **Enhanced User Experience**: With IAP, users can access applications seamlessly without the need for a VPN or additional authentication layers, improving productivity and satisfaction.

### Integration with Google Cloud

Google Cloud’s Identity-Aware Proxy is a robust solution for securing application access. It integrates seamlessly with Google Cloud services, allowing organizations to leverage Google’s powerful infrastructure for managing and securing their applications. Google Cloud IAP supports a wide range of applications, including those hosted on Google Kubernetes Engine, Compute Engine, and App Engine. By using Google Cloud IAP, organizations can take advantage of features such as single sign-on (SSO), multi-factor authentication (MFA), and detailed access logging.


### What are VPC Service Controls?

VPC Service Controls provide a security perimeter around Google Cloud services, adding an extra layer of protection against data exfiltration. With VPC Service Controls, organizations can define security policies that restrict access to their data based on the source and destination of network traffic, ensuring that sensitive information remains secure even in a highly distributed environment. This feature is particularly beneficial for businesses dealing with sensitive data, as it provides a robust mechanism to control data access and movement.

### Enhancing Identity Security

Identity security is a critical component of any cloud security strategy. VPC Service Controls play a pivotal role in this aspect by allowing organizations to manage and secure identities across their cloud infrastructure. By defining policies that specify which identities can access particular services, organizations can minimize the risk of unauthorized data access. This level of control is crucial for maintaining compliance with regulatory standards and safeguarding sensitive information.


**Cloud IAM: The Pillars of Identity Security**

Identity security is more than just controlling access; it’s about safeguarding digital identities across the board. Google Cloud IAM offers robust identity security by employing the principle of least privilege, allowing users to access only what they need to perform their jobs. This minimizes potential attack surfaces and reduces the risk of unauthorized access. Additionally, IAM integrates seamlessly with Google Cloud’s security tools, providing a comprehensive security posture. This integration ensures that identity-related threats are quickly identified and mitigated, enhancing the overall security of your digital ecosystem.

**Streamlining Access Management**

Managing access is a dynamic challenge, especially in organizations where roles and responsibilities are constantly evolving. Google Cloud IAM simplifies this process by offering predefined roles and custom roles that can be tailored to specific needs. This flexibility allows administrators to define precise access controls, ensuring that users have the necessary permissions without overexposing sensitive data. Furthermore, IAM’s audit logs provide transparency and accountability, allowing administrators to track access and identify any anomalies.

**Enhancing Collaboration Through Secure Access**

In today’s interconnected world, collaboration is key. Google Cloud IAM facilitates secure collaboration by allowing organizations to manage and share resources efficiently across teams, partners, and clients. By leveraging IAM, organizations can create a seamless and secure environment where collaboration does not compromise security. Multi-factor authentication and context-aware access further enhance this security, ensuring that access is granted based on real-time conditions and user behavior.


**Unveiling the Vault**

**Introduction: Understanding the Basics**

In today’s digital landscape, securing sensitive data and ensuring only the right individuals have access to particular resources is paramount. This is where the concepts of authentication, authorization, and identity come into play, and tools like Vault become indispensable. Whether you’re a developer, systems administrator, or security professional, understanding how Vault manages these critical aspects can significantly enhance your security posture.

**Authentication: The First Line of Defense**

Authentication is the process of verifying who someone is. In the context of Vault, it involves ensuring that the entity (user or machine) trying to access a particular resource is who it claims to be. Vault supports a variety of authentication methods, including token-based, username and password, and more advanced methods like AWS IAM roles or Kubernetes service accounts. By providing multiple authentication paths, Vault offers flexibility and security to meet diverse organizational needs.

**Authorization: Granting the Right Permissions**

Once an entity’s identity is verified, the next step is to determine what they’re allowed to do. This is where authorization comes in, dictating the actions an authenticated user can perform. Vault uses policies to manage authorization. These policies are written in HashiCorp Configuration Language (HCL) and define precise control over what data and operations a user or system can access. With Vault, administrators can ensure that users have just enough permissions to perform their job, reducing the risk of data breaches or misuse.

**Identity: The Core of Secure Access Management**

Identity management is crucial for maintaining a secure environment, especially in complex, multi-cloud architectures. Vault’s identity framework allows organizations to unify users’ identities and manage them seamlessly. By integrating with external identity providers, Vault makes it easy to map the identities of various users and systems to Vault’s internal policies. This integration not only streamlines access management but also enhances overall security by ensuring consistent identity verification across platforms.


**Security Scanning: Potential Identity Threats**

Example: Security Scan with Lynis

Lynis Security Scan is a powerful open-source security auditing tool that helps you identify vulnerabilities and weaknesses in your system. It comprehensively assesses your system’s security configuration, scanning various aspects such as file permissions, user accounts, network settings, and more. Lynis provides valuable insights into your system’s security status by leveraging various tests and checks.

**New Attack Surface, New Technologies**

Identity security has pushed authentication to a new, more secure landscape, reacting to improved technologies and sophisticated attacks. The need for more accessible and secure authentication has led to the wide adoption of zero-trust identity management and zero trust authentication technologies such as risk-based authentication (RBA), Fast Identity Online (FIDO2), and just-in-time (JIT) access techniques.

**Challenge: Visibility Gaps**

If you examine our identities, applications, and devices, you will see that they are in the crosshairs of bad actors, making them probable threat vectors. In addition, we are challenged by the sophistication of our infrastructure, which increases our attack surface and creates gaps in our visibility. Controlling access and the holes created by complexity is the basis of all healthy security. 

**Challenge: Social Engineering**

Social engineering involves manipulating individuals into performing actions or divulging confidential information. Attackers may impersonate someone in a position of authority or use emotional manipulation to gain trust. By collecting personal data from social media platforms or other online sources, criminals can create convincing personas to deceive unsuspecting victims.

Hackers, fraudsters, and cybercriminals employ phishing, pretexting, and baiting tactics to achieve their nefarious goals.

Common Social Engineering Techniques

  • Phishing: One of the most prevalent techniques involves sending fraudulent emails disguised as legitimate ones to trick recipients into divulging sensitive information.
  • Pretexting: This technique involves creating a fabricated scenario and impersonating someone trustworthy to extract valuable information.
  • Baiting: Baiting lures victims with enticing offers or rewards, often through physical media like infected USB drives or fake promotional materials.

Popular Attack Vectors: Phishing Attacks

Phishing attacks have become increasingly sophisticated and deceptive. Cybercriminals create fake emails, websites, or messages that closely resemble legitimate organizations to trick users into revealing sensitive information. These attacks often prey on human psychology, exploiting trust and urgency to manipulate victims into divulging personal data.

Phishers employ various tactics to manipulate their targets and gain unauthorized access to sensitive information. One common tactic is creating emails or messages that appear to be from reputable organizations, enticing recipients to click on malicious links or download harmful attachments. Another technique involves masquerading as a trusted individual, such as a colleague or a friend, to deceive the target into sharing confidential details.

Starting with Endpoint Security

Endpoint security protects endpoints like laptops, desktops, servers, and mobile devices.

ARP Security: The Address Resolution Protocol (ARP) is vulnerable to various attacks, such as ARP spoofing, which can lead to network breaches. Implementing ARP security measures, such as ARP cache monitoring and strict ARP validation, can help protect against these attacks and ensure the integrity of your network.

Secure Routing: Securing your network’s routing protocols is essential to prevent unauthorized access and route manipulation. Implementing secure routing techniques, such as using encrypted protocols (e.g., BGP over IPsec) and implementing access control lists (ACLs) on routers, can enhance the overall security of your network.

Network Monitoring with Netstat: Netstat is a powerful command-line tool for monitoring network connections, open ports, and active endpoint sessions. By regularly using netstat, you can identify suspicious connections or unauthorized access attempts and take appropriate action to mitigate potential threats.
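
As a small example of that habit, the sketch below shells out to netstat and flags established TCP sessions whose remote port is outside an expected set. It assumes a Linux host with the net-tools `netstat` binary and its column layout; output formats differ across platforms, so treat the parsing as illustrative.

```python
import subprocess

EXPECTED_REMOTE_PORTS = {22, 53, 80, 443}   # adjust to what the host should be talking to

def established_sessions() -> list:
    """Parse `netstat -tn` output (Linux net-tools layout) into (local, remote) pairs."""
    out = subprocess.run(["netstat", "-tn"], capture_output=True, text=True, check=True).stdout
    sessions = []
    for line in out.splitlines():
        fields = line.split()
        if line.startswith("tcp") and len(fields) >= 6 and fields[5] == "ESTABLISHED":
            sessions.append((fields[3], fields[4]))   # local address, foreign address
    return sessions

for local, remote in established_sessions():
    port = int(remote.rsplit(":", 1)[1])
    if port not in EXPECTED_REMOTE_PORTS:
        print(f"review: {local} -> {remote} (unexpected remote port {port})")
```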

Identity Security with Linux

Strong User Authentication

User authentication forms the first line of defense in securing identity. Implementing solid passwords, enforcing password policies, and utilizing multi-factor authentication (MFA) mechanisms are essential to enhance Linux security.

Efficient user account management plays a crucial role in identity security. Regularly reviewing and auditing user accounts, disabling unnecessary accounts, and implementing proper access controls ensure that only authorized users can access sensitive data.
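
A tiny example of that account-review habit on a Linux host follows. The shell allowlist and the UID 0 check are simple heuristics chosen for this sketch, not a complete audit.

```python
# List accounts that can log in interactively and flag any non-root account with UID 0.
INTERACTIVE_SHELLS = {"/bin/bash", "/bin/sh", "/bin/zsh"}

with open("/etc/passwd") as passwd:
    for line in passwd:
        if not line.strip():
            continue
        name, _pw, uid, _gid, _gecos, _home, shell = line.strip().split(":")
        if shell in INTERACTIVE_SHELLS:
            print(f"interactive account: {name} (uid {uid})")
        if uid == "0" and name != "root":
            print(f"WARNING: non-root account with uid 0: {name}")
```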

Securing communication channels is vital to protect identity during data transmission. Encrypted protocols such as SSH (Secure Shell) and HTTPS (Hypertext Transfer Protocol Secure) ensure that sensitive information remains confidential and protected from eavesdropping or tampering.

Understanding SELinux

SELinux, or Security-Enhanced Linux, is a security module integrated into the Linux kernel. It provides fine-grained access control policies and enhances the system’s overall security posture. Unlike traditional access control mechanisms, SELinux operates on the principle of least privilege, ensuring that only authorized actions are allowed.

Zero-trust endpoint protection is a security model that assumes no implicit trust in any user or device, regardless of location within or outside the network. It emphasizes continuous verification and strict access controls to mitigate potential threats. Organizations can bolster their security measures by incorporating SELinux into a zero-trust framework by enforcing granular policies on every endpoint.

Detecting Identity Threats in Logs

The Power of Logs

Logs serve as a digital footprint, capturing a wide range of activities and events within a system. They act as silent witnesses, recording valuable information to aid security analysis and incident response. Syslog and auth.log are two types of logs critical in security event detection.

Syslog is a standardized protocol for message logging, allowing various devices and applications to send log messages to a central repository. It offers a wealth of information, including system events, errors, warnings, etc. Understanding the structure and content of syslog entries is essential for effective security event detection.

Auth.log, short for authentication log, records authentication-related activities within a system. It tracks successful and failed login attempts, user authentication methods, and other relevant information. By analyzing auth.log entries, security professionals can swiftly identify potential breaches and unauthorized access attempts.
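
As a small example of this kind of analysis, the sketch below counts failed SSH logins per source address. It assumes the Debian/Ubuntu auth.log path and the standard OpenSSH "Failed password" message format; adjust both for other distributions.

```python
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
THRESHOLD = 5   # flag sources with more failures than this

failures = Counter()
with open("/var/log/auth.log") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            _user, source_ip = match.groups()
            failures[source_ip] += 1

for source_ip, count in failures.most_common():
    if count > THRESHOLD:
        print(f"possible brute force: {count} failed logins from {source_ip}")
```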

Example Identity Product: Understanding Cisco ISE

Cisco ISE is a comprehensive security policy management platform that enables organizations to enforce security policies across the network infrastructure. It provides granular control over user access and device authentication, ensuring that only authorized users and devices can connect to the network. By integrating with existing network infrastructure such as switches, routers, and firewalls, Cisco ISE simplifies the management of access control policies and strengthens network security.

Cisco ISE offers a wide range of features that enhance network security. These include:

1. Identity-Based Access Control: Cisco ISE allows organizations to define policies based on user identities rather than IP addresses. This enables more granular control over access permissions and reduces the risk of unauthorized access.

2. Device Profiling: With Cisco ISE, organizations can identify and profile devices connecting to the network. This helps detect and block unauthorized or suspicious devices, preventing potential security breaches.

3. Guest Access Management: Cisco ISE simplifies guest access management by providing a self-service portal for guest users. It allows organizations to define guest policies, control access duration, and monitor guest activities, ensuring a secure guest access experience.

Related: Before you proceed, you may find the following posts helpful

  1. SASE Model
  2. Zero Trust Security Strategy
  3. Zero Trust Network Design
  4. OpenShift Security Best Practices
  5. Zero Trust Networking
  6. Zero Trust Network
  7. Zero Trust Access

Identity Security: The Workflow 

Identity Security: The Concept

The concept of identity security is straightforward and follows a standard workflow that can be understood and secured. First, a user logs into their employee desktop and is authenticated as an individual who should have access to this network segment. This is the authentication stage.

They have appropriate permissions assigned so they can navigate to the required assets (such as an application or file servers) and are authorized as someone who should have access to this application. This is the authorization stage.

As they move across the network to carry out their day-to-day duties, all of this movement is logged, and all access information is captured and analyzed for auditing purposes. Anything outside of normal behavior is flagged. Splunk UEBA has good features here.

  • Stage of Authentication: You must accurately authenticate every human and non-human identity. After an identity is authenticated to confirm who it is, that does not give it a free pass to access the system with impunity.
  • Stage of Re-Authentication: Identities should be re-authenticated if the system detects suspicious behavior or before completing tasks and accessing data that is deemed to be highly sensitive. If we have an identity that acts outside of normal baseline behavior, they must re-authenticate.
  • Stage of Authorization: Then we need to move to authorization. We need to authorize the user to ensure they’re allowed access to the asset only when required and only with the permissions they need to do their job. So we have authorized each identity on the network with the proper permissions so they can access what they need and not more. 
  • Stage of Access: Then, we look into access: provide structured access to authorized assets for that identity. How can appropriate access be given to the person/user/device/bot/script/account and nothing else? Follow the practices of zero-trust identity management and least privilege. Ideally, access is granted to microsegments instead of large VLANs based on traditional zone-based networking.
  • Stage of Audit: All identity activity must be audited or accounted for. Auditing allows insight and evidence that Identity Security policies are working as intended. How do you monitor identity activities? How do you reconstruct and analyze an identity’s actions? An auditing capability ensures visibility into an identity’s activities, provides context for the identity’s usage and behavior, and enables analytics that identify risk and provide insights to make smarter decisions about access.

Scanning Networks

The Importance of Network Scanning

Network scanning systematically examines a network to identify its vulnerabilities, open ports, and active devices. Network administrators can gain valuable insights into their security posture using specialized tools and techniques. Understanding the fundamentals of network scanning is crucial for effectively securing network infrastructures.

There are several network scanning techniques, each serving a specific purpose. Port scanning, for example, involves probing network ports to identify potential entry points for attackers. Vulnerability scanning, on the other hand, focuses on identifying known vulnerabilities within network devices and applications. Organizations can adopt a comprehensive approach to network security by exploring these different types of network scanning.
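
To illustrate the port-scanning idea, here is a minimal TCP connect scan using only the Python standard library. The target address is a documentation-range placeholder; only scan hosts you own or are authorized to test.

```python
import socket

TARGET = "192.0.2.10"           # placeholder address; replace with an authorized target
PORTS = range(1, 1025)          # well-known ports

open_ports = []
for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.3)                        # keep the scan from hanging on filtered ports
        if s.connect_ex((TARGET, port)) == 0:    # 0 means the TCP handshake completed
            open_ports.append(port)

print(f"open TCP ports on {TARGET}: {open_ports}")
```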

Starting Zero Trust Identity Management

Now, we have an identity as the new perimeter compounded by identity being the latest target. Any identity is a target. Looking at the modern enterprise landscape, it’s easy to see why. Every employee has multiple identities and uses several devices.

What makes this worse is that security teams’ primary issue is that identity-driven attacks are hard to detect. For example, how do you know if a bad actor or a sys admin uses the privilege controls? As a result, security teams must find a reliable way to monitor suspicious user behavior to determine the signs of compromised identities.

We now have identity sprawl, which might be acceptable if those identities had only standard user access. However, they often do not; many of them have privileged access. All of this widens the attack surface by creating additional human and machine identities that can gain privileged access under certain conditions, establishing new pathways for bad actors.

We must adopt a different approach to secure our identities regardless of where they may be. Here, we can look for a zero-trust identity management approach based on identity security. Next, I’d like to discuss your common challenges when adopting identity security.

Challenge 1: Zero trust identity management and privilege credential compromise

Current environments may result in anonymous access to privileged accounts and sensitive information. Unsurprisingly, 80% of breaches start with compromised privilege credentials. If left unsecured, attackers can compromise these valuable secrets and credentials to gain possession of privileged accounts and perform advanced attacks or use them to exfiltrate data.

Challenge 2: Zero trust identity management and exploiting privileged accounts

We have two types of bad actors. First, there are external attackers and malicious insiders who can exploit privileged accounts to orchestrate a variety of attacks. Privileged accounts are used in nearly every cyber attack. With privileged access, bad actors can disable systems, take control of IT infrastructure, and gain access to sensitive data. So, we face several challenges when securing identities, namely protecting, controlling, and monitoring privileged access.

Challenge 3: Zero trust identity management and lateral movements

Lateral movements will happen. A bad actor has to move throughout the network. They will never land directly on a database or important file server. The initial entry point into the network could be an unsecured IoT device, which does not hold sensitive data. As a result, bad actors need to pivot across the network.

They will laterally move throughout the network with these privileged accounts, looking for high-value targets. They then use their elevated privileges to steal confidential information and exfiltrate data. There are many ways to exfiltrate data, with DNS being a common vector that often goes unmonitored. How do you know a bad actor is moving laterally with admin credentials using admin tools built into standard Windows desktops?

The issue with VLAN-based segmentation is large broadcast domains with free-for-all access. This represents a larger attack surface where lateral movements can take place. Below is a standard VLAN-based network running Spanning Tree Protocol (STP).

Example: Issues with VLAN based segmentation

Example: Improved Segmentation with Network Endpoint Groups (NEGs)


Challenge 4: Zero trust identity management and distributed attacks

These attacks are distributed, and there will be many dots to connect to understand threats on the network. Take ransomware, for example. Deploying the malware requires elevated privileges, and it is better to detect this before the encryption starts. Some ransomware families perform partial encryption quickly. Once encryption starts, it's game over. You need to detect this early in the kill chain, in the detect phase.

The best approach to zero-trust authentication is to know who accesses the data, ensure they are the users they claim to be, and operate on the trusted endpoint that meets compliance. There are plenty of ways to authenticate to the network; many claim password-based authentication is weak.

The core of identity security is understanding that passwords can be phished; essentially, using a password means sharing a secret. So, we need to add multifactor authentication (MFA). MFA provides a big lift, but it needs to be done well. You can still get breached even if you have an MFA solution in place.

Knowledge Check: Multi-factor authentication (MFA)

More than simple passwords are needed for healthy security. A password is a single authentication factor – anyone with it can use it. No matter how strong it is, it cannot keep information private once it is lost or stolen. You must use a different, secondary authentication factor to secure your data appropriately.

Here’s a quick breakdown (a small TOTP sketch follows the list):

  • Two-factor authentication: This method uses two factor classes to provide authentication. It is also known as ‘2FA’ and ‘TFA.’
  • Multi-factor authentication: The use of two or more factor classes to provide authentication. This is also represented as ‘MFA.’
  • Two-step verification: This authentication method involves two independent steps but does not necessarily require two separate factor classes. It is also known as ‘2SV’.
  • Strong authentication: Authentication beyond simply a password. It may be represented by the usage of ‘security questions’ or layered security like two-factor authentication.
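
Because the factor classes above can feel abstract, here is a minimal standard-library sketch of the time-based one-time password (TOTP) mechanism most authenticator apps implement (RFC 6238 with 30-second steps and 6 digits). Real deployments should use a vetted library and protect the shared secret carefully.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret and the clock."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

shared_secret = "JBSWY3DPEHPK3PXP"   # example base32 secret shared at enrollment
print("second factor right now:", totp(shared_secret))
# The server runs the same computation; a matching code proves possession of the
# secret (something you have) in addition to the password (something you know).
```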

The Move For Zero Trust Authentication

No MFA solution is an island. Every MFA solution is just one part of multiple components, relationships, and dependencies. Each piece is an additional area where an exploitable vulnerability can occur. Essentially, any element in the MFA’s life cycle, from provisioning to de-provisioning and everything in between, is subject to exploitable vulnerabilities and hacking. And like the proverbial chain, it’s only as strong as its weakest link.

Zero trust authentication: Two or More Hacking Methods Used

Many MFA attacks use two or more of the leading hacking methods. Often, social engineering is used to start the attack and get the victim to click on a link or to activate a process, which then uses one of the other methods to accomplish the necessary technical hacking. 

For example, a user may receive a phishing email directing them to a fake website, which accomplishes a man-in-the-middle (MitM) attack and steals credential secrets. Alternatively, a hardware token may be physically stolen and forensically examined to find the stored authentication secrets. MFA hacking requires using two or all of these main hacking methods.

You Can’t Rely On MFA Alone

You can’t rely on MFA alone; you must validate privileged users with context-aware Adaptive Multifactor Authentication and secure access to business resources with Single Sign-On. Unfortunately, credential theft remains the No. 1 area of risk. And bad actors are getting better at bypassing MFA using a variety of vectors and techniques.

For example, a user can be tricked into accepting a push notification on their smartphone, granting a bad actor access. You also remain susceptible to man-in-the-middle attacks. This is why MFA and IDP vendors have introduced risk-based authentication and step-up authentication. These techniques limit the attack surface, and we will talk about them soon.

**Considerations for Zero Trust Authentication** 

  • Think like a bad actor.

By thinking like a bad actor, we can attempt to identify suspicious activity, restrict lateral movement, and contain threats. Try envisioning what you would look for if you were a bad external actor or malicious insider. For example, are you looking to steal sensitive data to sell to competitors, launch ransomware binaries, or use the infrastructure for illicit crypto mining?

  • Attacks will happen

The harsh reality is that attacks will happen, and you can only ever partially secure your applications and infrastructure wherever they exist. So it is not a matter of ‘if’ but of ‘when.’ Protection from all the various methods that attackers use is virtually impossible, and there will occasionally be day 0 attacks. So, they will eventually get in; it’s all about what they can do once they are in.

  • The first action is to protect Identities.

Therefore, you must first protect their identities and prioritize what matters most—privileged access. Infrastructure and critical data are only fully protected if privileged accounts, credentials, and secrets are secured and protected.

  • The bad actor needs privileged access.

We know that about 80% of breaches tied to hacking involve using lost or stolen credentials. Compromised identities are the common denominator in virtually every severe attack. The reason is apparent: 

The bad actor needs privileged access to the network infrastructure to steal data. However, without privileged access, an attacker is severely limited in what they can do. Furthermore, without privileged access, they may be unable to pivot from one machine to another, and the chances of landing on a high-value target are slim.

  • The malware requires admin access

The malware used to pivot requires admin access to gain persistence. Privileged access without vigilant management creates an ever-growing attack surface around privileged accounts.

**Adopting Zero Trust Authentication** 

Where can you start with identity security, given all of this? First, we can look at a zero-trust authentication protocol; we need an authentication protocol that is phishing-resistant. This is Fast Identity Online (FIDO2), which is built on two protocols, described below. FIDO authentication is a challenge-response protocol that uses public-key cryptography. Rather than using certificates, it manages keys automatically and beneath the covers.

**Technology with Fast Identity Online (FIDO2)**

FIDO2 uses two standards. The Client to Authenticator Protocol (CTAP) describes how a browser or operating system establishes a connection to a FIDO authenticator. The WebAuthn protocol is built into browsers and provides an API that JavaScript from a web service can use to register a FIDO key, send a challenge to the authenticator, and receive a response to the challenge.

So, there is an application the user wants to go to, and then we have the client, which is often the system’s browser, but it can be an application that can speak and understand WebAuthn. FIDO provides a secure and convenient way to authenticate users without using passwords, SMS codes, or TOTP authenticator applications. Modern computers, smartphones, and most mainstream browsers understand FIDO natively. 

FIDO2 addresses phishing by cryptographically proving that the end user has physical possession of the authenticator. There are two types of authenticators. The first is a roaming authenticator, such as a USB security key or a mobile device; these need to come from certified FIDO2 vendors.

The other is a platform authenticator, such as Touch ID or Windows Hello, which is built into the device. While roaming authenticators are available, for most use cases, platform authenticators are sufficient. This makes FIDO an easy, inexpensive way for people to authenticate. The biggest impediment to its widespread use is that people won’t believe something so easy is secure.
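
To show the underlying challenge-response idea, the sketch below uses the third-party Python `cryptography` package to sign a server-issued challenge with a key pair. This is not the actual WebAuthn/CTAP message format, only the public-key principle that makes the exchange phishing-resistant: the private key never leaves the authenticator, and no reusable secret crosses the wire.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a key pair and gives the server the public key.
authenticator_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = authenticator_key.public_key()

# Login: the server issues a random challenge; the authenticator signs it locally.
challenge = os.urandom(32)
signature = authenticator_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server verifies the signature with the registered public key.
try:
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticator verified")
except InvalidSignature:
    print("verification failed")
```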

**Risk-based authentication**

Risk is not a static attribute; it needs to be recalculated and re-evaluated so you can make intelligent decisions about step-up and user authentication. Cisco Duo, for example, reacts to risk-based signals at the point of authentication.

These risk signals are processed in real time to detect signs of known account takeover attempts. Such signals may include push bombing, push spraying, and MFA fatigue attacks. A change of location can also signal high risk. Risk-based authentication (RBA) is usually coupled with step-up authentication.

For example, suppose your employees are under attack. RBA can detect a credential-stuffing attack and move from the classic authentication flow to a verified push approach, which is more secure than a standard push.

This adds a little friction but results in better security: a three-to-six-digit code is displayed on your device, and you must enter this code in the application. This defeats fatigue attacks. The verified push approach can be enabled at an enterprise level or just for a group of users.
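
A simplified sketch of such a risk-based decision is shown below. The signal names, weights, and thresholds are invented for illustration; the point is that low-risk logins proceed with a standard push, elevated risk triggers a verified push, and extreme risk blocks the attempt.

```python
def authentication_method(signals: dict) -> str:
    """Map login-time risk signals to an authentication requirement."""
    score = 0
    score += 40 if signals.get("new_location") else 0
    score += 30 if signals.get("push_flood") else 0            # e.g. repeated push prompts
    score += 20 if signals.get("failed_attempts", 0) > 3 else 0
    score += 10 if not signals.get("managed_device") else 0

    if score >= 70:
        return "block"
    if score >= 30:
        return "verified-push"      # step-up: the user must type the displayed code
    return "standard-push"

print(authentication_method({"managed_device": True}))                                      # standard-push
print(authentication_method({"new_location": True, "failed_attempts": 5,
                              "managed_device": True}))                                     # verified-push
print(authentication_method({"new_location": True, "push_flood": True}))                    # block
```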

**Conditional Access**

Then, we move towards conditional access, a step beyond authentication. Conditional access examines the context and risk of each access attempt. For example, contextual factors may include consecutive login failures, geo-location, type of user account, or device IP to either grant or deny access. Based on those contextual factors, access may be granted only to specific network segments. 

Risk-based decisions and recommended capabilities

The identity security solution should be configurable to allow SSO access, challenge the user with MFA, or block access based on predefined conditions set by policy. You should look for a solution that can evaluate a broad range of conditions, such as IP range, day of the week, time of day, time range, device OS, browser type, country, and user risk level.

These context-based access policies should be enforceable across users, applications, workstations, mobile devices, servers, network devices, and VPNs. A key question is whether the solution makes risk-based access decisions using a behavior profile calculated for each user.

**Technology with JIT techniques**

Secure privileged access and manage entitlements. To this end, many enterprises employ a least-privilege approach, where access is restricted to the resources necessary for the end user to complete their job responsibilities, with no extra permissions. A standard technology here is Just in Time (JIT) access. Implementing JIT ensures that identities have only the appropriate privileges, only when necessary, and for the least time required.

JIT techniques that dynamically elevate rights only when needed are a technology to enforce the least privilege. The solution allows for JIT elevation and access on a “by request” basis for a predefined period, with a full audit of privileged activities. Full administrative rights or application-level access can be granted, time-limited, and revoked.
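
A minimal sketch of the JIT idea follows. The in-memory grant store, the 15-minute default, and the logging format are assumptions made for illustration: elevation is requested for a specific role, expires automatically, and every grant and check is recorded for audit.

```python
import time

active_grants: dict = {}      # (user, role) -> expiry timestamp
audit_log: list = []

def request_elevation(user: str, role: str, duration_s: int = 900) -> None:
    """Grant a time-boxed privilege instead of a standing admin right."""
    expiry = time.time() + duration_s
    active_grants[(user, role)] = expiry
    audit_log.append(f"GRANT {user} {role} until {time.ctime(expiry)}")

def has_privilege(user: str, role: str) -> bool:
    """Check (and log) whether a grant exists and has not yet expired."""
    expiry = active_grants.get((user, role))
    allowed = expiry is not None and time.time() < expiry
    audit_log.append(f"CHECK {user} {role} -> {'allowed' if allowed else 'denied'}")
    return allowed

request_elevation("alice", "db-admin", duration_s=900)   # 15 minutes, on request
print(has_privilege("alice", "db-admin"))                # True while the grant is live
print(has_privilege("alice", "domain-admin"))            # False - never requested
print(audit_log)
```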

Summary: Identity Security

In today’s interconnected digital world, protecting our identities online has become more critical than ever. From personal information to financial data, our digital identities are vulnerable to various threats. This blog post aimed to shed light on the significance of identity security and provide practical tips to enhance your online safety.

Understanding Identity Security

Identity security refers to the measures taken to safeguard personal information and prevent unauthorized access. It encompasses protecting sensitive data such as login credentials, financial details, and personal identification information (PII). By ensuring robust identity security, individuals can mitigate the risks of identity theft, fraud, and privacy breaches.

Common Threats to Identity Security

In this section, we’ll explore some of the most prevalent threats to identity security, including phishing attacks, malware infections, social engineering, and data breaches. Understanding these threats is crucial for recognizing potential vulnerabilities and taking appropriate preventative measures.

Best Practices for Strengthening Identity Security

Now that we’ve highlighted the importance of identity security and identified common threats, let’s delve into practical tips to fortify your online presence:

1. Strong and Unique Passwords: Use complex passwords that combine letters, numbers, and special characters. Avoid using the same password across multiple platforms.

2. Two-Factor Authentication (2FA): Enable 2FA whenever possible to add an extra layer of security. This typically involves a secondary verification method, such as a code sent to or generated on your mobile device (see the sketch after this list).

3. Regular Software Updates: Keep all your devices and applications current. Software updates often include security patches that address known vulnerabilities.

4. Beware of Phishing Attempts: Be cautious of suspicious emails, messages, or calls asking for personal information. Verify the authenticity of requests before sharing sensitive data.

5. Secure Wi-Fi Networks: When connecting to public Wi-Fi networks, use a virtual private network (VPN) to encrypt your internet traffic and protect your data from potential eavesdroppers.
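
As a concrete illustration of item 2, here is a minimal sketch of how a time-based one-time password (TOTP) is derived from a shared secret, following RFC 6238 and RFC 4226. The example secret is a placeholder used only for demonstration; a real secret is issued during authenticator enrollment.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time password for a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                  # moving factor (RFC 6238)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```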

The Role of Privacy Settings

Privacy settings play a crucial role in controlling the visibility of your personal information. Platforms and applications often provide various options to customize privacy preferences. Take the time to review and adjust these settings according to your comfort level.

Monitoring and Detecting Suspicious Activity

Remaining vigilant is paramount in maintaining identity security. Regularly monitor your financial statements, credit reports, and online accounts for unusual activity. Promptly report any suspicious incidents to the relevant authorities.

Conclusion:

In an era where digital identities are constantly at risk, prioritizing identity security is non-negotiable. Implementing the best practices outlined in this blog post can significantly enhance your online safety and protect your valuable personal information. Proactive measures and staying informed are vital to maintaining a secure digital identity.


Cisco Secure Firewall with SASE Cloud


In today's digital era, network security is of paramount importance. With the rise of cloud-based services and remote work, businesses require a comprehensive security solution that not only protects their network but also ensures scalability and flexibility. Cisco Secure Firewall with SASE (Secure Access Service Edge) Cloud is a cutting-edge solution that combines the robustness of firewall protection with the agility of cloud-based security services. In this blog post, we will delve into the features and benefits of Cisco Secure Firewall with SASE Cloud.

Cisco Secure Firewall is an advanced network security solution designed to safeguard organizations from cyber threats. Built on industry-leading technology, it provides next-generation firewall capabilities, intrusion prevention, and application control. With granular security policies, deep visibility, and advanced threat intelligence, Cisco Secure Firewall empowers businesses to protect their networks from internal and external threats effectively.

SASE (Secure Access Service Edge) is a transformative approach to network security and connectivity. By converging networking and security functions into a unified cloud-based service, SASE offers organizations scalable and flexible security solutions. Cisco Secure Firewall with SASE Cloud takes advantage of this architecture, providing businesses with integrated security services that are delivered from the cloud. This enables seamless scalability, simplified management, and enhanced protection against evolving threats.

a) Cloud-Native Firewall: Cisco Secure Firewall with SASE Cloud leverages the power of cloud-native architecture, enabling organizations to easily scale their security infrastructure based on demand. It ensures consistent security policies across various locations and eliminates the need for hardware-based firewalls.

b) Advanced Threat Protection: With integrated threat intelligence and advanced analytics, Cisco Secure Firewall with SASE Cloud offers robust protection against sophisticated threats. It provides real-time threat detection and prevention, ensuring that businesses stay one step ahead of cybercriminals.

c) Simplified Management: The centralized management console allows organizations to effortlessly manage their security policies and configurations. From a single interface, administrators can efficiently deploy and enforce security policies, reducing complexity and enhancing operational efficiency.

As organizations continue to embrace digital transformation, the network landscape is constantly evolving. Cisco Secure Firewall with SASE Cloud future-proofs your network by providing a scalable and adaptable security solution. Its cloud-native architecture and integration with SASE enable businesses to stay agile and easily adapt to changing security requirements, ensuring long-term protection and resilience.

Highlights: Cisco Secure Firewall with SASE Cloud

Understanding the Cisco Secure Firewall

Cisco Secure Firewall is a cutting-edge network security appliance that provides advanced threat protection, secure connectivity, and simplified management. It combines next-generation firewall capabilities with intrusion prevention, application visibility and control, and advanced malware protection. With its comprehensive suite of security features, it offers a multi-layered defense against a wide range of cyber threats.

Key Cisco Secure Firewall Features:

– Advanced Threat Protection: The Cisco Secure Firewall employs advanced intelligence to detect and prevent sophisticated attacks, including malware, ransomware, and zero-day exploits. Its integrated security technologies work in tandem to identify and mitigate threats in real time, ensuring the highest level of protection for your network infrastructure.

– Secure Connectivity: The Cisco Secure Firewall enables secure remote access and site-to-site connectivity with built-in VPN capabilities. It establishes encrypted tunnels, allowing authorized users to access network resources from anywhere while ensuring data confidentiality and integrity.

– Application Visibility and Control: Gaining visibility into network traffic and effectively managing application usage is crucial for optimizing network performance and ensuring security. The Cisco Secure Firewall offers granular application control, allowing administrators to define policies and prioritize critical applications while restricting or blocking unauthorized ones.

**Broken Firewall Rules**

Industry analyses suggest that as many as one-third of firewall rules in a typical deployment are broken or unused. The Cisco AI Assistant for Security can identify and report on such policies, augment troubleshooting, and automate policy lifecycle management. It also helps you take back control of your encrypted traffic and application environments, while Cisco Talos lets you see and detect more across your infrastructure, ensuring security resilience across billions of signals.

**Advanced Clustering & HA**

This enables you to drive efficiency at scale. Secure Firewall's advanced clustering, high availability, and multi-instance capabilities allow you to scale while staying reliable and productive. Finally, by integrating network, microsegmentation, and app security, Secure Firewall makes zero trust achievable and cost-effective. It automates access and anticipates what comes next.

Knowledge Check: Cisco’s Firewalling

Cisco integrated its original Sourcefire’s next-generation security technologies into its existing firewall solutions, the Adaptive Security Appliances (ASA). In that early implementation, Sourcefire technologies ran as a separate service module. Later, Cisco designed new hardware platforms to support Sourcefire technologies natively.

These platforms were named Cisco Firepower and later rebranded as Cisco Secure Firewall, the current implementation of Cisco firewalling. In the new implementation, Cisco converges Sourcefire's next-generation security features, open-source Snort, and the ASA's firewall functionality into a unified software image. This unified software was called Firepower Threat Defense (FTD); after the rebranding, it is known as Cisco Secure Firewall Threat Defense.

Example Security Technology: IPS/IDS

Understanding Suricata

Suricata is an open-source network threat detection engine that offers high-performance intrusion detection and prevention capabilities. Built with speed, scalability, and robustness, Suricata analyzes network traffic and detects various threats, including malware, exploits, and suspicious activities. Its multi-threaded architecture and rule-based detection mechanism make it a formidable weapon against cyber threats.

Suricata boasts an impressive array of features that elevate its effectiveness in network security. Suricata covers various security needs, from protocol analysis and content inspection to file extraction and SSL/TLS decryption. Its extensive rule set allows for fine-grained control over network traffic, enabling tailored threat detection and prevention. Additionally, Suricata supports various output formats, making it compatible with other security tools and SIEM solutions.

Understanding SASE Cloud

Cisco SASE Cloud, short for Secure Access Service Edge Cloud, is a comprehensive networking and security platform that combines wide area networking (WAN) capabilities with robust security features. It offers a unified solution for remote access, branch connectivity, and cloud security, all delivered from the cloud. This convergence of networking and security into a single cloud-native platform allows organizations to simplify their infrastructure, reduce costs, and enhance agility.

SASE Cloud Key Points:

Enhanced Security: One of Cisco SASE Cloud’s standout features is its advanced security capabilities. By leveraging a combination of next-generation firewalls, secure web gateways, data loss prevention, and other security services, it provides comprehensive protection against cyber threats. With SASE Cloud, organizations can ensure secure access to applications and data from anywhere, anytime, without compromising security.

Scalability and Flexibility: Cisco SASE Cloud offers unmatched scalability and flexibility. As an organization grows, SASE Cloud can quickly adapt to evolving needs. Whether it’s adding new branches, onboarding remote employees, or expanding into new markets, the cloud-native architecture of SASE Cloud enables seamless scalability without the need for extensive infrastructure investments.

Simplified Management: Managing complex networking and security infrastructure can be daunting for IT teams. However, Cisco SASE Cloud simplifies this process by centralizing management and providing a single pane of glass for visibility and control. This streamlined approach allows IT teams to monitor and manage network traffic efficiently, apply security policies, and troubleshoot issues, improving operational efficiency.

Secure Access Anywhere & Anytime

Converged Networking and Security: Cisco SASE combines networking and security functions, such as secure web gateways, firewalls, and data loss prevention, into a single solution. This convergence eliminates the need for multiple standalone appliances, reducing complexity and improving operational efficiency.

Cloud-Native Architecture: Built on a cloud-native architecture, Cisco SASE leverages the scalability and flexibility of the cloud. This enables organizations to dynamically adapt to changing network demands, scale their resources as needed, and integrate new security services without significant infrastructure investments.

Enhanced User Experience and Security: With Cisco SASE, users can enjoy a seamless and secure experience regardless of location or device. Cisco SASE protects users and data from threats through its integrated security capabilities, including zero-trust network access and secure web gateways, ensuring a safe and productive digital environment.

Related: For additional pre-information, you may find the following helpful:

  1. SD WAN SASE
  2. SASE Model
  3. Zero Trust SASE
  4. SASE Solution
  5. Distributed Firewalls
  6. SASE Definition

Evolution of Network Security

In the past, network security was typically delivered from the network using the firewall. These days, however, network security extends well beyond firewalling. We now have different points in the infrastructure that we can use to expand our security posture while reducing the attack surface.

You will commonly hear of the Cisco Umbrella firewall and SASE, along with Cisco Secure Workload, both of which can be used alongside the Cisco Secure Firewall still deployed at the network's edge. Unfortunately, you can't send everything to the SASE cloud.

You will still need an on-premise firewall, such as the Cisco Secure Firewall, that can perform standard stateful filtering, intrusion detection, and threat protection. This post will examine the Cisco Secure Firewall and its integration with Cisco Umbrella via the SASE Cloud. Firstly, let us address some basics of firewalling.

A. Redesigning Traditional Security:
Let’s examine the evolution of network security before we get into some inbound and outbound traffic use cases. Traditionally, the Firewall was placed at the network edge, acting as a control point for the network’s ingress/egress point. The Firewall was responsible for validating communications with rule sets and policies created and enforced at this single point of control to ensure that desired traffic was allowed into and out of the network and undesirable traffic was prevented. This type of design was known as the traditional perimeter approach to security.

B. Numerous Firewalling Challenges:
Today, branch office locations, remote employees, and the increasing use of cloud services drive more data away from the traditional "perimeter," and the cloud-first approach completely bypasses the conventional security control point.
Further, the overwhelming majority of business locations and users also require direct access to the Internet, where an increasing share of critical cloud-based applications and data now lives. As a result, applications and data become further decentralized, and networks become more diverse.

C. Conventional Appliance Sprawl:
This evolution of network architectures has dramatically increased our attack surface and made the job of protecting it more complicated. We started to answer this challenge with point solutions: typically, organizations have attempted to address each new problem as it emerges by adding the "best" point security solution for it.

Because of this approach, we have seen tremendous device sprawl. Multiple security products from different vendors can pose significant management problems for network security teams, eventually leading to complexity and blind spots.

Consequently, our "traditional" firewall devices are being augmented by a mixture of physical and virtual appliances. Some are embedded into the network, while others are delivered as a service, host-based, or included within public cloud environments. Regardless of the design, you will still have inbound and outbound traffic to protect.

**Basics of Firewalling**

A firewall is an entity or obstacle deployed between two structures to prevent fire from spreading from one system to another. This term has been taken into computer networking, where a firewall is a software or hardware device that enables you to filter unwanted traffic and restrict access from one network to another. The Firewall is a vital network security component in securing network infrastructure and can take many forms. For example, we can have a host-based or network-based Firewall.

Diagram: Firewall types. Source: IPwithease.

Firewalling Types:

A. Host-based Firewall

A host-based firewall service is installed locally on a computer system. In this case, the end user’s computer system takes the final action—to permit or deny traffic. Every operating system has some Firewall. It consumes the resources of a local computer to run the firewall services, which can impact the other applications running on that particular computer. Furthermore, in a host-based firewall architecture, traffic traverses all the network components and can consume the underlying network resources until the traffic reaches its target.

B. Network-based Firewall

On the other hand, a network-based firewall can be entirely transparent to an end user and is not installed on the computer system. Typically, you deploy it in a perimeter network or at the Internet edge where you want to prevent unwanted traffic from entering your network. The end-user computer system remains unaware of any traffic control performed by an intermediate filtering device. In a network-based firewall deployment, you do not need to install additional software or daemons on the end-user computer systems. Ideally, you should use both firewall types for a defense-in-depth approach.

The early generation of firewalls could allow or block packets only based on their static elements, such as a packet’s source address, destination address, source port, destination port, and protocol information. These elements are also known as the 5-tuple.

Example Technology: Cisco Packet Filter

### Implementing Packet Filtering in Your Network

To effectively implement packet filtering, it’s crucial to have a clear understanding of your network architecture and traffic patterns. Begin by identifying the critical assets that need protection and the types of traffic that should be allowed. Develop a detailed access list, considering both inbound and outbound traffic.
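
As a rough illustration, here is a minimal sketch of a 5-tuple access list evaluated top-down with an implicit deny at the end, which mirrors how a classic packet filter behaves. The rule values are invented for the example, and this is Python pseudologic, not Cisco ACL syntax.

```python
import ipaddress
from typing import Optional

# Each rule matches on the classic 5-tuple; None means "any".
# Rules are evaluated top-down, the first match wins, and an implicit deny follows.
ACL = [
    {"action": "permit", "proto": "tcp", "src": "10.0.0.0/8", "sport": None, "dst": "192.0.2.10/32", "dport": 443},
    {"action": "permit", "proto": "udp", "src": None,         "sport": None, "dst": "192.0.2.53/32", "dport": 53},
]

def match(rule_val, pkt_val) -> bool:
    return rule_val is None or rule_val == pkt_val

def in_net(net: Optional[str], ip: str) -> bool:
    return net is None or ipaddress.ip_address(ip) in ipaddress.ip_network(net)

def filter_packet(proto: str, src: str, sport: int, dst: str, dport: int) -> str:
    for rule in ACL:
        if (match(rule["proto"], proto) and in_net(rule["src"], src)
                and match(rule["sport"], sport)
                and in_net(rule["dst"], dst) and match(rule["dport"], dport)):
            return rule["action"]
    return "deny"   # implicit deny when nothing matched

print(filter_packet("tcp", "10.1.1.5", 51000, "192.0.2.10", 443))    # permit
print(filter_packet("tcp", "203.0.113.9", 51000, "192.0.2.10", 22))  # deny
```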

C. Stateless Firewalling

When an early-generation firewall examined a particular packet, it was unaware of any prior packets that passed through it because it was agnostic of the Transmission Control Protocol (TCP) states that would have signaled this. Due to the nature of its operation, this type of Firewall is called a stateless firewall.

A stateless firewall is unable to distinguish the state of a particular packet. So, for example, it could not determine if a packet is part of an existing connection, trying to establish a legitimate new connection, or whether it is a manipulated, rogue packet. We then moved to a stateful inspection firewall and an application-aware form of next-generation firewalling.

D. Stateful Firewalling

Stateful inspection tracks connections using TCP and UDP port numbers and connection state, while an application-aware firewall examines Layer 7. So we are now at a stage where the firewall does some of everything, such as the Cisco Secure Firewall.
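
A minimal sketch of the difference, under simplified assumptions: a stateful firewall keeps a connection table keyed on the 5-tuple, so return traffic for an established outbound flow is permitted without needing its own inbound rule.

```python
# Minimal connection-tracking sketch: outbound flows create state,
# and return packets are permitted only if they match an existing entry.
conn_table = set()   # entries are (proto, src_ip, src_port, dst_ip, dst_port)

def outbound(proto, src, sport, dst, dport):
    conn_table.add((proto, src, sport, dst, dport))
    return "permit"

def inbound(proto, src, sport, dst, dport):
    # A reply matches if it reverses an existing outbound 5-tuple.
    if (proto, dst, dport, src, sport) in conn_table:
        return "permit (established)"
    return "deny (no matching state)"

outbound("tcp", "10.1.1.5", 51000, "198.51.100.20", 443)
print(inbound("tcp", "198.51.100.20", 443, "10.1.1.5", 51000))   # permit (established)
print(inbound("tcp", "198.51.100.99", 443, "10.1.1.5", 51000))   # deny (no matching state)
```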

CBAC Firewall

**Firewalling Use Cases**

1. Inbound Use Case

The firewall picks up every packet, looks at the different fields, examines it for signatures that could signal an attack in progress, and then re-packs and sends the packet out its interfaces. This technique is still relevant: it inspects inbound traffic to tell whether someone outside or inside is accessing the private applications you want to keep secure. So, looking at every packet remains relevant for the inbound traffic use case.

While almost everything is encrypted these days, you need to decrypt traffic to gain security value. Deep Packet Inspection (DPI) is still very relevant for inbound traffic, so we will continue to decrypt inbound traffic for complete application threat protection, ideally with minimal functional impact.

Example Technology: Sensitive Data Protection with Google Cloud


2. Outbound Use Case

Then, we need to look at outbound traffic. Here, things have changed considerably. Many users no longer pass through a firewall before reaching applications hosted outside the protection of your on-premises security stack and network. These are applications in the cloud, such as SaaS applications, which do not respond well when network devices in the middle interfere with their traffic.

Applications such as Office 365 are therefore designed to reduce the chance of any network or security device in the path peeking into their traffic; for example, they may use mutual certificate authentication with the service in the cloud. So there are a couple of options here beyond the traditional DPI approach used for inbound traffic.

**Improving Security: Understanding the network**

Understanding Network Scanning

Network scanning involves exploring computer networks to gather information about connected devices, open ports, and system vulnerabilities. Cybersecurity professionals gain valuable insights into the network’s architecture, potential entry points, and security risks by utilizing various scanning techniques and tools.

There are different types of network scans, each serving a specific purpose. Port scans identify open ports and services running on them, while vulnerability scans aim to pinpoint weaknesses within network devices, applications, or configurations. Additionally, network mapping scans visually represent the network’s structure, aiding in better understanding and management.

Example: Cisco Secure Firewall 3100

Cisco offers the Cisco Secure Firewall 3100, a mid-range model that can run either Adaptive Security Appliance (ASA) software for standard stateful firewall inspection or Firewall Threat Defense (FTD) software.

It can run one or the other, and it also supports clustering, multi-instance firewalling, and high availability, which we will discuss. In addition, the Secure Firewall 3100 series throughput range addresses use cases from the Internet edge to the data center and private cloud.

Cisco Secure Firewall 3100 is an advanced next-generation firewall that provides comprehensive security and high performance for businesses of all sizes. Its advanced security features can protect an organization’s most critical assets, from data, applications, and users to the network infrastructure. Cisco Secure Firewall 3100 offers an integrated threat defense system that combines intrusion prevention, application control, and advanced malware protection. This firewall is designed to detect and block malicious traffic and protect your network from known and unknown threats.

Diagram: Cisco Secure Firewall. Source: Cisco.

Adaptive Security Appliance (ASA) and Firewall Threat Defense (FTD)

The platforms can be deployed in Firewall (ASA) and dedicated IPS (FTD) modes. In addition, the 3100 series supports Q-in-Q (stacked VLAN) up to two 802.1Q headers in a packet for inline sets and passive interfaces. The platform also supports FTW (fail-to-wire) network modules.

Remember that you cannot mix and match ASA and FTD modes, although you can make FTD operate similarly to the ASA. The heart of the Cisco Secure Firewall is Snort, one of the most popular open-source intrusion detection and prevention systems, capable of real-time traffic inspection.

**IPS Engine – Snort**

What's powerful about the Cisco Secure Firewall is its high decryption performance due to the crypto engine. The firewall has an architecture built around decrypting traffic and delivers impressive performance. In addition, you can tune your CPU cores to perform more traditional ASA functionality, such as terminating IPsec and stateful firewall inspection.

In one scenario, we have an IPS engine (based on Snort) but give it only 10% of the cores, leaving 90% of the data plane for traditional firewalling. A VPN headend or basic stateful firewall would use more data-plane cores.

On the other hand, any heavy IPS and file inspection would be biased toward more "Snort" cores. Snort provides the IPS engine, and you can tailor the performance profile to your liking. CPU core allocation is configurable, although it is set statically rather than dynamically.

Secure Firewalling Features:

1. Secure Firewall Feature: Clustering

Your Secure Firewall deployment can also expand as your organization grows to support its network growth. You do not need to replace your existing devices for additional horsepower; you can add threat defense devices to your current deployment and group them into a single logical cluster to support additional throughput. 

A clustered logical device offers higher performance, scalability, and resiliency simultaneously. You can create a cluster between multiple chassis or numerous security modules of the same chassis. When a cluster is built with various independent chassis, it is called inter-chassis clustering.

2. Secure Firewall Feature: Multi-Instance

The Secure Firewall offers multi-instance capability powered by Docker container technology. It enables you to create and run multiple application instances using a small subset of a chassis’s hardware resources. You can independently manage the threat defense application instances as separate threat defense devices. Multi-instance capability enables you to isolate many critical elements.

3. Secure Firewall Feature: High Availability

In a high-availability architecture, one device operates actively while the other stays on standby. A standby device does not actively process traffic or security events. For example, suppose a failure is detected in the active device, or there’s any discontinuation of keepalive messages from the active device.

In that case, the standby device takes over the role of the active device and starts operating actively to maintain continuity in firewall operations. An active device periodically sends keepalive messages and replicates its configuration to the standby device. Therefore, the communication channel between the peers of a high-availability pair must be robust and low-latency.

Exploring the SASE Cloud

One way to examine SaaS-based applications and introduce some cloud security is by using Cisco Umbrella with the SASE cloud. The SASE cloud can also include a cloud access security broker (CASB) known as Cloudlock. The Cisco Umbrella CASB acts like a broker that hooks into the application's back end to determine users' actions. It does this by querying the service via Application Programming Interface (API) calls rather than by DPI.

Diagram: Cisco Umbrella. Source: Cisco.

Cisco Cloudlock is part of the SASE cloud and provides a cloud-native cloud access security broker (CASB) that protects your cloud users, data, and apps. Cloudlock's simple, open, and automated approach uses APIs to manage the risks in your cloud app ecosystem. With Cloudlock, you can combat data breaches more quickly while meeting compliance regulations.

Cisco Umbrella also has a firewall known as the Cisco Umbrella Firewall. We can take the Cisco Umbrella Firewall to improve its policy decision using information gleaned from the CASB. In addition, we map network flows to a specific user action via cloud applications and CASB solutions. So this is one area you can look into.

Extending the Firewall with SASE Cloud

Cisco Umbrella Firewall:

The SASE cloud with the Cisco Umbrella firewall is a good solution that can be combined with the on-premise firewall. So, if you have FTD at the edge of your network, why would you need to introduce a Cisco Umbrella firewall or any other SASE technologies? Or, if you have a SASE cloud with Cisco Umbrella, why would you need FTD?

First, it makes sense to process specific traffic locally. However, the two categories of traffic that Cisco Umbrella excels in beyond any firewall are DNS and CASB. Your edge firewall is less effective against some outbound traffic, such as dynamically changing DNS and undecryptable TLS connections. DNS is Cisco Umbrella’s bread and butter.

DNS Requests Precede the IP Connection

Knowledge Check: Cisco DNS-layer security.

DNS requests precede the IP connection, enabling DNS resolvers to log requested domains over any port or protocol for all network devices, office locations, and roaming users. As a result, you can monitor DNS requests and subsequent IP connections to improve the accuracy and detection of compromised systems, security visibility, and network protection. 

You can also block requests to malicious destinations before a connection is even established, stopping threats before they reach your network or endpoints. Under the hood, Cisco Umbrella cleans your DNS traffic and stops attacks before any malicious connection is made.
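
Here is a minimal sketch of the DNS-layer idea: the requested domain is checked against a blocklist before any IP connection is attempted. The blocklist entries and sinkhole address are illustrative placeholders, not Umbrella's actual feeds or behavior.

```python
import socket

# Illustrative blocklist; a real DNS-layer service draws on threat intelligence feeds.
BLOCKED_DOMAINS = {"malware.example.net", "phishing.example.org"}
SINKHOLE_IP = "0.0.0.0"

def resolve(domain: str) -> str:
    """Resolve a domain only if it is not on the blocklist; otherwise sinkhole it."""
    if domain.lower().rstrip(".") in BLOCKED_DOMAINS:
        print(f"BLOCKED before connection: {domain}")
        return SINKHOLE_IP                      # the client never reaches the real host
    return socket.gethostbyname(domain)         # normal resolution for allowed domains

print(resolve("malware.example.net"))   # sinkholed
print(resolve("example.com"))           # resolved normally
```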

SASE Cloud: Cisco Umbrella CASB.

Also, you often cannot decrypt SaaS application traffic at the edge firewall to apply CASB controls, so the firewall cannot detect whether a user is carrying out data exfiltration.

With the SASE cloud, Cisco Umbrella, and its integrated CASB offering, we get better visibility into this type of traffic and can apply a risk category to certain kinds of activity. So now we have an excellent combination: the cloud security stack does what it does best and takes processing cycles away from the firewall.

**Cisco Umbrella Integration**

The Cisco Secure Firewall offers DNS redirection to the Cisco Umbrella firewall. The on-premise firewall communicates with Cisco Umbrella via API and pulls in the existing DNS policy, so the Umbrella DNS policies can be used alongside the current firewalling policies. Recently, Cisco has gone one step further: you can establish SIG tunnels between the Cisco Secure Firewall, managed from the Firewall Management Center (FMC), and Cisco Umbrella.

Each tunnel has a per-tunnel custom IKE ID, and you can bundle multiple tunnels to Umbrella, allowing load balancing across the tunnels. Once set up, specific kinds of traffic can be steered down each tunnel.

**Endpoint controls**

Then we have the endpoint, such as your desktop computer or phone. We can collect a wealth of information about each network connection there and feed it into the firewall as metadata, providing both the Cisco Umbrella firewall and the Cisco Secure Firewall with additional context for improved policy.

With this approach, neither the Cisco Secure Firewall nor the Cisco Umbrella firewall needs to decrypt the traffic. Instead, we get client context via passive fingerprinting using an agent on the endpoint, yielding a wealth of attributes you cannot get with DPI. So we can move away from relying on DPI for everything and augment it with these other components for better visibility.

Summary: Cisco Secure Firewall with SASE Cloud

In today’s rapidly evolving digital landscape, organizations face the challenge of ensuring robust security while embracing the benefits of cloud-based solutions. Cisco Secure Firewall with SASE (Secure Access Service Edge) Cloud offers a comprehensive and streamlined approach to address these concerns. This blog post delved into the features and benefits of this powerful combination, highlighting its ability to enhance security, simplify network management, and optimize performance.

Understanding Cisco Secure Firewall

Cisco Secure Firewall serves as the first line of defense against cyber threats. Its advanced threat detection capabilities and deep visibility into network traffic provide proactive protection for organizations of all sizes. Cisco Secure Firewall ensures a secure network environment by preventing unauthorized access, blocking malicious content, or detecting and mitigating advanced threats.

Introducing SASE Cloud

On the other hand, SASE Cloud revolutionizes how organizations approach network and security services. SASE Cloud offers a scalable and agile solution by converging network functions and security services into a unified cloud-native platform. It combines features such as secure web gateways, data loss prevention, firewall-as-a-service, and more, all delivered from the cloud. This eliminates the need for costly on-premises infrastructure and allows businesses to scale their network and security requirements effortlessly.

The Power of Integration

When Cisco Secure Firewall integrates with SASE Cloud, it creates a formidable combination that enhances security posture while delivering optimal performance. The integration allows organizations to extend their security policies seamlessly across the entire network infrastructure, including remote locations and cloud environments. This unified approach ensures consistent security enforcement, reducing potential vulnerabilities and simplifying management overhead.

Simplified Network Management

One of the key advantages of Cisco Secure Firewall with SASE Cloud is its centralized management and control. Administrators can easily configure and enforce security policies, monitor network traffic, and gain valuable insights through a single pane of glass. This simplifies network management, reduces complexity, and enhances operational efficiency, enabling IT teams to focus on strategic initiatives rather than mundane tasks.

Conclusion:

In conclusion, the combination of Cisco Secure Firewall with SASE Cloud provides organizations with a robust and scalable security solution that meets the demands of modern networks. By integrating advanced threat detection, cloud-native architecture, and centralized management, this potent duo empowers businesses to navigate the digital landscape confidently. Experience the benefits of enhanced security, simplified management, and optimized performance by adopting Cisco Secure Firewall with SASE Cloud.

SASE Model

SASE Model | Zero Trust Identity


In today's ever-evolving digital landscape, the need for robust cybersecurity measures is more critical than ever. Traditional security models are being challenged by the growing complexity of threats and the increasing demand for remote work capabilities. This blog post delves into the SASE (Secure Access Service Edge) model, highlighting its significance in achieving zero trust identity and enhancing overall security posture.

The SASE model combines network security and wide-area networking (WAN) capabilities into a unified cloud-based service. It integrates security functions like secure web gateways, data loss prevention, firewall-as-a-service, and more, with WAN capabilities such as SD-WAN (Software-Defined Wide Area Networking). This convergence allows organizations to simplify their security architecture while ensuring consistent protection across all endpoints.

Zero trust is an essential principle within the SASE model. Unlike traditional security models that rely on perimeter-based defenses, zero trust operates on the assumption that no user or device should be inherently trusted. Instead, access is granted based on dynamic factors such as user behavior, device health, and contextual data. This approach minimizes the attack surface and strengthens overall security.

Identity as the New Perimeter: In the SASE model, identity becomes the new perimeter. By adopting zero trust principles and leveraging technologies like multi-factor authentication, biometrics, and continuous monitoring, organizations can ensure that only authorized users with verified identities gain access to sensitive resources. This shift from network-centric security to identity-centric security enables a more granular and robust approach to protecting critical assets.

Strengthening Security with SASE and Zero Trust Identity: Bringing together the SASE model and zero trust identity strengthens an organization's security posture in multiple ways. By integrating security and networking functions into a unified service, organizations can enforce consistent security policies across all endpoints, regardless of their location. This approach enhances visibility, mitigates risks, and allows for more efficient incident response.

Implementing the SASE model with zero trust identity brings several benefits. These include improved threat detection and response capabilities, reduced complexity in managing security infrastructure, enhanced user experience through seamless and secure access, and increased agility to adapt to changing business needs. Furthermore, the consolidation of security functions in the cloud reduces operational costs and simplifies maintenance.

The SASE model, with its focus on zero trust identity, revolutionizes the way organizations approach cybersecurity. By shifting the security paradigm from perimeter-based defenses to identity-centric protection, businesses can adapt to the evolving threat landscape and ensure a higher level of security. Embracing the SASE model and zero trust identity is a proactive step towards safeguarding critical assets and empowering secure digital transformation.

Highlights: SASE Model | Zero Trust Identity

Understanding the SASE Model

– The SASE model, coined by Gartner, combines network security and wide area networking (WAN) capabilities into a unified, cloud-native platform. It revolves around converging networking and security functions, enabling organizations to simplify their infrastructure while enhancing security and performance. By consolidating various security services like secure web gateways, firewall-as-a-service, data loss prevention, and more, the SASE model offers a holistic approach to protecting networks and data.

– To implement the SASE model effectively, it is crucial to understand its key components. These include secure access, network security functions, cloud-native architecture, and global points of presence (PoPs). Secure access ensures that users can connect to resources securely, regardless of location.

– Network security functions encompass various security services, including firewalling, secure web gateways, and zero-trust network access. The cloud-native architecture leverages the scalability and agility of the cloud, while global PoPs enable organizations to achieve optimal performance and low latency.

Key SASE Model Benefits:

A. The adoption of the SASE model brings many benefits to organizations. First, it simplifies network architecture, reducing the complexity and costs of managing multiple security appliances.

B. Second, regardless of location, it provides consistent and robust security across all users and devices. This is particularly valuable in today’s remote work and mobile workforce era.

C. Additionally, the SASE model enhances performance by leveraging cloud-native technologies and global PoPs, ensuring seamless connectivity and reduced latency.

**Challenge: Traditional Security Devices**

Firewalls and other security services will still have a crucial role, but we must modernize the solution, especially regarding encrypted traffic and applying policies at an enterprise-wide scale. A good approach is to start offloading functions to the SASE solution, such as Umbrella SASE. The SASE model is more of a journey than a product you can switch on and could take three to five years.

**Challenge: New Cloud Locations**

The enterprise data center’s virtual private network (VPN) must remain. Even though most applications are SaaS-based, on-premise applications will still be around for compliance and security, or they will be more complex to offload to the Internet. This could be partner resources. We need a solution to satisfy all these access requirements: cloud and on-premises application access. So, we need VPN access to the enterprise data center’s enterprise application and protected DIA for SaaS-based applications.

Cisco SASE with Cisco Umbrella

Once you have a SASE solution, you need to evolve it. The SASE model is unlike installing a firewall and configuring policies; you can add and enhance your SASE technology in many ways to increase your security posture. With Umbrella SASE, we are moving our security to the cloud and expanding this with the Cisco Umbrella platform and Zero Trust Identity from Cisco Duo. First, Cisco Umbrella provides the core SASE technology security functionality, such as DNS-layer filtering, and then Cisco Duo focuses on the Zero Trust Identity side.

Example SASE Technology: IPS/IDS

Understanding Suricata

Suricata is an open-source Intrusion Prevention System (IPS) and Intrusion Detection System (IDS) that offers real-time threat detection and prevention capabilities. It employs robust signature-based detection, protocol analysis, and behavioral monitoring to identify and block malicious network traffic.

Suricata seamlessly integrates with Security Information and Event Management (SIEM) solutions to enhance its effectiveness. This integration enables centralized log management, correlation of security events, and streamlined incident response. By aggregating and analyzing Suricata’s alerts within a SIEM, security teams gain valuable insights into potential threats and can swiftly mitigate risks.

Understanding Zero Trust Identity

Zero-trust identity is a security framework that operates on the principle of “never trust, always verify.” It challenges the traditional perimeter-based security model by assuming that no user or device should be inherently trusted, regardless of location or network environment. Instead, zero-trust identity emphasizes continuous authentication and authorization processes to ensure secure resource access.

**Key Zero Trust Identity Points**

Several key components need to be in place to implement zero-trust identity effectively. These include multi-factor authentication (MFA), robust identity and access management (IAM) systems, risk-based access controls, and comprehensive visibility and monitoring capabilities. Each component plays a crucial role in establishing a solid zero-trust identity framework.

The adoption of zero trust identity offers various benefits to organizations. Firstly, it significantly reduces the risk of data breaches and unauthorized access by implementing strict access controls and authentication methods.

Secondly, zero trust identity enhances visibility into user activities, enabling quick detection of and response to potential threats. Lastly, this approach gives organizations a more flexible and scalable security infrastructure, accommodating the needs of a distributed workforce and cloud-based environments.

Identity-centric Focus

The identity-centric focus of zero trust takes an approach to security that ensures every person and every device granted access is who and what they claim to be. It achieves this by focusing on the following key principles:

  1. The network is always assumed to be hostile. 
  2. External and internal threats always exist on the network. 
  3. Network locality is not sufficient on its own to decide trust in a network. As discussed, other contextual factors must also be taken into account.
  4. Every device, user, and network flow is authenticated and authorized. All of this must be logged.
  5. Security policies must be dynamic and calculated from as many data sources as possible.

Example: Security Scan with Lynis

Lynis is an open-source security auditing tool that assesses the security of Linux and Unix-based systems. It performs a comprehensive scan, analyzing various aspects such as configuration settings, software packages, file integrity, and user accounts. By conducting an in-depth examination, Lynis helps identify potential vulnerabilities and provides recommendations for remediation.

Zero Trust Protection with Vault

**Authentication: Proving Your Identity in the Digital World**

Authentication is the process of verifying who you are before granting access to any system. With Vault, this process is streamlined through a variety of methods, ranging from username and password combinations to more sophisticated options like multi-factor authentication (MFA) and token-based systems. By integrating with LDAP, OAuth, and other identity systems, Vault ensures that the right people have access to the right resources without compromising security.

**Authorization: Controlling Access with Precision**

Once authentication is confirmed, the next step is authorization—determining what an authenticated user is allowed to do. Vault employs policies to manage permissions effectively. These policies are written in a high-level language that allows administrators to specify precise access controls. Whether it’s read-only access for certain users or full administrative privileges, Vault’s policy-based approach ensures that users only interact with the data and systems they are permitted to, minimizing risks and enhancing security.
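
As a rough sketch of these two steps against Vault's HTTP API (using the requests library): authenticate with the userpass method, then read a KV v2 secret with the returned token, where the token's attached policies decide whether the read is allowed. The address, path, and credentials are placeholders for a dev-style setup, not a production configuration.

```python
import requests

VAULT_ADDR = "http://127.0.0.1:8200"     # placeholder address for a local dev Vault server

def login_userpass(username: str, password: str) -> str:
    """Authenticate with the userpass auth method and return a client token."""
    resp = requests.post(
        f"{VAULT_ADDR}/v1/auth/userpass/login/{username}",
        json={"password": password},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["auth"]["client_token"]

def read_kv_secret(token: str, path: str) -> dict:
    """Read a secret from the KV v2 engine mounted at 'secret/'."""
    resp = requests.get(
        f"{VAULT_ADDR}/v1/secret/data/{path}",
        headers={"X-Vault-Token": token},
        timeout=5,
    )
    resp.raise_for_status()    # a 403 here means the token's policy denies this path
    return resp.json()["data"]["data"]

token = login_userpass("app-user", "example-password")   # placeholder credentials
print(read_kv_secret(token, "myapp/config"))
```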

**Identity: The Cornerstone of Secure Access**

Identity management is more than just usernames and passwords; it’s about ensuring that every entity, whether human or machine, is uniquely identified and managed. Vault’s identity features allow for seamless integration with existing identity providers, creating a unified access management system. By leveraging identity, Vault can simplify access management across diverse environments, making it easier to audit and manage security policies and ensuring that every access request is legitimate.


**What is Identity-Aware Proxy?**

Identity-Aware Proxy is a Google Cloud service that verifies user identities and provides secure access to applications running on Google Cloud Platform (GCP). Unlike traditional security models that rely solely on network-level controls, IAP adopts a zero-trust approach. This means it considers identity as the primary perimeter, ensuring that only authenticated users can access your applications, regardless of their location or device.

**How Does IAP Work?**

At its core, IAP functions as a gatekeeper, intercepting requests to your applications and checking if the user has the necessary permissions. It leverages Google’s comprehensive identity and access management (IAM) infrastructure to authenticate users and enforce access policies. When a user attempts to connect to your application, IAP verifies their credentials, checks their assigned roles, and evaluates any conditional access policies before granting or denying access.

**Key Benefits of Using IAP**

1. **Enhanced Security:** By focusing on user identity rather than network location, IAP reduces the risk of unauthorized access. This zero-trust approach is especially critical in today’s landscape, where remote work is increasingly common.

2. **Simplified Access Management:** IAP integrates seamlessly with Google Cloud IAM, allowing you to define and manage user roles and permissions from a centralized location. This simplifies the process of granting or revoking access as your team changes.

3. **Cost-Efficiency:** Since IAP operates at the application layer, it eliminates the need for complex VPN configurations and reduces the overhead associated with managing traditional network security measures.

**Implementing IAP in Your Environment**

Setting up IAP requires a few straightforward steps. First, ensure your applications are deployed on GCP and accessible through HTTPS. Next, configure OAuth 2.0 credentials to enable IAP to authenticate users. Finally, define your access policies using Google Cloud IAM, specifying which users or groups have permission to access each application. Google provides detailed documentation and support to guide you through the setup process.


Google Cloud IAM

## Understanding Google Cloud’s IAM

Google Cloud’s IAM is a powerful tool that allows organizations to manage access control by defining who (identity) has what access (roles) to which resources. It operates on the principle of least privilege, ensuring that users have only the permissions necessary to perform their jobs. With IAM, administrators can granularly control access, monitor permissions, and audit activities, thereby enhancing security and compliance.

## The Role of Zero Trust in IAM

Zero Trust is a security framework that challenges the traditional perimeter-based security model. It operates on the principle of “never trust, always verify,” meaning every request to access resources is authenticated and authorized, regardless of its origin. Google Cloud’s IAM plays a crucial role in implementing a Zero Trust architecture by enforcing strict identity verification, using multi-factor authentication, and constantly monitoring user activities to detect and respond to anomalies.

## Key Features of Google Cloud’s IAM

Google Cloud’s IAM offers several features that align with Zero Trust principles:

– **Role-Based Access Control (RBAC):** Assign roles based on job functions, ensuring users only have access to what they need (see the sketch after this list).

– **Fine-Grained Access Control:** Define access at a detailed level, including specific resources and actions.

– **Audit Logs:** Maintain comprehensive logs of all access and changes, providing transparency and aiding in compliance.

– **Integration with Identity Providers:** Seamlessly integrate with various identity providers to manage identities and access from a central point.
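
To illustrate the RBAC idea referenced in the list above, here is a minimal, generic sketch of role-based permission checks. It is not the Google Cloud IAM API; the role names, permissions, and bindings are invented for the example.

```python
# Generic RBAC sketch: roles map to permission sets, and a request is allowed
# only if one of the caller's roles carries the required permission.
ROLE_PERMISSIONS = {
    "viewer": {"storage.objects.get", "storage.objects.list"},
    "editor": {"storage.objects.get", "storage.objects.list", "storage.objects.create"},
}

BINDINGS = {  # identity -> roles; least privilege means granting the narrowest role
    "alice@example.com": {"viewer"},
    "ci-service@example.com": {"editor"},
}

def is_allowed(identity: str, permission: str) -> bool:
    roles = BINDINGS.get(identity, set())
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

print(is_allowed("alice@example.com", "storage.objects.get"))     # True
print(is_allowed("alice@example.com", "storage.objects.create"))  # False: viewer cannot write
```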


Starting Endpoint Security 

Understanding Endpoint Security

Endpoint security protects individual devices or endpoints that connect to a network. These endpoints include desktop computers, laptops, servers, and mobile devices. The primary goal of endpoint security is to prevent unauthorized access, detect potential threats, and respond to any security incidents promptly.

Address Resolution Protocol (ARP) plays a vital role in endpoint security. It maps an IP address to a corresponding MAC address within a local network. By maintaining an updated ARP table, network administrators can ensure that communication within the network remains secure and efficient.

Proper route configuration is another critical aspect of endpoint security. Routes determine how data packets are transmitted between different networks. By carefully configuring routes, network administrators can control traffic flow, prevent unauthorized access, and mitigate the risk of potential attacks.

Netstat, a command-line tool, provides valuable insights into network connections and statistics. Using Netstat, network administrators can monitor active connections, identify potential security threats, and take appropriate measures to safeguard their endpoints. Regularly analyzing Netstat output can help detect suspicious activities or abnormal behavior within the network.
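
A minimal sketch that gathers the neighbor (ARP) table, routing table, and listening sockets on a Linux endpoint, assuming the standard iproute2 and ss tools are installed (the modern replacements for arp and netstat on most distributions).

```python
import subprocess

# Standard Linux tools for the checks described above (iproute2 and ss).
CHECKS = {
    "ARP / neighbor table": ["ip", "neigh", "show"],
    "Routing table":        ["ip", "route", "show"],
    "Listening sockets":    ["ss", "-tuln"],
}

for title, cmd in CHECKS.items():
    print(f"=== {title} ===")
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
        print(result.stdout or result.stderr)
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        print(f"Could not run {' '.join(cmd)}: {exc}")
```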

Detecting Authentication failures in logs

Understanding Syslog

Syslog is a standard protocol for message logging. It enables various devices and applications to send log messages to a central syslog server. The server is a centralized log repository, facilitating easy management and analysis. By tapping into syslog, security analysts gain access to a wealth of information about system events, network traffic, and potential security incidents.

Auth.log, short for authentication log, is a file specific to Unix-based systems. It records all authentication-related events, such as successful and failed login attempts, password changes, and user authentication errors. Analyzing the auth.log can provide crucial insights into potential security breaches, unauthorized access attempts, and suspicious user behavior.
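
As a small worked example, here is a sketch that scans an auth.log for failed SSH password attempts and counts them per source IP. The log path and message format follow the common Debian/OpenSSH layout, which can vary by distribution (RHEL-based systems use /var/log/secure).

```python
import re
from collections import Counter

AUTH_LOG = "/var/log/auth.log"   # Debian/Ubuntu default; adjust for your distribution
# Typical OpenSSH failure line: "Failed password for invalid user bob from 203.0.113.5 port 4242 ssh2"
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open(AUTH_LOG, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            user, src_ip = match.groups()
            failures[src_ip] += 1

# Flag sources with repeated failures, a common sign of brute forcing.
for src_ip, count in failures.most_common():
    if count >= 5:
        print(f"Possible brute force: {count} failed logins from {src_ip}")
```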

 

Understanding User Authentication

User authentication is the cornerstone of identity security in Linux. By implementing robust authentication protocols, such as password-based authentication or public key infrastructure (PKI), users can validate their identities and gain access to the system. Multifactor authentication (MFA) adds an extra layer of security by combining different authentication methods, further fortifying the system against unauthorized access.

Access Controls: Securing Identity

Access controls play a vital role in securing identity within Linux. By utilizing mechanisms like file permissions, ownership, and access control lists (ACLs), administrators can regulate user privileges and restrict unauthorized access to sensitive files and directories. Furthermore, the principle of least privilege (PoLP) should be applied, granting users only the permissions necessary to perform their designated tasks and minimizing potential security risks.

Understanding SELinux

SELinux, short for Security-Enhanced Linux, is a security module integrated within the Linux kernel. It provides a robust framework for mandatory access controls (MAC) and fine-grained access control policies. Unlike traditional Linux access control mechanisms, SELinux goes beyond simple user and group permissions, enabling administrators to define and enforce highly granular policies.

Enforcing Strong Access Control

SELinux plays a vital role in enhancing zero-trust endpoint security. Enforcing MAC policies and implementing strong access controls ensures that each endpoint adheres to the principle of least privilege. SELinux helps mitigate the potential damage by limiting the attacker’s capabilities even if an endpoint or credentials are compromised.

Related: Before you proceed, you may find the following posts helpful:

  1. SD WAN SASE
  2. Zero Trust SASE
  3. SASE Definition
  4. SASE Visibility

SASE Technology with Zero Trust Identity

**Centralized Security Stack**

When you think about it, these attack-surface challenges can only be understood by examining recent trends. Historically, most resources lived in the data center, and we could centralize our security stack. However, with users accessing the network from anywhere and public cloud apps with different connectivity characteristics to understand, we now have an internet/cloud-centric connectivity model. So we need to rethink how we facilitate these new communication flows.

As a first step, you don't need to throw out all your network and security appliances and jump straight to the SASE model. For an immediate design, you can augment your on-premises network security appliance with Umbrella SASE DNS-layer security. DNS-layer security is a good starting point with Cisco Umbrella, and only slight changes to your existing setup are required.

This way, you don’t need to make any significant architectural changes to get immediate benefits from SASE and its cloud-native approach to security.

SASE Technology with Zero Trust Identity

You can then further this SASE model to include Zero Trust Identity with, for example, Cisco Duo. With Cisco Duo, we are moving from inline security inspection on the network to securing users at the endpoint or the application layer. An actual Zero Trust Identity strategy changes the level of access or trust based on contextual data about the user or device requesting access.

**Identity – New Perimeter**

Now, we are heading into identity as the new perimeter. Identity, in its various forms, is the new perimeter. The new identity perimeter needs to be protected with other mechanisms you may have in your existing environments.

We have identity sprawl with potentially unprecedented access, making any of the numerous identities a high-value target for bad actors to compromise. For example, in a multi-cloud environment, it’s common for identities to be given a dangerous mix of entitlements, further extending the attack surface area security teams need to protect.

**Challenge: Identity attacks are hard to detect**

Nowadays, bad actors can use even more gaps and holes as entry points. With the surge of identities, including humans and non-humans, IT security administrators face the challenge of containing and securing the identity sprawl as the attack surface widens. 

What makes this worse is that identity-driven attacks are hard to detect. How do you know whether privileged controls are being used by a legitimate sysadmin or by a bad actor?

Security teams must find a reliable way to monitor suspicious user behavior to spot the signs of compromised identities. For this, behavioral analysis must happen in the background, looking for deviations from baselines. Once a deviation occurs, we can trigger automation, such as a SOAR playbook that, for example, performs threat hunting.
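
A minimal sketch of that baseline-and-deviation logic: learn a per-user profile of usual login countries and hours, then flag logins that fall outside it. The attributes and thresholds are illustrative; a real UEBA engine uses far richer models.

```python
from collections import defaultdict

# Learn a simple per-user baseline from historical logins: usual countries and hours.
history = [
    ("alice", "IE", 9), ("alice", "IE", 10), ("alice", "IE", 14),
    ("alice", "IE", 11), ("alice", "GB", 9),
]

baseline = defaultdict(lambda: {"countries": set(), "hours": set()})
for user, country, hour in history:
    baseline[user]["countries"].add(country)
    baseline[user]["hours"].add(hour)

def is_anomalous(user: str, country: str, hour: int) -> bool:
    profile = baseline[user]
    new_country = country not in profile["countries"]
    odd_hour = all(abs(hour - h) > 3 for h in profile["hours"])  # far from any usual hour
    return new_country or odd_hour   # a deviation could trigger a SOAR playbook / threat hunt

print(is_anomalous("alice", "IE", 10))   # False: matches the baseline
print(is_anomalous("alice", "RU", 3))    # True: new country and unusual hour
```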

Zero Trust & Port Knocking

Understanding Port Knocking

Port knocking is a clever security technique that involves a series of connection attempts to predefined closed ports on a server. These connection attempts act as a secret knock, effectively “opening” the desired port for subsequent communication. By hiding the open ports, port knocking reduces the visibility of services to potential attackers, making it harder to exploit vulnerabilities.

One significant advantage of port knocking is its ability to mitigate brute-force attacks. Since the ports are closed by default, unauthorized access attempts are futile. Port knocking adds an extra layer of obscurity, making it challenging for attackers to identify open ports and devise attack strategies. This technique can be beneficial in environments where traditional firewalls are impractical or insufficient.
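To illustrate the mechanics only, the sketch below shows what a port-knock client could look like: a series of short-lived TCP connection attempts to a pre-agreed sequence of closed ports. The host and sequence are placeholders, and a server-side daemon (for example, knockd or an equivalent firewall rule set) must be configured to watch for the same sequence.

```python
import socket
import time

def knock(host: str, sequence: list, delay: float = 0.3) -> None:
    """Send the secret knock: brief connection attempts to each port in order.
    The ports are expected to be closed, so the connects fail quickly."""
    for port in sequence:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((host, port))       # usually refused or times out
        except (socket.timeout, ConnectionRefusedError, OSError):
            pass                           # the attempt itself is the signal
        finally:
            s.close()
        time.sleep(delay)

# Placeholder host and knock sequence agreed with the server.
knock("203.0.113.10", [7000, 8000, 9000])
# After a correct knock, the server-side daemon opens the real service port
# (for example SSH on 22) for this source IP for a limited time.
```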

Example: Social-Engineering Toolkit. 

**Credential Attacks**

Credential harvester or phishing attacks aim to trick individuals into providing their sensitive login information through fraud. Attackers often create deceptive websites or emails resembling legitimate platforms or communication channels. These masquerading techniques exploit human vulnerabilities, such as curiosity or urgency, to deceive unsuspecting victims.

**Fake Login Pages**

To execute a successful credential harvester attack, perpetrators typically utilize various methods. One common approach involves creating fake login pages that mimic popular websites or services. Unaware of the ruse, unsuspecting victims willingly enter their login credentials, unknowingly surrendering their sensitive information to the attacker. Another technique involves sending phishing emails that appear genuine, prompting recipients to click on malicious links and unknowingly disclose their login details.

**Gain Entry to other Platforms**

The consequences of falling victim to a credential harvester attack can be severe. From personal accounts to corporate networks, compromised login information can lead to unauthorized access, data theft, identity theft, and financial fraud. Attackers often leverage their credentials to gain entry into other platforms, potentially compromising sensitive information and causing extensive damage to individuals or organizations.

**Mitigating the Risks**

Thankfully, several proactive measures can mitigate the risks associated with credential harvester attacks. First and foremost, user education plays a crucial role. Raising awareness about the existence of these attacks and providing guidance on identifying phishing attempts can empower individuals to make informed decisions. Implementing robust email filters, web filters, and antivirus software can also help detect and block suspicious activities.

One highly effective strategy to fortify defenses against credential harvester attacks is implementing two-factor authentication (2FA). By requiring an additional verification step, such as a unique code sent to a registered mobile device, 2FA adds an extra layer of security. Even if attackers obtain login credentials, they would still be unable to access the account without secondary verification.
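As a rough illustration of the TOTP-style verification behind many 2FA deployments, the sketch below uses the third-party pyotp package (an assumption about tooling, not something mandated by any particular product) to enroll a shared secret and verify a one-time code at login.

```python
import pyotp  # third-party library: pip install pyotp

# Each user enrolls once: the shared secret is stored server-side and
# provisioned into their authenticator app (for example, via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for the authenticator app:")
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login time, after the password check succeeds, the server asks for
# the current 6-digit code and verifies it against the shared secret.
user_supplied_code = totp.now()           # in real life this comes from the user
if totp.verify(user_supplied_code):
    print("Second factor accepted - grant access")
else:
    print("Second factor rejected - deny access")
```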

Example Technology: Scanning Networks

Understanding Network Scanning

Network scanning analyzes a network to detect active hosts, open ports, and potential security weaknesses. It provides a comprehensive view of the network infrastructure and aids in identifying possible entry points for malicious actors. By performing network scans, organizations can proactively strengthen their cybersecurity defenses.

A: Port Scanning: Port scanning is one of the fundamental techniques used in network scanning. It involves probing a target system for open ports, which are essential for establishing network connections. Tools like Nmap and Zenmap are commonly employed for port scanning, allowing security professionals to identify vulnerable services and potential attack vectors; a minimal connect-scan sketch follows these examples.

B: Vulnerability Scanning: Vulnerability scanning identifies weaknesses, flaws, or misconfigurations within network devices and systems. This technique provides valuable insights into potential security risks that attackers could exploit. Tools like Nessus and OpenVAS are widely used for vulnerability scanning, enabling organizations to prioritize and remediate vulnerabilities effectively.
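Here is the minimal connect-scan sketch mentioned above. It is a plain TCP connect() scan in Python, far less capable than Nmap, and should only be run against hosts you are authorized to test; the target address is a placeholder.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connect() to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan(host: str, ports: range) -> list:
    """Scan the given port range with a small thread pool."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = pool.map(lambda p: (p, check_port(host, p)), ports)
    return [port for port, is_open in results if is_open]

# Only scan hosts you are authorized to test (placeholder address).
open_ports = scan("192.0.2.10", range(1, 1025))
print("Open ports:", open_ports)
```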

Evolution to a SASE Model

The Internet: New Enterprise Network

We are stating that there has been a substantial evolution. The Internet is the new network, and users and apps are more distributed; the Internet is used to deliver those services. As a result, we have a greater dependency on the Internet, yet the Internet’s reliability is not consistent around the globe. For example, BGP is fragile, and BGP incidents occur constantly. We need to look at other tools and solutions to layer on top of what we have to improve Internet reliability.

BGP operates over TCP port 179. BGP TCP port 179 serves as the channel through which BGP routers establish connections and exchange routing information. It is the linchpin that facilitates dynamic routing decisions across diverse networks. However, due to its criticality, BGP port 179 has become an attractive target for malicious actors seeking to disrupt network operations or launch sophisticated attacks.

Common Threats Targeting BGP TCP Port 179

BGP TCP Port 179, the backbone of internet routing, faces various security threats. From route hijacking to Distributed Denial of Service (DDoS) attacks, the vulnerabilities within this port can have severe consequences on network stability and data integrity. Understanding these threats is essential in implementing effective countermeasures.
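As a small illustration of why port 179 is part of the attack surface, the sketch below simply checks whether a router accepts TCP connections on 179 from an arbitrary host. The router addresses are placeholders; the real mitigations (peer ACLs, MD5/TCP-AO authentication, GTSM) live in the router configuration, not in this script.

```python
import socket

def bgp_port_reachable(router_ip: str, timeout: float = 2.0) -> bool:
    """Check whether TCP port 179 on a router accepts connections from here.
    A well-hardened router only accepts BGP sessions from configured peers,
    so this should normally fail from an arbitrary host."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((router_ip, 179)) == 0

# Audit your own edge routers (placeholder addresses).
for router in ["198.51.100.1", "198.51.100.2"]:
    exposed = bgp_port_reachable(router)
    state = "REACHABLE - review filters" if exposed else "filtered/closed"
    print(f"{router}: port 179 {state}")
```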

Also, the cloud is the new data center. We no longer control and own the data and apps in the public cloud. Instead, these apps communicate with other public clouds and back to on-premises systems to access applications or databases that can’t be moved to the cloud. This is a new paradigm we must solve for. We are also reducing the types of applications on our enterprise network.

Most organizations are trying to minimize custom applications and standardize on SaaS-based applications. These applications are hosted in public and private clouds and accessed over the public Internet. Users also expect the same experience at home as in the office: when they return to the office, the network and security functions they relied on at home should stay the same.

**How To Approach The SASE Model**

How do you do this? Well, there are two ways. You can facilitate this with a bespoke, self-managed platform built from many on-premises network and security stacks, stitching the products together and then building your own PoPs. Or you can get away from this and consume it as a service from a SASE provider, giving you a cloud consumption model for all network and security services. This is the essence of the SASE model. Why not offload all the complexity to someone else?

A. Required SASE Technology: Encryption Traffic.

We have inline security services that inspect traffic and try to glean metadata about what is happening. Inspection was easy when we connected to a web page on port 80 and everything was in clear text; standard firewall monitoring could see what the user was doing. But now we have end-to-end encryption between the user device and the applications.

The old IDS/IPS and firewalls struggle to gain insights into encrypted traffic. We need complete visibility at the endpoint and the application layer to have more context and understand whether there is any malicious activity in the encrypted traffic. Appropriate visibility into encrypted traffic is also more important than having control.

Sensitive data protection

B. Required SASE Technology: SIEM with Splunk and Machine Data

You will also need a SIEM tool. Splunk can be used as the primary SIEM tool, collecting logs from various data sources to provide insights into the traffic traversing the network. Remember that machine data is everywhere and flows from all the devices we interact with, making up around 90% of today’s data. Harnessing this data can give you powerful security insights.

The machine data can be in many formats, such as structured and unstructured. As a result, it can be challenging to predict and process. There are plenty of options for storing data. Collecting all security-relevant data and turning all that data into actionable intelligence, however, is a different story.

Example Solution: Splunk

This is where Splunk comes into play: it can take any data and create an intelligent, searchable index, adding structure to previously unstructured data. This allows you to extract all sorts of insights, which can be helpful for security and user behavior monitoring. Splunk helps you quickly get to know your data. It is a big data platform for machine data that collects raw unstructured data and converts it into searchable events.
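As one hedged example of getting machine data into Splunk, the sketch below posts a JSON event to Splunk's HTTP Event Collector. The URL, token, index, and sourcetype are placeholders for your own deployment; check the exact HEC endpoint and options against your Splunk version.

```python
import json
import requests  # third-party: pip install requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                            # placeholder

def send_event(event: dict, sourcetype: str = "sase:dns", index: str = "security") -> None:
    """Forward one machine-data event to Splunk's HTTP Event Collector."""
    payload = {"event": event, "sourcetype": sourcetype, "index": index}
    resp = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=5,
        verify=True,  # keep TLS verification on in production
    )
    resp.raise_for_status()

# Example: a DNS-layer security event from an Umbrella-style log.
send_event({"user": "alice", "domain": "suspicious.example", "action": "blocked"})
```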

C. Required SASE Technology: Network Connectivity & Network Security

You want an any-to-any connectivity model, even though your users and applications are highly distributed. What types of technology do you need to support this? You need two essential things: network connectivity and security services. Network connectivity comes first, such as SD-WAN for branch locations, and then you can layer security services on top of this stack.

These services include BGP sinkhole, DNS protection, secure firewall, WAN encryption, web security, and Cisco Duo with zero-trust access. Many components need to work together, and you will use and manage many infrastructure components.

**1. End-to-End Visibility & Policy Maintenance**

We also need to have good visibility into the full end-to-end path. You can use your SASE technology with Cisco ThousandEyes for end-to-end visibility and tools to orchestrate all of this together. This has many challenges, such as building and operating these components together.

A better way is to have all these services available via one unified portal. For example, we can have network and security as a service, where you can add the services you need on demand to each Umbrella SASE PoP outsourced to a SASE provider. Some PoPs can filter at the DNS layer, while others run the entire security stack. You can turn functions on and off at will.

This should be wrapped up with policy maintenance so you can implement policy at any point, along with good scalability and multi-tenancy. Employing SASE can lower costs and reduce the skills burden: with the SASE model, you can outsource the complexity to experts and consume it as a service.

**2. The Issue of Provisioning**

With the Umbrella SASE PoP architecture, you can bring users closer to the application. We can also access a more modern and diverse toolkit by employing SASE technology. Remember that a big issue with on-premises hardware appliances is that we always overprovision them to handle traffic spikes that may only happen occasionally, which results in high management overhead.

With SASE, we have the agility of a software-based model where we can scale up and down, which you can’t do with a hardware-based model. If you need more scale, you or your Umbrella SASE provider can introduce another Virtual Network Function (VNF) and scale out in software instead of deploying a new hardware appliance.

Umbrella SASE – Starting

Start with DNS Protection

As a first step in the SASE model, we need DNS protection; it is the first SASE technology to be implemented in a SASE solution, and Cisco Umbrella can be used here. Cisco Umbrella is a recursive DNS service, and you can glean a lot of information from DNS requests, making DNS a great place to start with security. You can see attacks before they launch, have the correct visibility to protect access anywhere, and block and stop threats before the connection is made.

Below is a recap of DNS. DNS, by default, uses UDP port 53 and works with several record types.
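To make the recap concrete, here is a small sketch that queries a few record types through Cisco Umbrella's public resolvers using the third-party dnspython package (an assumption on tooling; verify the resolver addresses against current Cisco documentation).

```python
import dns.exception
import dns.resolver  # third-party: pip install dnspython

# Cisco Umbrella (OpenDNS) anycast resolvers - verify against current Cisco docs.
UMBRELLA_RESOLVERS = ["208.67.222.222", "208.67.220.220"]

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = UMBRELLA_RESOLVERS
resolver.lifetime = 3.0

for record_type in ("A", "AAAA", "MX", "TXT"):
    try:
        answer = resolver.resolve("example.com", record_type)
        ttl = answer.rrset.ttl
        values = [r.to_text() for r in answer]
        print(f"{record_type:5} TTL={ttl:<6} {values}")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{record_type:5} no records")
    except dns.exception.Timeout:
        print(f"{record_type:5} query timed out")
```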

**DNS and TTL** 

DNS can be updated dynamically and often uses very short TTLs. If you can interact with that traffic at a base level, regardless of where the user is, you can see what they are doing. For example, you can see what updates happen if a malware attack occurs. DNS is very lightweight; we can protect the endpoint and block malware before the connection is even attempted.

Suppose someone clicks on a phishing link or malware calls back to a C&C server for additional attack instructions. In that case, that connection never happens, and you don’t need to process the traffic across a firewall or other security stack that can add latency.

Connecting to Umbrella SASE does not cause latency issues. We can offload the hardware used for this protection into the cloud, so you don’t need additional hardware to accommodate traffic spikes and growth in DNS-layer protection. Cisco Umbrella gives you accuracy at the DNS layer without any overhead. You can control this traffic and see what is going on: who is connecting and from where. All of the traffic can be identified with DNS.

**Gaining Insight: DNS** 

Point the existing DNS resolver to Cisco Umbrella, then connect users and gain insight into DNS requests for on- or off-network traffic. We start with passive monitoring and then move on to blocking. You should be able to do this without re-architecting your network and while minimizing false positives. Therefore, pointing your existing DNS to Umbrella, a passive change, is a good starting point. Then, enable blocking internally based on policy.

In an enterprise network, endpoints typically point to internal DNS servers. You can modify the existing internal DNS servers to forward their traffic to Cisco Umbrella for screening. So the DNS query for internet-bound traffic goes to Cisco Umbrella, and Cisco Umbrella then carries out the recursive DNS queries to the authoritative DNS servers.

**The Role of Clients and Agents**

You should also deploy an Umbrella client or agent on your endpoints. An agent on the endpoint gives you additional visibility. What happens when users go home from the office? You want to maintain visibility, which can be achieved with an agent. What I like about SASE is that you can have an enterprise-wide policy in a few minutes. You can also increase your DNS performance by leveraging the SASE PoPs, which should be well integrated with an authoritative DNS server.

In summary, there are two phases. First, you can start with a network monitoring and blocking stage with DNS-layer filtering and then move to the endpoint, gaining visibility and lowering your attack surface. Now, we are heading into the zero-trust identity side of things.

Starting Zero Trust Identity: Cisco Duo

For additional security, we can look at Zero Trust Identity. This can be done with Cisco Duo, which provides Zero Trust Identity on the endpoint and ensures the device is healthy and secure. We need to trust the user, their endpoint, and the network they are on. In the past, we just used the IP address as an anchor for trust. With zero trust, we can have adaptive policies and risk-based decisions, enforce least privilege with, for example, just-in-time access, and bring in far more context than we had with IP addressing for security.

Cisco Duo Technologies for Umbrella SASE

Duo’s MFA (multi-factor authentication) and 2FA (two-factor authentication) app and access tools can help make security resilience easy for your organization with user-friendly features for secure access, strong authentication, and device monitoring. The following are some of the technologies used with Cisco Duo.

a. Multi-factor Authentication (MFA): Multi-factor authentication (MFA) is an access security product that verifies a user’s identity when logging in. Using secure authentication tools adds two or more identity-checking steps to user logins.

b. Adaptive Access: With adaptive access, we have security policies for every situation. Now, we can gain granular information about who can access what and when. Cisco Duo lets you create custom access policies based on role, device, location, and other contextual factors, so we can use much contextual information to make decisions.

c. Device Verification: Also, verify any device’s trust, identify risky devices, enforce contextual access policies, and report on device health using an agentless approach or by integrating your device management tools.

d. Single Sign-On (SSO): Single sign-on from Duo provides users with an easy and consistent login experience for any application, whether on-premises or cloud-based. With SSO, we have a single platform that we connect to for access to all of our applications, not just SaaS-based applications but also custom applications. CyberArk is good in this space, too.

Zero Trust Identity Technologies

a) Adaptive policies

First, adaptive policies. Cisco Duo has built a cloud platform where you can set up adaptive policies to check for anomalies and then give the user an additional check. This is like step-up authentication. Then, we move towards conditional access, a step beyond authentication. Conditional access goes beyond authentication to examine the context and risk of each access attempt. For example, contextual factors may include consecutive login failures, geo-location, type of user account, or device IP to either grant or deny access. Based on those contextual factors, it may be granted only to specific network segments. 

b) Risk-based decisions 

The identity solution should be configurable to allow SSO access, challenge the user with MFA, or block access based on predefined conditions set by policy. Look for a solution that supports a broad range of conditions, such as IP range, day of the week, time of day, time range, device OS, browser type, country, and user risk level (a simplified policy-evaluation sketch follows this list).

These context-based access policies should be enforceable across users, applications, workstations, mobile devices, servers, network devices, and VPNs. A key question is whether the solution makes risk-based access decisions using a behavior profile calculated for each user.

c) Enforce Least Privilege and JIT Techniques

Secure privileged access and manage entitlements. For this reason, many enterprises employ a least privilege approach, where access is restricted to the resources necessary for the end-user to complete their job responsibilities with no extra permissions. A standard technology here would be Just in Time (JIT). Implementing JIT ensures that identities have only the appropriate privileges, when necessary, as quickly as possible and for the least time required. 

A technology to enforce the least privilege is just-in-time (JIT) techniques that dynamically elevate rights only when needed. The solution allows for JIT elevation and access on a “by request” basis for a predefined period, with a full audit of privileged activities. Full administrative rights or application-level access can be granted, time-limited, and revoked.
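The sketch below is the simplified policy-evaluation example referenced earlier. It is purely illustrative: the context fields, thresholds, and country list are invented for the example, and a product such as Cisco Duo evaluates far richer signals, but it shows how contextual factors can map to allow, step-up (MFA), or deny decisions.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    country: str
    device_managed: bool
    consecutive_failures: int
    user_risk_score: float      # 0.0 (low) to 1.0 (high), from behavior analytics
    hour_of_day: int

ALLOWED_COUNTRIES = {"IE", "GB", "US"}   # illustrative policy values
BUSINESS_HOURS = range(7, 20)

def access_decision(ctx: AccessContext) -> str:
    """Return 'allow', 'mfa' (step-up authentication), or 'deny'."""
    if ctx.consecutive_failures >= 5 or ctx.user_risk_score > 0.9:
        return "deny"
    if (ctx.country not in ALLOWED_COUNTRIES
            or not ctx.device_managed
            or ctx.hour_of_day not in BUSINESS_HOURS
            or ctx.user_risk_score > 0.5):
        return "mfa"              # challenge with an additional factor
    return "allow"                # low-risk, trusted context: SSO is enough

print(access_decision(AccessContext("alice", "IE", True, 0, 0.1, 10)))   # allow
print(access_decision(AccessContext("bob", "RU", False, 1, 0.4, 3)))     # mfa
print(access_decision(AccessContext("eve", "US", True, 7, 0.95, 11)))    # deny
```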

Summary: SASE Model | Zero Trust Identity

Organizations face numerous challenges in ensuring secure and efficient network connectivity in today’s rapidly evolving digital landscape. This blog post delved into the fascinating world of the Secure Access Service Edge (SASE) model and its intersection with the Zero Trust Identity framework. Organizations can fortify their networks and safeguard their critical assets by understanding the key concepts, benefits, and implementation considerations of these two security approaches.

Understanding the SASE Model

The SASE Model, an innovative framework introduced by Gartner, combines network security and wide-area networking into a unified cloud-native service. This section explores the core principles and components of the SASE Model, such as secure web gateways, data loss prevention, and secure access brokers. The SASE Model enables organizations to embrace a more streamlined and scalable approach to network security by converging network and security functions.

Unpacking Zero Trust Identity

Zero-trust identity is a security paradigm emphasizing continuous verification and granular access controls. This section delves into its fundamental principles, including the concepts of least privilege, multifactor authentication, and continuous monitoring. By adopting a zero-trust approach, organizations can mitigate the risk of unauthorized access and minimize the impact of potential security breaches.

Synergies and Benefits

This section explores the synergies between the SASE Model and Zero Trust Identity. Organizations can establish a robust security posture by leveraging the SASE Model’s network-centric security capabilities alongside the granular access controls of Zero Trust Identity. The seamless integration of these approaches enhances visibility, minimizes complexity, and enables dynamic policy enforcement, empowering organizations to protect their digital assets effectively.

Implementation Considerations

Implementing the SASE Model and Zero Trust Identity requires careful planning and consideration. This section discusses key implementation considerations, such as organizational readiness, integration challenges, and scalability. Organizations can successfully deploy a comprehensive security framework that aligns with their unique requirements by addressing these considerations.

Conclusion: In conclusion, the SASE Model and Zero Trust Identity are two powerful security approaches that, when combined, create a formidable defense against modern threats. Organizations can establish a robust, scalable, and future-ready security posture by adopting the SASE Model’s network-centric security architecture and integrating it with the granular access controls of Zero Trust Identity. Embracing these frameworks enables organizations to adapt to the evolving threat landscape, protect critical assets, and ensure secure and efficient network connectivity.


SASE Visibility with Cisco ThousandEyes

SASE Visibility with Cisco ThousandEyes

In today's interconnected digital landscape, enterprises are increasingly adopting Secure Access Service Edge (SASE) solutions to streamline their network and security infrastructure. One key aspect of SASE implementation is ensuring comprehensive visibility into the network performance and security posture. In this blog post, we will explore how Cisco ThousandEyes can enhance SASE visibility, empowering organizations to optimize their network operations and ensure a secure and seamless user experience.

SASE visibility refers to the ability to monitor and analyze network traffic, performance metrics, and security events across the entire SASE architecture. It involves gaining insights into user experience, application performance, network latency, and security threats. By leveraging Cisco ThousandEyes, organizations can achieve end-to-end visibility, enabling them to proactively identify and address any potential issues that may impact their SASE deployment.

- Monitoring User Experience: User experience is a critical aspect of SASE visibility. With Cisco ThousandEyes, organizations can monitor real-time user experience metrics such as application response time, page load time, and transaction success rates. This granular visibility helps IT teams quickly identify and resolve performance bottlenecks, ensuring a seamless user experience regardless of user location or device.

- Ensuring Application Performance: Application performance is another crucial component of SASE visibility. Cisco ThousandEyes enables organizations to monitor and analyze application performance across the SASE architecture, including public clouds, data centers, and remote locations. By leveraging comprehensive network measurements and deep packet inspection, IT teams can proactively optimize application delivery, ensuring high performance and availability.

- Detecting and Mitigating Security Threats: Security is paramount in any SASE implementation. Cisco ThousandEyes offers robust security monitoring capabilities, allowing organizations to detect and mitigate potential threats in real-time. With advanced threat intelligence and anomaly detection, IT teams can identify and respond to security incidents promptly, safeguarding critical assets and ensuring compliance with industry regulations.

Cisco ThousandEyes plays a pivotal role in enhancing SASE visibility for organizations. By providing comprehensive insights into user experience, application performance, and security threats, Cisco ThousandEyes empowers IT teams to optimize network operations, ensure a seamless user experience, and protect against potential risks. With its advanced monitoring and analysis capabilities, Cisco ThousandEyes is a valuable tool in the SASE journey, enabling organizations to achieve greater visibility, security, and performance.

Highlights: SASE Visibility with Cisco ThousandEyes

Understanding SASE Visibility

SASE visibility refers to gaining deep insights and real-time analytics into network traffic, security events, and user behavior across the entire network infrastructure. By leveraging advanced technologies such as AI and machine learning, SASE visibility gives organizations a comprehensive view of their network, enabling proactive threat detection, incident response, and policy enforcement.

### The Importance of Visibility in SASE

Visibility is the backbone of any effective SASE architecture. Without it, organizations are essentially navigating the digital world blindfolded. Visibility within SASE encompasses the ability to monitor traffic, detect anomalies, and enforce policies in real-time. This capability is crucial for identifying potential threats, ensuring compliance, and optimizing network performance. By having a clear line of sight into every activity within the network, businesses can proactively address vulnerabilities before they become detrimental.

### Implementing SASE Visibility in Your Organization

Implementing SASE visibility requires a strategic approach. Start by assessing your current network architecture and identifying areas that could benefit from enhanced visibility. Invest in solutions that offer comprehensive monitoring and analytics capabilities.

Training your IT staff to interpret data insights effectively is also crucial in maximizing the benefits of SASE visibility. Collaboration with trusted vendors can further streamline the integration process, ensuring a smooth transition to a more secure and efficient network environment.

SASE visibility comprises several vital components that work in synergy to deliver comprehensive insights:

1. Network Traffic Monitoring: SASE visibility empowers organizations to monitor network traffic at a granular level, identifying potential security gaps, abnormal behavior, and bandwidth bottlenecks. This enables timely remediation and ensures a seamless user experience.

2. Application Visibility: With SASE, organizations gain unparalleled application usage and performance visibility. This allows for practical application control, optimization, and prioritization, ensuring critical business applications receive the necessary resources and identifying potential security risks.

3. User Behavior Analytics: SASE visibility goes beyond traditional user monitoring, employing advanced analytics to detect and respond to suspicious user behavior. Organizations can swiftly identify anomalies and mitigate potential threats by establishing baseline behavior patterns.

SASE Visibility with Cisco ThousandEyes

Visibility forms the foundation of effective network management, security, and troubleshooting. In the context of SASE, visibility refers to the ability to monitor and gain insights into network traffic, application performance, security threats, and user experience. Due to SASE’s distributed and dynamic nature, traditional monitoring tools fail to provide visibility. This is where Cisco ThousandEyes steps in.

**Network Intelligence Platform**

Cisco ThousandEyes is a comprehensive network intelligence platform that empowers organizations with end-to-end visibility across their SASE infrastructure. It combines network monitoring, application performance monitoring, and security capabilities to deliver a holistic view of the network ecosystem. By leveraging active and passive monitoring techniques, ThousandEyes enables organizations to proactively identify and resolve issues, ensure optimal application performance, and strengthen security posture.

A. Network Monitoring: Cisco ThousandEyes provides real-time visibility into network paths, latency, and packet loss, helping organizations identify bottlenecks and optimize traffic routing. With its global network of monitoring agents, it offers insights into network performance from various locations, ensuring a comprehensive view.

B. Application Performance Monitoring: Ensuring optimal application performance is critical in a SASE environment. ThousandEyes monitors application performance metrics, such as response time, throughput, and availability, enabling organizations to identify and troubleshoot performance issues.

C. Security Monitoring: Cisco ThousandEyes helps organizations monitor security threats within their SASE architecture. It leverages deep packet inspection and threat intelligence to detect and mitigate potential security risks. By providing visibility into traffic patterns and anomalies, ThousandEyes strengthens security posture and enhances incident response capabilities.

Network & Application Visibility Technologies:

**Understanding SPAN**

SPAN, also known as port mirroring, is a feature that enables the monitoring of network traffic on a switch. It replicates packets from one or more source ports to a destination port, where a network analyzer or monitoring tool can capture and analyze the traffic. By implementing SPAN, network administrators gain valuable insights into network behavior, troubleshoot issues, and ensure network security.

**SPAN Configuration**

Configuring SPAN on Cisco NX-OS is straightforward. Administrators can define the source ports from which traffic will be mirrored and specify the destination port where the mirrored traffic will be sent. Additionally, Cisco NX-OS provides flexible options to configure SPAN filters, allowing administrators to selectively mirror specific types of traffic based on various criteria, such as VLANs, protocols, or IP addresses.

**Key SPAN Benefits**

The benefits of utilizing SPAN on Cisco NX-OS are vast. Firstly, it enables network administrators to conduct in-depth network traffic analysis, helping them identify potential bottlenecks, anomalies, or security threats. This visibility is essential for optimizing network performance and maintaining a secure environment. Secondly, SPAN allows seamless integration with third-party monitoring tools, providing network administrators a wide range of options for analyzing captured traffic. Lastly, SPAN on Cisco NX-OS is highly scalable, supporting the monitoring of multiple source ports and accommodating the evolving needs of network infrastructures.

Understanding sFlow

sFlow is a sampling technology that enables network devices to collect and analyze traffic data. By sampling packets regularly, sFlow provides a representative view of the overall network traffic, allowing administrators to identify and address potential issues efficiently. It offers a scalable and non-intrusive approach to network monitoring.

sFlow provides valuable insights into network performance metrics, such as bandwidth utilization, top talkers, and application-level statistics. Network administrators can proactively identify and address potential bottlenecks by analyzing these metrics, ensuring optimal network performance. The real-time nature of sFlow allows for quick troubleshooting and capacity planning, ultimately improving the overall user experience.
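To show the kind of analysis that happens at a SPAN destination or alongside sampled flow data, here is a minimal top-talkers sketch using the third-party scapy package. The interface name is a placeholder for the NIC attached to the mirror port, and the capture requires root/administrator privileges.

```python
from collections import Counter
from scapy.all import sniff, IP  # third-party: pip install scapy (run as root/admin)

MIRROR_INTERFACE = "eth1"   # placeholder: the NIC attached to the SPAN destination port

def top_talkers(packet_count: int = 1000, top_n: int = 5) -> None:
    """Capture mirrored traffic and report the busiest source/destination pairs."""
    byte_counts = Counter()

    def account(pkt):
        if pkt.haslayer(IP):
            byte_counts[(pkt[IP].src, pkt[IP].dst)] += len(pkt)

    sniff(iface=MIRROR_INTERFACE, prn=account, store=False, count=packet_count)
    for (src, dst), total in byte_counts.most_common(top_n):
        print(f"{src:>15} -> {dst:<15} {total} bytes")

if __name__ == "__main__":
    top_talkers()
```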

ThousandEyes & SASE: A Proactive Approach

Combining Cisco ThousandEyes with your SASE VPN gives you end-to-end visibility into the SASE security stacks and all network paths, including nodes. These can be consumed from Cisco ThousandEyes, enabling a proactive approach to monitoring your SASE solution, a bundle of components. Cisco ThousandEyes has several agent deployment models that you can use depending on whether you want visibility into remote workers or users at the branch site or even agent-to-agent testing.

Remember that ThousandEyes is not just for a Cisco SASE solution; it has multiple monitoring use cases, of which Cisco Umbrella SASE is just one. ThousandEyes also has good integrations with Cisco AppDynamics for full-stack end-to-end observability. First, let’s do a quick recap on the SASE definition.

Example Product: Cisco ThousandEyes

### What is Cisco ThousandEyes?

Cisco ThousandEyes is a cloud-based network intelligence platform that offers deep insights into network performance from multiple vantage points, including local networks, cloud environments, and even end-user devices. By leveraging a combination of synthetic and real-user monitoring techniques, ThousandEyes helps organizations identify and resolve performance issues before they impact users. This comprehensive visibility allows IT teams to pinpoint the root cause of problems swiftly, reducing downtime and improving overall user experience.

### Key Features of Cisco ThousandEyes

Cisco ThousandEyes boasts a plethora of features designed to empower IT teams with actionable intelligence. Some of the standout features include:

1. **End-to-End Visibility**: Gain insights into every hop between your users and the applications they rely on, whether they are hosted on-premises or in the cloud.

2. **Synthetic Monitoring**: Simulate user interactions to proactively detect performance issues across different regions and ISPs.

3. **Real-User Monitoring**: Capture real-time data from actual user sessions to understand how network performance impacts user experience.

4. **Network Path Visualization**: Visualize the entire network path to identify bottlenecks and pinpoint where performance degradation occurs.

5. **Alerts and Notifications**: Set customizable alerts to stay informed about network anomalies and performance thresholds in real-time.

### Benefits of Using Cisco ThousandEyes

Implementing Cisco ThousandEyes can yield numerous benefits for organizations, including:

1. **Improved Troubleshooting**: With detailed network path visualization and comprehensive performance data, IT teams can troubleshoot issues more effectively and efficiently.

2. **Enhanced User Experience**: By proactively monitoring and addressing performance issues, organizations can ensure a smoother and more reliable user experience.

3. **Increased Visibility**: Gain unparalleled insights into both internal and external networks, helping to identify and mitigate risks before they impact operations.

4. **Optimized Resource Allocation**: Leverage performance data to make informed decisions about network infrastructure investments and optimizations.

### Practical Applications of Cisco ThousandEyes

Cisco ThousandEyes can be utilized in a variety of scenarios to enhance network performance and reliability:

1. **Cloud Migrations**: Monitor and troubleshoot performance issues during cloud migration projects to ensure a seamless transition.

2. **SaaS Performance Monitoring**: Keep tabs on the performance of critical SaaS applications to ensure they meet user expectations.

3. **Remote Workforce Support**: Provide remote employees with reliable access to corporate resources by monitoring and optimizing network performance from their locations.

4. **ISP Performance Comparison**: Compare the performance of different ISPs to make informed decisions about service providers and optimize connectivity.

Challenges to the Cisco SASE Solution

The Internet is unstable

The first issue is that the Internet is fragile. We see around 14,000 BGP routing incidents per year, ranging from outages to attacks on the BGP protocol and peering relationships over TCP port 179. Border Gateway Protocol (BGP) is the glue of the Internet backbone, so attacks and outages can ripple across different Autonomous Systems (AS). So, if BGP is not stable, which it is not, it can cause problems.

Cloud connectivity based on the Internet will not be stable. Providers that rely on the public Internet instead of a private backbone to carry traffic cannot guarantee consistent network performance.

**A: Internet Blindspots**

When moving to a SASE environment, we face several challenges. Internet blindspots can be an Achilles’ heel to SASE deployments and performance. After all, network paths today consist of many more hops over longer and more complex segments (e.g., Internet, security, and cloud providers) that may be entirely out of IT’s control. 

Legacy network monitoring tools are no longer suitable for this Internet-centric environment because they primarily collect passive data from on-premises infrastructure. We also have a lot of complexity and moving parts. Modern applications have become increasingly complex, involving modular architectures distributed across multi-cloud platforms. Not to mention a complex web of interconnected API calls and third-party services.

As a result, understanding the application experience for an increasingly remote and distributed workforce is challenging—and siloed monitoring tools fail to provide a complete picture of the end-to-end experience.

**B: Out of your control**

Now that workers are everywhere and cloud-based applications are abundant, the Internet is the new enterprise network. The perimeter has moved to the edges, with most devices and components out of their control. This has many consequences. So, how do enterprises ensure a digital experience when they no longer own the underlying transport, services, and applications their business relies on? 

With these new complex and dynamic deployment models, we now have significant blind spots. Network paths are now much longer than they were in the past. Nothing is just one or two hops away. If you do a traceroute from your SASE VPN client, it may seem like one hop, but it’s much more.

**C: Multiple Segments & Components**

We have a lot of complexity, with numerous segments and different types of components, such as the Internet, security providers like Zscaler, and cloud providers. All of this is out of your control; as a rough estimate, 80% of the path could be beyond it. So, you should pay immediate attention to visibility into the underlay, the applications, and service dependencies.

Useful Network Troubleshooting

What is Traceroute?

A traceroute is a command-line tool that traces the path data packets take from one point to another on the Internet. It provides valuable information about network hops, latency, and data routing. The traceroute maps the route by sending packets with increasing Time-To-Live (TTL) values, revealing each intermediate node.

When executing the traceroute command, a detailed output is generated. Each line represents a hop, displaying the IP address, domain name (if available), and round-trip time (RTT) for that specific hop. The RTT indicates the time a packet travels from the source to that particular hop and back. Analyzing this output can unveil valuable insights into network performance and potential bottlenecks.
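For illustration, here is a classic UDP-probe traceroute sketch in Python. It needs root privileges for the raw ICMP receive socket and omits the retries and per-hop timing that production tools provide.

```python
import socket

def traceroute(dest_name: str, max_hops: int = 30, port: int = 33434,
               timeout: float = 2.0) -> None:
    """Classic UDP-probe traceroute: send datagrams with increasing TTL and
    read the ICMP Time Exceeded replies from each intermediate router.
    Requires root privileges for the raw ICMP receive socket."""
    dest_addr = socket.gethostbyname(dest_name)
    print(f"traceroute to {dest_name} ({dest_addr}), {max_hops} hops max")
    for ttl in range(1, max_hops + 1):
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                  socket.getprotobyname("icmp"))
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                                  socket.getprotobyname("udp"))
        recv_sock.settimeout(timeout)
        recv_sock.bind(("", port))
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, addr = recv_sock.recvfrom(512)
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send_sock.close()
            recv_sock.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_addr:
            break

# traceroute("example.com")   # uncomment and run with root/administrator rights
```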

Understanding ICMP Basics

ICMP, often called the “heartbeat of the Internet,” is an integral part of the Internet protocol suite. It operates at the network layer and facilitates essential functions, including error reporting, network diagnostics, and congestion control. By exchanging control messages between routers and hosts, ICMP plays a crucial role in maintaining the smooth flow of data packets across the network.

Echo Request & Echo Reply

ICMP messages come in various forms, each serving a specific purpose. From Echo Request and Echo Reply messages, commonly associated with the ubiquitous Ping utility, to Destination Unreachable and Time Exceeded messages, ICMP provides valuable feedback on network issues and packet status. We will explore the different ICMP message types and their significance in troubleshooting network problems and ensuring efficient communication.

Related: Before you proceed, you may find the following posts helpful:

  1. Zero Trust SASE
  2. SD-WAN SASE
  3. SASE Solution
  4. Dropped Packet Test
  5. Secure Firewall
  6. SASE Definition

Integration with Cisco ThousandEyes

ThousandEyes & SASE Integration

Cisco ThousandEyes, a leading network intelligence platform, seamlessly integrates with SASE visibility, amplifying its capabilities. With this integration, organizations can leverage ThousandEyes’ comprehensive network monitoring and troubleshooting capabilities, combined with SASE visibility’s holistic approach.

This collaboration empowers organizations to identify network issues, optimize performance, and ensure a secure and seamless user experience. Cisco Umbrella SASE provides recursive DNS services and helps organizations securely embrace direct internet access (DIA). When applications are hosted in the cloud, we don’t need to backhaul all traffic to the enterprise data center; for the applications that remain in the data center, we can use SD-WAN.

Cisco Umbrella started with DNS security solutions and then expanded to include the following features, all delivered from a single cloud security service: DNS-layer security and interactive threat intelligence, a secure web gateway, firewall, cloud access security broker (CASB) functionality, and integration with Cisco SD-WAN.

The Way Forward: SASE VPN End-to-End

–Network Underlay Visibility

Firstly, you need to gain visibility into the network underlay. If you do a traceroute, you may see only one hop, but you need insight into every Layer 3 hop across the network underlay, as well as the Layer 2 devices and the firewalling and load-balancing services in the path.

–Monitoring Metrics

Secondly, you also need to monitor business-critical applications efficiently and thoroughly understand how users are experiencing an application with full-page loads, metrics that matter most to them, and multistep transactions beyond an application’s front door. This will include login availability along with the entire application workflow.

–Understanding Dependencies

You also need actionable visibility into service dependencies. This will enable you to detect, for example, any service disruptions in ISP networks and DNS providers and see how they impact application availability, response times, and page load performance.

–DNS Server Performance

There is a hierarchy of servers involved in the DNS resolution process, with several steps: requesting website information, contacting the recursive DNS servers, querying the authoritative DNS servers, and accessing the DNS record.

You must consider the performance of your network’s DNS servers, resolvers, and records, which can involve various vendors across the DNS hierarchy.
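One simple way to compare resolver performance is to time the same query against each resolver in the hierarchy. The sketch below does this with dnspython; the internal resolver address is a placeholder, and the public resolver addresses should be verified against your own environment.

```python
import time
import dns.resolver  # third-party: pip install dnspython

# Placeholder resolver set: internal resolver, Umbrella, and a public resolver.
RESOLVERS = {
    "internal": "10.0.0.53",
    "umbrella": "208.67.222.222",
    "public":   "8.8.8.8",
}

def measure_resolution(domain: str, attempts: int = 5) -> None:
    """Time A-record lookups for one domain against each configured resolver."""
    for name, ip in RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        resolver.lifetime = 2.0
        timings = []
        for _ in range(attempts):
            start = time.perf_counter()
            try:
                resolver.resolve(domain, "A")
                timings.append((time.perf_counter() - start) * 1000)
            except Exception:
                timings.append(None)   # timeout or resolution failure
        ok = [t for t in timings if t is not None]
        avg = f"{sum(ok) / len(ok):.1f} ms" if ok else "all failed"
        print(f"{name:9} ({ip:15}) avg={avg} failures={timings.count(None)}")

measure_resolution("office365.com")
```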

Gaining Control & WAN Visibility

You lose control and visibility when WAN connectivity and business-critical applications migrate to shared infrastructure, the Internet, and public cloud locations. One way to regain visibility and control is with Cisco ThousandEyes. Cisco ThousandEyes allows you to monitor your users’ digital experience against software as a service and on-prem applications, regardless of where they are, through the essential elements of your SASE architecture.

SASE is not just one virtual machine (VM) or virtual network function; it comprises various technologies or VNFs such as SD-WAN, SWG, VPN, and ZTNA. 

A) SD-WAN: A Good Starting Point 

We know the SASE definition and the convergence of networking and security in cloud-native solutions with global PoP. Cisco SD-WAN is a great starting point for your Cisco SASE solution, especially SD-WAN security, which has been mainstream for a while now. But what would you say about gaining the correct visibility into your SASE model? We have a lot of networking and security functionality now bundled into PoPs, along with different ways to connect to the PoP, whether you are at home or working from the branch office. 

B) Connecting To The SASE PoPs

So, if you are at home, you will have a VPN client and go directly to Cisco Umbrella SASE. If you are in the Office, you will likely connect to the SASE PoP or on-premise application via Cisco SD-WAN. The SD-WAN merges with the SASE PoP with redundant IPsec tunnels. You can have up to 8 IPsec tunnels with four active tunnels. The automated policy can be set up between Cisco vManage and Cisco Umbrella, so it’s a good interaction.

C) Cisco Umbrella SASE

Cisco Umbrella SASE is about providing secure connectivity to our users and employees. We need to know precisely what they are doing and not always blame the network when there is an issue. Unfortunately, the network is easy to blame, even though it could be something else.

Scenario: Remote Worker: Creating a SASE VPN

Let’s say we have a secure remote worker. They need to access the business application that could be on-premises in the enterprise data center or served in the cloud. So, users will initiate their SASE VPN client to access a VPN gateway for on-premise applications and then land on the corporate LAN. Hopefully, the LAN will be tied down with microsegmentation, and the SASE VPN users will not get overly permissive broad access.

Suppose the applications are served over the Internet in a public cloud SaaS environment. In this case, the user must go through Cisco Umbrella, not to the enterprise data center but to the cloud. You know that Cisco Umbrella SASE will have a security stack that includes DNS-layer filtering, CASB, and URL filtering. DNS-layer filtering is the first layer of defense.

SASE VPN: Identity Service

In both cases, working remotely or from the branch office, some Identity services may fall under the zero trust network access (ZTNA) category. Identity services are done with Cisco Duo. CyberArk also has complete identity services.

These identity providers offer identity services such as Single-Sign-On (SSO) and Multi-Factor Authentication (MFA) to ensure users are who they say they are. They present users with multiple MFA challenges and a seamless experience with SSO via an identity portal.

**Out of your control**

In both use cases of creating a SASE VPN, we need visibility into several areas that are out of your control. For example, suppose the user works from home. In that case, we will need visibility into their WiFi network, the secure SASE VPN tunnel to the nearest Umbrella PoP, the transit ISP, and the SASE security functions.

**Require Full Visibility**

We need visibility into numerous areas, and each is different, but they share one thing: they are all out of our control. Therefore, we must find ways to gain complete visibility into infrastructure we do not control.

We will have similar problems with edge use cases where workers work from branch sites. If these users go to the Internet, they will still use the Cisco Umbrella SASE security stack, but it will go through SD-WAN first.

Monitoring SD-WAN

However, using SD-WAN will allow us to monitor another part. So, with SD-WAN, we add another layer of needed visibility into the SD-WAN overlay and underlay. The SD-WAN underlay will have multiple ISPs, components, and decades-old equipment.

A: Overlay and Application Mappings

We will have different applications mapped to other overlays for the overlay network, potentially changing them on the fly based on performance metrics. The diagram below shows that some application types prefer different paths and network topologies based on performance metrics such as latency. 

B: New Monitoring Tools

With SD-WAN, the network overlay is now entirely virtualized, allowing an adaptive, customized network infrastructure that responds to an organization’s changing needs. So when you move to a SASE environment, you become more dependent on an increasing number of external networks and services that you do not own and that traditional tools can’t monitor, resulting in blind spots that lead to security gaps and operational challenges.

Cisco ThousandEyes: Different Vantage Points.

Cisco ThousandEyes provides visibility end-to-end across your SASE environment. It allows you to be proactive and see problems before they happen, reducing your time to resolution. Remember that today, we have a complex environment with many new and unpredictable failure modes. Having the correct visibility lets you control the known and unknown failure modes.

Cisco ThousandEyes can also give you actionable data. For example, when service degradation occurs, you can quickly identify where the problem is. So, your visibility will need to be actionable. To gain actionable visibility, you need to monitor different things from different levels. One way to do this is with other types of agents.

Cloud, Enterprise, and End-user Vantage Points

Using a global collective of cloud, enterprise, and end-user vantage points, ThousandEyes enables organizations to see any network, including those belonging to Internet and cloud providers, as if it were their own—and to correlate this visibility with application performance and employee experience.

From ThousandEyes’ different vantage points, which are based on deployed agents, we can see: the Layer 3 hop-by-hop underlay from remote users and SD-WAN sites to the secure edge, and from the secure edge to application servers; SaaS application performance, including login availability and application workflows; and service dependency monitoring, including the secure edge PoP and DNS servers.

**Example of an Issue: HTTP Response Time**

That’s quite a lot of areas to grasp. So, let’s say you are having performance issues with Office365, and the response time has increased. First, you would notice an increase in HTTP response time from a specific office. The next stage would be to examine the network layer and see an increase in latency. So, in this case, we have network problems.

Then, you can investigate the problem further using the packet visualization Cisco ThousandEyes offers to pinpoint precisely where it is happening. The packet visualization shows the exact path from the office to the Internet via Umbrella, covering every leg of the Internet path, and can pinpoint the problem to a specific device. So now we have end-to-end visibility from the remote worker right to the application.
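For contrast, here is a crude single-vantage-point version of such an HTTP test in Python. A platform like ThousandEyes runs the equivalent from many agents and correlates it with network-layer data; the URL and threshold below are placeholders.

```python
import requests  # third-party: pip install requests

TARGET_URL = "https://outlook.office365.com"   # placeholder SaaS endpoint
THRESHOLD_MS = 800                              # placeholder alerting threshold

def http_check(url: str) -> None:
    """Single-vantage-point synthetic test: measure HTTP response time and
    flag it if it crosses a threshold."""
    try:
        resp = requests.get(url, timeout=10)
        elapsed_ms = resp.elapsed.total_seconds() * 1000
        status = "SLOW" if elapsed_ms > THRESHOLD_MS else "OK"
        print(f"{status}: {url} -> HTTP {resp.status_code} in {elapsed_ms:.0f} ms")
    except requests.RequestException as exc:
        print(f"FAIL: {url} -> {exc}")

http_check(TARGET_URL)
```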

Cisco ThousandEyes Agents

Endpoint Agent Testing:

The secure remote worker could be on the move and working from anywhere. In this case, you need the ThousandEyes Endpoint agent. The Endpoint agent performs active application and network performance tests and passively collects performance data, such as WiFi and device-level metrics like CPU and memory.

It also detects and monitors any SASE VPN, other VPN gateways, and proxies. The most crucial point to note about the Endpoint agent is that it follows the user regardless of where they work, whether in the branch office or remote locations. The endpoint agent is location-agnostic. However, creating a baseline for users with this type of movement will be challenging.

**Passive Monitoring**

The endpoint agent, by default, performs some passive monitoring. For WiFi performance, it tracks metrics such as the percentage of retransmitted packets, which can indicate a problem. If a user working from home reports that an application is not working, you can tell whether the WiFi is the culprit and ask them to carry out the necessary troubleshooting if the issue is at their end.

The endpoint agent also automatically performs default gateway testing, which is synthetic network testing against the default gateway. Remote workers can have extensive home networks, so you can map them out and help users troubleshoot.

**Underlay Network Testing**

They can test the underlay network to the VPN termination points. So, if you are on a VPN, you have one hop! But if you need to determine any packet loss, etc., you must see the exact underlay. The underlay testing can tell you if there is a problem with the upstream ISP or the VPN termination points.

Enterprise Agent Testing:

The Enterprise agent builds on the Endpoint agent and adds complete application testing. Unlike the Endpoint agent, it can do page-load testing. For Webex, you can set up RTP tests against the agents running in the various WebEx data centers.

Then, we have the secure edge design, where users work from a branch office. This is where we use an Enterprise agent: one agent for all users and devices in the LAN. It can be installed on several device types, for example, a Cisco Catalyst 8000 or ISR 4000 series router. If you can’t install it on a Cisco device, you can run it as a Docker container or, in a smaller office, deploy it on a Raspberry Pi.

**Network Performance Testing**

It performs active application and network performance testing, similar to the Endpoint agent. However, one main difference is that it can perform complex web application tests. The Enterprise agent has a fully-fledged browser on top of it, so it can open a web application, download the images needed to complete the page-load event, and log in to the application.

This is an essential test for the zero trust network access (ZTNA) category, as it supports complete web testing for applications beyond SSO. It can also test VPN and the SDN overlay and underlay. In addition, it provides a continuous baseline regardless of whether there are any active users. The baseline is 24/7, and you can immediately know if there are problems. This is compared to the Endpoint agent, which can’t provide a baseline due to choppy data.

Cisco ThousandEyes also has a Cloud agent that can augment the Enterprise agent. The Cloud agent is installed in over 200 locations worldwide and in WebEx data centers. Consider the Cloud agent an extension of the Enterprise agent: it allows two-way, bidirectional testing, so we can test agent to agent.

SD-WAN Underlay Visibility

The enterprise agent can also test the SD-WAN underlay. In this testing, you can configure some data policies and allow the network test to go into the underlay. You can even test the Umbrella IPsec Gateway or the SD-WAN router in the data center, which gives you hop-by-hop insights into the underlay.

Device Layer Visibility

We also have device layer visibility. Here, we gather network device topology to gain visibility into the performance of the secure edge internal devices. This will show you all the Layer 3 nodes in your network, firewall, load balancer, and other Layer 2 devices.

Summary: SASE Visibility with Cisco ThousandEyes

In today’s digital landscape, the demand for secure and efficient network connectivity is higher than ever. With the rise of remote work and cloud adoption, organizations are turning to Secure Access Service Edge (SASE) solutions to streamline their network infrastructure. Cisco Thousandeyes emerges as a powerful tool in this realm, offering enhanced visibility and control. In this blog post, we explored the key features and benefits of Cisco Thousandeyes, shedding light on how it can revolutionize SASE visibility.

Understanding SASE Visibility

To grasp the importance of Cisco Thousandeyes, it’s crucial to comprehend the concept of SASE visibility. SASE visibility refers to monitoring and analyzing network traffic, performance, and security across an organization’s network infrastructure. It provides valuable insights into user experience, application performance, and potential security threats.

The Power of Cisco Thousandeyes

Cisco Thousandeyes empowers organizations with comprehensive SASE visibility that extends across the entire network. By leveraging its advanced monitoring capabilities, businesses gain real-time insights into network performance, application behavior, and end-user experience. With Thousandeyes, IT teams can identify and troubleshoot issues faster, ensuring optimal network performance and security.

Key Features and Functionalities

In this section, we will delve into the key features and functionalities offered by Cisco Thousandeyes. These include:

1. Network Monitoring: Thousandeyes provides end-to-end visibility, allowing organizations to monitor their network infrastructure from a single platform. It tracks network performance metrics, such as latency, packet loss, and jitter, enabling proactive issue resolution.

2. Application Performance Monitoring: With ThousandEyes, businesses can gain deep insights into application performance across their network. IT teams can identify bottlenecks, optimize routing, and ensure consistent application delivery to enhance user experience.

3. Security Monitoring: Cisco ThousandEyes offers robust security monitoring capabilities, enabling organizations to detect and mitigate potential threats. It provides visibility into network traffic, identifies anomalies, and facilitates rapid incident response.
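To make the metrics in item 1 concrete, here is a minimal, hedged sketch (not ThousandEyes itself) that estimates latency, jitter, and loss by timing repeated TCP connections to a target. The host and port are illustrative assumptions.

```python
# Minimal latency/jitter probe using TCP connect times (illustrative only;
# real agents such as ThousandEyes use far richer active tests).
import socket
import statistics
import time

def probe(host: str, port: int, samples: int = 10, timeout: float = 2.0):
    """Return (rtt_ms_list, loss_ratio) for repeated TCP connect attempts."""
    rtts, failures = [], 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            failures += 1          # treat a failed connect as "loss"
        time.sleep(0.2)            # pace the probes
    return rtts, failures / samples

if __name__ == "__main__":
    rtts, loss = probe("example.com", 443)   # hypothetical target
    if rtts:
        print(f"avg latency: {statistics.mean(rtts):.1f} ms")
        print(f"jitter (stdev): {statistics.pstdev(rtts):.1f} ms")
    print(f"loss ratio: {loss:.0%}")
```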

Integration and Scalability

One of the significant advantages of Cisco ThousandEyes is its seamless integration with existing network infrastructure. It can integrate with various networking devices, cloud platforms, and security tools, ensuring a cohesive and scalable solution. This flexibility allows businesses to leverage their current investments while enhancing SASE visibility.

Conclusion:

In conclusion, Cisco ThousandEyes proves to be a game-changer in SASE visibility. Its comprehensive monitoring capabilities empower organizations to optimize network performance, ensure application reliability, and enhance security posture. By embracing Cisco ThousandEyes, businesses can journey toward a more efficient and secure network infrastructure.

WAN Security

SD WAN Security

In today's interconnected world, where businesses rely heavily on networks for their daily operations, ensuring the security of Wide Area Networks (WANs) has become paramount. WANs are at the heart of data transmission, connecting geographically dispersed locations and enabling seamless communication. However, with the rise of cyber threats and the increasing complexity of networks, it is crucial to understand and implement effective WAN security measures. In this blog post, we will explore the world of WAN security, its key components, and strategies to safeguard your network.

WAN security refers to the protection of data and network resources against unauthorized access, data breaches, and other malicious activities. It involves a combination of hardware, software, and protocols designed to ensure the confidentiality, integrity, and availability of data transmitted over wide area networks. By implementing robust security measures, organizations can mitigate the risks associated with WAN connectivity and maintain the privacy of their sensitive information.

Key Components:
a. Firewalls: Firewalls act as a barrier between internal networks and external threats, monitoring and controlling incoming and outgoing network traffic. They enforce security policies and filter data packets based on predetermined rules, preventing unauthorized access and potential attacks.

b. Virtual Private Networks (VPNs): VPNs create secure, encrypted tunnels over public networks, such as the internet. By establishing a VPN connection, organizations can ensure the confidentiality and integrity of data transmitted between remote locations, protecting it from eavesdropping and tampering.

c. Intrusion Detection and Prevention Systems (IDPS): IDPS solutions monitor network traffic in real-time, identifying and responding to potential threats. They analyze network packets, detect unusual or malicious activity, and take prompt action to prevent and mitigate attacks.

Key Services:
a. Strong Authentication: Implement multi-factor authentication methods to enhance access control and verify the identity of users connecting to the network. This includes the use of passwords, smart cards, biometrics, or other authentication factors.

b. Regular Patching and Updates: Keep network devices, software, and security solutions up to date with the latest patches and firmware releases. Regularly applying updates helps address known vulnerabilities and strengthens network security.

c. Encryption: Utilize strong encryption protocols, such as AES (Advanced Encryption Standard), to protect sensitive data in transit. Encryption ensures that even if intercepted, the data remains unreadable to unauthorized individuals.
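As a hedged illustration of the encryption point above, the sketch below uses AES-256-GCM (an authenticated AES mode) via the widely used Python cryptography package. Key handling is deliberately simplified and would be replaced by proper key management in practice.

```python
# AES-256-GCM example using the "cryptography" package (pip install cryptography).
# Key handling is simplified for illustration; production systems need real key management.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce, must be unique per message
plaintext = b"branch-to-hub telemetry record"
aad = b"site-42"                            # authenticated, but not encrypted, context

ciphertext = aesgcm.encrypt(nonce, plaintext, aad)
recovered = aesgcm.decrypt(nonce, ciphertext, aad)
assert recovered == plaintext
print("ciphertext length:", len(ciphertext))
```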

In an age where cyber threats are ever-evolving, securing your WAN is crucial to maintaining the integrity and confidentiality of your data. By understanding the key components of WAN security, implementing best practices, and maintaining proactive network monitoring, organizations can strengthen their defenses and safeguard their networks.

Highlights: SD WAN Security

Understanding SD-WAN

SD-WAN is a technology that simplifies the management and operation of a Wide Area Network (WAN) by separating the network hardware from its control mechanism. It allows organizations to connect remote branches, data centers, and cloud networks efficiently. However, with this flexibility comes the need for robust security measures.

As businesses increasingly adopt SD-WAN to meet their networking needs, ensuring the security of these networks becomes paramount. Cyber threats such as data breaches, malware attacks, and unauthorized access pose significant risks. Organizations must understand the potential vulnerabilities and implement appropriate security measures to protect their SD-WAN infrastructure.

Deployment Best Practices:

When deploying SD-WAN, several crucial security considerations need to be addressed. Each of these factors plays a vital role in safeguarding the SD-WAN environment. These include:

  1. Authentication and access control,
  2. Encryption,
  3. Threat detection and prevention,
  4. Secure connectivity to cloud services, and
  5. Secure integration with existing security infrastructure. 

Organizations should follow best practices to ensure optimal security in an SD-WAN environment. These include conducting regular security audits, implementing multi-factor authentication, utilizing encryption for data in transit, deploying intrusion detection and prevention systems, and establishing secure connectivity protocols. By adhering to these practices, businesses can mitigate potential risks and enhance their overall network security.

Key SD-WAN Security Point:

SD-WAN security is mainly based on the use of IP security (IPsec), VPN tunnels, next-generation firewalls (NGFWs), and the microsegmentation of application traffic.

Example Product: Cisco Meraki

**Section 1: Simplified Network Management**

One of the standout features of Cisco Meraki is its intuitive dashboard, which offers a centralized interface for managing your entire network. Gone are the days of juggling multiple consoles and interfaces. With Cisco Meraki, you can oversee network performance, security settings, and device management all in one place. This streamlined approach not only saves time but also reduces the likelihood of human error.

**Section 2: Advanced Security Features**

When it comes to network security, Cisco Meraki pulls out all the stops. The platform offers a variety of advanced security features, including:

– **Next-Gen Firewall:** Cisco Meraki’s firewall capabilities go beyond traditional firewalls by providing application visibility and control, intrusion prevention, and advanced malware protection.

– **Network Access Control (NAC):** Ensure that only authorized devices can connect to your network, minimizing the risk of unauthorized access.

– **Auto VPN:** Simplify the process of establishing secure connections between remote sites with automatic VPN configuration.

These features ensure that your network remains secure against a wide range of cyber threats, giving you peace of mind.

**Section 3: Scalability and Flexibility**

Cisco Meraki is designed to grow with your business. Whether you’re adding new devices, expanding to new locations, or integrating third-party applications, the platform scales effortlessly. The cloud-based architecture allows for easy updates and seamless integration, ensuring that your network remains up-to-date with the latest security protocols and features.

**Section 4: Real-Time Analytics and Reporting**

Data is the new gold, and Cisco Meraki understands this well. The platform offers robust analytics and reporting tools that provide real-time insights into your network’s performance and security posture. From bandwidth usage to threat detection, the dashboard provides comprehensive metrics that help in making informed decisions. This level of visibility is crucial for identifying potential vulnerabilities and optimizing network performance.

A Layered Approach to Security

  • Decrease the Attack Surface

SD-WAN security allows end users to connect directly to cloud applications and resources without backhauling through a remote data center or hub. This will enable organizations to offload guest traffic to the Internet instead of using up WAN and data center resources. The DIA model, where Internet access is distributed across many branches, increases the network’s attack surface and makes security compliance a critical task for almost every organization.

The broad threat landscape includes cyber warfare, ransomware, and targeted attacks. Firewalling, intrusion prevention, URL filtering, and malware protection must be leveraged to prevent, detect, and protect the network from all threats. The branches can consume Cisco SD-WAN security through integrated security applications within powerful WAN Edge routers, cloud services, or regional hubs where VNF-based security chains may be leveraged or robust security stacks may already exist.

  • The Role of Cisco Umbrella

SD-WAN can be combined with Cisco Umbrella via a series of redundant IPsec tunnels for additional security measures, increasing the robustness of your WAN Security.

Cisco Umbrella is a cloud-based security platform that offers advanced threat protection and secure web gateways. By providing an additional layer of security, Umbrella helps organizations defend against malicious activities, prevent data breaches, and protect their network infrastructure. With its global threat intelligence and DNS-layer security, Umbrella offers real-time threat detection and protection, making it an ideal complement to SD-WAN deployments.

  • Integrating Umbrella with SD-WAN

By integrating Cisco Umbrella with SD-WAN, organizations can fortify their security posture. Umbrella’s DNS-layer security protects against threats, blocking malicious domains and preventing connections to command and control servers. This, combined with SD-WAN’s ability to encrypt traffic and segment networks, creates a robust security framework that safeguards against cyber threats and potential data breaches.

Example Technology: Sensitive Data Protection

SD-WAN Security Features

Encryption and Data Protection: One of SD-WAN’s fundamental security features is encryption. By encrypting data traffic, SD-WAN ensures that sensitive information remains protected from unauthorized access or interception. This feature is essential when transmitting data across public networks or between different branches of an organization.

Firewall Integration: Another key security feature of SD-WAN is its seamless firewall integration. SD-WAN solutions can provide advanced threat detection and prevention mechanisms by incorporating firewall capabilities. This helps businesses safeguard their networks against potential cyber-attacks, ensuring the confidentiality and integrity of their data.

Intrusion Detection and Prevention System (IDPS): SD-WAN solutions often have built-in Intrusion Detection and Prevention Systems (IDPS). These systems monitor network traffic for suspicious activity or potential threats, promptly alerting administrators and taking necessary actions to mitigate risks. The IDPS feature enhances the overall security posture of the network, proactively defending against possible attacks.

Secure Multi-tenancy: SD-WAN offers secure multi-tenancy capabilities for organizations operating in multi-tenant environments. This ensures that each tenant’s network traffic is isolated and protected, preventing unauthorized access between tenants. Secure multi-tenancy is essential for maintaining the confidentiality and preventing data breaches in shared network infrastructures.

Example Technology: IPS IDS

Understanding Suricata IPS/IDS

Suricata is an open-source intrusion detection and prevention system (IDS/IPS) that offers advanced threat detection and prevention capabilities. It combines the functionalities of a traditional IDS with the added benefits of an IPS, making it a powerful asset in network security.

Suricata boasts many features and capabilities, making it a formidable defense mechanism against cyber threats. It provides a multi-layered approach to identifying and mitigating security risks, from signature-based detection to protocol analysis and behavioral anomaly detection.

WAN Security

Enterprise Firewall:

Traditional branch firewall design involves deploying the appliance in either in-line Layer 3 mode or transparent Layer 2 mode, behind or even ahead of the WAN Edge router. For stateful inspection, we then need another device. This adds complexity to the enterprise branch and creates unnecessary administrative overhead in managing the added firewalls.

**Application-aware Firewall**

A proper firewall protects stateful TCP sessions, enables logging, and implements a zero-trust domain between network segments. Cisco SD-WAN takes an integrated approach and implements a robust Application-Aware Enterprise Firewall directly into the SD-WAN code.

Because the firewall is embedded directly in the SD-WAN code, there is no need for a separate inspection device at the branch.

Cisco has integrated the stateful firewall with the NBAR2 engine. Together, these provide strong application visibility and granularity, and the enterprise firewall can identify applications from the very first packet. The Cisco SD-WAN firewall provides stateful inspection, zone-based policies, and segment awareness. It can also classify over 1,400 Layer 7 applications and apply granular policy control on a per-category or per-application basis.

Example Firewalling: Cisco’s Zone-Based Firewall

Understanding Zone-Based Firewalls

At its core, a zone-based firewall (ZBF) is a security feature that allows network administrators to define security zones and control the traffic flow between them. Unlike traditional firewalls, which operate based on interfaces, ZBF operates based on zones. Each zone represents a logical group of network devices, such as LAN, WAN, or DMZ, and traffic between these zones is regulated using policies defined by the administrator.
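Before looking at the benefits, here is a toy model of the zone/zone-pair idea (not Cisco's actual configuration syntax): traffic is matched on the source and destination zone pair rather than on individual interfaces. Zone names, applications, and actions are illustrative assumptions.

```python
# Toy zone-based policy model: traffic is matched on the (source zone, destination zone)
# pair rather than on individual interfaces. Zone names and rules are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_zone: str
    dst_zone: str
    app: str

# Policy is defined per zone pair; anything without a policy is dropped by default.
ZONE_PAIR_POLICY = {
    ("LAN", "WAN"): {"allow": {"https", "dns"}},
    ("LAN", "DMZ"): {"allow": {"https"}},
    ("WAN", "DMZ"): {"allow": {"https"}},
    # No ("WAN", "LAN") entry: unsolicited inbound traffic is denied.
}

def evaluate(flow: Flow) -> str:
    policy = ZONE_PAIR_POLICY.get((flow.src_zone, flow.dst_zone))
    if policy and flow.app in policy["allow"]:
        return "inspect/allow"
    return "drop"

print(evaluate(Flow("LAN", "WAN", "https")))  # inspect/allow
print(evaluate(Flow("WAN", "LAN", "ssh")))    # drop
```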

1. Enhanced Network Segmentation: By dividing the network into distinct security zones, ZBF enables granular control over traffic flow, minimizing the risk of lateral movement by attackers. This segmentation helps contain breaches and limits the impact of security incidents.

2. Simplified Policy Management: ZBF simplifies firewall policy management by allowing administrators to define policies at the zone level rather than dealing with complex interface-based rules. This approach streamlines policy deployment and reduces the likelihood of misconfigurations.

3. Application Layer Inspection: ZBF supports deep packet inspection, enabling administrators to perform application-specific filtering and apply security policies based on the application layer attributes. This capability enhances network visibility and strengthens security posture.

Intrusion Prevention:

An IDS/IPS can inspect traffic in real time to detect and prevent attacks by comparing application behavior against a known database of threat signatures. The Cisco SD-WAN implementation is based on the Snort engine and runs as a container; Snort is the most widely deployed intrusion prevention system globally. The solution is combined with Cisco Talos, which publishes the signatures. The Cisco Talos Intelligence Group is one of the world’s largest commercial threat intelligence teams, comprising researchers, analysts, and engineers.

**Connecting to the Talos Signature Database**

Cisco vManage connects to the Talos signature database, downloads the signatures on a configurable periodic or on-demand basis, and pushes them down to the branch WAN Edge routers without user intervention. Signatures are rules that an IDS or IPS uses to detect typical intrusive activity. You can also use an allowlist approach if you see many false positives; it is better to start in detect mode so the engine can learn before you begin blocking.

**Snort Based IPS/IDS**

Intrusion detection and prevention (IDS/IPS) can inspect traffic in real time to detect and prevent cyberattacks and notify the network operator through Syslog events and dashboard alerts. IDS/IPS is enabled through IOS-XE application service container technology. KVM and LXC containers are used; they differ mainly in how tightly they are coupled to the Linux kernel used in most network operating systems, such as IOS XE.

The Cisco SD-WAN IDS/IPS runs Snort, the most widely deployed intrusion prevention engine globally, and leverages dynamic signature updates published by Cisco Talos. The signatures are updated via vManage or manually using CLI commands available on the WAN Edge device.

URL filtering:

URL filtering is another Cisco SD-WAN security function that leverages the Snort engine to inspect HTTP and HTTPS payloads to provide web security at the branch. In addition, the URL filtering engine enforces acceptable use controls to block or allow websites. It downloads the URL database and blocks websites based on over 80 categories. It can also make decisions based on a web reputation score; this reputation information comes from Webroot BrightCloud.

**Comprehensive Security**

URL Filtering leverages the Snort engine to provide comprehensive web security at the branch. When an end user requests a website through their browser, the URL Filtering engine inspects the web traffic, queries any custom URL lists, compares the URL against the blocked or allowed categories policy, and finally consults a dynamically updated URL Filtering database. Websites can be permitted or denied based on 82 different categories and the site’s web reputation score.
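The decision order described above can be sketched as a simple lookup chain. The hostnames, category names, and reputation threshold below are illustrative assumptions, not the actual Cisco implementation.

```python
# Illustrative URL-filtering decision chain: custom lists first, then category policy,
# then a reputation score threshold. Data values are made up for the example.
CUSTOM_ALLOW = {"intranet.example.com"}
CUSTOM_BLOCK = {"bad.example.net"}
BLOCKED_CATEGORIES = {"gambling", "malware", "phishing"}
MIN_REPUTATION = 40          # hypothetical 0-100 web reputation threshold

def url_verdict(host: str, category: str, reputation: int) -> str:
    if host in CUSTOM_ALLOW:
        return "allow (custom allow list)"
    if host in CUSTOM_BLOCK:
        return "block (custom block list)"
    if category in BLOCKED_CATEGORIES:
        return f"block (category: {category})"
    if reputation < MIN_REPUTATION:
        return f"block (reputation {reputation} below threshold)"
    return "allow"

print(url_verdict("news.example.org", "news", reputation=82))      # allow
print(url_verdict("casino.example.io", "gambling", reputation=75)) # block (category)
```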

Note: Advanced Malware Protection and Threat Grid:

Advanced Malware Protection (AMP) and Threat Grid are the newest additions to the SD-WAN security features. As with URL filtering, both AMP and Threat Grid leverage the Snort engine and Talos for the real-time inspection of file downloads and malware detection. AMP can block malware entering your network using antivirus detection engines, one-to-one signature matching, machine learning, and fuzzy fingerprinting.

DNS Web Layer Security:

Finally, we have DNS layer security. Some countries have a rule that you cannot inspect HTTP or HTTPS packets to filter content. So, how can you filter content if you can’t inspect HTTP or HTTPS packets?

We can do this with DNS packets. Before a page is loaded in the browser, the client sends a DNS request to the DNS server, asking for a name-to-IP mapping for the website. Once registered with the Umbrella cloud, the WAN Edge router intercepts DNS requests from the LAN and redirects them to Umbrella resolvers.

If the requested page is a known malicious site or is not allowed (based on the policies configured in the Umbrella portal), the DNS response will contain the IP address of an Umbrella-hosted block page.

**DNSCrypt, EDNS, and TLS Decryption**

DNS web layer security also supports DNSCrypt, EDNS, and TLS decryption. In the same way that SSL turns HTTP web traffic into HTTPS encrypted web traffic, DNSCrypt turns regular DNS traffic into encrypted DNS traffic that is secure from eavesdropping and man-in-the-middle attacks. It does not require changes to domain names or how they work; it simply provides a method for securely encrypting communication between the end user and the DNS servers in the Umbrella cloud located around the globe.

In some scenarios, it may be essential to avoid intercepting DNS requests for internal resources and passing them on to an internal or alternate DNS resolver. To meet this requirement, the WAN Edge router can leverage local domain bypass functionality, where a list of internal domains is defined and referenced during the DNS request interception process. 
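The interception and local domain bypass logic can be summarized in a short, hedged sketch. The domain lists, resolver addresses, and block-page IP below are illustrative assumptions, not the WAN Edge or Umbrella implementation (208.67.222.222 is simply a well-known Umbrella public resolver).

```python
# Toy model of DNS-layer filtering with local domain bypass. Domains, resolver
# addresses, and the block-page IP are illustrative assumptions only.
LOCAL_DOMAINS = {"corp.example.internal"}          # bypass list: resolved internally
BLOCKED_DOMAINS = {"malicious.example.net"}        # policy-driven block list
INTERNAL_RESOLVER = "10.0.0.53"
CLOUD_RESOLVER = "208.67.222.222"                  # an Umbrella public resolver
BLOCK_PAGE_IP = "198.51.100.10"                    # hypothetical block-page address

def handle_dns_query(qname: str) -> tuple[str, str]:
    """Return (action, target) for a DNS query name."""
    if any(qname == d or qname.endswith("." + d) for d in LOCAL_DOMAINS):
        return ("forward", INTERNAL_RESOLVER)      # local domain bypass
    if qname in BLOCKED_DOMAINS:
        return ("answer", BLOCK_PAGE_IP)           # respond with the block page
    return ("forward", CLOUD_RESOLVER)             # default: cloud DNS security

print(handle_dns_query("intranet.corp.example.internal"))
print(handle_dns_query("malicious.example.net"))
print(handle_dns_query("www.example.com"))
```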

Related: For additional pre-information, you may find the following posts helpful:

  1. SD WAN Tutorial
  2. WAN Monitoring
  3. Virtual Firewalls

Starting WAN Security

Key Points on SD-WAN Security

A: Unveiling the Security Risks in SD-WAN Deployments:

While SD-WAN offers enhanced network performance and agility, it also expands the attack surface for potential security breaches. The decentralized nature of SD-WAN introduces complexities in securing data transmission and protecting network endpoints. Threat actors constantly evolve tactics, targeting vulnerabilities within SD-WAN architectures to gain unauthorized access, intercept sensitive information, or disrupt network operations. Organizations must be aware of these risks and implement robust security measures.

B: Implementing Strong Authentication and Access Controls:

Robust authentication mechanisms and access controls are essential to mitigate security risks in SD-WAN deployments. Multi-factor authentication (MFA) should be implemented to ensure that only authorized users can access the SD-WAN infrastructure. Additionally, granular access controls should be enforced to restrict privileges and limit potential attack vectors. By implementing these measures, organizations can significantly enhance the overall security posture of their SD-WAN deployments.

C: Ensuring Encryption and Data Privacy:

Protecting data privacy is critical to SD-WAN security. Encryption protocols should be employed to secure data in transit between SD-WAN nodes and across public networks. By leveraging robust encryption algorithms and key management practices, organizations can ensure the confidentiality and integrity of their data, even in the face of potential interception attempts. Data privacy regulations, such as GDPR, further emphasize the importance of encryption in safeguarding sensitive information.

D: Monitoring and Threat Detection:

Continuous monitoring and threat detection mechanisms are pivotal in maintaining SD-WAN security. Intrusion Detection Systems (IDS) and Security Information and Event Management (SIEM) tools can provide real-time insights into network activities, identifying potential anomalies or suspicious behavior. Through proactive monitoring and threat detection, organizations can swiftly respond to security incidents and mitigate potential risks before they escalate.

Example Technology: Scanning Networks

What is Network Scanning?

Network scanning examines a network to identify active hosts, open ports, and potential vulnerabilities. By systematically scanning a network, security experts gain valuable insights into its architecture and weaknesses, allowing them to strengthen defenses and prevent unauthorized access.

a. Ping Sweeps: Ping sweeps are simple yet effective techniques that involve sending ICMP Echo Request packets to multiple hosts to determine their availability and responsiveness.

b. Port Scans: Port scanning involves probing a host for open ports and determining which services or protocols are running. Tools like Nmap provide comprehensive port scanning capabilities (a minimal example follows this list).

c. Vulnerability Scans: Scanners search for weaknesses in network devices, operating systems, and applications. Tools such as OpenVAS and Nessus are widely used to identify potential vulnerabilities.
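As a minimal illustration of the port-scan technique in item b (nothing like a full Nmap), the sketch below attempts TCP connections to a short list of ports. The host and port list are assumptions; only scan hosts you are authorized to test.

```python
# Minimal TCP connect scan for illustration only. Scan only hosts you are
# authorized to test; real tools such as Nmap are far more capable.
import socket

def tcp_connect_scan(host: str, ports: list[int], timeout: float = 1.0) -> dict[int, bool]:
    results = {}
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            results[port] = sock.connect_ex((host, port)) == 0   # 0 means the port accepted
        finally:
            sock.close()
    return results

if __name__ == "__main__":
    open_ports = tcp_connect_scan("192.0.2.10", [22, 80, 443, 8080])   # hypothetical lab host
    for port, is_open in open_ports.items():
        print(f"port {port}: {'open' if is_open else 'closed/filtered'}")
```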

SD WAN’s Initial Focus

– The initial SD-WAN deployment model involved integrating corporate communications into the SD-WAN overlay fabric. There was an immediate ROI, as cheap broadband links could be brought into the branch and connected to the organization’s network over the SD-WAN overlay.

– Having gained SD-WAN’s base benefits, such as site connectivity, we can now design and implement the additional capabilities it offers, such as application optimization and integrated security.

– From a security perspective, end-to-end segmentation and policy are critical. The control, data, and management planes must be separated throughout the environment and secured appropriately. In addition, the environment should support native encryption that is robust and scalable and offers lightweight key management.

Diagram: SD-WAN traffic steering. Source: Cisco.

SD-WAN Key Security Feature: DIA

With SD-WAN, we can now instead go directly from the branch through DIA to the applications hosted in the Cloud by leveraging DNS and geo-location services for the best possible performance. This, however, presents different types of attack surfaces that we need to deal with.

We have different security implications when moving the Internet edge to the branch. In the DIA model, Internet access is distributed across many branches; for example, unsecured guest users are allowed direct Internet access. They may be guests, but we are still responsible for content filtering and ensuring compliance. So, we have internal and external attack vectors that need to be considered with this new approach to the WAN.

Threat Categories:

You could group these threats into three main categories. Outside-in threats could consist of denial of services or unauthorized access. Inside-out threats could be malware infection or phishing attacks. Then, we have internal threats where lateral movements are a real problem. With every attack vector, the bad actor must find high-value targets, which will likely not be the first host they land on.

Required: Integrated Security at the Branch:

To protect against these threats, we need a new security model with comprehensive, integrated security at the branch site. The branch leverages the appropriate security mechanisms, such as application-aware firewalling, intrusion prevention, URL filtering, and malware protection, to prevent, detect, and protect the network and its various identities from all threats.

Diagram: SD-WAN Security Features. Source: Cisco.

SD-WAN Deployment Models

SD-WAN can be designed in several ways. For example, you can have integrated security at the branch, as mentioned. Security can also be consumed through cloud services or at regional hubs where VNF-based security chains may be leveraged. So, to deploy SD-WAN security, you can choose from several security models.

**Thin Branch**

The first model would be cloud security, often considered a thin branch with security in the Cloud. For example, this design or deployment model might not suit healthcare. Then, we have integrated protection with a single platform for routing and security at the branch. This deployment model is widespread, and we will examine a use case soon. 

**Regional Hub**

A final deployment model would be the regional hub design. We have a co-location or carrier-neutral facility (CNF) where the security functions are virtual network functions (VNFs) at the regional collection hub. I have seen similar architecture with a SASE deployment and segment routing between the regional hubs.

Diagram: SD-WAN Deployment. Source: Cisco.

Recap: WAN Challenges

First, before we delve into these main areas, let me quickly recap the WAN challenges. We had many sites connected over MPLS without a single pane of glass. With so many locations, you had little visibility and limited ability to troubleshoot, and a single application could end up consuming all the bandwidth.

A. Challenge: – **Visibility**

Visibility was a big problem; any gaps in visibility affect your security. In addition, there was little application awareness, which resulted in complex operations, and a DIY approach to application optimization and WAN virtualization resulted in fragmented security.

SD-WAN addresses these challenges by giving you an approach to centrally provision, manage, monitor, and troubleshoot the WAN edges. SD-WAN is not a single appliance or VM; it is an array of technologies grouped under the SD-WAN umbrella. As a result, it increases application performance over the WAN while offering security and data integrity.

B. Challenge: – **Identities & Identity Types**

So, we have users, devices, and things, and we no longer have one type of host to deal with. We have many identities and identity types. One person may have several devices that need an IP connection to communicate with applications hosted in the primary data center, IaaS, or SaaS. 

C. Challenge: – **Useful Telemetry**

IP connectivity must be delivered securely and at scale while gathering good telemetry. The network edges send a wealth of helpful telemetry, which helps you monitor traffic patterns, make predictions, and know when specific paths need to be upgraded. Of course, all of this needs to operate over a secure infrastructure.

SD-WAN security is extensive and encompasses a variety of factors. However, it falls into two main categories. First, we have the security infrastructure category, which secures the control and data plane.

D. Challenge: – **DIA Security**

Then, we have the DIA side of things, where we need to deploy several security functions, such as intrusion prevention, URL filtering, and an application-aware firewall. SD-WAN can be integrated with SASE for DNS-layer filtering. The Cisco version of SASE is Cisco Umbrella.

Now, we need to have layers of security known as the defense-in-depth approach, and DNS-layer filtering is one of the most critical layers, if not the first layer of defense. Everything that wants IP connectivity has to perform a DNS request, so it’s an excellent place to start.

Cisco SD-WAN Security Features

**Secure the SD-WAN Infrastructure**

The SD-WAN infrastructure builds the SD-WAN fabric. Consider the fabric a mesh of connectivity that can take on different topologies. We have several SD-WAN components that can reside in the cloud or on-premises: the Cisco vBond, vAnalytics, vManage, and vSmart controllers. Whether you are cloud-ready depends on whether these components are hosted in the cloud or on-premises.

  • SD-WAN vBond

The Cisco vBond is the orchestration plane and orchestrates the control and management planes. It is the entry point into the network and the first point of authentication. If a WAN Edge device trying to come online passes authentication, the vBond tells it whom to communicate with, in the cloud or on-premises depending on the design, so it can build its control and data planes and join the fabric securely.

Essentially, the vBond distributes connectivity information of the vManage/vSmarts to all WAN edge routers.

The Cisco vBond also acts as a STUN server, allowing you to work around different types of Network Address Translation (NAT). Because there are different types of NAT, we need a device that is NAT-aware and can tell the WAN Edge devices their real (public) IP and port. This way, when the control connections are built, the correct addresses are used.
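Conceptually, the STUN role is simple: a server outside the NAT reports back the source IP and port it observed. The sketch below shows only that idea with plain UDP sockets; it is not the STUN protocol or the vBond implementation, and the addresses are illustrative.

```python
# Conceptual illustration of what a STUN-style service does: report back the
# public (post-NAT) address it observed. This is not the real STUN protocol.
import json
import socket

def reflector(bind_addr: str = "0.0.0.0", port: int = 9999) -> None:
    """Run on a host with a public IP; echoes the observed source address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((bind_addr, port))
        while True:
            _, addr = sock.recvfrom(1024)
            observed = {"ip": addr[0], "port": addr[1]}
            sock.sendto(json.dumps(observed).encode(), addr)

def discover_public_mapping(server_ip: str, port: int = 9999) -> dict:
    """Run behind the NAT; learns how the NAT translated our source address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(3.0)
        sock.sendto(b"probe", (server_ip, port))
        data, _ = sock.recvfrom(1024)
        return json.loads(data.decode())

# Example (hypothetical public reflector at 203.0.113.5):
# print(discover_public_mapping("203.0.113.5"))
```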

  • The Cisco vSmart

The Cisco vSmart is the brain of the solution and facilitates fabric discovery. It handles policy, route, and key exchange. In addition, all WAN Edge devices, physical or virtual, build connectivity to multiple vSmart controllers in different regions for redundancy.

So, the vSmart acts as a dissemination point that distributes data plane and application-aware routing policies to the WAN edge routers. It’s like an enhanced BGP route reflector (RR) but reflects much more than routes, such as policy, control, and security information. This drastically reduces complexity and offers a highly resilient architecture.

These devices connect to the control plane securely using TLS or DTLS tunnels. You can choose the protocol when you set up your SD-WAN, and all of this is configured via vManage.

  • Data Plane Security

Then, we have the data plane—physical or virtual—known as the WAN Edge, which is responsible for moving packets. It no longer has to deal with the complexity of a control plane on the WAN side, such as BGP configurations and maintaining peering relationships. Of course, you still need a control plane on the LAN side, such as route learning via OSPF, but on the WAN side all the complex peerings have been pushed into the vSmart controllers.

The WAN Edge device establishes DTLS or TLS tunnels to the SD-WAN control plane, which consists of the vSmart controllers. Inside these tunnels, the WAN Edge builds a secure control plane with the vSmarts using Cisco’s purpose-built Overlay Management Protocol (OMP).

OMP is the enhanced routing protocol for SD-WAN. You can add many extensions to OMP to enhance the SD-WAN fabric. It is much more intelligent than a standard routing protocol.

  • Cisco vManage

vManage is the UI used for Day 0, Day 1, and Day 2 operations. All policy, routing, QoS, and security configuration is done in vManage, which then pushes it either directly to the WAN Edge or to the vSmart, depending on the type of configuration.

Device-level configuration, such as an IP address change, can be pushed directly to the box with NETCONF; however, policy changes for a remote site are not pushed directly by vManage. For these advanced configurations, the vSmart carries out the path calculations and pushes the result down to the WAN Edge.
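As a generic illustration of pushing device-level configuration over NETCONF (not vManage's internal payloads), the sketch below uses the ncclient library and the standard ietf-interfaces YANG model to set an interface description. The host, credentials, interface name, and target datastore are assumptions that vary per platform.

```python
# Generic NETCONF edit-config example using ncclient (pip install ncclient).
# Host, credentials, interface name, and target datastore are illustrative; this is
# not how vManage is implemented internally, just the protocol concept.
from ncclient import manager

CONFIG = """
<config>
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet1</name>
      <description>WAN uplink - managed via NETCONF</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(
    host="192.0.2.1",          # hypothetical WAN Edge management address
    port=830,
    username="admin",
    password="example-password",
    hostkey_verify=False,
) as m:
    # Some platforms accept the "running" datastore directly; others require
    # "candidate" plus a commit.
    reply = m.edit_config(target="running", config=CONFIG)
    print(reply.ok)
```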

  • Device Identity

We have started to secure the fabric, and everything is encrypted for the control plane. But before we get into data plane security, we must look at physical security and address device and software authentication. How can you authenticate a Cisco device and make sure genuine Cisco software runs on it? Counterfeit devices do exist, but with these protections they will not even boot.

In the past, many vulnerabilities were found in classic IOS routers, for example runtime and static image infections. For these to succeed, someone needed access to the device to modify it. With some of this malware, the router contacted C&C servers when it came online. Malware in IOS is a real threat; there was even a security breach that affected line cards.

However, now Cisco authenticates Cisco hardware and software with Cisco Trust Anchor modules. We also need to secure the OS with Cisco Secure Boot. 

  • Secure Control Plane

We have taken the burden off the WAN Edge router. The traditional WAN had an integrated control and data plane, with high complexity, limited scale, and limited path selection. Even with DMVPN, you still run a routing protocol such as EIGRP or OSPF, plus the IKE component, and IKE is hard to scale in large environments.

With SD-WAN, we have a network-wide control plane that is different from DMVPN’s. Moreover, as the WAN Edge has secure and authenticated connectivity to the vSmart controllers, we can use the vSmarts to remove the complexity, especially around central key rotation. With SD-WAN, you can therefore have an IKE-less architecture.

So you only need a single peering to the vSmart, which allows you to scale horizontally. On top of this, we have OMP, designed from the ground up to be extensible and to carry values that mean something to SD-WAN. It is not just a replacement routing protocol; beyond IP prefixes, it can carry keys, policy information, service insertion, and multicast information.

  • The TLOC 

OMP is also distributed, allowing edge devices to present their identity in the fabric. That identity is the TLOC, which lets you build the fabric in any design you wish. The TLOC is a transport locator with a unique WAN fabric identity; it is present on every box and is composed of the system IP, a color (a label for the transport), and the encapsulation (IPsec or GRE). This lets you differentiate every box and gives you much more control, and all TLOC attributes are carried in the OMP peerings.

Once the TLOC is advertised to the vSmart controllers, the vSmart advertises it to the WAN edges. In this case, we have a full mesh, but you can limit who can learn the TLOC or block it to build a hub-and-spoke topology.

You can change the next hop of a TLOC to change where a route is advertised. In the past, changing BGP on a wide scale was challenging as it was box by box, but now, we can quickly build the topology with SD-WAN.
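A TLOC can be thought of as a (system IP, color, encapsulation) tuple. The sketch below models that idea plus a simple controller-side filter that turns a full mesh into hub-and-spoke by only advertising the hub's TLOCs to spokes. Field names, addresses, and the filtering logic are illustrative, not OMP's actual wire format.

```python
# Conceptual model of TLOCs and a controller-side advertisement filter.
# Field names and the filtering logic are illustrative, not the OMP wire format.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tloc:
    system_ip: str      # unique identity of the WAN Edge
    color: str          # transport label, e.g. "mpls" or "biz-internet"
    encap: str          # "ipsec" or "gre"

FABRIC_TLOCS = {
    "hub":    [Tloc("10.255.0.1", "mpls", "ipsec"), Tloc("10.255.0.1", "biz-internet", "ipsec")],
    "spoke1": [Tloc("10.255.0.11", "biz-internet", "ipsec")],
    "spoke2": [Tloc("10.255.0.12", "biz-internet", "ipsec")],
}

def advertise_to(site: str, topology: str = "hub-and-spoke") -> list[Tloc]:
    """Controller-style decision: which TLOCs does this site get to learn?"""
    if topology == "full-mesh" or site == "hub":
        return [t for s, tlocs in FABRIC_TLOCS.items() if s != site for t in tlocs]
    # Hub-and-spoke: spokes only learn the hub's TLOCs, so no spoke-to-spoke tunnels form.
    return list(FABRIC_TLOCS["hub"])

print([t.color for t in advertise_to("spoke1")])   # ['mpls', 'biz-internet']
```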

  • Secure Data Plane

So we have secure connectivity from the WAN Edge to the vSmart, with OMP running inside secure DTLS/TLS tunnels, and all of this is dynamic. Over the OMP session, the WAN Edge gets the required information from the vSmart, such as TLOCs and security keys. The WAN Edge devices can then build IPsec tunnels to each other—not just standard IPsec, but UDP-based IPsec. UDP-based IPsec tunnels between two boxes work over multiple types of transport, making the fabric transport-agnostic.

We still have route learning on the LAN side, and this route is placed into a VPN, just like a VRF. So this is new reachability information learned from the LAN and sent as an OMP update to the vSmart. The vSmart acts as a route reflector and reflects this information. The vSmart makes all the path decisions for the network.

If you want to manipulate the path information, you can do this in the vSmart controller. So you can choose your preferred transports or change the next hop from the controller without any box-by-box configuration.

  • Direct Internet Access

Next, let us examine direct internet access. So, for direct access, we need to meet several use cases. The primary use case is PCI compliance, so before the packet leaves the branch, it needs to be inspected with a stateful firewall and an IPS solution. The SD-WAN enterprise firewall is application-aware, and we have IPS integrated with SD-WAN that can solve this use case.

Then we have the guest access use case, where guests work in a branch office and need content filtering; SD-WAN’s URL filtering can be used here. There is also a direct cloud access use case: we want to provide optimal performance for employee traffic by selecting certain applications to send directly from the branch to the cloud, while other applications go to HQ. Again, DNS web layer security is helpful here.

So the main features, enterprise firewall, URL filtering, and IPS, are on the box, with the DNS layer filtering being a cloud feature with Cisco Umbrella. This provides complete edge security and does not need a two-box solution, except for the additional Cisco Umbrella, a cloud-native solution dispersed around the globe with security functions delivered from PoPs.

Example of a Cisco device or VNF

One way to consume Cisco SD-WAN security is by leveraging Cisco’s integrated security applications within a rich portfolio of powerful WAN Edge routers, such as the ISR4000 series. On top of the native application-aware stateful firewall, these WAN Edge routers can dedicate compute resources to application service containers running within IOS-XE to enable in-line IDS/IPS, URL filtering, and Advanced Malware Protection (AMP).

Remember, Cisco SD-WAN security can also be consumed through cloud services or regional hubs where VNF-based security chains may be leveraged, or robust security stacks may already exist.

Summary: SD WAN Security

In today’s digital landscape, organizations increasingly adopt Software-Defined Wide Area Network (SD-WAN) solutions to enhance their network connectivity and performance. However, with the growing reliance on SD-WAN, ensuring robust security measures becomes paramount. This blog post explored key considerations and best practices to ensure secure SD-WAN deployments.

Understanding the Basics of SD-WAN

SD-WAN brings flexibility and efficiency to network management by leveraging software-defined networking principles. It allows organizations to establish secure and scalable connections across multiple locations, optimizing traffic flow and reducing costs.

Recognizing the Security Challenges

While SD-WAN offers numerous benefits, it also introduces new security challenges. One key concern is the increased attack surface due to integrating public and private networks. Organizations must be aware of potential vulnerabilities and implement adequate security measures.

Implementing Layered Security Measures

To fortify SD-WAN deployments, a layered security approach is crucial. This includes implementing next-generation firewalls, intrusion detection and prevention systems, secure web gateways, and robust encryption protocols. It is also important to regularly update and patch security devices to mitigate emerging threats.

Strengthening Access Controls

Access control is a vital aspect of SD-WAN security. Organizations should enforce robust authentication mechanisms, such as multi-factor authentication, and implement granular access policies based on user roles and privileges. Additionally, implementing secure SD-WAN edge devices with built-in security features can enhance access control.

Monitoring and Incident Response

Continuous monitoring of SD-WAN traffic is essential for detecting and responding promptly to security incidents. Deploying security information and event management (SIEM) solutions can provide real-time visibility into network activities, enabling rapid threat identification and response.

Conclusion:

In conclusion, securing SD-WAN deployments is a critical aspect of maintaining a resilient and protected network infrastructure. By understanding the basics of SD-WAN, recognizing security challenges, implementing layered security measures, strengthening access controls, and adopting proactive monitoring and incident response strategies, organizations can ensure a robust and secure SD-WAN environment.

Cisco Umbrella

SD-WAN SASE

SD WAN SASE

SD-WAN, or Software-Defined Wide Area Networking, is a transformative technology that enhances network connectivity for geographically dispersed businesses. By utilizing software-defined networking principles, SD-WAN empowers organizations to optimize their wide area network infrastructure, reduce costs, and improve application performance. The key features of SD-WAN include dynamic path selection, centralized management, and enhanced security capabilities.

Secure Access Service Edge, or SASE, is an emerging architectural framework that combines network security and wide area networking into a single cloud-native service. SASE offers a holistic approach to secure network connectivity, integrating features such as secure web gateways, firewall-as-a-service, zero-trust network access, and data loss prevention. By converging security and networking functions, SASE simplifies network management, improves performance, and enhances overall security posture.

Implementing SD-WAN and SASE brings forth a multitude of benefits for businesses. Firstly, organizations can achieve cost savings by leveraging cheaper internet links and reducing reliance on expensive MPLS connections. Secondly, SD-WAN and SASE improve application performance through intelligent traffic steering, ensuring optimal user experience. Moreover, the centralized management capabilities of these technologies simplify network operations, reducing complexity and enhancing agility.

To implement SD-WAN and SASE effectively, businesses need to consider several key factors. This includes evaluating their existing network infrastructure, defining their security requirements, and selecting the appropriate vendors or service providers. It is crucial to design a well-thought-out migration plan and ensure seamless integration with existing systems. Additionally, comprehensive testing and monitoring are essential to guarantee a smooth transition and ongoing success.

As technology continues to evolve, the future of network connectivity lies in the hands of SD-WAN and SASE. These innovative solutions enable businesses to embrace digital transformation, support remote workforces, and adapt to rapidly changing business needs. The integration of artificial intelligence and machine learning capabilities within SD-WAN and SASE will further enhance network performance, security, and automation.

SD-WAN and SASE are revolutionizing network connectivity by providing businesses with scalable, cost-effective, and secure solutions. The combination of SD-WAN's optimization capabilities and SASE's comprehensive security features creates a powerful framework for modern network infrastructures. As organizations navigate the ever-evolving digital landscape, SD-WAN and SASE will undoubtedly play a crucial role in shaping the future of network connectivity.

Highlights: SD WAN SASE

SD-WAN SASE

Understanding SD-WAN

SD-WAN is a networking approach that utilizes software-defined principles to simplify the management and operation of a wide area network. It replaces conventional hardware-based network appliances with software-based solutions, enabling centralized control and automation of network resources. By separating the control plane from the data plane, SD-WAN optimizes traffic routing and provides enhanced visibility and control over network performance.

One of SD-WAN’s key advantages is its ability to enhance network performance. With traditional WAN architectures, network traffic may suffer from congestion and latency issues, leading to decreased performance and user dissatisfaction.

**Dynamically Routing**

SD-WAN tackles these challenges by dynamically routing traffic across multiple paths, optimizing the utilization of available bandwidth. Additionally, it offers intelligent traffic prioritization and Quality of Service (QoS) mechanisms, ensuring that critical applications receive the necessary bandwidth and delivering an improved user experience.

**Advanced Security Features**

Security is a critical concern for any network infrastructure. SD-WAN addresses this concern by incorporating advanced security features. Encryption protocols, secure tunneling, and traffic segmentation are some of the security mechanisms provided by SD-WAN solutions.

Furthermore, SD-WAN offers improved network resilience by enabling automatic failover and seamless traffic rerouting in case of link failures. This ensures high availability and minimizes the impact of network disruptions on critical business operations.

Cisco SD-WAN Cloud hub

SD-WAN Cloud Hub serves as a centralized networking architecture that enables businesses to connect their various branch locations to the cloud. It leverages the software-defined networking (SDN) capabilities of SD-WAN technology to establish secure and optimized connections over the internet. With SD-WAN Cloud Hub, businesses can achieve superior network performance, reduced latency, and enhanced security compared to traditional WAN solutions.

– Enhanced Network Performance: SD-WAN Cloud Hub optimizes network traffic and intelligently routes it through the most efficient path, resulting in improved application performance and user experience.

– Increased Security: With built-in encryption and secure tunnels, SD-WAN Cloud Hub ensures the confidentiality and integrity of data transmitted between branch locations and the cloud.

– Simplified Network Management: The centralized control and management capabilities of SD-WAN Cloud Hub make it easy for businesses to monitor and configure their network settings, reducing complexity and operational costs.

Example WAN Performance & PfR:

Understanding Performance-Based Routing

Performance-based routing is a dynamic method that leverages network monitoring tools and algorithms to determine the most efficient path for data transmission. Unlike traditional routing protocols that rely on static metrics such as hop count, performance-based routing considers factors such as latency, packet loss, and bandwidth availability. By constantly evaluating network performance, routers can make informed decisions in real time, ensuring optimal data flow.
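A toy version of that decision logic: score each candidate path from measured latency, loss, and available bandwidth and pick the best. The weights, thresholds, and sample measurements below are illustrative assumptions, not any vendor's algorithm.

```python
# Toy performance-based path selection: lower score is better. Weights and the
# sample measurements are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str
    latency_ms: float
    loss_pct: float
    available_mbps: float

def score(p: PathMetrics, required_mbps: float = 20.0) -> float:
    if p.available_mbps < required_mbps:
        return float("inf")                    # path cannot carry the application
    return p.latency_ms + (p.loss_pct * 50)    # penalize loss heavily relative to latency

def best_path(paths: list[PathMetrics]) -> PathMetrics:
    return min(paths, key=score)

paths = [
    PathMetrics("mpls",         latency_ms=35.0, loss_pct=0.1, available_mbps=50),
    PathMetrics("biz-internet", latency_ms=22.0, loss_pct=1.5, available_mbps=200),
    PathMetrics("lte",          latency_ms=60.0, loss_pct=0.5, available_mbps=30),
]
print(best_path(paths).name)   # "mpls": internet latency is lower, but its loss penalty dominates
```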

1: Enhanced User Experience: With performance-based routing, data packets are directed through the fastest and most reliable paths, minimizing latency and packet loss. This results in a superior user experience, faster page load times, smoother video streaming, and reduced buffering.

2: Increased Network Efficiency: Performance-based routing optimizes bandwidth usage by dynamically adapting to changing network conditions. It automatically reroutes traffic away from congested links, distributing it evenly and reducing bottlenecks. This leads to improved overall network efficiency and better utilization of available resources.

3: Improved Reliability and Redundancy: Performance-based routing enhances network reliability by actively monitoring link performance. In case of link failures or degraded performance, it can dynamically reroute traffic to alternative paths, ensuring seamless connectivity and minimizing service disruptions.

SD-WAN with DMVPN Phase 3

**Understanding DMVPN Phase 3**

DMVPN, short for Dynamic Multipoint VPN, is a Cisco technology that simplifies the deployment of VPN networks. Building upon the previous phases, DMVPN Phase 3 introduces several key enhancements. Notably, it uses Next Hop Resolution Protocol (NHRP) redirect and shortcut messages, which map overlay tunnel addresses to underlay (NBMA) addresses so spokes can build direct spoke-to-spoke tunnels, optimizing routing and reducing latency.

**Implementing DMVPN Phase 3**

Implementing DMVPN Phase 3 requires careful planning and configuration. The process involves establishing a hub-and-spoke network topology, where the hub acts as a central point of communication, and the spokes connect to it. Configuring NHRP and IPsec encryption are crucial steps in deploying DMVPN Phase 3. Organizations can seamlessly integrate DMVPN Phase 3 into their network infrastructure with proper guidance and expertise.

Understanding SASE

SASE, pronounced “sassy,” is a transformative approach to network security that combines network and security functionalities into a unified cloud-based service. It converges wide area networking (WAN) capabilities with comprehensive security functions, all delivered as a service. SASE aims to simplify and streamline network security, providing organizations with a more efficient and scalable solution.

–SASE Components:

SASE is built upon several key components that work together harmoniously. These include secure web gateways (SWG), cloud access security brokers (CASB), firewall-as-a-service (FWaaS), zero-trust network access (ZTNA), and software-defined wide area networking (SD-WAN). Each component is vital in creating a robust and comprehensive security framework.

–SASE Solutions:

SASE solutions generally consist of a networking component, such as a software-defined wide area network (SD-WAN), plus a wide range of security components offered in cloud-native format.

These security components are added to secure the network’s communication from end to end, provide consistent policy management and enforcement, add security analytics, and enable an integrated administration capability to manage every connection from everything to every resource.

Some of these features commonly include Zero Trust Network Access (ZTNA), which means a Zero Trust approach to security is one of the security components that enables SASE. Therefore, SASE is dependent on Zero Trust.

–Note: The first layer of defense:

I always consider DNS layer security the first layer of defense. Every transaction needs a DNS request, so it’s an excellent place to start your security. If the customer needs additional measures of defense, they can introduce the other security functions that Cisco Umbrella offers, turning security functions (delivered as containers) on and off as they see fit.

Example SASE Technology: IPS IDS

Understanding Suricata

Suricata is an open-source intrusion detection and prevention system (IPS/IDS) for high-speed network traffic analysis. It utilizes multi-threading and a rule-based detection engine to scrutinize network traffic for potential threats, providing real-time alerts and prevention measures. Its versatility extends beyond traditional network security, making it a valuable asset for individuals and organizations.

Suricata offers extensive features that enable efficient threat detection and prevention. Its rule-based engine allows for customizable rule sets, ensuring tailored security policies. Additionally, Suricata supports various protocols, including TCP, UDP, and ICMP, further enhancing its ability to monitor network traffic comprehensively. Advanced features like file extraction, SSL/TLS decryption, and protocol detection add another layer of depth to its capabilities.

The Synergy of SASE and SD-WAN Integration

When SASE and SD-WAN are combined, a networking solution delivers the best of both worlds. By integrating SD-WAN capabilities into the SASE architecture, organizations can simultaneously leverage the benefits of secure connectivity and optimized network performance. This integration allows for intelligent traffic routing based on security policies, ensuring that sensitive data flows through secure channels while non-critical traffic takes advantage of optimized paths.

One significant advantage of integrating SASE and SD-WAN is simplified network management. With a unified platform, IT teams can centrally manage and monitor network connectivity, security policies, and performance. This centralized approach eliminates the need for complex and fragmented network management tools, streamlining operations and reducing administrative overhead.

**Use Case: DMVPN and SD-WAN**

**Example: DMVPN over IPSec**

DMVPN is a tunneling protocol that allows for the creation of virtual private networks over a public network infrastructure. Unlike traditional VPNs, DMVPN offers a dynamic and scalable architecture, making it ideal for large-scale deployments. By leveraging multipoint GRE (Generic Routing Encapsulation), DMVPN enables direct communication between remote sites without needing a full-mesh topology. This significantly simplifies network management and reduces overhead.

**DMVPN & Security**

IPsec, short for Internet Protocol Security, is a widely adopted protocol suite that provides secure communication over IP networks. It offers confidentiality, integrity, and authentication services, ensuring that data transmitted between network nodes remains secure and tamper-proof. IPsec establishes a secure channel between DMVPN nodes by encrypting IP packets and protecting sensitive information from unauthorized access.

**Combining DMVPN and IPsec**

The combination of DMVPN and IPsec benefits organizations seeking robust and scalable networking solutions. Firstly, DMVPN’s dynamic architecture allows for easy scalability, making it suitable for businesses of all sizes. Additionally, using IPsec ensures end-to-end security, safeguarding data from potential threats. Moreover, by eliminating the need for a full-mesh topology, DMVPN reduces administrative overhead, simplifying network management processes.

DMVPN Single Hub, Dual Cloud Architecture

The single hub, dual cloud configuration takes DMVPN to the next level by enhancing redundancy and performance. In this architecture, a central hub terminates two separate DMVPN clouds, typically built over two different transports or providers, creating a highly resilient and highly available network infrastructure. This setup ensures the network remains operational even if one transport experiences downtime, minimizing disruptions and maximizing uptime.

a. Enhanced Redundancy: By connecting over two clouds, the single hub, dual cloud DMVPN architecture significantly improves network redundancy. If one transport suffers an outage, traffic is automatically rerouted to the alternate cloud, ensuring seamless connectivity and minimal impact on business operations.

b. Optimized Performance: With dual cloud connectivity, the network can distribute traffic intelligently, leveraging the resources of both transports. This load balancing enhances network performance, efficiently utilizing available bandwidth and minimizing latency.

c. Scalability: Single hub, dual cloud DMVPN offers scalability, enabling businesses to easily expand their network infrastructure as their requirements grow. New sites can be seamlessly integrated into the architecture without compromising performance or security.

Related: Before you proceed, you may find the following post helpful for pre-information:

  1. SASE Definition
  2. DNS Security Solutions
  3. Cisco Umbrella CASB
  4. SASE Model
  5. Secure Firewall
  6. SASE Visibility
  7. Zero Trust SASE

SASE Networking

Starting SASE Networking

We have a common goal: to move users closer to the cloud services they are accessing. However, traffic sent over the Internet is all best-effort and is often prone to attacks from bad actors and unforeseen performance issues. Over 14,000 BGP incidents occurred last year, so the quality of cloud access over the Internet varies whenever BGP is unstable.

No one approach solves everything, but deploying SASE (Secure Access Service Edge) will give you a solid posture. A Secure Access Service Edge deployment is not something you take out of a box and plug in.

A careful strategy is needed, and I recommend starting with SD-WAN. Specifically, SD-WAN security creates an SD-WAN SASE design. SD-WAN is now mainstream, and cloud security integration is becoming critical, enabling enterprises to evolve to a cloud-based SASE architecture. The SASE Cisco version is called Cisco Umbrella.

**Security SASE**

As organizations have shifted how they connect their distributed workforce to distributed applications in any location, the convergence of networking and cloud security has never been more critical. And that is what security SASE is all about—bringing these two pillars together and enabling them from several cloud-based PoPs.

Designing, deploying, and managing end-to-end network security is essential in the face of today’s constant attacks. Zero Trust SASE lays the foundation for customers to adopt a cloud-delivered, policy-based network security service model.

**SD-WAN SASE**

Then, we have Cisco SD-WAN, a cornerstone of the SASE Solution. In particular, Cisco SD-WAN integration with Cisco Umbrella enables networks to access cloud workloads and SaaS applications securely with one-touch provisioning, deployment flexibility, and optimized performance.

We have several flexible options for journeying to SASE Cisco with Cisco SD-WAN. Cisco has a good solution combining Cisco SD-WAN and cloud-native security, Cisco Umbrella, into a single offering that delivers complete protection. We will get to how this integrates in just a moment.

However, to reach this integration point, you must first understand the stage in your SASE journey. Everyone will be at different stages of the SASE journey, with unique networking and security requirements. For example, you may still be at the SD-WAN with on-premises security.

Then, others may be further down the SASE line with SD-WAN and Umbrella SIG integration or even partially at a complete SASE architecture. As a result, there will be a mixture of thick and thin branch site designs.

SASE Network: First steps 

A mix of SASE journey types is to be expected, but you need a consistent, unified policy over this SASE deployment mix. Therefore, we must strive for compatible network and security functions anywhere, delivering a continuous service.

As a second stage to consider, most organizations are looking for multiple security services, not just a CASB or a firewall; they want multi-function cloud security. Once you move to the cloud, you will increase efficiency and benefit from multi-functional cloud-delivered security services.

SASE Network: Combine all security functions

So, the other initial step to SASE is to combine security services into a cloud-delivered service. All security functions are now delivered from one place, dispersed globally with PoPs. This can be done with Cisco Umbrella, a multi-function security SASE solution.

Cisco Umbrella integrates multiple services to manage protection and has all of this on one platform. Then, you can deploy it to the locations where it is needed. For example, some sites only need the DNS-layer filtering; for others, you may need full CASB and SWGs.
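
To show what DNS-layer filtering means in practice, here is a minimal sketch of a resolver shim that checks a domain against a blocklist before allowing resolution. The blocklist entries and sinkhole address are invented for illustration; a service like Umbrella does this at its PoPs with live threat intelligence rather than a static set.

```python
# Minimal sketch of DNS-layer filtering: block known-bad domains before they resolve.
import socket

BLOCKLIST = {"malware.example", "phishing.example"}   # illustrative only
SINKHOLE = "0.0.0.0"                                  # illustrative sinkhole answer


def resolve(domain: str) -> str:
    if domain.rstrip(".").lower() in BLOCKLIST:
        print(f"[blocked] {domain} -> sinkhole")
        return SINKHOLE
    addr = socket.gethostbyname(domain)               # normal resolution
    print(f"[allowed] {domain} -> {addr}")
    return addr


if __name__ == "__main__":
    resolve("malware.example")
    resolve("example.com")
```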

SASE Network: Combine security with networking 

Once we have combined all security functions, we need to integrate networking into security, which requires a flexible approach to meeting multi-cloud at scale. This is where we can introduce SD-WAN as a starting point of convergence. SD-WAN’s benefits are clear: dynamic segmentation, application optimization, cloud networking, integrated analytics, and assurance. So, we are covering technology stacks and how the operations team consumes the virtual overlay features.

Cisco SD-WAN use cases can help you transform your WAN edge with deeper cloud integration and rapid access to SASE Cisco. You can have Cisco Umbrella cloud security available from the SD-WAN controller and vice versa, which makes this a good starting point.

Secure Access Service Edge

New connectivity structures: Let us rewind for a moment. The concept of Secure Access Service Edge exists for several reasons, and several products can be combined to form a SASE offering. The main driver for SASE is the major shift in the IT landscape.

We have different types of people connecting to the network, using our network to get to the cloud, or there can be direct cloud access. This has driven the requirements for a new security architecture to match these new connectivity structures. Nothing can be trusted, so you need to evolve your connectivity requirements. 

Shift workloads to the cloud: Workloads have been shifting to the cloud. Therefore, there are better approaches than backhauling users through a data center to reach cloud applications. Backhauling to a central data center to access cloud applications wastes resources and should only be used for applications that can’t be placed in the cloud, as it increases application latency and produces an unpredictable user experience. The cloud drives a significant shift in network architecture, and you should take advantage of it.

SASE Network: New SASE design

Initially, we had a hub-and-spoke architecture with traditional appliances, but we have moved to a design where network and security capabilities are delivered from the cloud. This puts the Internet at the center, creating a global cloud edge that makes sense for users to access, rather than backhauling to a central data center simply because it is there.

This is the paradigm shift we are seeing with the new SASE architecture. So, users can connect directly to this new cloud edge, the main headquarters can join the cloud edge, and branch offices can connect via SD-WAN to the cloud edge.

So, this new cloud edge fronts all data and applications. Each cloud edge PoP then delivers the security and network functions it needs as a suite of services for the branch site or remote user connecting to it.

1) The need for DIA

Firstly, most customers want to leverage Direct Internet Access (DIA) circuits because they want the data center to be something other than the aggregation path for most of the traffic going to the cloud. Then, we have complications or requirements for some applications, such as Office 365.

In this case, there is a specific requirement from Microsoft: such an application should not be subject to a proxy. Office 365 demands DIA, and dedicated connectivity such as Azure ExpressRoute is an option for specific cases.

2) Identity Security

Then, we will consider identity and identity security. We have new endpoints and identities to consider, and multiple contextual factors must be weighed when determining the risk level of the identity requesting access. Now that the perimeter has shifted, how do I gain complete visibility of the traffic flow and drive a consistent identity-driven policy, not just for the user but also for the devices?

3) Also, segmentation. How do you extend your segmentation strategy to the cloud and open up new connectivity models? For segmentation, you want to isolate all your endpoints, and this may include IoT, CCTV, and other devices. 

**Identity Security Technologies**

Multi-factor authentication (MFA) can be used here, and we can combine multiple authentication factors to grant access. This needs to be a continuous process. I’m also a big fan of Just-in-Time access, where we only give access to a particular segment for a specific time. Once that time is up, access is revoked. This certainly reduces the risk of malware spreading. In addition, you can isolate privileged sessions and use step-up authentication to access critical assets.
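
A rough sketch of the Just-in-Time idea: access to a segment is granted with an expiry, and every check re-evaluates whether the grant is still valid. The user, segment names, and durations below are invented for illustration.

```python
# Sketch of Just-in-Time (JIT) access: grants expire automatically.
import time


class JitAccess:
    def __init__(self):
        self.grants = {}  # (user, segment) -> expiry timestamp

    def grant(self, user: str, segment: str, ttl_seconds: int) -> None:
        self.grants[(user, segment)] = time.time() + ttl_seconds

    def is_allowed(self, user: str, segment: str) -> bool:
        expiry = self.grants.get((user, segment))
        return expiry is not None and time.time() < expiry


access = JitAccess()
access.grant("alice", "cctv-vlan", ttl_seconds=900)   # 15 minutes of access
print(access.is_allowed("alice", "cctv-vlan"))        # True within the window
print(access.is_allowed("alice", "finance-vlan"))     # False, never granted
```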

Security SASE 

SASE Cisco converges networking, connectivity, and security into a single cloud-delivered service. It is an alternative to the traditional on-premises approach to protection. Instead of having separate silos for network and security, SASE unifies networking and security services and delivers edge-to-edge protection.

All-in-one box

SASE is more of a journey to reach than an all-in-one box you can buy and turn on. We know SASE entails Zero Trust Network Access (ZTNA), SD-WAN, CASB, FWaaS, RBI, and SWG, to name a few. 

SASE effectiveness comes from consolidating security and threat protection through a single vendor with a global presence and peering relationships.

**SASE connectivity: SD-WAN SASE**

Connectivity is where we need to connect users anywhere to applications everywhere. This is where the capabilities of SD-WAN SASE come into play. SD-WAN brings advanced technologies such as application-aware routing, WAN optimization, per-segment topologies, and dynamic tunnels.

**SD-WAN Driving Connectivity** 

Now, we have SD-WAN that can handle the connectivity side of things. Then, we need to move to control based on the security side. Control is required for end-to-end threat visibility and security. So, even though the perimeter has shifted, you still need to follow the zero trust model outside of the traditional boundary. 

Multiple forms of security drive SASE that can bring this control; the main ones are secure web gateways, cloud-delivered firewalls, cloud access security brokers, DNS layer security, and remote browser isolation. We need these network and security central pillars to converge into a unified model, which can be provided as a software-as-a-service model.

**Building the SASE architecture** 

There can be several approaches to forming this architecture. We can have a Virtual Machine (VM) for each of the above services, place it in the cloud, and then call this SASE. However, too many hops between network and security services in the VM design will introduce latency. As a result, we need to have a SASE approach that is born for the cloud. A bunch of VMs for each network and security service is not a scalable approach.

SASE: Microservices Architecture

Therefore, a better approach would be to have a microservices, multi-tenancy container architecture with the flexibility to optimize and scale. Consider the SASE architecture to be cloud-native.

A multitenant cloud-native approach to WAN infrastructure enables SASE to service any edge endpoint, including the mobile workforce, without sacrificing performance or security. It also means the complexities of upgrades, patches, and maintenance are handled by the SASE vendor and abstracted away from the enterprise.

Cisco Umbrella is built on a cloud-native microservices architecture. However, Umbrella alone does not provide SASE; it must be integrated with other Cisco products to deliver the SASE architecture. Let’s start with Cisco SD-WAN.

Cisco SD-WAN: Creating SD-WAN SASE

SD-WAN grew in popularity as a more agile and cloud-friendly approach to WAN connectivity. With large workloads shifting to the cloud, SD-WAN gave enterprises a more reliable alternative to Internet-based VPN and a more agile, affordable alternative to MPLS for several use cases.

Underlay – Overlay Network Design

In addition, by abstracting away underlying network transports and enabling a software-defined approach to the WAN, SD-WAN helped enterprises improve network performance and address challenges such as the high costs of MPLS bandwidth and the trombone-routing problem. 

SD-WAN is essential to SASE success and is a crucial building block for it. SASE cannot deliver ubiquitous security without the safeguards SD-WAN provides, including:

  • Enabling Network Address Translation (NAT)
  • Segmenting the network into multiple subnetworks
  • Firewalling unwanted incoming and VLAN-to-VLAN traffic
  • Securing site-to-site/in-tunnel VPN

So, SD-WAN can ride on top of any transport, whether you have MPLS or an internet breakout, and onboard any user and consumption model. This is a good starting point for SASE, and we can use SD-WAN’s embedded security to begin the journey.

Example: Underlay & Overlay with GRE
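
As a small illustration of the underlay/overlay split mentioned above, the sketch below uses Scapy (assumed installed) to wrap an overlay IP packet inside a GRE header carried by an underlay IP header, which is essentially what an SD-WAN or DMVPN tunnel does before adding encryption. The addresses are documentation ranges chosen for illustration.

```python
# Sketch: an overlay packet (private addressing) encapsulated in GRE over an
# underlay packet (public transport addressing). Requires: pip install scapy
from scapy.all import IP, GRE, ICMP

underlay = IP(src="203.0.113.1", dst="198.51.100.1")        # transport/WAN addresses
overlay = IP(src="10.1.1.10", dst="10.2.2.20") / ICMP()     # private site-to-site traffic

tunnelled = underlay / GRE() / overlay
tunnelled.show()   # prints the stacked headers: IP / GRE / IP / ICMP
```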

SD-WAN Security Stack: SD-WAN SASE

The SD-WAN security stack is entirely consistent on-premises and in the cloud. SD-WAN supports the enterprise firewall, which is layer 7 aware, an intrusion prevention system built on SNORT, URL filtering, advanced malware protection, and SSL proxy.

Everything except the enterprise firewall is enabled through a container architecture, and automated security templates are available. So, based on intent, the SD-WAN component of vManage pushes the configuration to the WAN edge so that the security services can be turned on.

All of this can be done with automated templates from the SD-WAN controller, which configures Cisco Umbrella from Cisco SD-WAN. What I find helpful about this is the excellent integration between vManage and Umbrella, which essentially streamlines security. There are automated templates in vManage that you can leverage to achieve this functionality in Cisco Umbrella.

Cisco Umbrella: Enabling Security SASE

The next level of the SASE journey would be with Cisco Umbrella. So, we still have the SD-WAN network and security capabilities enabled. An SD-WAN fabric provides a secure connection to connect to Cisco Umbrella, gaining all the benefits of the SD-WAN connecting model, such as auto tunnel and intelligent traffic steering.

This can be combined with Cisco Umbrella’s cloud security capabilities. With these two products combined, we begin to fill out our defense-in-depth layers of security functions. Multiple security features will also work together to strengthen your security posture.

SD-WAN SASE: Connecting the SASE Network 

We use a secure IPsec tunnel for SD-WAN to connect to Cisco Umbrella. An IPsec tunnel is set up to the Cisco Umbrella by pushing the SIG feature template. So, there is no need to set up a tunnel for each WAN edge at the branch.

The IPsec tunnels at the branch are auto-created to the Cisco Umbrella headend. This provides deep integration and automation capabilities between Cisco SD-WAN and Cisco Umbrella. You don’t need to design this; this is done for you.

IPsec Tunnel Capabilities

What type of IPsec capabilities do you have? Remember that each IPsec tunnel can support 250 Mbps and burst higher if needed. For larger deployments, multiple tunnels can be deployed to support higher capacity, and active-active tunnels can be created for more throughput. There is also excellent high availability with this design: an IPsec tunnel is established to the primary Cisco Umbrella PoP.

If this Cisco Umbrella PoP goes down, all the services can be mapped to a secondary Umbrella data center in the same or a different region if needed. It is unlikely that two SASE PoPs in the same region will go down at the same time.

Hybrid anycast handles failover to the secondary SASE PoP or DR site. You don’t need to design this; it is done automatically for you. With this design, Cisco has what is known as a unified deployment template called the “Secure Internet Gateway Template.”

Active-active tunnels

The Cisco SD-WAN vManage auto-template allows up to four active tunnels, operating at 250 Mbps each from a single device. Cisco SD-WAN can then ECMP load-balance traffic across the tunnels. Eight tunnels can be supported, but only four can be active.

These tunnels are established from a single Public IP address using NAT-T, which opens up various design options. Now, you can have active-active tunnels, weighted load balancing, and flexible traffic engineering with a unique template.

We know that each tunnel supports 250 Mbps, and we now support four tunnels with ECMP for increased throughput. These four tunnels can give you 1 Gbps from the branch to the Cisco Umbrella headend. So, as a network admin, you can pass 1 Gbps of traffic to the Umbrella SIG while maintaining performance.

IPsec Tunnel configuration 

For weighted load balancing, we can have, let’s say, two tunnels to Cisco Umbrella with the same weight. These are two DIA circuits with the same bandwidth. When the weight is configured the same for the different ISPs, the traffic will be equally load-balanced. Cisco uses per-flow load balancing, not per-packet load balancing: flows are pinned to a tunnel by hashing the four-tuple.

So, for example, there will be static routes pointing to both tunnels with the same metric. You can also have an unequal-cost multipath use case: you may have small branch sites with dual DIA circuits of different bandwidths and entitlements.

Traffic can be steered at 80:20 over the DIA circuits to optimize the WAN. If you had a static route statement, you would see different metrics. 
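
The flow pinning and weighted steering described above can be sketched as follows: a flow’s four-tuple is hashed, and the hash is mapped onto tunnels in proportion to their configured weights, so packets of one flow always take the same tunnel. The tunnel names and the 80:20 weights are illustrative only; equal weights give the ECMP case.

```python
# Sketch of per-flow (not per-packet) load balancing across SIG tunnels.
# Equal weights give ECMP; 80/20 weights give the unequal-cost case above.
import hashlib

TUNNELS = [("tunnel1", 80), ("tunnel2", 20)]   # illustrative 80:20 weights


def pick_tunnel(src_ip, dst_ip, src_port, dst_port):
    four_tuple = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    bucket = int(hashlib.sha256(four_tuple).hexdigest(), 16) % sum(w for _, w in TUNNELS)
    for name, weight in TUNNELS:
        if bucket < weight:
            return name          # the same flow always hashes to the same tunnel
        bucket -= weight


print(pick_tunnel("10.1.1.10", "142.250.1.1", 51515, 443))
```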

Example Technology: IPsec

Policy-Based Routing to Cisco Umbrella

You can also use policy-based routing to Cisco Umbrella, which allows you to configure flexible traffic engineering. For example, you may want only specific application traffic sent from your branch to Umbrella. At one branch site, you might send only Office 365 or GitHub traffic to Cisco Umbrella, while at branch two you send all traffic, including all cloud and internet-bound traffic. So we can adapt the use case to each design requirement.

Policy-based routing to Cisco Umbrella allows you to select which applications are sent to Umbrella, limiting which types of traffic are routed there. Here, we are leveraging Deep Packet Inspection (DPI) for application classification within the data policy. All of this is based on an app-aware data policy.

Layer 7 Health check 

You will also want to monitor IPsec tunnel health during brownouts, which an underlying transport issue could cause, and dynamically influence traffic forwarding toward high-performing tunnels. Here, Cisco has an L7 tracker with a custom SLA that can be used to monitor tunnel health. The L7 tracker sends an HTTP probe to the Umbrella service API (service.sig.umbrella.com) to measure RTT latency and then compares this to the user’s configured SLA. If a tunnel does not meet the required SLA, it is marked down based on the tracker status, and traffic then flows through the remaining available tunnels.
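
A rough sketch of what such an L7 tracker does: probe a health endpoint over HTTPS, measure the round-trip time, and mark the tunnel down if it misses the configured SLA. The probe URL, SLA value, and timeout below are placeholders for illustration and are not the exact Umbrella behavior.

```python
# Sketch of an L7 health tracker: HTTP probe + RTT compared to a custom SLA.
import time
import urllib.request

PROBE_URL = "https://example.com/health"   # placeholder; not the real SIG endpoint
SLA_MS = 300                               # illustrative SLA threshold


def tunnel_is_healthy(url: str = PROBE_URL, sla_ms: int = SLA_MS) -> bool:
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=2)
    except OSError:
        return False                       # unreachable: mark the tunnel down
    rtt_ms = (time.monotonic() - start) * 1000
    return rtt_ms <= sla_ms                # down if it misses the SLA


if __name__ == "__main__":
    print("tunnel up" if tunnel_is_healthy() else "steer traffic to another tunnel")
```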

SD-WAN and SASE

Summary: SD WAN SASE

In today’s increasingly digital world, businesses constantly seek innovative solutions to enhance network connectivity and security. SD-WAN SASE (Software-Defined Wide Area Network Secure Access Service Edge) is a groundbreaking technology. In this blog post, we delved into the intricacies of SD-WAN SASE, its benefits, and how it is revolutionizing network connectivity.

Section 1: Understanding SD-WAN

SD-WAN, or Software-Defined Wide Area Network, is a virtualized approach to connecting and managing networks. It allows organizations to efficiently connect multiple locations, whether branch offices, data centers, or cloud-based applications. By leveraging software-defined networking principles, SD-WAN offers enhanced agility, performance, and cost savings compared to traditional WAN solutions.

Section 2: Unveiling SASE

SASE, which stands for Secure Access Service Edge, is a transformative concept that combines network security and WAN capabilities into a unified cloud-based architecture. It enables organizations to consolidate networking and security functions, delivering comprehensive protection and improved performance. SASE replaces the traditional hub-and-spoke network model with a more agile and secure architecture.

Section 3: The Synergy of SD-WAN and SASE

When SD-WAN and SASE are combined, the result is a powerful solution that brings together the benefits of both technologies. SD-WAN provides network agility and scalability, while SASE ensures robust security measures are seamlessly integrated into the network. This synergy enables organizations to optimize their network performance while safeguarding against evolving cybersecurity threats.

Section 4: Benefits of SD-WAN SASE

4.1 Enhanced Performance and User Experience: SD-WAN SASE optimizes traffic routing, ensuring applications and data take the most efficient path. It prioritizes critical applications, resulting in improved performance and user experience.

4.2 Simplified Network Management: The unified architecture of SD-WAN SASE simplifies network management by consolidating various functions into a single platform. This streamlines operations and reduces complexity.

4.3 Enhanced Security: With SASE, security functions are natively integrated into the network. This ensures consistent and comprehensive protection across all locations, devices, and users, regardless of their physical location.

4.4 Cost Savings: SD-WAN SASE reduces the reliance on expensive hardware and dedicated security appliances, resulting in cost savings for organizations.

Conclusion:

In conclusion, SD-WAN SASE is transforming the landscape of network connectivity and security. By combining the agility of SD-WAN and the robustness of SASE, organizations can achieve optimal performance, enhanced security, simplified management, and cost savings. Embracing this innovative technology can empower businesses to stay ahead in the ever-evolving digital world.

SASE Cisco


SASE Solution

In the realm of network security, the rise of SASE (Secure Access Service Edge) solution has been nothing short of revolutionary. Combining the capabilities of networking and security into a single cloud-based service, SASE has transformed the way organizations manage and protect their digital infrastructure. In this blog post, we will explore the key components and benefits of SASE, shedding light on how it is reshaping the landscape of network security.

SASE, an acronym for Secure Access Service Edge, is a comprehensive framework that converges network and security services into a unified cloud-native architecture. By merging wide area networking (WAN) and network security functions, SASE enables organizations to simplify their infrastructure while enhancing security and performance. This convergence is achieved through the integration of various technologies such as SD-WAN (Software-Defined Wide Area Networking), firewall-as-a-service, secure web gateways, and more.

1. SD-WAN: SD-WAN technology lies at the heart of SASE, providing agile and scalable connectivity across geographically dispersed locations. It offers centralized management, intelligent traffic routing, and dynamic path selection, optimizing network performance and reliability.

2. Cloud-native Security: SASE leverages cloud-native security services, including firewall-as-a-service (FWaaS), secure web gateways (SWG), data loss prevention (DLP), and zero-trust network access (ZTNA). These services are delivered from the cloud, ensuring consistent and robust security across the entire network infrastructure.

3. Identity-Centric Access: SASE incorporates an identity-centric approach to access control, focusing on user identity rather than network location. With zero-trust principles, SASE ensures that only authorized users and devices can access the network, regardless of their location or network connection.

Benefits of SASE
1. Simplified Infrastructure: SASE eliminates the need for multiple point solutions by consolidating networking and security into a single cloud-based service. This simplification reduces complexity, streamlines operations, and lowers costs associated with managing disparate security tools.

2. Enhanced Security: With its cloud-native security services, SASE provides advanced threat protection, real-time monitoring, and granular access control. This ensures that organizations can defend against emerging threats while maintaining compliance with industry regulations.

3. Improved Performance: SASE leverages SD-WAN technology to optimize network traffic, enabling faster and more reliable connectivity. By dynamically routing traffic based on application and network conditions, SASE minimizes latency and maximizes performance for end-users.

The emergence of SASE solution has revolutionized network security by converging networking and security services into a unified cloud-native architecture. With its key components such as SD-WAN, cloud-native security, and identity-centric access, SASE offers simplified infrastructure, enhanced security, and improved performance for organizations of all sizes.

As the digital landscape continues to evolve, embracing the power of SASE becomes imperative to stay resilient against ever-evolving cyber threats and ensure seamless connectivity across the network.

Highlights: SASE Solution

What is SASE?

SASE, which stands for Secure Access Service Edge, is an innovative networking architecture that combines network security and wide-area networking (WAN) capabilities into a unified cloud-based solution. It shifts networking and security functionalities to the cloud, eliminating the need for traditional hardware-centric approaches. By converging these services, SASE offers a holistic and scalable solution that adapts to the ever-evolving demands of modern businesses.

**The Core Components of SASE**

SASE integrates several key technologies to deliver its promise of seamless security and connectivity. At its core, SASE combines Software-Defined Wide Area Networking (SD-WAN) with robust security services such as Secure Web Gateways (SWG), Cloud Access Security Brokers (CASB), Firewall as a Service (FWaaS), and Zero Trust Network Access (ZTNA). This convergence allows organizations to simplify their IT infrastructure, reduce costs, and enhance security posture by delivering network and security functions from the cloud.

**Benefits of Adopting SASE**

Organizations that adopt SASE can experience a multitude of benefits. Firstly, SASE offers improved performance by leveraging the cloud’s scalability, ensuring users experience low latency and high-speed connections regardless of location. Secondly, the unified approach to security reduces complexity, enabling IT teams to manage policies across all users and devices from a single platform. Lastly, SASE enhances security by applying consistent and context-aware security policies, thus reducing vulnerabilities and potential breaches.

**Challenges and Considerations**

While SASE presents numerous advantages, organizations must also consider potential challenges. Transitioning to a SASE framework requires careful planning and execution, as it involves re-architecting existing network and security infrastructures. Additionally, choosing the right SASE provider is crucial, as the quality and range of services can vary significantly. Organizations must evaluate providers based on their ability to deliver comprehensive security features, global reach, and reliability.

SASE Key Points:

1. Enhanced Security: SASE provides robust security measures, such as integrated firewalling, data loss prevention, and secure web gateways, to safeguard networks and data from emerging threats. Organizations can streamline their security operations and reduce complexity with a unified security framework.

2. Improved Performance: SASE optimizes network performance and minimizes latency by leveraging cloud-native infrastructure. It enables efficient traffic routing, intelligent application steering, and dynamic bandwidth allocation, ensuring a seamless user experience even in geographically dispersed environments.

3. Simplified Network Management: Traditional networking architectures often involve managing multiple vendors and complex configurations. SASE simplifies network management through centralized policy-based controls and automation, reducing administrative overhead and enhancing operational efficiency.

Adopting a SASE Solution:

By adopting a SASE solution, businesses can unlock a plethora of benefits. Firstly, it provides secure access to applications and data from any location, enabling seamless remote work capabilities. Additionally, SASE eliminates the need for traditional hardware-based security appliances, reducing costs and complexity. The centralized management and policy enforcement offered by SASE ensures consistent security across the entire network, regardless of the user’s location or device.

**Adopt a Phased Approach**

While the benefits of SASE are enticing, organizations must approach its implementation strategically. Assessing the existing network infrastructure, defining security requirements, and selecting a reliable SASE provider are crucial. A phased approach to implementation, starting with pilot projects and gradually scaling up, can help organizations ensure a smooth transition and maximize the potential of SASE.

**SASE Components**

Understanding its key components is essential to fully grasping the power of SASE. These include secure web gateways (SWG), cloud access security brokers (CASB), firewall-as-a-service (FWaaS), data loss prevention (DLP), and zero-trust network access (ZTNA). Each element is crucial in fortifying network security while enabling seamless user connectivity.

Example SASE Technology: Web Security Scanner
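
As a small illustration of what a web security scanner checks, the sketch below fetches a page and reports which common HTTP security headers are missing. This is not the Cisco tooling, just the general idea, and the target URL is a placeholder.

```python
# Sketch of a basic web security check: report missing HTTP security headers.
import urllib.request

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]


def scan(url: str) -> None:
    with urllib.request.urlopen(url, timeout=5) as resp:
        present = {h.lower() for h in resp.headers.keys()}
    for header in EXPECTED_HEADERS:
        status = "ok" if header.lower() in present else "MISSING"
        print(f"{header}: {status}")


if __name__ == "__main__":
    scan("https://example.com")
```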

Zero Trust & Identity

The path to zero trust starts with identity. Network access is based on the identity of the user, the device, and the application, not on the device’s IP address or physical location. And this is for a good reason. There needs to be contextual information.

a) User & Device Identity

The user/device’s identity must reflect the business context instead of being associated with binary constructs utterly disjointed from the upper layers. This binds an identity to the networking world and is the best way forward for policy enforcement.

b) Lack of Good Identifiers

Therefore, the dependency on IP addresses or applications as identifiers is removed. Now, the policy is applied consistently regardless of where the user/device is located, while the identity of the user/device/service can be factored into the policy. The SASE stack is dynamically applied based on identity and context while serving zero trust at strategic points in the cloud, enforcing an identity-centric perimeter.

Example Identity Technology: Identity Aware Proxy 


c) The Role of SASE Security

In this post, we will decompose the Zero Trust SASE, considering the SASE fabric and what a SASE solution entails. The SASE security consists of global PoPs. With network and security functions built into each PoP, they are operated with a single management plane. This post will examine the fabric components while discussing the generic networking and security challenges that SASE overcomes, focusing on Cisco SASE.

**Example SASE Solution – Cisco Approach: CSP 5000 & NFV**

The Cisco SASE definition is often deemed just Cisco Umbrella; however, that is just part of the solution. Cisco SASE includes the Umbrella but entails an entirely new architecture based on the CSP 5000 and Network Function Virtualization (NFV) and a series of Virtual Network Functions (VNFs) such as virtual firewalls. We will touch on Cisco SASE soon.

As the SASE solution has many dependencies, you, as an enterprise, need to know how far along you are in your cloud adoption. Whether you follow a public-cloud-first, hybrid, multi-cloud, or private cloud path affects the design of your DMZ. SASE security is all about optimizing the DMZ to enable secure access methods.

Related: For pre-information, you may find the following posts helpful:

  1. SD-WAN SASE
  2. SASE Model
  3. Cisco Secure Firewall
  4. Ebook on SASE Capabilities

SASE Solution

SASE – Cloud-based 

SASE refers to a concept incorporating cloud-based software-defined wide area networking (SD-WAN) with a range of security services and unified management functionality, delivering security and SD-WAN capabilities to any edge computing location. A prime use case for SASE is to address the performance bottlenecks of traditional networks that rely on traffic backhauling. Further, by integrating identity, business context, and real-time risk assessment into every connection, SASE architectures promise to counter a variety of cyber-attacks.

Diagram: SASE explained. Source: Fortinet.

The DMZ: Calling a SASE Solution

**Updating the DMZ**

First, the SASE architecture updates the DMZ, which has remained unchanged since the mid-90s. The DMZ, often called the perimeter network, is a physical or logical subnetwork whose sole purpose is to expose an organization’s external-facing services to untrusted networks.

The DMZ adds a layer of security so that potentially insecure external networks can only access what is exposed in the DMZ. At the same time, the rest of the organization’s network is protected by a security stack.

**Extra time, but that’s about it**

As a result, the DMZ is considered a small, isolated network portion and, if configured correctly, will give you extra time to detect and address breaches, malware, and other types of attacks before they further penetrate the internal networks. 

The critical factor here is that it’s a layer that, at best, gives you additional time before the breach reaches the internal network. The central pain point with the current DMZ architecture is that the bad actor knows it’s there, unless you opt for zero-trust single-packet authorization or some other zero-trust technology.

Example Product: Cisco Umbrella & DNS Layer Security 

Key Note: SASE security and SD-WAN

This is similar to updating the WAN edge with SD-WAN to optimize performance per application with SDWAN overlays. Both SASE and SD-WAN are updating, let’s say, the last hardware bastions in your infrastructure: SD-WAN with the WAN edge and SASE with the DMZ. 

The DMZ is a vital segment, but it needs more than a port-based perimeter firewall sitting in the traffic flow. It also needs good visibility, the ability to detect attacks and respond appropriately, and quick reaction times, which are achievable only with secure automation.

A Perfect DMZ: SASE Solution

  • Support API and Modeling Languages

These new DMZ designs need to be open. It must support API and open standard modeling languages like XML and YANG. This will allow you to support various network and security devices, physical, virtual, and hybrid, via secure API. Not only does it need to be open, but it also needs to be extensible and repeatable. So, we can allow new functionality to be added and removed as the architecture evolves and react to dynamic business objectives.

  • Scalability with NFV

SASE also needs to scale up and down, out and in, with little or no disruption to existing services. It should be able to scale without adding physical appliances, as physical devices can only scale so far. The SASE solution needs Network Function Virtualization ( NFV ) with a series of Virtual Network Functions (VNFs) chained together. Cisco CSP 5000 can be used here, and we will discuss it briefly.

  • Orchestration & Automation

You want to avoid dealing with the device’s CLI. The new SASE fabric needs to be well-programmable. All functional elements of the architecture are fully programmable via API.

The APIs cannot just read data but can change behavior, such as network device configurations. So you will need an orchestrator for this. For example, Ansible Tower could automate and manage configuration drift among the virtual network functions. Ansible Tower provides end-to-end team automation with features such as workflow templates and integration into the CI/CD pipelines.
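
As a hedged sketch of driving configuration through an orchestrator’s API rather than device CLIs, the snippet below launches a job template on an AWX/Ansible Tower instance over its REST API. The host, token, and template ID are placeholders, and you should check your controller’s API documentation for the exact paths your version exposes.

```python
# Sketch: trigger an orchestration job via REST instead of touching device CLIs.
# Host, token, and template ID are placeholders for illustration.
import json
import urllib.request

TOWER = "https://tower.example.com"          # hypothetical controller
TOKEN = "REPLACE_ME"                         # OAuth token for the API
TEMPLATE_ID = 42                             # hypothetical job template

req = urllib.request.Request(
    url=f"{TOWER}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
    method="POST",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    data=json.dumps({"extra_vars": {"site": "branch-12"}}).encode(),
)
with urllib.request.urlopen(req) as resp:
    print("Job launched:", json.loads(resp.read())["id"])
```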

SASE Security & SDN

Network segmentation is essential to separate the data plane and control plane traffic: the control plane configures the devices, and the data plane forwards the traffic. This separation is essential for the scalability and performance of the solution. To manage SASE security, you will need to employ software-defined networking principles. The SDN controller is not in the forwarding path; it just sets up the data plane. The data plane should continue to operate even if the control plane fails, though the control plane can be clustered to avoid failure.

Example: Software Defined Perimeter (SDP)

**Understanding the Basics of VPC Service Controls**

VPC Service Controls are designed to augment the security of Google Cloud services by establishing a security perimeter around your cloud resources. This means that even if a malicious actor gains access to your network, they won’t be able to exfiltrate data outside the defined boundaries. This section will delve into the fundamental components of VPC Service Controls, including service perimeters, access levels, and policy configurations. We’ll also touch on how these controls integrate with other Google Cloud security features to provide a comprehensive security framework.

**Implementing VPC Service Controls: Step-by-Step Guide**

Setting up VPC Service Controls can seem daunting, but with a structured approach, it becomes manageable. This section outlines a step-by-step guide to implementing VPC Service Controls on Google Cloud. From defining your service perimeter to configuring access levels and monitoring the setup, we’ll provide practical insights and best practices. Additionally, we’ll discuss common pitfalls and how to avoid them, ensuring a smooth and efficient configuration process.


Standard Data Center Design

**Traffic Flows**

There will be the consumers of services. So, they can be customers, remote users, partners, and branch sites. These consumers will have to access applications which are hosted in the network or cloud domain. So, the consumers will typically have to connect to a WAN edge for applications hosted in the network.

On the other hand, if consumers want to connect to cloud-based applications, they can go directly to, let’s say, IaaS or the more common SaaS-based applications. Again, this is because access to cloud-based applications does not go via the WAN edge.

For consumers to access network applications not hosted in the cloud, as discussed, they are met with the WAN edge. Traffic will need to traverse the WAN edge to get to the application, which will have another layer of network and security functionality deeper in the network.

**WAN Edge Components**

At the network’s edge, we have many different types of network and security functionality. So, we will have standard routers, a WAN optimization controller, Firewalls, Email Gateways, Flow collectors, and other types of probes to collect traffic.

Then, a network will have a switching fabric. The days of the three-tier data center architecture are gone; any switching fabric where you want IP forwarding to scale is based on a spine-leaf architecture, for example, Cisco ACI. Cisco ACI has good Multi-Pod and Multi-Site capabilities.

**Application Hosting**

Then, we go deeper into the applications and have app-tier access. We have internally hosted applications for internal users. Each will have its own security, forward proxy devices, and load balancers. All of these are physically wired into the fabric and have limited capacity.

Example: MPLS Global WAN

For a global data center design, the data centers commonly connect over MPLS, which provides the global WAN. Each data center connects to the MPLS network and is usually grouped by region, such as EMEA or AMERICAS. So, we have distributed networks, such as the label-switched MPLS network. You can also use Segment Routing to provide this global WAN, which improves traffic engineering.

Some common trends have challenged parts of this design. Many of these trends have called for the introduction of a new network area called the SASE fabric, commonly hosted in a carrier-neutral facility (CNF) or a colocation facility. Such a facility already has all the physical connectivity and circuits in place for you.

Common Trends: SASE Architecture

In a cloud-centric world, users and devices require access to services everywhere. These services are now commonly migrated to SaaS and IaaS-based clouds. So we have an app migration from “dedicated” private to “shared” public cloud. These applications became easy to change based on a microservices design. The growth was rapid, and now you must secure workloads in a multi-tenant environment.

SASE – **Identity is the new perimeter**

As a result, the focal point has changed considerably. Now, it is the identity of the user and device, along with other entities around the connection group, as opposed to the traditional model focusing solely on the data center. Identity then becomes the new perimeter. 

SASE – **Bandwidth Requirements**

Another major trend is that capacity requirements and bandwidth for public clouds doubled. Now that applications are hosted in the cloud, we also need to make changes on the fly to match the agility of the cloud.

When migrating these applications, we must rapidly upgrade internet-facing firewalling, for example, due to remote user access demands. Also, security teams demand IPS/AMP appliance insertions. In a cloud environment, it’s up to you to secure your workloads, and you need the same security levels in the cloud as you would on-premises.

SASE – **Constant Security Policy**

These apps are no longer in our data center, so we need to ensure that the migrated applications housed in AWS or Azure have the same security policy. So we need more services in the current infrastructure.

Now that we have more wiring and configuration, what is the impact on an extensive global network? Suppose you have a distributed application in several areas and want to open a port; that configuration needs to be made and monitored in many places and by many teams.

Internal data center applications are becoming less important than those that run in public clouds. More apps are in the cloud, and the data center is becoming less important as the prime focal point. The data center will always be retained, but the connectivity and design will change with the introduction of a SASE solution.

Security Solution Appliances

A. Different Technology Stacks:

Many common problems challenge the new landscape. Because separate appliances are deployed for the networking and security technology stacks, not to mention the failover requirements between them, we are left with high complexity and overhead.

B. DMZ & Increased Latency:

Legacy network and security appliances located in the DMZ increase latency. Even with service chaining, the latency grows and becomes more challenging to troubleshoot. In addition, the world is encrypted, and this traffic needs to be inspected without degrading application performance.

C. Required: Global SASE Fabric

These challenges are compelling reasons to leverage a cloud-delivered SASE solution. The SASE architecture is a global fabric consisting of a tailored network for application types typically located in the cloud: SASE optimizes where it makes the most sense for the user, device, and application – at geographically dispersed PoPs. Many will connect directly to a colocation facility that can hold the SASE architecture.

D. Architectural Changes

The significant architectural change is that consumers, remote users, customers, branches, and partners now connect to the WAN edge, Internet, or IaaS via a colocation facility. Circuits migrate from the data center to selected "central hub" connectivity and colocation sites.

The old DC becomes just another application provider connecting to the colocation facility. Before addressing what this colocation looks like, we will address the benefits of redefining the network and security architecture. Yes, adopting SASE reduces complexity and overhead, improves security, and increases application performance, but what does that mean practically?

Problems with complexity/overhead/processing/hardware-based solutions

E. Hardware Capacity Limits

Traditional mechanisms are limited by the hardware capacity of the physical appliances at the customer’s site and the lag created for hardware refresh rates needed to add new functionality. Hardware-based network and security solutions build the differentiator of the offering into the hardware. With different hardware, you can accelerate the services and add new features.

F. Feature Limits

Some features are available on specific hardware, not the hardware you already have on-site. In this case, the customer will need to do the heavy lifting. In addition, as the environment evolves, we should not depend on the new network and security features from the new appliance generation. This inefficient and complex model creates high operational overhead and management complexity.

G. Device Upgrades

Device upgrades for new features require significant management. From experience, changing out a line card involves multiple teams. For example, if the line card runs out of ports or you need additional features from a new generation, the change involves project planning, on-site engineers, design guides, and, ideally, line card testing and out-of-hours work. For critical sites, team members may need backup to ensure a successful refresh. Many touchpoints need to be managed.

SASE architecture overcomes tight coupling/hardware-based solutions.

Cloud-based SASE

The cloud-based SASE enables updates for new features and functionality without requiring new deployments of physical appliances. A physical appliance will still need to be deployed, but it can host many virtual networks and security functions, which has an immediate effect on ease of management.

Network and security deployment can now occur without ever touching the enterprise network, allowing enterprises to adopt new capabilities quickly. Once the tight coupling between the features and the customer appliance is removed, we have increased agility and simplicity for deploying network and security services.

Cisco SASE: Virtualization of Network Functions

With a Cisco SASE platform, when we create an object in the networking domain, such as a virtualized network function, the policy is then available in other domains, such as security. Network function virtualization, where we decouple software from hardware, is a familiar concept.

This is often linked to automation and orchestration, where we focus on simplifying the architecture, particularly Layer 4 to Layer 7 services. Virtual machine hosting has enabled the evolution of a variety of virtualized workloads. The virtualization of network and security functions allows you to scale up, down, out, and in at speed and scale without being tied to embedded, fixed-function appliances.

Example: Virtual Appliances

Let’s say you have an ASAv5 as a virtual appliance. This virtual appliance has, for example, one core. If you want more cores, you can scale up to an ASAv50, which supports eight cores. So we can scale up and down. However, what if you want to scale out?

Multipath load balancing

Here, we can add more hosts and ASAv instances to scale out the virtual firewalls with equal-cost multipath load balancing. You don’t want to buy a physical appliance that will only ever do one function; the days of multiple physical point solutions are ending as SASE gains momentum. Instead, you want your data center to scale when capacity demands it, without physical limitations.
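
One way to picture this scale-out behind equal-cost paths is rendezvous (highest-random-weight) hashing: each flow is assigned to one of N virtual firewall instances, and adding or removing an instance only moves the flows that were pinned to it. The instance names below are illustrative.

```python
# Sketch: distribute flows across scaled-out virtual firewalls with rendezvous hashing.
import hashlib

FIREWALLS = ["asav-1", "asav-2", "asav-3"]   # illustrative scaled-out instances


def owner(flow_id: str, nodes=FIREWALLS) -> str:
    def score(node: str) -> int:
        return int(hashlib.sha256(f"{node}:{flow_id}".encode()).hexdigest(), 16)
    return max(nodes, key=score)             # highest score owns the flow


print(owner("10.1.1.10:51515->172.16.0.5:443"))
# Adding "asav-4" only re-homes the flows whose highest score moves to it.
```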

Cisco SASE Architecture: NFV

NFV network services can be deployed and managed much more flexibly because they can be implemented in a virtualized environment using x86 computing resources instead of purpose-built dedicated hardware appliances. The CSP 5000 Series can help you make this technology transition.

In addition, with NFV, the Cisco SASE open approach allows other vendors to submit their Virtual Network Functions (VNF) for certifications to help ensure compatibility with Cisco NFV platforms.

Cloud Services Platform

This central location is a PoP that could be a Cloud Services Platform that could provide the virtualized host. For example, the Cloud Services Platform CSP-5000 could host CSR, FTD, F5, AVI networks, ASAv, or KVM-based services. These network and security functions represent the virtual network appliances that consist of virtual machines. 

  • Cisco SASE and the CSP 5000

Within the Cisco SASE design, the CSP 5000 Series can be deployed within data centers, regional hubs, colocation centers, the WAN edge, the DMZ, and even at a service provider’s Point of Presence (PoP), hosting various Cisco and third-party VNFs. We want to install the CSP at a PoP, specifically in a colocation facility. If you examine the CSP-5000 block diagram, you will see that Cisco SASE has taken a very open ecosystem approach to NFV, such as Open vSwitch.

  • Key Technology: SR-IOV

It uses Single Root I/O Virtualization (SR-IOV) and an Open vSwitch Data Plane Development Kit (OVS-DPDK). The optimized data plane provides near-line rates for SR-IOV-enabled VNFs and high throughput with OVS DPDK interfaces.

  • The Role of the OVS

The CSP has a few networking options. First, Open vSwitch (OVS) is the software Layer 2 switch for the CSP-5000; the CSP’s internal software switch bridges the virtual firewall to the load balancer and on to the ToR switches. You can also use SR-IOV Virtual Ethernet Bridge (VEB) mode, which performs better. As a third option, we have SR-IOV Virtual Ethernet Port Aggregator (VEPA) mode.

**Cisco SASE Security Policies** 

With the flexible design Cisco SASE offers, any policies assigned to users are tied to that user regardless of network location. This removes the complexity of managing network and security policies across multiple locations, users, and devices. But, again, all of this can be done from one platform.

SASE architecture overcomes the complexity and heavy lifting/scale.

**A Personal Note: SASE** 

I remember this from a previous consultancy where we were planning the next year’s security budget. The network was packed with numerous security solutions. All these point solutions are expensive, and there is never a fixed price, so how do you plan for this? Some new solutions we were considering charged on usage-based models, and we did not know the quantities we would need at the time. So the costs keep adding up and up.

SASE removes these types of headaches. In addition, consolidating services into a single provider will reduce the number of vendors and agents/clients on the end-user device. We can still have different vendors operating within a SASE fabric, but they are now VNFs on a single appliance.

**Complexity Reduction**

Overall, substantial complexity savings come from consolidating vendors and technology stacks and pushing them to the cloud, away from the on-premises enterprise network. The SASE fabric abstracts the complexity and reduces costs. In addition, from a hardware point of view, the cloud-based SASE can add capacity to existing PoPs (vertical scaling) and add new PoPs in new locations (horizontal scaling).

**SASE Overcomes Intensive Processing**

Additionally, the SASE-based cloud takes care of intensive processing. For example, as much of internet traffic is now encrypted, malware can use encryption to evade and hide from detection. Here, each PoP can perform deep packet dynamics on TLS-encrypted traffic.

**Encryption/Decryption**

You may not need to fully decrypt the traffic to understand the payload; partial decryption and examining payload patterns are often enough to identify malicious activity. The SASE vendor needs to offer some form of deep packet dynamics technology.

**Challenge: Traditional Firewalling**

Traditional firewalls are not capable of inspecting encrypted traffic. Therefore, performing DPI on TLS-encrypted traffic would require extra modules or a new appliance. A SASE solution ensures the decryption and inspection are done at the PoP, so no performance hits or new devices are needed on the customer sites. This can be done with Deep Packet Dynamic technologies.
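
A rough sketch of the “examine patterns without full decryption” idea: from just the packet sizes and timestamps of an encrypted flow, compute simple features (byte counts, mean packet size, inter-arrival times) that an analytics engine could score. This illustrates the feature-extraction step only, with made-up values, and is not any vendor’s detection logic.

```python
# Sketch: flow features computed from encrypted traffic metadata only
# (no payload decryption): per-packet sizes and timestamps.
from statistics import mean

# (timestamp_seconds, packet_size_bytes) for one flow; values are illustrative.
flow = [(0.000, 583), (0.021, 1514), (0.022, 1514), (0.480, 97), (0.505, 1514)]


def features(packets):
    times = [t for t, _ in packets]
    sizes = [s for _, s in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "packets": len(sizes),
        "bytes": sum(sizes),
        "mean_size": round(mean(sizes), 1),
        "mean_interarrival_ms": round(mean(gaps) * 1000, 1),
    }


print(features(flow))   # features feed a classifier; the payload stays encrypted
```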

Performance Problems – Packet drops/latency

Network congestion resulting in dropped and out-of-order packets is bad for applications. Latency-sensitive applications such as collaboration, video, VoIP, and web conferencing are hit hardest by packet drops. Luckily, there are options to minimize latency and the effects of packet loss.

1. **WAN Optimization**

SD-WAN solutions have WAN optimization features that can be applied on an application-by-application or site-by-site basis. Along with WAN optimization features, there are protocol and application acceleration techniques.

2. **Privatize the WAN**

In addition to existing techniques to reduce packet loss and latency, we can privatize the WAN as much as possible. To control the adverse and varying effects the last mile and middle mile have on applications, we can privatize with a private global backbone consisting of a fabric of PoPs.

Once privatized, we have more control over traffic paths, packet loss, and latency. A private network fabric is a crucial benefit of SASE, as it drives application performance. So we can inspect east-west and north-south traffic.

3. **Traffic Engineering**

Traffic engineering and performance improvement are easy since we have a centralized fabric consisting of many hubs and spokes. When you centralize some of the architecture into a centralized fabric, it is easier to make traffic adjustments globally. The central hub will probably be a collocation facility and can be only one hop away, so the architecture will be simpler and easier to implement.

PoP optimization: routing algorithms and TCP proxy.

Each PoP in the SASE cloud-based solution optimizes where it makes the most sense, not just at the WAN edge. For example, within the SASE fabric, we have global route optimizations to determine which path is best and can be changed for all traffic or specific applications.

4. **PoP acting as TCP Proxy**

These routing algorithms factor in performance metrics such as latency, packet loss, and jitter, selecting the optimal route for every network packet. Unlike internet routing, which favors cost over performance, the WAN backbone constantly analyzes paths and tries to improve performance.
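
The route-selection idea can be sketched as a weighted score over measured latency, loss, and jitter for each candidate path; the PoP then steers traffic, or a specific application, onto the best-scoring path. The paths, measurements, and weights below are invented for illustration.

```python
# Sketch: pick the best path per application from measured latency/loss/jitter.
PATHS = {
    # path_name: (latency_ms, loss_pct, jitter_ms) -- illustrative measurements
    "pop-london": (18.0, 0.1, 2.0),
    "pop-frankfurt": (25.0, 0.0, 1.0),
    "public-internet": (48.0, 1.2, 9.0),
}

WEIGHTS = {"latency": 1.0, "loss": 50.0, "jitter": 2.0}   # voice-like policy


def score(metrics):
    latency, loss, jitter = metrics
    return (WEIGHTS["latency"] * latency
            + WEIGHTS["loss"] * loss
            + WEIGHTS["jitter"] * jitter)    # lower is better


best = min(PATHS, key=lambda name: score(PATHS[name]))
print(f"Steering traffic via {best}")
```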

Example: Increasing The TCP Window Size

- As everything is privatized, we have all the information needed to use the largest practical packet size and rate-based congestion algorithms over traditional loss-based algorithms. As a result, throughput can be maintained end to end without endpoint tuning.

- As each PoP acts as a TCP proxy server, techniques are employed so that the TCP client and server think they are closer to each other. Therefore, a larger TCP window is set, allowing more data to be sent before waiting for an acknowledgment.

Example Technology: TCP Performance Parameters
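
As a hedged illustration of the TCP parameters mentioned above, the snippet below enlarges a socket’s buffers, which bound the effective window, and, on Linux kernels that ship it, selects a rate-based congestion control such as BBR. Availability of `TCP_CONGESTION` and the `bbr` module depends on the platform, so this is a sketch rather than a universal recipe.

```python
# Sketch: tune TCP buffers (window) and congestion control on a socket.
# TCP_CONGESTION and the "bbr" algorithm are Linux-specific and optional.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Larger buffers allow a larger effective TCP window on long fat networks.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)

# Prefer a rate-based algorithm over a loss-based one, if the kernel allows it.
if hasattr(socket, "TCP_CONGESTION"):
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    except OSError:
        pass   # module not loaded; the kernel default remains in effect

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```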

Summary: SASE Solution

In today’s rapidly evolving technological landscape, traditional networking approaches are struggling to keep up with the demands of modern connectivity. Enter SASE (Secure Access Service Edge), a revolutionary solution that combines network and security capabilities into a unified cloud-based architecture. In this post, we explore the key features and benefits of SASE and delve into how it is shaping the future of networking.

Understanding SASE

SASE, pronounced “sassy,” represents a paradigm shift in networking. It converges wide-area networking (WAN) and network security services into a single, cloud-native solution. By integrating these traditionally disparate functions, organizations can simplify network management, improve security, and enhance overall performance. SASE embodies the principles of simplicity, scalability, and flexibility, all while delivering a superior user experience.

The Power of Cloud-native Architecture

At the core of SASE lies its cloud-native architecture. By leveraging the scalability and agility of the cloud, organizations can dynamically scale their network and security resources based on demand. This elasticity eliminates the need for costly infrastructure investments and allows businesses to adapt quickly to changing network requirements. With SASE, organizations can embrace the benefits of a cloud-first approach without compromising on security or performance.

Enhanced Security and Zero Trust

One of the key advantages of SASE is its inherent security capabilities. SASE leverages a Zero Trust model, which means that every user and device is treated as potentially untrusted, regardless of their location or network connection. By enforcing granular access controls, strong authentication mechanisms, and comprehensive threat detection, SASE ensures that only authorized users can access critical resources. This approach significantly reduces the attack surface, mitigates data breaches, and enhances overall security posture.

Simplified Network Management

Traditional networking architectures often involve complex configurations and multiple point solutions, leading to a fragmented and challenging management experience. SASE streamlines network management by centralizing control and policy enforcement through a unified console. This centralized approach simplifies troubleshooting, reduces administrative overhead, and enables organizations to maintain a consistent network experience across their distributed environments.

Conclusion:

As the digital landscape continues to evolve, embracing innovative networking solutions like SASE becomes imperative for organizations seeking to stay ahead of the curve. By consolidating network and security functions into a unified cloud-native architecture, SASE provides simplicity, scalability, and enhanced security. As businesses continue to adopt cloud-based applications and remote work becomes the norm, SASE is poised to revolutionize the way we connect, collaborate, and secure our networks.