Network Configuration Automation

In today’s fast-paced digital landscape, businesses rely heavily on efficient network operations to stay competitive. Network configuration automation has emerged as a game-changer, enabling organizations to streamline their network management processes and enhance overall operational efficiency. This blog post will delve into network configuration automation, its benefits, and how it is revolutionizing the way businesses manage their networks.

Network configuration automation is the practice of using software tools to automate the management and configuration of network devices. Traditionally, network administrators have manually configured and managed network devices, which can be time-consuming, error-prone, and resource-intensive. With network configuration automation, these tasks are automated, enabling administrators to manage and control their network infrastructure centrally, reducing the risk of human error and accelerating network deployment and updates.

 

Highlights: Network Configuration Automation

  • Application Changes

How applications are deployed today is very different from how they were deployed 10-15 years ago. So much has changed with the app. The problem we see today is that the network is not tightly coupled with these developments: the provisioning of network policies and their corresponding configurations is not tightly associated with the application.

Most of the time, the coupling is loose and reactive. For example, analyzing firewall rules and producing a network assessment is nearly impossible with old security devices, which drives the need for network configuration automation and the ability to automate network configuration.

 

Before you proceed, you may find the following articles of interest:

  1. Open Networking
  2. A10 Networks
  3. Brownfield Network Automation

 



Automate Network Configuration

Key Network Configuration Automation Discussion Points:


  • Introduction to Network Configuration Automation and what is involved.

  • Highlighting the components of automating network configuration.

  • Critical points on the use of Ansible and Ansible variables.

  • Technical details on how virtualization changes the manual approach.

  • Technical details on SDN as a companion to automation.

 

Back to basics with network automation

One of the easiest and quickest ways to get started with network automation is to automate the creation of the device configuration files used for initial device provisioning and push them to network devices. You can also extract a lot of information with automation. Network devices have enormous amounts of static and ephemeral data buried inside, and using open-source tools, or building your own, gets you access to this data.

Examples of this type of data include entries in the BGP table, OSPF adjacencies, active neighbors, interface statistics, specific counters and resets, and even counters from application-specific integrated circuits (ASICs) themselves on newer platforms.
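To make the first use case concrete, here is a minimal sketch of template-driven configuration generation in Python with Jinja2. It is illustrative only: the device list, variable names, and template content are hypothetical placeholders, not a definitive implementation.

```python
# Minimal sketch: render per-device config files from one Jinja2 template.
# Requires Jinja2 (pip install jinja2); all names below are hypothetical.
from jinja2 import Template

TEMPLATE = """\
hostname {{ hostname }}
interface {{ mgmt_if }}
 ip address {{ mgmt_ip }} {{ mgmt_mask }}
 no shutdown
"""

devices = [
    {"hostname": "edge-sw1", "mgmt_if": "Vlan100",
     "mgmt_ip": "10.0.100.11", "mgmt_mask": "255.255.255.0"},
    {"hostname": "edge-sw2", "mgmt_if": "Vlan100",
     "mgmt_ip": "10.0.100.12", "mgmt_mask": "255.255.255.0"},
]

template = Template(TEMPLATE)
for dev in devices:
    rendered = template.render(**dev)            # substitute per-device values
    with open(f"{dev['hostname']}.cfg", "w") as f:
        f.write(rendered)                        # ready for initial provisioning
```

The same template scales from two devices to two hundred; only the variable data grows.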

 

  • A key point: Lab guide with Ansible Core

In the following lab, we have Ansible installed and a managed host already prepared. The managed host needs SSH enabled and a user with admin privileges. Ansible finds managed hosts by looking at the inventory file. The inventory file is also a great place to pass variables that can be used to remove site-specific information; this is set under the host vars section below.

Remember that Ansible requires Python; below, we are running Python version 3.0.3 and Jinja version 3.0.3, which is used for templating. You can pass information to Ansible-managed hosts with playbooks and ad hoc commands. Below, I’m using an ad hoc command (the command module is the default) and testing with a ping.
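As a rough illustration of the pieces just described, a minimal inventory might look like this; every hostname, address, and variable is a hypothetical placeholder:

```ini
# Hypothetical inventory: host- and site-specific details live here,
# not in the playbook.
[switches]
sw1 ansible_host=192.0.2.11

[switches:vars]
ansible_user=admin
ntp_server=192.0.2.50
```

The ad hoc connectivity test against that inventory is then a one-liner:

```bash
ansible all -i inventory -m ping
```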

 

Diagram: Ansible Configuration

 

Benefits of Network Configuration Automation:

1. Time and Resource Efficiency: Organizations can free up their IT staff to focus on more strategic initiatives by automating repetitive and time-consuming network configuration tasks. This results in increased productivity and efficiency across the organization.

2. Enhanced Accuracy and Consistency: Manual configuration processes are prone to human error, leading to misconfigurations and network downtime. Network configuration automation eliminates these risks by ensuring consistency and accuracy in network configurations, reducing the chances of costly errors.

3. Rapid Network Deployment: Network administrators can quickly deploy network configurations across multiple devices simultaneously with automation tools. This accelerates network deployment and enables organizations to respond faster to changing business needs.

4. Improved Security and Compliance: Network configuration automation enhances security by enforcing standardized configurations and ensuring compliance with industry regulations. Automated security protocols can be applied consistently across the network, reducing vulnerabilities and enhancing overall network protection.

5. Simplified Network Management: Automation tools provide a centralized platform for managing network configurations, making it easier to monitor, troubleshoot, and maintain network devices. This simplifies network management and reduces the complexity associated with manual configuration processes.

Implementing Network Configuration Automation:

To implement network configuration automation, organizations need to consider the following steps:

1. Assess Network Requirements: Understand the specific network requirements, including device types, network protocols, and security policies.

2. Select an Automation Tool: Evaluate different automation tools available in the market and choose the one that aligns with the organization’s needs and network infrastructure.

3. Create Configuration Templates: Develop standardized configuration templates that can be easily applied to network devices. These templates should include best practices, security policies, and network-specific configurations.

4. Test and Validate: Before deploying automated configurations, thoroughly test and validate them in a controlled environment to ensure their effectiveness and compatibility with the existing network infrastructure.

5. Monitor and Maintain: Regularly monitor and maintain the automated network configurations to identify and resolve any issues or security vulnerabilities that may arise; a minimal drift-check sketch for steps 4 and 5 follows this list.
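To illustrate steps 4 and 5, here is a hedged Python sketch that compares an intended (templated) configuration with the running configuration to detect drift. How the running config is retrieved (SSH, NETCONF, an API) is deliberately abstracted away, and the file names are hypothetical.

```python
# Drift check: diff the intended config against the running config.
# File names are placeholders; retrieving the running config is out of scope.
import difflib

with open("edge-sw1.cfg") as f:        # intended config from the template stage
    intended = f.read().splitlines()
with open("edge-sw1.running") as f:    # running config pulled from the device
    running = f.read().splitlines()

diff = list(difflib.unified_diff(intended, running,
                                 fromfile="intended", tofile="running",
                                 lineterm=""))
if diff:
    print("\n".join(diff))             # drift found: review or remediate
else:
    print("No drift: device matches the intended state")
```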

 

The Need to Automate Network Configuration

Firewall rule bases always contain hundreds, if not thousands, of outdated rules for application services that are no longer required. Another example is unused VLANs left configured on access ports, posing a security risk. The problem lies in the process: how we change and provision the network is not tied to the application, and it is not automated. Inconsistent configurations grow because human interaction is required to tidy things up, and people move on and change roles.

You cannot guarantee that the engineer who creates a firewall rule will be the one deleting it once the corresponding application is decommissioned or changed. And without a rigorous change control process, deprecated configurations sit idle on active nodes.

 

  • A key point: The use of Ansible variables in an Ansible architecture.

For configuration management, you could opt for Red Hat Ansible. The Ansible architecture consists of modules that run tasks against the target hosts listed in the inventory. Various plugins are available for additional context, and Ansible variables allow flexible playbook development. Ansible Core is the CLI-based automation engine, and Ansible Tower is the platform.

The recommended approach for enterprise-wide security is a platform-based approach to the Ansible architecture. Combined with Ansible variables, a platform approach creates a very flexible automation journey: one playbook, with all site-specific information factored out into variables, can run against several different inventories corresponding to Dev, Staging, and Production, as the sketch below illustrates.
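A rough, hypothetical illustration of that pattern; the playbook name and inventory layout are placeholders:

```bash
# One playbook, three inventories; all site-specific values live in the
# per-environment inventories, never in the playbook itself.
#
#   site.yml
#   inventories/
#     dev/hosts
#     staging/hosts
#     production/hosts

ansible-playbook site.yml -i inventories/dev
ansible-playbook site.yml -i inventories/production
```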

Network Automation

The network is critical for business continuity, which creates real uptime pressure: operational uptime is directly tied to the success of the business. The culture that manifests is manual and slow, and that manual culture of network provisioning and operation is the actual bottleneck.

 

Virtualization – Beginning the change

Virtualization vendors are changing the manual approach. Consider basic MAC address learning on a traditional switch. The switch examines the source MAC address of an incoming Ethernet frame; if the source MAC is known, it doesn’t need to do anything, but if it is not, the switch adds that MAC to its table and notes the port the frame entered on. The switch thus maintains a port-to-MAC mapping. The table is continually maintained, with MAC addresses added and removed via timers and flushing, as the toy sketch below illustrates.
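A toy Python sketch of that learning logic, purely for illustration; the 300-second age-out timer is an assumption, not a standard:

```python
# Toy MAC learning table: learn or refresh on each incoming frame,
# and age out entries that have not been seen recently.
import time

AGE_OUT = 300.0                                  # assumed aging timer, seconds
mac_table: dict[str, tuple[int, float]] = {}     # mac -> (port, last_seen)

def frame_in(src_mac: str, port: int) -> None:
    mac_table[src_mac] = (port, time.monotonic())    # learn or refresh

def flush_stale() -> None:
    now = time.monotonic()
    stale = [m for m, (_, seen) in mac_table.items() if now - seen > AGE_OUT]
    for mac in stale:
        del mac_table[mac]                       # timer-driven removal
```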

 

The virtual switch

The virtual switch operates differently. Whenever a VM spins up and a vNIC attaches to the virtual switch, the hypervisor programs everything the virtual switch needs to know to forward that traffic. There is no MAC learning. When you spin down the VM, the hypervisor does not need to wait for a timer.

It knows the source is no longer there and, as a result, no longer needs to hold that state. Less state in a network is a good thing. The critical point is that provisioning the application/virtual machine is tightly coupled with provisioning the network resources. Tightly coupling applications to network provisioning leaves less “garbage collection.”

 

Box mentality  

Once the HLD/LLD content is complete and you move to the configuration stage, the implementation-specific details are currently done per box. The commands are defined on individual boxes and are vendor-specific. This works functionally, and it’s how the Internet was built, but it lacks agility and proper configuration management. The many repetitive tasks of a box mentality destroy your ability to scale.

Businesses are mainly concerned with agility and continuity, but you cannot have either with manual provisioning. You must look at your network as a system, not as individual boxes. When you look at applications and how they scale, the current box-by-box implementation method cannot scale or keep pace with the apps. A move to network configuration automation and automated interaction is the solution.

 

Configuration management


We must move out of a manual approach and into an automated system. Focus initially on low-hanging fruit and easy wins. What takes engineers the longest to do? Do VLAN and subnet allocation sheets ring a bell? We should size according to demand and stop caring about which VLAN or internal subnet gets allocated. The Microsoft Azure cloud is a perfect example.

Azure does not care which private address it assigns to internal systems. It automates the IP allocation and gateway assignment so you can communicate locally; a short sketch of this style of allocation follows below. Designing optimum networks to last and scale is not good enough anymore. The network must evolve and be programmed to keep up with app placement. The configuration approach needs to change, and we should move to proper configuration management and automation.
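A minimal Python sketch of that style of demand-driven allocation, using only the standard-library ipaddress module; the address block and the first-usable-address-as-gateway convention are assumptions for illustration:

```python
# Carve /24s out of a block on demand and derive a gateway automatically,
# replacing the manual subnet allocation sheet.
import ipaddress

block = ipaddress.ip_network("10.20.0.0/16")     # assumed address block
subnets = block.subnets(new_prefix=24)           # ordered generator of /24s

for app in ["web", "db", "logging"]:
    net = next(subnets)                          # next free /24, no sheet needed
    gateway = next(net.hosts())                  # convention: first usable IP
    print(f"{app}: {net} gateway {gateway}")
```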

A widespread tool of choice is Ansible. As previously mentioned, Ansible Tower is the platform, and Ansible Core is the CLI-based engine; both support variable substitution with Ansible variables.

 

SDN: A companion to network automation?

One benefit of Software-Defined Networking (SDN) is that it lets you view your network holistically, from a central viewpoint. Network configuration automation is not SDN, and SDN is not network automation; they work side by side and complement each other. SDN provides abstraction, hiding the detail from those who do not need to see it.

Application owners do not care about VLANs. Application designers, if they have designed the application correctly, should not care about local IP allocations either. Centralization is also a goal for SDN, but centralization with SDN is different from control-plane centralization. Central SDN controllers should not fully control the control plane.

SDN companies have learned this and now allow network nodes to handle some or all control-plane operations locally.

 

  • Programming network: Automate network configuration

You don’t need to be a programmer, but you should start to think like one. Learning to program will make you better equipped to deal with things. Programming the network is a lateral step from what you are doing now, offering an environment to run code and ways to test it before you roll it out.

The current CLI is the most dangerous approach to device configuration; you can even lock yourself out of a device. Programming adds a safety net. It’s more of a mental shift: stop jumping to the CLI and THINK FIRST. Break the task down and create workflows. Workflows are then mapped to an automation platform.

 

  • A key point: TCL and EXPECT

TCL (Tool Command Language) is a scripting language created in 1988 at UC Berkeley. It aims to tie together shell scripts and Unix commands. EXPECT is a TCL extension written by Don Libes. It automates Telnet, SSH, and serial sessions to perform many repetitive tasks.

EXPECT’s main drawbacks are that it is not secure and is synchronous only. If you log on to a device, the login credentials appear in plaintext in the EXPECT script, and you cannot encrypt that data in the code. In addition, it operates sequentially: it sends a command and waits for the output rather than sending several commands and then waiting to receive; it is a send-and-wait methodology.
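The same send-and-wait pattern survives in pexpect, EXPECT’s Python descendant. The sketch below is illustrative (host, prompts, and credentials are placeholders) and shows the plaintext-credential weakness described above:

```python
# Send-and-wait automation with pexpect (pip install pexpect).
# Note the password sits in plaintext in the script: EXPECT's old weakness.
import pexpect

child = pexpect.spawn("ssh admin@192.0.2.11")
child.expect("assword:")                 # block until the password prompt
child.sendline("plaintext-secret")       # credentials exposed in the code
child.expect("#")                        # block until the device prompt
child.sendline("show ip interface brief")
child.expect("#")                        # wait again before reading output
print(child.before.decode())             # everything between the two prompts
child.sendline("exit")
```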

 

  • A key point: SNMP has failed | NETCONF begins

SNMP is used for fault handling, monitoring equipment, and retrieving performance data, but very few people use SNMP to set configurations. There is rarely a 1:1 translation between a CLI configuration operation and an SNMP SET, and that correlation is hard to achieve. As a result, few people use SNMP for device configuration and configuration management.

CLI scripting was the primary approach to automating configuration changes to the network before NETCONF. Unfortunately, CLI scripting has several limitations, including a lack of transaction management, no structured error management, and the ever-changing structure and syntax of commands, which make scripts fragile and costly to maintain. Even autocorrecting scripts won’t be able to fix that.

People make mistakes; ultimately, people are bad at this stuff. It’s the nature of the beast. Human error plays a significant role in network outages, and if a person is not logging in and typing CLI commands, they are less likely to make a costly mistake. Human interaction with the network is a major cause of network outages.

 

NETCONF & Tail-F

NETCONF (Network Configuration Protocol) uses an XML-based data encoding for configuration and protocol messages. It offers secure transport and is asynchronous, so it is not sequential like TCL and EXPECT; being asynchronous makes NETCONF more efficient. It also allows the separation of configuration items from non-configuration (state) items.

Backup and restore are complex with SNMP, as you have no idea which fields are used for the restore. Also, because of the binary nature of SNMP, it isn’t easy to compare configurations from one device to another. NETCONF is much better at this.
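As a hedged sketch, assuming the open-source ncclient library and a NETCONF-enabled device (host and credentials are placeholders), pulling the running configuration as structured XML looks like this; the XML output is straightforward to diff across devices:

```python
# Retrieve the running configuration over NETCONF with ncclient
# (pip install ncclient). Host and credentials are placeholders.
from ncclient import manager

with manager.connect(host="192.0.2.11", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    reply = m.get_config(source="running")   # configuration only, no state data
    print(reply)                             # structured XML, easy to compare
```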

 

A final note: Transaction-based approach

NETCONF offers a transaction-based approach. A transaction is a set of configuration changes, not a sequence. Using SNMP for configuration requires everything to be in the correct sequence/order, but with a transaction, you throw everything in, and the device figures out how to roll it out.

What matters is that operators can write service-level applications that activate service-level changes without making the application aware of the whole sequence of changes that must be completed before the network can serve application requests and responses. Check out an interesting company called Tail-f (now part of Cisco), which offers a family of NETCONF-enabled products.
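A minimal sketch of that transactional model with ncclient, assuming the device advertises the candidate-datastore capability; the interface XML is a hypothetical example:

```python
# Stage a set of changes in the candidate datastore, then commit them
# as a single transaction: all changes apply, or none do.
from ncclient import manager

CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <description>set via one NETCONF transaction</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="192.0.2.11", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    with m.locked(target="candidate"):       # keep other sessions out
        m.edit_config(target="candidate", config=CONFIG)
        m.commit()                           # the transaction takes effect here
```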

 

Conclusion:

Network configuration automation is revolutionizing how businesses manage their networks. It offers many benefits, including time and resource efficiency, enhanced accuracy, rapid network deployment, improved security, and simplified network management. By embracing this technology, organizations can streamline network operations, reduce human error, and stay ahead in the dynamic and ever-evolving digital landscape.

 

IDS IPS Azure

In today’s rapidly evolving digital landscape, it has become crucial for businesses to prioritize the security of their cloud environments. With increasing cyber threats, organizations seek robust security solutions to protect their valuable data and applications. In this blog post, we will explore the concepts of the Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) in Azure and understand how they can fortify the security of your cloud infrastructure.

An Intrusion Detection System (IDS) is vital to any comprehensive security strategy. An IDS acts as a vigilant watchdog, continuously monitoring network traffic for suspicious activities and potential security breaches. It identifies and analyzes unauthorized access attempts, malware infections, and other malicious activities within the network.

Highlights: IDS IPS Azure

  • Azure IDS

Azure offers a native IDS solution called Azure Security Center. This cloud-native security service provides threat detection and response capabilities across hybrid cloud workloads. By leveraging machine learning and behavioral analytics, Azure Security Center can quickly identify potential security threats, including network-based attacks, malware infections, and data exfiltration attempts.

  • Azure Cloud

Microsoft Azure Cloud consists of functional design modules and services such as Azure Internet Edge, Virtual Networks (VNETs), ExpressRoute, Network Security Groups (NSGs) & User Defined Routing (UDR). Some resources are controlled solely by Azure; others are within the customer’s remit. The following post discusses some of those services and details a scenario design use case incorporating Barracuda Next Generation (NG) appliances and IDS IPS Azure.

 

For pre-information, you may find the following posts helpful:

  1. Network Security Components
  2. WAN Design Considerations
  3. Distributed Firewalls
  4. Network Overlays
  5. NFV Use Cases
  6. OpenStack Architecture

 



IDS IPS Azure

Key IDS IPS Azure Discussion Points:


  • Introduction to IDS IPS Azure and what is involved.

  • Highlighting Network and Cloud Access.

  • Critical points on the Azure Cloud Access Layer.

  • Technical details on the inside of the Azure cloud.

  • Technical details for the NG firewall and VNET communication.

 

Back to basics with intrusion detection

Network Intrusion

Network intrusion detection determines when unauthorized people attempt to break into your network. Keeping those bad actors out and extracting them from the network once they’ve gotten in are two different problems, and keeping intruders out is only meaningful if you know when they’re breaking in. Unfortunately, it’s impossible to keep everything out all the time.

Detecting unauthorized connections is therefore a good starting point, but it is only part of the story. Network intrusion detection systems are great at detecting attempts to, for example, log in to your systems and access unprotected network shares.

 

Key Features of Azure IDS:

1. Network Traffic Analysis:

Azure IDS analyzes network traffic to identify patterns and anomalies that may indicate potential security breaches. It leverages machine learning algorithms to detect unusual behavior and promptly alerts administrators to take appropriate action.

2. Threat Intelligence Integration:

Azure Security Center integrates with Microsoft’s global threat intelligence network, enabling it to access real-time information about emerging threats. This integration allows Azure IDS to stay up-to-date with the latest threat intelligence, providing proactive defense against known and unknown threats.

3. Security Alerts and Recommendations:

The IDS solution in Azure generates detailed security alerts, highlighting potential vulnerabilities and offering actionable recommendations to mitigate risks. It empowers organizations to address security gaps and fortify their cloud environment proactively.

 

IDS IPS Azure: Network & Cloud Access

The Azure Network Access Layer is the Azure Internet edge security zone, consisting of IDS/IPS and DDoS protection. It isolates Azure’s private networks from the Internet, acting as Azure’s primary DDoS defense mechanism. Azure administrators ultimately control this zone; private customers do not have access and cannot make configuration changes.

Customers can, however, implement their own IDS/IPS protection by deploying 3rd-party virtual appliances within their private virtual network (VNET), ideally in a dedicated services subnet. Those appliances can then be used in conjunction with Azure’s IDS/IPS but cannot be used as a replacement. The Azure Internet Edge is a mandatory global service offered to all customers.

 

Diagram: IDS IPS Azure.

 

The Azure Cloud Access Layer is the first point of control for customers, giving administrators the ability to administer and manage network security on their Azure private networks. It is comparable to the edge of a corporate network that faces the Internet, i.e., the Internet Edge.

The Cloud Access Layer contains several free Azure services, including virtual firewalls, load balancers, and network address translation (NAT) functionality. It allows administrators to map ports and restrict inbound traffic with ACLs. A VIP represents the cloud access load-balancing appliance to the outside world.

Any traffic destined for your services first hits the VIP. You can then configure which ports you want to open and match preferred traffic sources. If you don’t require any cloud access layer services, you can bypass the layer, allowing all external traffic directly to the service. Beware that this will permit all ports from all sources.

 

Inside Azure cloud

Customers can create VNETs to represent subscriptions or services. For example, you can have a VNET for Production services and another for Development. Within a VNET, you can further divide the address space to create DMZ, application-tier, database, and Active Directory/ADFS subnets. A VNET is a control boundary, and subnets configured within it fall inside the VNET’s address space. Everything within a VNET can communicate automatically; however, VNET-to-VNET traffic is restricted and must be enabled by configuring gateways.

 

  • Network security groups

To segment traffic within a VNET, you can use Azure’s Network Security Groups (NSGs). They are applied to a subnet or a VM and, in some cases, both. NSGs go beyond standard 5-tuple packet filters because their rules are stateful. For example, if an inbound rule allows traffic on a port, a matching rule on the outbound side is not required for the return packets to flow on the same port.
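A hypothetical Azure CLI sketch of such a rule (the resource group and NSG names are placeholders); because NSG rules are stateful, no outbound counterpart is needed for the return traffic:

```bash
# Create an NSG and a stateful inbound rule allowing HTTPS.
az network nsg create --resource-group rg-prod --name nsg-web

az network nsg rule create --resource-group rg-prod --nsg-name nsg-web \
  --name allow-https --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443
```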

 

  • User-defined routing

User-Defined Routing (UDR) modifies the next hop of outbound traffic flows. It can point traffic to appliances for further actions or scrubbing, providing more granular traffic engineering. UDR is comparable to policy-based routing (PBR), a similar on-premises feature.

 

Multi VNET with multi NG firewalls 

The following sections discuss a design scenario for Azure VNET-to-VNET communication via Barracuda NG firewalls, TINA tunnels, and Azure’s UDR. The two VNETs use ExpressRoute gateways for in-cloud Azure fabric communication. Even though the Azure ExpressRoute gateway is intended for on-premises connectivity, it can be used for cloud VNET-to-VNET communication.

The DMZ subnet consists of Barracuda NG firewalls for security scrubbing and Deep Packet Inspection (DPI). Barracuda’s Web Application Firewalls (WAF) could also be placed a layer ahead of the NG, with the ability to perform SSL termination and offload. To route traffic to and from the NG appliance, use UDR, for example: TO: ANY | FROM: WEB | VIA: NG. A hedged CLI sketch of this route follows below.
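A hypothetical Azure CLI sketch of that UDR (the resource names and the NG’s IP are placeholders): route everything leaving the web subnet to the NG appliance as the next hop.

```bash
# Create a route table, add a default route via the NG appliance,
# and associate the table with the web subnet.
az network route-table create --resource-group rg-prod --name rt-web

az network route-table route create --resource-group rg-prod \
  --route-table-name rt-web --name to-ng --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.2.4

az network vnet subnet update --resource-group rg-prod \
  --vnet-name vnet-prod --name web --route-table rt-web
```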

To overcome Azure’s lack of traffic analytics, the NG can be placed between service layers to provide analytics and traffic-profile analysis. Traffic analytics helps determine outbound traffic flows if VMs get compromised and attackers attempt to establish a beachhead. If you are ever compromised, it is better to analyze and block traffic yourself than to call the Azure helpline 🙂

 

VNET-to-VNET TINA tunnels

To secure VNET-to-VNET traffic, Barracuda NG supports TINA tunnels for encryption. Depending on the number of VNETs requiring cross-communication, TINA tunnels can be deployed in a full-mesh or hub-and-spoke design, terminating on the NG itself. TINA tunnels are also used to provide backup traffic engineering over the Internet. They are transport-agnostic and can route different flows via the ExpressRoute and Internet gateways. They are analogous to SD-WAN, but without the full feature set.

Diagram: VNET-to-VNET TINA tunnels

 

A similar design case uses Barracuda TINA agents on servers to create TINA tunnels directly to NGs in a remote VNET. It’s a similar concept to an agent VPN configured on hosts; however, instead of UDR, you use TINA agents and enable tunnels from the hosts to the NG firewalls.

The agent method reduces the number of NGs and is potentially helpful for a hub-and-spoke VNET design. The main drawbacks are the lack of analytics in any VNET without an NG and the requirement to configure agents on participating hosts.

Conclusion:

Implementing robust security measures is paramount in today’s digital landscape, where cyber threats are becoming increasingly sophisticated. Azure IDS and IPS solutions, offered through Azure Security Center, provide organizations with the tools to effectively detect, prevent, and respond to potential security breaches in their cloud environment. By leveraging the power of machine learning, behavioral analytics, and real-time threat intelligence, Azure IDS and IPS enhance the overall security posture of your Azure infrastructure, enabling you to focus on driving business growth with peace of mind.