Implementing Network Security
Having the appropriate network visibility is key to understanding network performance and implementing network security components. Much of the technology used in the network performance world, such as NetFlow, has become very security-focused. The landscape is challenging: workloads are moving to the cloud without monitoring or any security plan. We need a solution that gives us visibility over these clouds and on-premises applications without refactoring the entire monitoring and security stack.
The challenge is that the network is complex and always changing. We have seen this with WAN monitoring, along with the issues that can arise from routing convergence. Change may not come in the form of a hardware refresh, but from a software perspective the network changes constantly and remains dynamic. If you don't have full visibility while the network changes, the result is security blind spots.
Implementing Network Security
Existing security tools are in place, but they need to be better integrated, and the network can provide that additional integration point. Here, we can use a network packet broker that sits in the middle and feeds all the security tools with data that has already been transformed, or let's say optimized, for the particular security device it is sending to, reducing false positives.
Increased enterprise security challenges demand new efforts and methods to stay ahead of threat actors. Therefore, the environment must be monitored from multiple vantage points. Then we can identify patterns that could be early indicators of attack. Finally, once we know there is an attack, we can implement a proactive response model, which will be key to success.
We need good network observability tools to understand what is happening in your environment. Bad actors are always at work, probing for weaknesses and creating new ways to exploit them. Consider how you will gain complete network visibility when deciding on your monitoring solution. And follow the zero-trust approach to security: assume the actor already has access.
So we assume the threat already has access and authentication at all levels, even with the correct security appliances in place, such as Web Application Firewalls (WAF), Intrusion Detection Systems (IDS), and Intrusion Prevention Systems (IPS). The most important point is to assume we have a breach and the bad actor is already present on our network.
Implementing network security: The hacking stages
There are different stages to an attack chain, and with the correct network visibility, you can break the attack at each stage. First comes the initial reconnaissance and access discovery, where a bad actor wants to understand the lay of the land to determine their next moves. Once they know this, they can try to exploit it.
Stage 1: Deter
You first need to deter threats and unauthorized access, then detect suspicious behaviour and access, and finally respond and alert automatically. This is where you should look at network security. For the first stage, deterrence, we have our anti-malware devices, perimeter security devices, identity access, firewalls, and load balancers.
Stage 2: Detect
The next dimension of security is detection. Here we can examine the IDS, log insights, and any security feeds, aligned with analysis and flow consumption. Any signature-based detection can assist you here.
Stage 3: Respond
Then we need to focus on how you can respond. This will be with an anomaly detection and response solution. Remember that all of this must be integrated with, for example, the firewall, enabling you to block and then deter that access.
A key point: Red Hat Ansible Tower
Ansible is the common automation language for everyone across your organization. Specifically, Ansible Tower can be the common language between security tools. This removes repetitive work and gives you the ability to respond to security events in a standardized way. If you want a unified approach, automation can help you here, especially with a platform such as Ansible Tower, integrated with your security technologies.
Example: automating firewall rules. We can add an allowlist entry to the firewall configuration to allow traffic from one machine to another. We can have a playbook that first defines the source and destination IP addresses as variables. Then, once the source and destination objects are defined, the actual access rule between them is defined. All of this can be done with automation.
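The steps above can be sketched as a minimal playbook. This is an illustrative example, not a production rule set: the inventory group name and the IP addresses are assumptions, and a real deployment would use your vendor's firewall module rather than the Linux iptables module shown here.

```yaml
# Hypothetical playbook: allow traffic from one machine to another.
- name: Add firewall allowlist entry
  hosts: firewalls              # assumed inventory group
  become: true
  vars:
    src_ip: 10.0.1.25           # example source address
    dst_ip: 10.0.2.40           # example destination address
  tasks:
    - name: Allow traffic from src_ip to dst_ip
      ansible.builtin.iptables:
        chain: FORWARD
        source: "{{ src_ip }}"
        destination: "{{ dst_ip }}"
        jump: ACCEPT
        comment: "Allowlist entry managed by Ansible"
```

Because the source and destination are variables, the same playbook can be reused for any host pair, which is exactly the standardized response the text describes.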
Implementing Network Security
No single device can stop an attack. We need multiple approaches that can break the attack at any point in the attack chain, whether the bad actors are performing TCP scans, ARP scans, or malware scans. You want to identify these before they become a threat. Always assume threat access, leverage all possible features, and treat every application as critical and in need of protection.
So we need to improve monitoring, investigation capabilities, and detection across various technologies. This is where a zero-trust architecture can help you monitor and improve detection. In addition, we must look at network visibility, logging, and Encrypted Traffic Analysis (ETA) to improve investigation capabilities.
For implementing network security, consider that the network, and the information gleaned from it, adds a lot of value. You can still use an agent-based approach, where an agent collects data from the host and sends it back to, for example, a data lake on which you set up dashboards and queries. However, an agent-based approach will have blind spots: it misses a holistic network view and can't be used with unmanaged devices such as far-reaching edge IoT.
The information gleaned from the host misses what can be derived from the network. With network-derived traffic analysis in particular, you can look into unmanaged hosts such as IoT devices, seeing any host and its actual data. This is not something that can be derived from a log file. The issue with log data is that if bad actors get inside the network, the first thing they want to do to cover their footprints is log spoofing and log injection.
Agent-based and network-derived intelligence
An agent-based approach can be combined with the deep packet inspection process of network-derived intelligence. Network-derived intelligence lets you pull out many metadata attributes: what traffic is this, what are its attributes, is it video, and what is the frame rate? The beauty is that this covers both north-south and east-west traffic as well as unmanaged devices. So we cover the entire infrastructure by combining an agent-based approach with network-derived intelligence.
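To make the idea concrete, here is a minimal sketch of network-derived flow metadata. The packet records are simplified (source, destination, protocol, length) tuples; a real deep packet inspection engine would extract far richer attributes such as application type and frame rate.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Aggregate per-flow metadata from observed packet records.

    Each record is a (src, dst, protocol, length) tuple. Note that this
    works for any host seen on the wire, including unmanaged devices
    that could never run an agent.
    """
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, proto, length in packets:
        key = (src, dst, proto)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += length
    return dict(flows)

# Example: two hosts talking TCP, plus an unmanaged IoT device on UDP.
packets = [
    ("10.0.0.5", "10.0.0.9", "tcp", 1500),
    ("10.0.0.5", "10.0.0.9", "tcp", 400),
    ("10.0.0.7", "10.0.0.9", "udp", 200),  # unmanaged IoT device
]
flows = aggregate_flows(packets)
```

The same aggregation applies whether the traffic is north-south or east-west, which is what gives the network vantage point its coverage.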
Detecting rogue activity: Layers of security
Now we can detect new vulnerabilities, such as old SSL ciphers; shadow IT activity, such as torrenting and crypto mining; and suspicious activity, such as port spoofing. Rogue activity such as crypto mining is a big concern: many breaches and attacks install crypto-mining software, as this is an easy way for a bad actor to make money. The way to detect this is not with an agent but by examining network traffic and looking for anomalies. Even then, the traffic may not look very different, because the mining software will not generate log files and there is no command-and-control communication.
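One traffic anomaly associated with mining is a long-lived flow with unusually uniform packet sizes. Below is a hedged sketch of that heuristic; the thresholds and packet-size samples are illustrative assumptions, and a flagged flow is a candidate for investigation, not proof of mining.

```python
import statistics

def looks_like_mining(packet_sizes, duration_s,
                      min_duration=3600, max_stdev=20.0):
    """Flag long-lived flows whose packet sizes are suspiciously uniform.

    Mining-pool keepalive traffic tends to be steady and small; normal
    application traffic usually mixes small and full-size packets.
    Thresholds here are illustrative, not tuned values.
    """
    if duration_s < min_duration or len(packet_sizes) < 10:
        return False
    return statistics.pstdev(packet_sizes) <= max_stdev

# Steady, uniform sizes over two hours vs. a bursty download pattern.
steady = [96, 100, 98, 97, 99, 96, 100, 98, 101, 97]
bursty = [60, 1500, 900, 40, 1200, 300, 1500, 80, 700, 52]
```

In practice this feature would be one input among many to an anomaly detection model, precisely because the traffic otherwise looks unremarkable.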
So we make observability and the SIEM more targeted to get better information. With the network, we have new capabilities to detect and prevent. This adds a new layer to defence in depth and keeps you on top of the cloud threats happening at the moment. NetFlow is used for network monitoring, detection, and response: you can detect threats and integrate with other tools so you can see a network intrusion as it begins, making decisions based on the network and seeing threats as they happen.
Security Principles: Monitoring and Observability
So, when implementing network security, we need to follow security principles and best practices, the first of which is monitoring and observability. To set up effective security controls on a zero-trust network, you need a clear picture of all the users and devices that have access to the network and what access privileges they require to do their jobs. A comprehensive audit should therefore keep access lists and policies up to date, and network security policies must be kept current. It is a good idea to test their effectiveness regularly to ensure no vulnerabilities have escaped notice. Finally, monitoring: zero-trust network traffic is constantly monitored for unusual or suspicious behaviour.
You can’t protect what you can’t see
The first step in the policy optimization process is understanding how the network connects, what is connecting, and what it should be. You can't protect what you can't see. Therefore, everything disparately managed within a hybrid network must be fully understood and consolidated. Secondly, once you know how things connect, how do you ensure they don't reconnect through a broader definition of connectivity? You must support different user groups and security groups, not just IP addresses; you can't rely on IP addresses alone to implement security controls anymore. For this, we need visibility not just at the traffic-flow level but at the process and contextual-data level. Without this granular application visibility, it's difficult to map and fully understand normal traffic flows and irregular communication patterns.
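The shift away from IP-only controls can be illustrated with a tiny sketch: policy is expressed between named groups, and IP addresses are only a lookup key. The group names, addresses, and policy table are all hypothetical.

```python
# Policy expressed between security groups, not raw IP addresses.
POLICY = {
    ("web-tier", "app-tier"),
    ("app-tier", "db-tier"),
}

# Mapping from observed addresses to group membership (illustrative).
IP_TO_GROUP = {
    "10.0.1.10": "web-tier",
    "10.0.2.20": "app-tier",
    "10.0.3.30": "db-tier",
}

def is_allowed(src_ip, dst_ip):
    """Check a connection against group-based policy.

    Unknown addresses map to None and therefore never match policy,
    so unclassified hosts are denied by default.
    """
    pair = (IP_TO_GROUP.get(src_ip), IP_TO_GROUP.get(dst_ip))
    return pair in POLICY
```

The benefit is that when a workload's address changes, only the mapping changes; the policy itself, and any audit of it, stays stable.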
Complete network visibility
We also need to identify a threat easily when there is one. For this, we need a multi-dimensional security model and good visibility. Network visibility is integral to security, compliance, troubleshooting, and capacity planning. Unfortunately, conventional monitoring solutions cannot cope with the explosive growth of networks. We do have good solutions from Cisco, such as the Nexus Dashboard Data Broker (NDDB), a packet-brokering solution that provides a software-defined, programmable way to aggregate, filter, and replicate network traffic using SPAN or optical TAPs for network monitoring and visibility.
What prevents visibility?
There is a long list of things that can prevent visibility. Firstly, there are too many devices, plus the complexity and variance between vendors in how they are managed; even CLI commands from the same vendor vary. Too many changes result in an inability to meet service level agreements (SLAs), as you are just layering on connectivity without fully understanding how the network connects. This results in complex firewall policies: for example, something has access, but you are not sure whether it should. Again, this leads to large, complex firewall policies without context. More often than not, there is a lack of visibility across the entire network; for example, AWS teams understand the Amazon cloud but have no visibility on-premises. We also have distributed responsibilities across multiple teams, which results in fragmented processes and workflows.
Security Principles: Data-flow Mapping
Network security starts with the data. Data-flow mapping enables you to map and understand how data flows within an organization. First, you must understand how data flows across your hybrid network and between all the different resources and people, such as internal employees, external partners, and customers. Knowing the who, what, when, where, why, and how of your data's use is central to creating a strong security posture. You are then able to understand access to sensitive data.
Data-flow mapping will help you create a baseline. Once you have a baseline, you can start implementing Chaos Engineering projects to help you understand your environment and its limits. One example would be a Chaos Engineering Kubernetes project that breaks systems in a controlled manner.
What prevents mapping sensitive data flows
So what prevents mapping sensitive data flows? Firstly, the inability to understand how the hybrid network connects. Do you know where sensitive data is, how to find it, and how to ensure it has the minimum access necessary? With many teams managing different parts, and the rapid pace of application deployments, there is often no documentation and no filing system in place. There is a lack of documented application connectivity requirements: people don't worry about documentation and focus on connectivity. More often than not, we have an overconnected network environment.
We often connect first and think about security later. There is also the inability to understand whether application connectivity violates security policy or lacks the resources the application requires. Finally, there is a lack of visibility into the cloud and the applications and resources deployed there: what is in the cloud, and how is it connected to on-premises and external Internet access?
Implementing Network Security and the Different Types of Telemetry
Implementing network security involves leveraging different types of telemetry for monitoring and analysis. For this, we have different types of packet analysis and telemetry data. Packet analysis is key, involving new tools and technologies such as packet brokers; in addition, SPAN ports or TAPs need to be installed strategically in the network infrastructure. Telemetry such as flow, SNMP, and API data is also examined. Flow covers technologies such as IPFIX and NetFlow, and we can also start to look at API telemetry. Then we have logs, which provide a wealth of information. So we have different types of telemetry and different ways of collecting and analyzing it, and we can now use this from both the network and security perspectives.
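As a small taste of what flow telemetry looks like on the wire, here is a sketch that parses the 24-byte NetFlow v5 export header, following Cisco's published field layout. The sample datagram is synthetic, built purely for demonstration.

```python
import struct

# NetFlow v5 export header, per Cisco's published format: version,
# count, sys_uptime, unix_secs, unix_nsecs, flow_sequence, engine_type,
# engine_id, sampling_interval (24 bytes, network byte order).
NETFLOW_V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram):
    """Decode the fixed NetFlow v5 header from the start of a datagram."""
    fields = NETFLOW_V5_HEADER.unpack_from(datagram)
    names = ("version", "count", "sys_uptime", "unix_secs",
             "unix_nsecs", "flow_sequence", "engine_type",
             "engine_id", "sampling_interval")
    return dict(zip(names, fields))

# Build a synthetic header: version 5, 30 flow records, sequence 42.
sample = NETFLOW_V5_HEADER.pack(5, 30, 123456, 1700000000, 0, 42, 0, 0, 0)
header = parse_v5_header(sample)
```

A real collector would go on to unpack the 48-byte flow records that follow the header; the header alone already tells it how many records to expect.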
From the security perspective, it would be used for threat detection and response; from the network side, for network and application performance. So there is a lot of telemetry that can be used for security. These technologies were originally viewed as performance monitoring, but security and networking have merged to meet cybersecurity use cases. In summary, we have flow, SNMP, and API data for network and application performance, and encrypted traffic analysis and machine learning for threat and risk identification by security teams.
The issue with packet analysis: Encryption
The issue with packet analysis is that everything is encrypted, especially with TLS 1.3 and at the WAN edge. So how do you decrypt all of this, and how do you store it? Decrypting traffic can create an exploit and a potential attack surface, and you don't want to decrypt everything anyway.
A key point: Do not fully decrypt the packets
One possible solution is not to decrypt the packets fully. When looking at the packet information, especially the headers, which can include layer 2 and TCP headers, you can quickly decipher what is normal and what is malicious. You can look at the packets' lengths, their arrival times and order, and which DNS server is used, along with round-trip times and connection times. There are many features you can extract from encrypted traffic without fully decrypting it. Combined, this information can be fed to machine learning models to distinguish good traffic from bad. So you don't need to decrypt everything: you may never look at the actual payload, but from the pattern of the packets, the right tools can tell that one site is bad and another is good.
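A minimal sketch of this encrypted traffic analysis idea: build a feature vector from packet sizes and inter-arrival times alone, never touching the payload. The flow sample and feature names are illustrative assumptions, not a standard feature set.

```python
def extract_features(timestamps, lengths):
    """Derive side-channel features from an encrypted flow.

    Inputs are per-packet capture times (seconds) and sizes (bytes).
    No payload is inspected; everything here survives TLS 1.3.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "packet_count": len(lengths),
        "total_bytes": sum(lengths),
        "mean_length": sum(lengths) / len(lengths),
        "mean_gap_ms": 1000 * sum(gaps) / len(gaps) if gaps else 0.0,
        "max_length": max(lengths),
    }

# A short, synthetic four-packet flow.
features = extract_features([0.00, 0.02, 0.05, 0.09],
                            [571, 1500, 1500, 220])
```

Feature vectors like this, computed per flow, are what a classifier would consume to separate good traffic from bad.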
Summary: Implementing network security
To summarise, I have condensed how you might start implementing network security into four main stages. First, implementing network security starts with good visibility; this visibility then needs to be combined with all of our existing security tools. A packet broker can be used along with good automation. Finally, this approach needs to span all our environments, both on-premises and in the cloud.
Stage 1: Know your infrastructure with good visibility
The first thing is getting to know all the traffic around your infrastructure, and you need to know this for on-premises, cloud, and multi-cloud scenarios. In short, you need high visibility across all environments.
Stage 2: Implement security tools
Across all environments, we have infrastructure that our applications and services ride upon, protected by several tools placed in different parts of the network. As you know, we have firewalls, DLP, email gateways, and SIEM: a lot of different tools carrying out different security functions. These tools will not disappear or be replaced anytime soon, but they must be better integrated.
Stage 3: Network packet broker
You can introduce a network packet broker: a packet-brokering device that fetches the data and then sends it on to the existing security tools you have in place. Essentially, this ensures there are no blind spots in the network. Remember that the network packet broker should support any workload to any tool.
Stage 4: Cloud packet broker
In the cloud, you will have a variety of workloads and several tools such as SIEM, IPS, and APM, and these tools need access to your data. A packet broker can be used in the cloud too. In a cloud environment, you need to understand the native cloud mechanisms such as VPC traffic mirroring; this traffic can then be brokered, allowing some transformation to happen before we move the traffic over. These transformation functions can include de-duplication, packet slicing, and TLS analysis.
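Two of these transformation functions can be sketched in a few lines: de-duplication (a packet mirrored at two capture points should only be forwarded once) and packet slicing (truncate each packet before forwarding). The slice length of 64 bytes is an illustrative choice, not a recommendation.

```python
import hashlib

def dedupe_and_slice(packets, slice_len=64):
    """Broker-style transformation of a mirrored packet stream.

    Drops packets whose bytes have already been seen (de-duplication),
    then truncates each surviving packet to slice_len bytes (slicing)
    so the downstream tools receive headers rather than full payloads.
    """
    seen, out = set(), []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest in seen:
            continue  # duplicate captured at a second mirror point
        seen.add(digest)
        out.append(pkt[:slice_len])
    return out

# The first packet appears twice, as if mirrored at two points.
stream = [b"A" * 128, b"A" * 128, b"B" * 32]
forwarded = dedupe_and_slice(stream)
```

Reducing duplicate and oversized packets before they reach the SIEM or IPS is what lets the tool stack scale with mirrored VPC traffic.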
This gives you full visibility into the data set across VPCs at scale, eliminating any blind spots while also improving your security posture by sending the appropriate network traffic, whether packets or metadata, to the tool stack in the cloud.