CASB Tools

Cloud computing has become an integral part of modern businesses, offering unparalleled scalability, flexibility, and cost-effectiveness. However, the reliance on the cloud also brings forth new security challenges. Companies must ensure the confidentiality, integrity, and availability of their data, even when it resides outside their traditional network perimeters. This is where Cloud Access Security Broker (CASB) tools come into play. In this blog post, we will explore the importance of CASB tools and how they help organizations enhance their cloud security.

CASB tools act as a crucial intermediary layer between an organization’s on-premises network and the cloud service providers they utilize. These tools provide a range of security capabilities, including visibility, control, and threat protection. By implementing CASB tools, businesses gain granular visibility into their cloud usage, allowing them to identify potential risks, enforce security policies, and protect sensitive data.

 

Highlights: CASB Tools

  • Network Security Components

Recently, when I spoke to Sorell Slaymaker, we agreed that every technology has its own time and place. Often, a specific product set is forcefully molded to perform all tasks, and that brings its own problems. For example, no matter how modified the Next-Gen firewall is, it cannot provide total security. As you know, we need other products for implementing network security, such as a proxy or a cloud access security broker (CASB API), to work alongside the Next-Gen firewall and zero trust technologies, such as single packet authorization, to complete the whole picture.

 

Before you proceed, you may find the following posts helpful:

  1. SASE Definition.
  2. Zero Trust SASE
  3. Full Proxy
  4. Cisco Umbrella CASB
  5. OpenStack Architecture
  6. Linux Networking Subsystem
  7. Cisco CloudLock
  8. Network Configuration Automation

 

  • A key point: Video on Cloud Access Security Brokers (CASB).

In the following video, we will discuss cloud access security brokers. CASBs are subject matter experts in the middle, assisting with a wide range of cloud enablement challenges. The CASB API consists of a broker relationship between the cloud and the consumer and applies to public and private clouds serving all cloud service models – IaaS, PaaS, and SaaS.

 

 

Back to basics with CASB

A cloud access security broker (CASB) allows you to move to the cloud safely. It protects your cloud users, data, and apps and can enable identity security. With a CASB, you can more quickly combat data breaches while meeting compliance regulations.

For example, Cisco has a CASB in its SASE Umbrella solution that exposes shadow IT by supplying the ability to detect and report on cloud applications in use across your organization. For discovered apps, you can view details on the risk level and block or control usage to manage cloud adoption better and reduce risk.
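
To make that visibility idea concrete, here is a minimal Python sketch of shadow IT discovery: it mines web proxy or DNS log lines for cloud app domains and flags anything outside a sanctioned list. The log format, domain names, and allow-list are illustrative assumptions, not how any particular CASB works.

```python
# Minimal sketch: discovering shadow IT from proxy/DNS logs.
# The log format and the sanctioned-app list below are illustrative assumptions.
from collections import Counter

SANCTIONED = {"office365.com", "salesforce.com", "box.com"}  # hypothetical allow-list

def parse_line(line):
    # Assumed log format: "<timestamp> <user> <destination-domain>"
    parts = line.split()
    if len(parts) < 3:
        return None
    return parts[1], parts[2].lower()

def discover_cloud_apps(log_lines):
    usage = Counter()
    for line in log_lines:
        parsed = parse_line(line)
        if not parsed:
            continue
        user, domain = parsed
        usage[domain] += 1
    # Anything not on the sanctioned list is a shadow IT candidate for review
    return {d: c for d, c in usage.items() if d not in SANCTIONED}

if __name__ == "__main__":
    sample = [
        "2024-01-01T10:00:00 alice dropbox.com",
        "2024-01-01T10:01:00 bob salesforce.com",
        "2024-01-01T10:02:00 alice dropbox.com",
    ]
    print(discover_cloud_apps(sample))  # {'dropbox.com': 2}
```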

 

Key Features and Benefits:

1. Visibility and Control:

CASB tools offer comprehensive visibility into cloud applications and services being used within an organization. They provide detailed insights into user activities, data transfers, and application dependencies, allowing businesses to monitor and manage their cloud environment effectively. With this information, organizations can create and enforce access policies, ensuring that only authorized users and devices can access critical data and applications.

2. Data Loss Prevention:

CASB tools help prevent data leakage by monitoring and controlling the movement of data within the cloud. They employ advanced techniques such as encryption, tokenization, and data classification to protect sensitive information from unauthorized access. Additionally, CASB tools enable businesses to set up policies that detect and prevent data exfiltration, ensuring compliance with industry regulations.

3. Threat Protection:

CASB tools play a vital role in identifying and mitigating cloud-based threats. They leverage machine learning algorithms and behavioral analytics to detect anomalous user behavior, potential data breaches, and malware infiltration. By continuously monitoring cloud activities, CASB tools can quickly detect and respond to security incidents, thereby minimizing the impact of potential breaches.

4. Compliance and Governance:

Maintaining compliance with industry regulations is a top priority for organizations across various sectors. CASB tools provide the necessary controls and monitoring capabilities to help businesses meet compliance requirements. They assist in data governance, ensuring that data is stored, accessed, and transmitted securely according to applicable regulations.

 

CASB Tools: Introducing CASB API Security 

Network security components are essential for safeguarding business data. As most data exchange is commonplace in business, APIs are also widely leveraged. An application programming interface (API) is a standard way of exchanging data between systems, typically over an HTTP/S connection. An API call is a predefined way of obtaining access to specific types of information kept in the data fields.

However, with the acceleration of API communication in the digital world, API security, and CASB API marks a critical moment as the data is being passed everywhere. The rapid growth of API communication has resulted in many teams being unprepared. Although there is an ease in performing the API integrations, along with that comes the challenging part of ensuring proper authentication, authorization, and accounting (AAA).

When you initiate an API, there is a potential to open up calls to over 200 data fields. Certain external partners may need access to some, while others may require access to all.

That means a clear and concise understanding of data patterns and access is critical for data loss prevention. Essentially, bad actors are more sophisticated than ever, and simply understanding data authorization is not enough to guard the castle of your data. Business management and finances can face a massive loss due to a lack of data security.
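
As a rough illustration of field-level control, the Python sketch below filters an API response down to the data fields a given partner is scoped to see. The partner names, scopes, and field names are hypothetical; a real CASB or API gateway would enforce this from centrally managed policy.

```python
# Minimal sketch: field-level authorization for an API response.
# The partner names, scopes, and field names are hypothetical.
PARTNER_SCOPES = {
    "payroll-partner": {"employee_id", "salary", "tax_code"},
    "benefits-partner": {"employee_id", "health_plan"},
}

def filter_fields(partner, record):
    """Return only the data fields this partner is authorized to read."""
    allowed = PARTNER_SCOPES.get(partner, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"employee_id": 42, "salary": 55000, "tax_code": "A1", "health_plan": "gold"}
print(filter_fields("benefits-partner", record))  # {'employee_id': 42, 'health_plan': 'gold'}
```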

 

CASB Tools: The challenge

Many enterprise security groups struggle to control shadow IT; managing all Amazon Web Services (AWS) accounts is one example. AWS provides tools, such as Macie, that help discover and protect sensitive data. However, these tools only cover the AWS accounts that Macie is turned on for. Enterprises can have hundreds of test and development accounts with a high risk of data leakage that the security teams are unaware of.

Also, containers and microservices often use transport layer security (TLS) connections to establish secure connectivity, but this falls short in several ways. API security is one of the biggest challenges that needs to be solved in the years to come. So what’s the solution?

 

CASB Tools and CASB API: The way forward

Let’s face it! The digital economy is run by APIs, which permit an exchange of data that needs to be managed. API security tools have become a top priority with the acceleration of API communication. We don’t want private, confidential, or regulated data to leave when it is not supposed to, and we need to account for data that does leave. If you only encrypt the connections and don’t have something in the middle, data can flow in and out without any governance or compliance.

Ideally, a turnkey product that manages API security in real time, independent of the platform (cloud, hybrid, or on-premises), is the next technological evolution of the API security tool market. Having an API security platform across the entire environment, enforcing real-time security with analytics, empowers administrators to control the movement of and access to data. Currently, API security tools fall into three different markets.

  1. Cloud Access Security Brokers: The CASB API security sits between an enterprise and cloud-hosted services, such as O365, SFDC, ADP, or another enterprise.
  2. API Management Platforms: They focus on creating, publishing, and protecting an API. Development teams that create APIs consumed internally and externally rely on these tools as they write applications. You can check out the Royal Cyber blog to learn about API management platforms like IBM API Connect, MuleSoft, Apigee API, and MS Azure API.
  3. Proxy Management: They focus on decrypting all the enterprise traffic, scanning it, and reporting anomalies. Different solutions are typically used for different types of traffic, such as web, email, and chat.

 

Diagram: CASB Tools.

 

Cloud Access Security Brokers

The rise of CASB occurred due to inadequacies in the traditional WAF, Web Security Gateway, and Next-Gen Firewall product ranges. The challenge with these traditional products is that they work more at the service level than at the data level.

They operate at the HTTP/S layer, usually not classifying and parsing the data. Their protection target is different from that of a CASB. Let’s understand it more closely. If you parse the data, you can classify it. Then you have rules to define the policy, access, and the ability to carry out analytics. As the CASB solutions mature, they will become more automated.

They can automatically do API discovery, mapping, classification, and intelligent learning. The CASBs provide a central location for policy and governance. The CASB sits as a boundary between the entities, which requires the ability to decrypt the traffic, be it TLS or IPsec. After decrypting, it reads, parses, and then re-encrypts the traffic to send it on its way.

 

Tokenization

When you are in the middle, you need to decrypt the traffic and then parse it to examine and classify all the data. Once it is classified, if specific data is highly sensitive, be it private, confidential, or regulated, you can tokenize or redact it at a field and file level. Previously, many organizations would create a TLS or IPsec connection between themselves and the cloud provider or 3rd party network.

However, they didn’t have strict governance or compliance capabilities to control and track the data going in and out of the organization. TLS or IPsec provides only point-to-point encryption; the traffic is decrypted once it reaches the end location. As a result, sensitive data is then available in the clear, potentially on an unsecured network.

Additional security controls are needed so that when the connections are complete, the data has an additional level of encryption. TLS or IPSec is for the data in motion, and tokenization is for the data at rest. We have several ways to secure data, and tokenization is one of them. Others include encryption with either provider-managed keys or customer BYOK.

We also have different application-layer encryption. Tokenization substitutes the sensitive data element with a non-sensitive equivalent. The non-sensitive equivalent is referred to as a token. As a result, the 3rd party needs additional credentials to see that data.

So, when you send data out to a 3rd party, you add another layer of protection by putting in a token instead of the specific value, such as a social security number. Redact means that the data is not allowed to leave the enterprise at all.
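
The snippet below is a minimal sketch of the tokenization idea: a sensitive value is swapped for a random token, and the mapping is held in a separate vault that only trusted parties can query. The in-memory dictionary stands in for what would really be an HSM-backed or service-based token vault.

```python
# Minimal sketch of tokenization: swap a sensitive value for a random token
# and keep the mapping in a separate vault. A real system would use an HSM
# or a dedicated tokenization service; the in-memory dict is illustration only.
import secrets

_vault = {}  # token -> original value (would live in a hardened store)

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    # Only callers with access to the vault (extra credentials) can reverse the token
    return _vault[token]

ssn = "078-05-1120"
t = tokenize(ssn)
print(t)               # e.g. tok_3f9c1a2b4d5e6f70 - safe to send to a 3rd party
print(detokenize(t))   # 078-05-1120 - only available inside the trusted boundary
```

In this model, TLS or IPsec protects the data in motion, while the token protects the value once it lands at rest on the 3rd-party side.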

 

CASB API Security 

For API security, AAA is at an API layer. This differs from the well-known AAA model used for traditional network access control (NAC). Typically, you allow IP addresses and port numbers in the network world. In an API world, we are at the server and service layer.

Data Loss Prevention (DLP) is a common add-on feature for CASBs. Once you parse and classify the data, you can then govern it. Here, the primary concerns are what data is allowed to leave, who is allowed to access it, and when. DLP is an entire market in itself, whereas a CASB is specific to particular APIs.

More often than not, you need a different DLP solution, for example, to scan your Word documents. Some vendors bundle DLP and CASB together. We see this with Cisco Umbrella, where the CASB and DLP engines are on the same platform.
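
For a feel of what the DLP engine is doing, here is a minimal sketch that classifies outbound text against a couple of simplified patterns. Real DLP uses far richer detectors (dictionaries, document fingerprints, machine learning); the regexes below are illustrative only.

```python
# Minimal sketch of a DLP-style content scan: classify outbound text by pattern.
# The patterns are simplified examples, not production-grade detectors.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def classify(text: str):
    """Return the list of sensitive data types found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

doc = "Invoice for Jane, SSN 078-05-1120, card 4111 1111 1111 1111"
hits = classify(doc)
if hits:
    print("Block or tokenize before it leaves:", hits)  # ['ssn', 'credit_card']
```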

Presently, next-generation CASBs are becoming more application-specific. They now have specific capabilities for Office 365 and Salesforce. The market is constantly evolving; over time, it will integrate with metadata management.

 

API Management Platforms

API management platforms are used by DevOps teams to create, publish, and protect their APIs. Teams that build APIs consumed internally and externally rely on these tools as they write applications. If everyone in an enterprise were using an effective API management tool, you wouldn’t need a CASB. One of the main reasons for introducing CASBs is that many development and test environments lack good security tooling. As a result, you need a third tool to ensure governance and compliance.

 

    • Finally, Proxy Management

A proxy monitors all the traffic going in and out of the organization. A standard (forward) proxy keeps tabs on traffic from internal users out to external sites. A reverse proxy is the opposite, i.e., external users looking for access to internal systems. A proxy operates at layers 5 and 6. It controls and logs what users are doing but does not go into layer 7, where the all-important data is.

 

Blockchain-Based Applications

Blockchain technology has been making waves across various industries, promising enhanced security, transparency, and efficiency. With its decentralized nature, blockchain has the potential to revolutionize traditional systems and drive innovation in numerous areas. In this blog post, we will delve into the world of blockchain-based applications, exploring their benefits and discussing their impact on different sectors.

 

Highlights: Blockchain-Based Applications

  • Smart Contracts

Firstly, a smart contract is a business application. You need several smart contracts working together to form a business application. If you are a bank or a hedge fund, you need some guarantee that these business applications and their protocols are secure. They all run as smart contracts on different protocols (Ethereum, Neo, Hyperledger Fabric) that carry business risk.

As a result, a comprehensive solution for securing, assuring, and enabling decentralized applications, tightly integrated into your organization’s CI/CD process, is required. This will enable you to innovate securely with blockchain cybersecurity and blockchain-based applications.

  • The Need For A Reliable System

With transactions, you need reliable systems that you can trust and that are tamper-proof. We live in a world full of Internet fraud, malware, and state-sponsored attacks. You need to trust the quality and integrity of the information you are receiving. Companies generating new tokens or going through token events must control their digital assets. As there is no regulation in this area, most are self-regulated, but they need some tools to enable them to be more self-regulated.

 

Before you proceed, you may find the following posts helpful:

  1. DNS Security Solutions
  2. Generic Routing Encapsulation
  3. IPv6 Host Exposure
  4. What is BGP Protocol in Networking
  5. Data Center Failover
  6. Network Security Components
  7. Internet of Things Theory
  8. Service Chaining

 




Key Blockchain-based Applications Discussion Points:


  • Introduction to Blockchain based applications and what is involved.

  • Highlighting the details of Cybersecurity and Blockchain.

  • Critical points on security audits and smart contracts.

  • Technical details on vulnerabilities with a distributed ledger.

 

  • A key point: Video on Blockchain PaaS

The following video discusses Blockchain PaaS. Blockchain technology is a secured, replicated digital ledger of transactions. It is shared among a distributed set of computers instead of having a single provider. A transaction can be anything of value in the blockchain world and not solely a financial transaction. For example, it may record the movement of physical or digital assets in a blockchain ledger. However, the most common use is to record financial transactions.

 

 

Blockchain cybersecurity 

Blockchain cybersecurity is not just about using blockchain as an infrastructure. Most of what can be done is off-chain, using cybersecurity for blockchain-based applications. Off-chain analysis applies analytics and machine learning algorithms to the data in the ledger. This enables you to analyze smart contracts before they are even executed!

Moreover, no discussion about blockchain would be complete without mentioning Bitcoin. Cryptocurrencies work using decentralized blockchain technology spread across several computers that manage and record all transactions. Again, part of the appeal of this technology is its security. Because of this, cryptocurrencies like Bitcoin are hugely appealing to traders.

 

1. Enhanced Security:

One of the key advantages of blockchain-based applications is their robust security measures. Unlike centralized systems, blockchain networks distribute data across multiple nodes, making it nearly impossible for hackers to tamper with the information. The use of cryptographic algorithms ensures that data stored on the blockchain is highly secure, providing peace of mind for users and businesses alike.

2. Improved Transparency:

Transparency is another crucial aspect that blockchain brings to the table. By design, blockchain records every transaction or activity on a shared ledger that is accessible to all participants. This transparency fosters trust among users, as they can verify and track every step of a transaction or process. In industries such as supply chain management, this level of transparency can help prevent fraud, counterfeit products, and unethical practices.

3. Decentralization and Efficiency:

Blockchain-based applications operate on decentralized networks, eliminating the need for intermediaries or central authorities. This peer-to-peer approach streamlines processes, reduces costs, and increases efficiency. For instance, in the financial sector, blockchain-powered payment systems can enable faster, cross-border transactions at lower fees, bypassing traditional banking intermediaries.

4. Smart Contracts:

Smart contracts are self-executing contracts with predefined rules and conditions stored on the blockchain. These contracts automatically execute and enforce the terms of an agreement without the need for intermediaries. Smart contracts have far-reaching applications, including in sectors such as real estate, insurance, and supply chain management. They eliminate the need for manual verification and reduce the risk of fraud or dispute.

5. Impact on Various Industries:

Blockchain-based applications have the potential to disrupt and transform multiple industries. In healthcare, blockchain can securely store and share patient data, improving interoperability and facilitating medical research. In the energy sector, blockchain can enable peer-to-peer energy trading and establish a decentralized grid. Additionally, blockchain-based voting systems can enhance the transparency and integrity of democratic processes.

 

That is not all, though

That is not all, though. Many companies perform security audits for smart contracts manually. However, an automated approach is needed. Employing machine learning algorithms will maximize the benefits of security audits. For adequate security, vulnerability assessments must run against smart contracts. A simulation environment is required that lets you assess smart contracts before deployment to the chain and determine their future impact. This allows you to detect malicious code and run tests before you deploy to your chain, so you understand the impact before it happens.

Protection requires different types of detection, for example, human error, malicious code, and malware vulnerabilities. Let’s not forget about hackers. Hackers are always looking to hack specific protocols. Once a coin reaches a specific market cap, it becomes very interesting to hackers.

Vulnerabilities can significantly affect the distributed ledger once executed, not to mention the effects of UDP scanning. What is needed is a solution that can eliminate the vulnerabilities in smart contracts. Essentially, you should try to catch any security vulnerability at the development stage, at the deployment stage, and at runtime in the ledger. For example, during build time, smart contract code and log files are scanned to ensure that you always deploy robust and secure applications.
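
As a simple illustration of build-time scanning, the sketch below greps smart contract source for a few well-known risky patterns. It is nowhere near a real audit tool, which would add simulation, symbolic execution, and bytecode analysis, but it shows where such checks slot into a CI/CD pipeline. The patterns are assumptions chosen for illustration.

```python
# Minimal sketch: a build-time scan of smart contract source for risky patterns.
# The patterns below are illustrative; real audit tooling goes far deeper.
import re

RISKY_PATTERNS = {
    "tx.origin used for auth": re.compile(r"tx\.origin"),
    "selfdestruct present": re.compile(r"selfdestruct\s*\("),
    "low-level call": re.compile(r"\.call"),
}

def scan_contract(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, rx in RISKY_PATTERNS.items():
            if rx.search(line):
                findings.append((lineno, label))
    return findings

contract = """
function withdraw() public {
    require(tx.origin == owner);
    msg.sender.call{value: balance}("");
}
"""
for lineno, issue in scan_contract(contract):
    print(f"line {lineno}: {issue}")
```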

Conclusion:

Blockchain-based applications hold immense potential to reshape traditional systems and drive innovation in various sectors. With enhanced security, transparency, and efficiency, blockchain technology is set to revolutionize industries and empower individuals and businesses. As blockchain continues to evolve, it will be exciting to witness the transformative impact it has on our daily lives and the global economy.

Brownfield Network Automation

In today’s rapidly evolving digital landscape, businesses constantly seek ways to enhance productivity, streamline operations, and stay ahead of the competition. Network automation has emerged as a powerful solution to achieve these goals, allowing organizations to automate manual tasks, optimize network performance, and boost overall efficiency. While greenfield network automation is widely discussed, this blog post sheds light on another vital aspect: Brownfield Network Automation. Let’s explore how this transformative approach can unlock the untapped potential of existing networks.

Brownfield Network Automation refers to automating an existing network infrastructure that has already been deployed and is fully operational. Unlike greenfield automation, which involves building a network from scratch, brownfield automation focuses on optimizing and modernizing existing networks. It allows businesses to leverage their current investments while reaping the benefits of automation.

Highlights: Brownfield Network Automation

  • The Traditional CLI

Software companies that build automation for network components assume that traditional management platforms don’t apply to the modern network. Networks are complex and contain many moving parts and ways to be configured. So, what does it mean to automate the modern network when considering brownfield network automation? Innovation in this area had been lacking for a long time, until now, with Ansible automation.

If you have multi-vendor equipment and can’t connect to all those devices, breaking into the automation space is complex, and the command line interface (CLI) will live a long life. This has been a natural barrier to entry for innovation in the automation domain.

  • Automation with Ansible

But now we have the Ansible architecture, using Ansible variables, NETCONF, and many other standard modeling structures that allow automation vendors to communicate with all types of networks: brownfield, greenfield, multi-vendor, etc. These data modeling tools and techniques enable an agnostic, programmable viewpoint into the network.

The network elements still need to move to a NETCONF-type infrastructure, but we see all major vendors, such as Cisco, moving in this direction. Moving off the CLI and building programmable interfaces is a massive move for network programmability and open networking.
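
For example, the open-source ncclient library can pull a device's running configuration as structured XML over NETCONF rather than screen-scraping the CLI. The host and credentials below are placeholders, and the target device must have NETCONF enabled; treat this as a sketch rather than a production script.

```python
# Minimal sketch: retrieving the running configuration over NETCONF with ncclient.
# The hostname and credentials are placeholders; the device must have NETCONF enabled.
from ncclient import manager

def fetch_running_config(host, user, password):
    with manager.connect(
        host=host,
        port=830,
        username=user,
        password=password,
        hostkey_verify=False,  # lab-only shortcut; verify host keys in production
    ) as m:
        reply = m.get_config(source="running")
        return reply.data_xml  # structured XML, not screen-scraped CLI text

if __name__ == "__main__":
    print(fetch_running_config("192.0.2.10", "admin", "admin"))
```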

 

For pre-information, visit the following:

  1. Network Configuration Automation
  2. CASB Tools
  3. Blockchain-Based Applications

 

Back to basics with Brownfield Network Automation

Network devices have massive amounts of static and transient data buried inside, and using open-source tools, or building your own, gets you access to this data. Examples of this type of data include active entries in the BGP table, OSPF adjacencies, active neighbors, interface statistics, specific counters and resets, and even counters from the application-specific integrated circuits (ASICs) themselves on newer platforms. So how do we get the best out of this data, and how can automation help here? The sketch below shows one way to start.
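
This sketch uses the open-source Netmiko library to pull operational state from a device over SSH. The device details are placeholders and the commands are Cisco IOS examples; adjust both for your platform.

```python
# Minimal sketch: pulling operational state (interfaces, BGP) with Netmiko.
# Device details are placeholders; the commands shown are Cisco IOS examples.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.20",
    "username": "admin",
    "password": "admin",  # use a vault or environment variables in practice
}

commands = ["show ip interface brief", "show ip bgp summary"]

with ConnectHandler(**device) as conn:
    for cmd in commands:
        output = conn.send_command(cmd)
        print(f"### {cmd}\n{output}\n")
```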

  • A key point: Ansible Tower

To operationalize your environment and drive automation into production, you need everything centrally managed with proper role-based access. This is where you could use Ansible Tower, which has several features, such as scheduling, job templates, and projects, that help you safely enable automation in the enterprise at scale.

 

 


Challenges of Brownfield Automation:

Implementing network automation in a brownfield environment poses unique challenges. Legacy systems, diverse hardware, and complex configurations often hinder the seamless integration of automation tools. Additionally, inadequate documentation and a lack of standardized processes can make it challenging to streamline the automation process. However, with careful planning and a systematic approach, these challenges can be overcome, leading to significant improvements in network efficiency.

Benefits of Brownfield Network Automation:

1. Enhanced Efficiency: Brownfield Network Automation enables organizations to automate repetitive manual tasks, reducing the risk of human errors and increasing operational efficiency. Network engineers can focus on more strategic initiatives by eliminating the need for manual configuration changes.

2. Improved Agility: Automating an existing network allows businesses to respond quickly to changing requirements. With automation, network changes can be made swiftly, enabling organizations to adapt to evolving business needs and market demands.

3. Cost Savings: By automating existing networks, organizations can optimize resource utilization, reduce downtime, and improve troubleshooting capabilities. This leads to substantial cost savings regarding operational expenses and increased return on investment.

4. Seamless Integration: Brownfield Network Automation allows for integrating new technologies and services with existing network infrastructure. Businesses can seamlessly introduce new applications, services, and security measures by leveraging automation tools without disrupting existing operations.

5. Enhanced Network Security: Automation enables consistent enforcement of security policies, ensuring compliance and reducing the risk of human error. Organizations can strengthen their network defenses and safeguard critical data by automating security configurations.

Best Practices for Brownfield Network Automation:

1. Comprehensive Network Assessment: Conduct a thorough assessment of the existing network infrastructure, identifying areas that can benefit from automation and potential obstacles.

2. Standardization and Documentation: Establish standardized processes and documentation to ensure consistency across the network. This helps in streamlining the automation process and simplifying troubleshooting.

3. Gradual Implementation: Adopt a phased approach to brownfield automation, starting with low-risk tasks and gradually expanding to more critical areas. This minimizes disruption and allows for easy troubleshooting.

4. Collaboration and Training: Foster collaboration between network engineers and automation specialists. Training the network team on automation tools and techniques is crucial to ensure successful implementation and ongoing maintenance.

5. Continuous Monitoring and Optimization: Regularly monitor and fine-tune automated processes to optimize network performance. This includes identifying and addressing any bottlenecks or issues.

 

Brownfield Network Automation; DevOps Tools

Generally, you have to use DevOps tools, orchestrators, and controllers to do the jobs you have always done yourself. However, customers are struggling with the adoption of these tools. How do I do the jobs I used to do on the network with these new tools? That’s basically what some software companies are focused on. From a technical perspective, some vendors don’t talk to network elements directly.

This is because you could have over 15 tools touching the network, and part of the problem is that everyone is talking to the network with their own CLI. As a result, inventory is out of date, network errors are common, and the CMDB is entirely off, so the ability to automate is restricted by all these prebuilt, siloed legacy applications. For automation to work, a limited number of elements should be talking to the network. With the advent of controllers and orchestrators, we will see a market transition.

 

DevOps vs. Traditional

If you look back at when we went from time-division multiplexing (TDM) to the Internet Protocol (IP), the belief is that network automation will eventually have the same impact. The ability to go from non-programmability to programmability will represent the most significant shift we will see in the networking domain.

Occasionally, architects design something complicated when it can be done in a less complicated manner with a more straightforward handover. The architectural approach is never modeled or in a database. The design process is uncontrolled, yet the network is an essential centerpiece.

There is a significant use case for automating and controlling the design process. Automation is a real use case that needs to be filled, and there are various ways vendors have approached this. It’s not a fuzzy buzzword coming out of Silicon Valley. Intent-based networking? I’m falling victim to this myself, too, sometimes. Is intent-based networking a new concept?

 

OpenDaylight (ODL)

I spoke to one vendor building an intent-based API on top of OpenDaylight (ODL). There have been intent-based interfaces for five years, so it’s not a new concept to some. There are some core requirements for this to work. You have to be federated, programmable, and modeled.

Some have hijacked intent-based to a very restricted definition, and an intent-based network has to consist of highly complex mathematical algorithms. Depending on who you talk to, these mathematical algorithms are potentially secondary for intent-based networking.

Diagram: OpenDaylight (ODL): Network Automation.

 

One example of an architectural automation design is connecting to the northbound interface of a tool like Ansible. These tools act as the source of truth for the components under their management. You can then federate the application programming interface (API) and speak NETCONF, JSON, and YAML. This information is then federated into a centralized platform that can provide a single set of APIs into the IT infrastructure.

So if you are using ServiceNow, you can make a request through a catalog task. That task is then pushed down into the different subsystems that tie together service management or device configuration. It’s a combination of API federation, data modeling, and automation.

The number one competitor of these automation companies is users who still want to use the CLI, or vendors offering an adapter into a system that is still built on a foundation of CLIs. These adapters can call a representational state transfer (REST) interface but can’t federate it.

This will eventually end up breaking. You need to make an API call to the subsystem in real-time. As networking becomes increasingly dynamic and programmable, federated API is a suitable automation solution.


 

 

Conclusion:

Brownfield Network Automation offers a powerful opportunity for organizations to unlock the full potential of existing network infrastructure. By embracing automation, businesses can enhance operational efficiency, improve agility, and achieve cost savings. While challenges may exist, implementing best practices and taking a systematic approach can pave the way for a successful brownfield automation journey. Embrace the power of automation and revolutionize your network for a brighter future.

Zero Trust Networking (ZTN)

In today’s interconnected and data-driven world, the need for robust cybersecurity measures has never been more critical. With cyber threats becoming increasingly sophisticated, organizations strive to adopt proactive security strategies to safeguard their sensitive information. This is where the concept of zero-trust networking comes into play. In this blog post, we will delve into the fundamentals of zero-trust networking, its benefits, and how it can revolutionize how we approach cybersecurity.

Zero trust networking is a security framework that challenges the traditional approach of trust-based network architectures. Unlike the conventional perimeter-based security model, which assumes that everything within the network is trustworthy, zero-trust networking adopts a more skeptical mindset. It operates under the principle of “never trust, always verify,” meaning that every user, device, and application is considered untrusted by default, regardless of location or network access.

Highlights: Zero Trust Networking

  • The Role of Segmentation

It’s a fact that security consultants carrying out audits will see a common theme. There will always be a remediation element; the default line is that you need to segment. There will always be the need for user and micro-segmentation of high-value infrastructure in sections of the networks. Micro-segmentation is hard without Zero Trust Network Design and Zero Trust Security Strategy.

  • User-centric

Zero Trust Networking (ZTN) is a dynamic and user-centric method of microsegmentation for zero trust networks, which is needed for high-value infrastructure that can’t be moved, such as an AS/400. You can’t just pop an AS/400 in the cloud and expect everything to be OK. Recently, we have seen a rapid increase in the use of SASE, the secure access service edge. Zero Trust SASE combines network and security functions, including zero trust networking, but offered from the cloud.

 

For pre-information, you may find the following posts helpful:

  1. Technology Insight for Microsegmentation

 




Key Zero Trust Networking Discussion points:


  • Discussion on Zero Trust Networking.

  • The challenges with traditional segmentation. 

  • Description of microsegmentation for zero trust networks.

  • Operational challenges with TCP.

  • Zero Trust, always verify model.

 

Back to basics with Zero Trust Networking

Traditional network security

Traditional network security architecture breaks different networks (or pieces of a single network) into zones contained by one or more firewalls. Each zone is granted some level of trust, determining the network resources it can reach. This model provides solid defense in depth. For example, resources deemed riskier, such as web servers that face the public internet, are placed in an exclusion zone (often termed a “DMZ”), where traffic can be tightly monitored and controlled.

 

Critical Principles of Zero Trust Networking:

1. Least Privilege: Zero trust networking enforces the principle of least privilege, ensuring that users and devices have only the necessary permissions to access specific resources. Limiting access rights significantly reduces the potential attack surface, making it harder for malicious actors to exploit vulnerabilities.

2. Microsegmentation: Zero trust networking leverages microsegmentation to divide the network into smaller, isolated segments or zones. Each segment is treated as an independent security zone with access policies and controls. This approach minimizes lateral movement within the network, preventing attackers from freely traversing and compromising sensitive assets.

3. Continuous Authentication: In a zero-trust networking environment, continuous authentication is pivotal in ensuring secure access. Traditional username and password credentials are no longer sufficient. Instead, multifactor authentication, behavioral analytics, and other advanced authentication mechanisms are implemented to verify the legitimacy of users and devices consistently.

Benefits of Zero Trust Networking:

1. Enhanced Security: Zero trust networking provides organizations with an enhanced security posture by eliminating the assumption of trust. This approach mitigates the risk of potential breaches and reduces the impact of successful attacks by limiting lateral movement and isolating critical assets.

2. Improved Compliance: With the growing number of stringent data protection regulations, such as GDPR and CCPA, organizations are under increased pressure to ensure data privacy and security. Zero trust networking helps meet compliance requirements by implementing granular access controls, auditing capabilities, and data protection measures.

3. Increased Flexibility: Zero trust networking enables organizations to embrace modern workplace trends, such as remote work and cloud computing, without compromising security. Zero-trust networking facilitates secure access from any location or device by focusing on user and device authentication rather than network location.

Challenges to Consider:

While zero-trust networking offers numerous benefits, implementing it can pose particular challenges. Organizations may face difficulties redesigning their existing network architectures, ensuring compatibility with legacy systems, and managing the complexity associated with granular access controls. However, these challenges can be overcome with proper planning, collaboration, and tools.

 

Microsegmentation for Zero Trust Networks

Suppose we roll back the clock. VLANs were never used for segmentation. Their sole purpose was to divide broadcast domains and improve network performance. The segmentation piece came much later on. Access control policies were carried out on a port-by-port and VLAN-by-VLAN basis. This would involve the association of a VLAN with an IP subnet to enforce subnet control, regardless of who the users were.

Also, TCP/IP was designed in a “safer” world based on an implicit trust mode of operation. It has a “connect first and then authenticate second” approach. This implicit trust model can open you up to several compromises. Zero Trust and Zero Trust SDP change this model to “authenticate first and then connect”.

It is based on the individual user instead of the more traditional IP addresses and devices. In addition, firewall rules are binary and static. They simply state whether this IP block should have access to this network (Y/N). That’s not enough, as today’s environment has become diverse and distributed.

Let us face it. Traditional constructs have not kept pace or evolved with today’s security challenges. The perimeter is gone, so we must keep all services ghosted until efficient contextual policies are granted.

 

Diagram: Zero Trust Networking (ZTNA).

 

Organizational challenges

One of the main challenges customers have right now is that their environments are changing. They are moving to cloud and containerized environments. This surfaces many security questions from an access control perspective, especially in a hybrid infrastructure where you have traditional data centers with legacy systems, along with highly scalable systems, all at the same time.

An effective security posture is all about having a common way to enforce policy-based control and contextual access policies around user and service access.

When organizations transition into these new environments, they must use multiple toolsets. These tool sets are not very contextual as to how they operate. For example, you may have Amazon web services (AWS) security groups defining IP address ranges that can gain access to a particular virtual private cloud (VPC).

This isn’t granular and has no associated identity or device recognition capability. Also, developers in these environments are massively entitled, and we struggle with how to control them.

 

Trust and Verify Model vs. Zero Trust Networking (ZTN)

If you look at how VPNs have worked, you have a trust-and-verify model: connect to the network first, and then you can be authorized. The problem with this approach is that you can already see much of the attack surface from an external perspective. This can potentially be used to move laterally around the infrastructure to access critical assets.

Zero trust networking capabilities are focused more on a contextual identity-based model. For example, who is the user, what are they doing, where are they coming in from, is their endpoint up to date from threat posture perspectives, and what is the rest of your environment saying about these endpoints?

Once all this is done, they are entitled to communicate, similar to granting a conditional firewall rule based on a range of policies rather than just a Y/N, e.g., has there been a recent malware check, or a 2-factor authentication step?

I envision a Zero Trust Network ZTN solution with several components. A client will effectively communicate with a controller and then a gateway. The gateway acts as the enforcement point used to logically segment the infrastructure you seek to protect. The enforcement point could be in front of a specific set of applications or subnets you want to segment.
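
A minimal sketch of that controller decision is shown below: the access request carries identity and posture context, and the policy is evaluated per application rather than per IP block. The attribute names and policy values are invented for illustration.

```python
# Minimal sketch of the controller's decision: a contextual, identity-based check
# rather than a binary IP rule. The attributes and policy values are illustrative.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    group: str
    device_patched: bool
    mfa_passed: bool
    requested_app: str

POLICY = {
    "finance-app": {"allowed_groups": {"finance"}, "require_mfa": True, "require_patched": True},
}

def authorize(req: AccessRequest) -> bool:
    rule = POLICY.get(req.requested_app)
    if rule is None:
        return False                                # default deny: app not published
    if req.group not in rule["allowed_groups"]:
        return False
    if rule["require_mfa"] and not req.mfa_passed:
        return False
    if rule["require_patched"] and not req.device_patched:
        return False
    return True                                     # controller tells the gateway to admit the session

req = AccessRequest("alice", "finance", device_patched=True, mfa_passed=True, requested_app="finance-app")
print(authorize(req))  # True - the gateway opens a path only for this user/app pair
```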

Conclusion:

Zero-trust networking provides a proactive and comprehensive security approach in a rapidly evolving threat landscape. By embracing the principles of least privilege, microsegmentation, and continuous authentication, organizations can enhance their security posture and protect their critical assets from internal and external threats. As technology advances, adopting zero-trust networking is not just a best practice but a necessity in today’s digital age.

 

Zero Trust Network ZTN

In an increasingly interconnected world, the need for robust cybersecurity measures has never been more critical. Traditional security models, built around a trusted perimeter, are no longer sufficient to protect against evolving threats. Enter the Zero Trust Network, a revolutionary approach that challenges the conventional notion of trust and aims to enhance data protection across organizations of all sizes. In this blog post, we will explore the concept of Zero Trust Networking, its fundamental principles, and its benefits in bolstering cybersecurity defenses.

Zero Trust Networking is a security model that operates on the principle of “never trust, always verify.” Unlike traditional network security models, which assume that everything within an organization’s network is trusted, Zero Trust Networking treats every user, device, and application as potentially untrusted. This approach ensures that access to sensitive resources is only granted after thorough verification and validation, regardless of whether the request originates from within or outside the network perimeter.

Highlights: Zero Trust Network

  • Everything is Untrusted

Stop malicious traffic before it even gets on the IP network. In this world of mobile users, billions of connected things, and public cloud applications everywhere – not to mention the growing sophistication of hackers and malware – the Zero Trust Network Design and Zero Trust Security Strategy movement is a new reality. As the name suggests, Zero Trust Network ZTN means no trusted perimeter.

  • Single Packet Authorization

Everything is untrusted; even after authentication and authorization, a device or user only receives the least privileged access. This is necessary to stop potential security breaches. Identity and access management (IAM) is the foundation of good IT security and the key to providing zero trust, along with crucial zero trust technologies such as zero trust remote access and single packet authorization.
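
To illustrate the single packet authorization idea, here is a conceptual Python sketch: the client sends one HMAC-signed UDP datagram, and the gateway verifies the signature and freshness before exposing any service. Real SPA implementations (such as fwknop) also encrypt the payload and track replays; the shared key, port, and payload layout here are assumptions for illustration.

```python
# Conceptual sketch of single packet authorization (SPA): the client sends one
# HMAC-signed UDP datagram; the gateway verifies it before exposing any service.
# The shared key, port, and payload layout are illustrative assumptions.
import hmac, hashlib, json, time, socket

SHARED_KEY = b"demo-key-rotate-me"

def build_spa_packet(user: str) -> bytes:
    body = json.dumps({"user": user, "ts": int(time.time())}).encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + mac

def verify_spa_packet(packet: bytes, max_age: int = 30) -> bool:
    body, _, mac = packet.rpartition(b".")
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        return False
    claims = json.loads(body)
    return abs(time.time() - claims["ts"]) <= max_age  # reject stale or replayed packets

# Client side: a fire-and-forget UDP knock; the port stays closed to everyone else.
pkt = build_spa_packet("alice")
socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(pkt, ("192.0.2.30", 62201))
print(verify_spa_packet(pkt))  # what the gateway would compute: True
```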

 

Before you proceed, you may find the following posts helpful:

  1. Zero Trust SASE
  2. Identity Security
  3. Zero Trust Access

 

Back to basics with a zero-trust network

A zero-trust network is built upon five essential declarations:

  1. The network is always assumed to be hostile.
  2. External and internal threats exist on the network at all times.
  3. Network locality alone is not sufficient for deciding trust in a network.
  4. Every device, user, and network flow is authenticated and authorized.
  5. Policies must be dynamic and calculated from as many data sources as possible.

Critical Principles of Zero Trust Networking:

1. Least Privilege: Zero Trust Networking follows the principle of least privilege, ensuring that users and devices only have access to the resources necessary to perform their specific tasks. This prevents unauthorized access and minimizes the potential impact of a security breach.

2. Micro-Segmentation: Zero Trust Networking emphasizes the concept of micro-segmentation, dividing the network into smaller, isolated segments. By implementing strict access controls between these segments, the lateral movement of threats is contained, reducing the risk of a widespread breach.

3. Continuous Authentication: Zero Trust Networking emphasizes continuous authentication, requiring users to verify their identities at each access attempt. This helps prevent unauthorized access even if login credentials are compromised.

Benefits of Zero Trust Networking:

1. Enhanced Security: Zero Trust Networking significantly reduces the attack surface for potential threats by assuming that no user or device is inherently trustworthy. This approach ensures that even if one part of the network is compromised, the rest remains protected.

2. Improved Compliance: With increasingly stringent data protection regulations, organizations must demonstrate robust security measures. Zero Trust Networking provides a strong framework for ensuring compliance with industry-specific regulations like HIPAA and GDPR.

3. Flexibility and Scalability: Zero Trust Networking can be implemented across various network environments, including on-premises, cloud, and hybrid setups. This flexibility allows organizations to adapt their security posture as their infrastructure evolves.

Zero Trust Remote Access

Zero Trust Networking (ZTN) applies zero-trust principles to enterprise and government agency IP networks. Among other things, ZTN integrates IAM into IP routing and prohibits the establishment of a single TCP/UDP session without prior authentication and authorization. Once a session is established, ZTN ensures all traffic in motion is encrypted. In the context of a common analogy, think of our road systems as a network and the cars and trucks on it as IP packets.

Today, anyone can leave his or her house, drive to your home, and come up your driveway. That driver may not have a key to enter your home, but he or she can case it and wait for an opportunity to enter. In a Zero Trust world, no one can leave their house and travel over the roads to your home without prior authentication and authorization. This is required in the digital, virtual world to ensure security.

Diagram: Zero trust remote access.

 

 

The challenges of the NAC

In the voice world, we use signaling to establish authentication and authorization before connecting the call. In the data world, this can be done with TCP/UDP sessions, and in many cases, in conjunction with Transport Layer Security, or TLS. The problem is that IP routing hasn’t evolved since the mid-‘90s.

IP routing protocols such as Border Gateway Protocol are standalone; they don’t integrate with directories. Network admission control (NAC) is an earlier attempt to add IAM to networking, but it requires a client and assumes a trusted perimeter. NAC is IP address-based, not TCP/UDP session state-based.

 

Zero trust remote access: Move up the stack 

The solution is to make IP routing more intelligent and move up the OSI stack to Layer 5, where security and session state reside. The next generation of software-defined networks is taking a more thoughtful approach to networking, with Layer 5 security and performance functions.

Over time, organizations have added firewalls, session border controllers, WAN optimizers, and load balancers to their networks for their ability to manage session state and provide the intelligent performance and security controls required in today’s networks.

For instance, firewalls stop malicious traffic in the middle of a network but do nothing within a Layer 2 broadcast domain. Every organization has directory services based on IAM that define who is allowed access to what. Zero Trust Networking takes this further by embedding this information into the network and enabling malicious traffic to be stopped at the source.

Diagram: Zero trust security meaning.

 

Another great feature of ZTN is anomaly detection. An alert can be generated when a device starts trying to communicate with other devices, services, or applications to which it doesn’t have permission. Hackers use a process of discovery, identification, and targeting to break into systems; with Zero Trust, you can prevent them from starting the initial discovery.
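
A simplified version of that anomaly check might look like the sketch below: each device has an authorized communication map, and any observed flow outside it raises an alert. The device names, peers, and ports are illustrative.

```python
# Minimal sketch: flag flows that fall outside a device's authorized communication map.
# The flow records and the permission map are illustrative.
ALLOWED = {
    "hvac-sensor-1": {("building-controller", 8883)},  # device -> set of (peer, port)
}

def check_flow(src, dst, port):
    if (dst, port) not in ALLOWED.get(src, set()):
        return f"ALERT: {src} attempted {dst}:{port} outside its policy"
    return None

flows = [
    ("hvac-sensor-1", "building-controller", 8883),
    ("hvac-sensor-1", "finance-db", 1433),  # a discovery attempt
]
for f in flows:
    alert = check_flow(*f)
    if alert:
        print(alert)
```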

Conclusion:

In an era where cyber threats continue to evolve, traditional security models are no longer sufficient to protect sensitive data. Zero Trust Networking offers a paradigm shift in cybersecurity, shifting the focus from trust to verification. By adopting the principles of least privilege, micro-segmentation, and continuous authentication, organizations can strengthen their defenses and mitigate the risk of data breaches. Embracing Zero Trust Networking is a proactive step towards ensuring the security and integrity of critical assets in today’s digital landscape.

 

Software Defined Perimeter Solutions

The following post discusses Software Defined Perimeter solutions, the need for a new software perimeter, and how it can be integrated. Many companies use SD-WAN and multiprotocol label switching (MPLS) based architectures to attack the same problem. However, the application is changing, which requires a shift in these architectures. Application developers and software service providers want to take more control of their sessions, and we are seeing this with the rise of open networking. The next generation of applications is mobile, cloud, software as a service (SaaS), and internet of things (IoT) based. It is everywhere and does not stay within the walled garden of the enterprise.

You could say it’s wholly distributed but not dark and hidden from the Internet. Solutions encompassing a zero trust network design are needed to connect the application endpoints whenever and wherever they are allowed. We are seeing a new type of perimeter, which some call the software defined perimeter. A perimeter that is built from scratch by the application and employs a zero-trust model, much of which is derived from the Cloud Security Alliance’s Software Defined Perimeter work.

 

Before you proceed, you may find the following posts helpful:

  1. Distributed Firewalls
  2. Software Defined Internet Exchange

 




Key Software Defined Perimeter Solutions Discussion points:


  • Discussion on Software Defined Perimeter ( SDP ).

  • The challenges with traditional perimeters.

  • Changes in the environment are causing a need for SDP.

  • SDP and end to end security.

  • Ways to integrate the application.

 

 

 

 

Diagram: Software-defined perimeter solutions: The changing landscape.

 

Software-defined Perimeter solutions: Right at the Application

The perimeter should now be at the application. With network traffic engineering, this is already common in microservices environments, where the application programming interface (API) is the perimeter. Still, we should now introduce this to non-containerized environments, especially IoT endpoints. Fundamentally, the only way to secure the new wave of applications is to a) have reliable end-to-end reachability and b) assume a zero-trust model. You have to assume that the wires are not secure.

For this, you need to have the ability to integrate the security solution more deeply within the application. A collaborative model that tightly integrates with the application provides the required security and network path control. Previously the traditional model separated the application from the network. The network does what it wants, and the application does what it wants. The application would throw its bits over the wire and hope the wires are secure. The packets will eventually get to their destination.

Diagram: Software-defined perimeter solutions: The TCP/IP model issues.

 

End-to-end security 

You need a solution that works more closely with the application to ensure end-to-end security. This can be done by installing client software on the mobile app or using a software development kit (SDK) or API. SD-WAN vendors have done a great job of backhauling security to the cloud and have responded to the market very well. However, security must now become a first-class citizen within the application. You can’t rely on tunnels over the internet anymore.

From the application, you need to take the packet, readdress it, encrypt it, and send it to a globally private network. This private network can provide all the relevant services using standard PoPs with physical/virtual equipment or containerized packet forwarders that can be spun up on demand. Containerized packet forwarders sound a little more interesting.

Each session can then get distributed to find the best route where several routers can be spun up all over the globe. The packet forwarder is spun up on different networks and autonomous systems (AS) worldwide, enabling the maximum number of diverse paths between point A and point B, i.e. different backbone providers, different AS, peering, and routing agreements.

The application endpoint can then examine all those paths for a given session and direct the packet to the best-performing path. If a path shows unacceptable performance, the session can be moved to a different path within seconds. Performance metrics like jitter, latency, and packet loss, as well as per-application throughput, should be analyzed in real-time. You can then run a linear optimization across all those variables.
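
A toy version of that path selection logic is sketched below: each candidate path is scored from its live metrics, and the session only moves when another path is better by a clear margin. The weights and the switch margin are arbitrary illustrations of the idea, not a real optimization.

```python
# Minimal sketch: score candidate paths from real-time metrics and steer the
# session to the best one. The weights and thresholds are illustrative.
def score(path):
    # Lower is better: weight latency, jitter, and loss according to the app's needs
    return path["latency_ms"] + 2 * path["jitter_ms"] + 50 * path["loss_pct"]

def pick_path(paths, current=None, switch_margin=10.0):
    best = min(paths, key=score)
    if current and score(current) - score(best) < switch_margin:
        return current  # avoid flapping between near-equal paths
    return best

paths = [
    {"name": "provider-a", "latency_ms": 40, "jitter_ms": 3, "loss_pct": 0.1},
    {"name": "provider-b", "latency_ms": 55, "jitter_ms": 1, "loss_pct": 0.0},
]
print(pick_path(paths)["name"])  # provider-a: score 51 beats provider-b's 57 under these weights
```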

Essentially you should do cold potato routing on the private network instead of hot potato routing. Cold potato routing keeps the packets on the network for as long as possible to take advantage of all the required optimizations.

 

Ways to integrate with the application

You can start by integrating with the application’s IAM (identity and access management) structure and then define what the application can talk to, for example, the ports and protocols allowed, so that business policies govern the application’s network behavior.

A packet from an internet address towards an IoT endpoint in the home must be authenticated and authorized to fit the business policy. These endpoints could then be enrolled in a distributed ledger like a blockchain. Suddenly, with proper machine learning and collaboration, you may be able to identify some DDoS attacks before they create massive problems.

Sometimes, the policies can be tied to a hardware root of trust, i.e., the silicon. This is often seen in the IoT-connected car use case. The silicon itself creates a unique identifier, not a defined identity but an immutable one. The unique conditions of the silicon itself generate the identity. In this case, you don’t care about IP addresses anymore. As technology progresses, we are becoming less reliant on IP addresses.

Within the software, you must include a certificate and public key infrastructure (PKI) system, so now you have a bidirectional, authenticated, and authorized certificate exchange. Again, you don’t care about the IP address as you work at an upper level.
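
As a sketch of what that bidirectional certificate exchange looks like in practice, the snippet below sets up mutual TLS with Python's standard ssl module. The certificate file names, CA, and service hostname are placeholders for material issued by your own PKI.

```python
# Minimal sketch of a bidirectional (mutual TLS) certificate exchange using the
# standard ssl module. File paths are placeholders for certs issued by your PKI.
import socket, ssl

def mtls_server_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    ctx.load_verify_locations(cafile="internal-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # the server also authenticates the client
    return ctx

def mtls_client_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
    ctx.load_verify_locations(cafile="internal-ca.pem")
    return ctx

# Client usage: identity comes from the certificate exchange, not the IP address.
def connect(host="service.internal.example", port=8443):
    ctx = mtls_client_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("peer certificate subject:", tls.getpeercert().get("subject"))
```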

 

Cloud Native Meaning

In recent years, the term “cloud native” has gained immense popularity in technology. Organizations across industries embrace cloud-native architectures to enhance their digital transformation efforts. But what exactly does “cloud native” mean? In this blog post, we will delve into the concept of cloud-native, exploring its meaning, characteristics, and the benefits it offers to businesses.

At its core, cloud-native refers to building and running applications that take full advantage of cloud computing capabilities. It entails designing applications specifically for deployment on cloud platforms and utilizing the native features and services offered by the cloud provider. Cloud-native applications are typically developed using a microservices architecture, enabling them to be highly scalable, resilient, and easily manageable.

Highlights: Cloud Native Meaning

  • The Journey To Cloud Native

We must find ways to efficiently secure cloud-native microservice environments and embrace a zero trust network design. To assert that we are who we say we are, every type of running process in the internal I.T. infrastructure needs an intrinsic identity. So, on our journey to understand cloud native meaning, how do you prove who you are, flexibly and regardless of platform?

  • Everything is an API Call

First, you need to give the other party enough confidence to trust your word. I.P. addresses are like home addresses: they create a physical identity for the house but don’t tell you who the occupants are. The more you know about the people living in the house, the richer the identity you can give them, and the richer the identity, the more secure the connection.

Larger companies have evolved their authentication and network security components to support internal service-to-service, server-to-server, and server-to-service communications. Everything is an API call. There may only be a few public-facing API calls, but there will be many internal API calls, such as user and route lookups. In this situation, CASB tools will help your security posture. However, this will result in a scale we have not seen before. Companies like Google and Netflix have realized there is a need to move beyond what most organizations do today.

 

Before you proceed, you may find the following post helpful:

  1. Microservices Observability
  2. What is OpenFlow
  3. SASE Definition
  4. Identity Security
  5. Security Automation
  6. Load Balancing
  7. SASE Model

 



Cloud Native Meaning.

Key Cloud Native Meaning Discussion points:


  • The rise of the use of APIs.

  • Network segmentation.

  • The new perimeter is the API.

  • The issues with Tokens and Keys.

  • Identity-based controls.

 

  • A key point: Video on Cloud operating models

In the following video, we will discuss cloud operating models. Clouds operate under different service models – Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These service models expose different abstraction layers to the consumer and carry different security requirements. Public cloud providers are not all of a single type, and a generic evaluation of security cannot be applied across all of them.

 

 

Back to basics Cloud Native Meaning

The Role of cloud computing

Cloud computing is feasible only because of the technologies that enable resource virtualization. Multiple virtual endpoints share a physical network, yet different virtual endpoints belong to different customers, so the communication between them needs to be isolated. In other words, the network is a resource too, and network virtualization is the technology that enables the sharing of a common physical network infrastructure.

 

Essential Characteristics of Cloud Native Applications:

Cloud-native applications exhibit several key characteristics that set them apart from traditional applications. These characteristics include:

1. Scalability: Cloud-native applications are designed to scale effortlessly, allowing organizations to handle increasing workloads without compromising performance. By leveraging cloud resources, applications can dynamically adjust their resource usage based on demand.

2. Resilience: Cloud native applications are built with resilience in mind. They are designed to handle failures gracefully, automatically recovering from disruptions and minimizing downtime. This resilience is achieved through fault tolerance, self-healing, and distributed architectures.

3. Agility: Cloud-native applications enable organizations to rapidly develop, deploy, and update software. Developers can easily package, deploy, and manage applications across different environments by utilizing containerization technologies like Docker and orchestration tools like Kubernetes.

4. DevOps Culture: Cloud native development fosters a DevOps culture by promoting collaboration between development and operations teams. Continuous integration and continuous deployment (CI/CD) practices are integral parts of cloud-native development, allowing organizations to deliver new features and updates faster.

Benefits of Cloud Native Adoption:

Embracing a cloud-native approach offers numerous benefits for organizations:

1. Cost Optimization: Cloud-native applications can leverage cloud resources more efficiently, resulting in cost savings by only utilizing the resources needed at any time. Additionally, organizations no longer need to invest in physical infrastructure or worry about maintenance costs.

2. Scalability and Elasticity: Cloud-native applications can dynamically scale up or down based on demand, ensuring optimal performance during peak periods while reducing costs during periods of low usage (see the sketch after this list).

3. Faster Time to Market: The agility provided by cloud-native development allows organizations to bring new products and features to market faster. By automating the deployment process and utilizing scalable infrastructure, development teams can focus on delivering value rather than managing infrastructure.

4. Improved Reliability and Resilience: Cloud-native applications are inherently designed to be more reliable and resilient. By utilizing containerization and distributed architectures, organizations can achieve higher levels of availability and minimize the impact of failures.
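As referenced in the scalability and elasticity point above, here is a toy sketch of an elasticity decision. The thresholds, replica bounds, and doubling/halving policy are arbitrary examples, not a real autoscaler.

```python
# Illustrative sketch of an elasticity decision: adjust the replica count of a
# service from observed load. All thresholds and bounds are arbitrary examples.
def desired_replicas(current: int, cpu_utilisation: float,
                     scale_up_at: float = 0.75, scale_down_at: float = 0.30,
                     minimum: int = 2, maximum: int = 20) -> int:
    if cpu_utilisation > scale_up_at:
        return min(current * 2, maximum)   # double capacity under load
    if cpu_utilisation < scale_down_at:
        return max(current // 2, minimum)  # shed idle capacity to save cost
    return current

print(desired_replicas(current=4, cpu_utilisation=0.85))  # 8
print(desired_replicas(current=8, cpu_utilisation=0.10))  # 4
```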

 

Cloud Native Meaning: Network Segmentation

Firstly, most organizations rely on traditional network constructs such as firewalls and routers to enforce authentication. If traffic wants to get from one I.P. segment to another, it passes through a layer 4 to layer 7 rule set; if the filter permits it, those endpoints are allowed to talk to each other.

 

Diagram: micro-segmentation technology

 

However, network segmentation is too coarse-grained compared with what is happening at the application layer. The application layer is going through significant transformational changes, and network security has not kept pace; it represents the application far less than it should. API gateways, web application firewalls (WAFs), and next-gen firewalls are secondary when it comes to protecting microservices; they are just the first layer of defense. Every API call is HTTP/HTTPS, so what good is a firewall anyway?

 

Traditional Security Mechanism

We have new technologies being protected by traditional means. A traditional security mechanism based on I.P. addresses and the 5-tuple can’t work in a cloud-native microservice architecture, especially when it comes to lateral movement. Layer 4 is coupled to the network topology and lacks the flexibility to support agile applications.

These traditional devices would have to follow the microservice workload, with its source/destination I.P. addresses and port numbers, around the cloud, or the design becomes impractical. That is not going to happen. You need to bring security to the microservice, not the microservice to the security device.

Traditional security mechanisms are evaporating. The perimeter has changed and now sits at the API layer. Every API presents a new attack surface, which creates a gap that must be filled, and today only a handful of companies do this. There are too few, and we need more.
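The contrast between the old and new perimeter can be sketched in a few lines. The example below is purely illustrative: the workload identities and request patterns are made up, and the point is only that an identity-based rule survives when the underlying IP addresses and ports change.

```python
# Sketch contrasting a 5-tuple rule with an identity-based rule. The workload
# names ("orders", "inventory") are hypothetical service identities that
# survive rescheduling, unlike IP addresses and ephemeral ports.
FIVE_TUPLE_RULE = ("10.0.1.15", "10.0.2.40", "tcp", 51344, 8080)  # breaks as soon as a workload moves

IDENTITY_RULES = {("orders", "inventory"): {"GET /stock/*"}}       # follows the workload wherever it runs

def authorize(source_identity: str, dest_identity: str, request: str) -> bool:
    allowed = IDENTITY_RULES.get((source_identity, dest_identity), set())
    return any(request.startswith(pattern.rstrip("*")) for pattern in allowed)

print(authorize("orders", "inventory", "GET /stock/sku-123"))  # True
print(authorize("orders", "billing",   "GET /invoices"))       # False
```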

 

Outdated: Tokens and Keys

The other issue is the continued use of tokens and keys. They are hard-coded strings that serve as a proxy for who you are, and managing them is complex: rotating and revoking them is challenging. This is compounded by the fact that we must presume the I.T. infrastructure will become ever more horizontally scaled and dynamic.

We can all agree that, with the introduction of technologies such as containers and serverless, we will see these workloads spin up and down arbitrarily, because it is more cost-effective than building and supporting tightly coupled monolithic applications. The design pattern of enterprise I.T. is moving forward, making the authentication problem harder. So we need to bring a new initiative to life: a core identity construct that supports this design pattern when building for the future.
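To illustrate why static keys age so badly, the sketch below contrasts a hard-coded key with a short-lived, identity-bound credential that expires on its own. It is a conceptual example only; a real system would have a token service or internal CA mint and sign these credentials, rather than building them by hand like this.

```python
# Sketch of the difference between a hard-coded key and a short-lived,
# identity-bound credential. Purely illustrative; the identity names are made up.
import secrets
import time

STATIC_API_KEY = "a-hard-coded-string"  # hard to rotate, hard to revoke, easy to leak

def issue_short_lived_credential(workload_identity: str, ttl_seconds: int = 300) -> dict:
    return {
        "sub": workload_identity,          # who the credential was minted for
        "exp": time.time() + ttl_seconds,  # expires on its own, limiting the blast radius
        "nonce": secrets.token_hex(16),
    }

def is_valid(cred: dict) -> bool:
    return time.time() < cred["exp"]

cred = issue_short_lived_credential("billing-service")
print(is_valid(cred))  # True now, False after five minutes, with no revocation list needed
```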

 

Cloud-native meaning with a new identity-based mechanism

We need companies to recognize that a new identity-based mechanism is required. We have had identity for human-centric authentication for decades, and there are hundreds of companies built to do this. However, the scale of identity management for internal infrastructure is an order of magnitude greater than it ever was for humans.

That means existing constructs and technologies, such as Active Directory, are not scalable enough and were not built for this scale. This is where the opportunity arises: building architectures that match this scale.

Another thing that comes to mind when you look at these authentication frameworks is that identity and cryptography are a bit of a black box, especially for newer organizations that don’t have the capacity or the DNA to think about infrastructure at that layer.

Organizations are interested in a product that lets them translate internal policies for workloads that no longer live in the internal data center but in the cloud. We need a way to map those policies onto the public cloud middleware, using identity as the core service. Once everything has an atomic identity, that identity can also be used for other purposes; for example, you can chain identities together to improve debugging and tracing, as in the sketch below.
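Here is a tiny sketch of the identity-chaining idea, with hypothetical service names: each hop appends its own atomic identity, so the full chain can later be used for debugging and tracing.

```python
# Sketch of chaining atomic identities across service hops so a request can be
# traced end to end. The service names below are hypothetical.
def call_downstream(identity_chain, next_service):
    # Each hop appends its own identity before forwarding the request.
    return identity_chain + [next_service]

chain = ["user:alice"]
chain = call_downstream(chain, "svc:api-gateway")
chain = call_downstream(chain, "svc:orders")
chain = call_downstream(chain, "svc:payments")
print(" -> ".join(chain))
# user:alice -> svc:api-gateway -> svc:orders -> svc:payments
```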

Conclusion:

In today’s rapidly evolving digital landscape, embracing cloud-native architectures has become imperative for organizations seeking to stay competitive. By understanding the meaning and characteristics of cloud-native, businesses can leverage its benefits to drive innovation, improve efficiency, and deliver exceptional user experiences. Organizations must adapt and embrace this transformative approach to ensure long-term success in the digital era as the cloud-native ecosystem continues to evolve.


Matt Conran | Network World

Hello, I have created a Network World column and will be releasing a few blogs per month. Kindly visit the following link to view my full profile and recent blogs – Matt Conran Network World.

The list of individual blogs can be found here:

“In this day and age, demands on networks are coming from a variety of sources, internal end-users, external customers and via changes in the application architecture. Such demands put pressure on traditional architectures.

To deal effectively with these demands requires the network domain to become more dynamic. For this, we must embrace digital transformation. However, current methods are delaying this much-needed transition. One major pain point that networks suffer from is the necessity to dispense with manual working, which lacks fabric wide automation. This must be addressed if organizations are to implement new products and services ahead of the competition.

So, to evolve, to be in line with the current times and use technology as an effective tool, one must drive the entire organization to become a digital enterprise. The network components do play a key role, but the digital transformation process is an enterprise-wide initiative.”

“There’s a buzz in the industry about a new type of product that promises to change the way we secure and network our organizations. It is called the Secure Access Service Edge (SASE). It was first mentioned by Gartner, Inc. in its hype cycle for networking. Since then Barracuda highlighted SASE in a recent PR update and Zscaler also discussed it in their earnings call. Most recently, Cato Networks announced that it was mentioned by Gartner as a “sample vendor” in the hype cycle.

Today, the enterprises have upgraded their portfolio and as a consequence, the ramifications of the network also need to be enhanced. What we are witnessing is cloud, mobility, and edge, which has resulted in increased pressure on the legacy network and security architecture. Enterprises are transitioning all users, applications, and data located on-premise, to a heavy reliance on the cloud, edge applications, and a dispersed mobile workforce.”

“Microsoft has introduced a new virtual WAN as a competitive differentiator and is getting enough tracking that AWS and Google may follow. At present, Microsoft is the only company to offer a virtual WAN of this kind. This made me curious to discover the highs and lows of this technology. So I sat down with Sorell Slaymaker, Principal Consulting Analyst at TechVision Research to discuss. The following is a summary of our discussion.

But before we proceed, let’s gain some understanding of the cloud connectivity.

Cloud connectivity has evolved over time. When the cloud was introduced about a decade ago, let’s say, if you were an enterprise, you would connect to what’s known as a cloud service provider (CSP). However, over the last 10 years, many providers like Equinix have started to offer carrier-neutral collocations. Now, there is the opportunity to meet a variety of cloud companies in a carrier-neutral colocation. On the other hand, there are certain limitations as well as cloud connectivity.”

“Actions speak louder than words. Reliable actions build lasting trust in contrast to unreliable words. Imagine that you had a house with a guarded wall. You would feel safe in the house, correct? Now, what if that wall is dismantled? You might start to feel your security is under threat. Anyone could have easy access to your house.

In the same way, with traditional security products: it is as if anyone is allowed to leave their house, knock at your door and pick your locks. Wouldn’t it be more secure if only certain individuals whom you fully trust can even see your house? This is the essence of zero-trust networking and is a core concept discussed in my recent course on SDP (software-defined perimeter).

Within a zero-trust environment, there is no implicit trust. Thus, trust must be sourced from somewhere else in order to gain access to protected resources. It is only after sufficient trust has been established and the necessary controls are passed, that the access is granted, providing a path to the requested resource. The access path to the resource is designed differently, depending on whether it’s a client or service-initiated software-defined perimeter solution.”

“Networking has gone through various transformations over the last decade. In essence, the network has become complex and hard to manage using traditional mechanisms. Now there is a significant need to design and integrate devices from multiple vendors and employ new technologies, such as virtualization and cloud services.

Therefore, every network is a unique snowflake. You will never come across two identical networks. The products offered by the vendors act as the building blocks for engineers to design solutions that work for them. If we all had a simple and predictable network, this would not be a problem. But there are no global references to follow and designs vary from organization to organization. These lead to network variation even while offering similar services.

It is estimated that over 60% of users consider their I.T environment to be more complex than it was 2 years ago. We can only assume that network complexity is going to increase in the future.”

We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from the management’s perspective, is now redundant. Today, the users and data are omnipresent.

The customer’s expectations have up-surged because of this evolution. There is now an increased expectation of high-quality service and a decrease in customer’s patience. In the past, one could patiently wait 10 hours to download the content. But this is certainly not the scenario at the present time. Nowadays we have high expectations and high-performance requirements but on the other hand, there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, buffer bloat and a list of other performance-related problems that I wrote about on Network Insight. [Disclaimer: the author is employed by Network Insight.]

Also, the internet is growing at an accelerated rate. By the year 2020, the internet is expected to reach 1.5 Gigabyte of traffic per day per person. In the coming times, the world of the Internet of Things (IoT) driven by objects will far supersede these data figures as well. For example, a connected airplane will generate around 5 Terabytes of data per day. This spiraling level of volume requires a new approach to data management and forces us to re-think how we deliver applications.”

“Deploying zero trust software-defined perimeter (SDP) architecture is not about completely replacing virtual private network (VPN) technologies and firewalls. By and large, the firewall demarcation points that mark the inside and outside are not going away anytime soon. The VPN concentrator will also have its position for the foreseeable future.

A rip and replace is a very aggressive deployment approach regardless of the age of technology. And while SDP is new, it should be approached with care when choosing the correct vendor. An SDP adoption should be a slow migration process as opposed to the once off rip and replace.

As I wrote about on Network Insight [Disclaimer: the author is employed by Network Insight], while SDP is a disruptive technology, after discussing with numerous SDP vendors, I have discovered that the current SDP landscape tends to be based on specific use cases and projects, as opposed to a technology that has to be implemented globally. To start with, you should be able to implement SDP for only certain user segments.”

“Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this is still the foundation for most networking professionals even though a lot has changed since the inception of the design.

More often than not the fixed perimeter consists of a number of network and security appliances, thereby creating a service chained stack, resulting in appliance sprawl. Typically, the appliances that a user may need to pass to get to the internal LAN may vary. But generally, the stack would consist of global load balancers, external firewall, DDoS appliance, VPN concentrator, internal firewall and eventually LAN segments.

The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted to passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it’s just a matter of time. Someone with enough skill will eventually get through.”

“In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud-presence across on-premise data centers and remote site locations.

The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.

A WAN transformation project requires this diversity to be taken into consideration when choosing the best SD-WAN vendor to satisfy both; networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity, there are many more related aspects.”

“As the cloud service providers and search engines started with the structuring process of their business, they quickly ran into the problems of managing the networking equipment. Ultimately, after a few rounds of getting the network vendors to understand their problems, these hyperscale network operators revolted.

Primarily, what the operators were looking for was a level of control in managing their network which the network vendors couldn’t offer. The revolution burned the path that introduced open networking, and network disaggregation to the work of networking. Let us first learn about disaggregation followed by open networking.”

“I recently shared my thoughts about the role of open source in networking. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.

The first wave signifies that networking is moving to the software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially supported in the SD-WAN market rush.

Seemingly, we are beginning to see less investment in hardware unless there is a specific segment that needs to be resolved. But generally, software-based platforms are preferred as they bring many advantages. It is evident that there has been a technology shift. We have moved networking from hardware to software and this shift has positive effects for users, enterprises and service providers.”

“BGP (Border Gateway Protocol) is considered the glue of the internet. If we view through the lens of farsightedness, however, there’s a question that still remains unanswered for the future. Will BGP have the ability to route on the best path versus the shortest path?

There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as, sending out pings to monitor the network and then modifying the BGP attributes, such as the AS prepending to make BGP do the performance-based routing (PBR). However, this falls short in a number of ways.

The problem with BGP is that it’s not capacity or performance aware and therefore its decisions can sink the application’s performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the network’s performance.”

“The transformation to the digital age has introduced significant changes to the cloud and data center environments. This has compelled the organizations to innovate more quickly than ever before. This, however, brings with it both – the advantages and disadvantages.

The network and security need to keep up with this rapid pace of change. If you cannot match the speed of the digital age, then ultimately bad actors will become a hazard. Therefore, the organizations must move to a zero-trust environment: default deny, with least privilege access. In today’s evolving digital world this is the primary key to success.

Ideally, a comprehensive solution must provide protection across all platforms including legacy servers, VMs, services in public clouds, on-premise, off-premise, hosted, managed or self-managed. We are going to stay hybrid for a long time, therefore we need to equip our architecture with zero-trust.”

“With the introduction of cloud, BYOD, IoT, and virtual offices scattered around the globe, the traditional architectures not only hold us back in terms of productivity but also create security flaws that leave gaps for compromise.

The network and security architectures that are commonly deployed today are not fit for today’s digital world. They were designed for another time, a time of the past. This could sound daunting…and it indeed is.”

“The Internet was designed to connect things easily, but a lot has changed since its inception. Users now expect the internet to find the “what” (i.e., the content), but the current communication model is still focused on the “where.”

The Internet has evolved to be dominated by content distribution and retrieval. As a matter of fact, networking protocols still focus on the connection between hosts that surfaces many challenges.

The most obvious solution is to replace the “where” with the “what” and this is what Named Data Networking (NDN) proposes. NDN uses named content as opposed to host identifiers as its abstraction.”

“Today, connectivity to the Internet is easy; you simply get an Ethernet driver and hook up the TCP/IP protocol stack. Then dissimilar network types in remote locations can communicate with each other. However, before the introduction of the TCP/IP model, networks were manually connected but with the TCP/IP stack, the networks can connect themselves up, nice and easy. This eventually caused the Internet to explode, followed by the World Wide Web.

So far, TCP/IP has been a great success. It’s good at moving data and is both robust and scalable. It enables any node to talk to any other node by using a point-to-point communication channel with IP addresses as identifiers for the source and destination. Ideally, a network ships the data bits. You can either name the locations to ship the bits to or name the bits themselves. Today’s TCP/IP protocol architecture picked the first option. Let’s discuss the section option later in the article.

It essentially follows the communication model used by the circuit-switched telephone networks. We migrated from phone numbers to IP addresses and circuit-switching by packet-switching with datagram delivery. But the point-to-point, location-based model stayed the same. This made sense during the old times, but not in today’s times as the view of the world has changed considerably. Computing and communication technologies have advanced rapidly.”

“Technology is always evolving. However, in recent time, two significant changes have emerged in the world of networking. Firstly, the networking is moving to software that can run on commodity off-the-shelf hardware. Secondly, we are witnessing the introduction and use of many open source technologies, removing the barrier of entry for new product innovation and rapid market access.

Networking is the last bastion within IT to adopt the open source. Consequently, this has badly hit the networking industry in terms of the slow speed of innovation and high costs. Every other element of IT has seen radical technology and cost model changes over the past 10 years. However, IP networking has not changed much since the mid-’90s.

When I became aware of these trends, I decided to sit with Sorell Slaymaker to analyze the evolution and determine how it will inspire the market in the coming years.”

“Ideally, meeting the business objectives of speed, agility, and cost containment boil down to two architectural approaches: the legacy telco versus the cloud-based provider.

Today, the wide area network (WAN) is a vital enterprise resource. Its uptime, often targeting availability of 99.999%, is essential to maintain the productivity of employees and partners and also for maintaining the business’s competitive edge.

Historically, enterprises had two options for WAN management models — do it yourself (DIY) and a managed network service (MNS). Under the DIY model, the IT networking and security teams build the WAN by integrating multiple components including MPLS service providers, internet service providers (ISPs), edge routers, WAN optimizer, and firewalls.

The components are responsible for keeping that infrastructure current and optimized. They configure and adjust the network for changes, troubleshoot outages and ensure that the network is secure. Since this is not a trivial task, therefore many organizations have switched to an MNS. The enterprises outsource the buildout, configuration and on-going management often to a regional telco.”

“To undergo the transition from legacy to cloud-native application environments you need to employ zero trust.

Enterprises operating in the traditional monolithic environment may have strict organizational structures. As a result, the requirement for security may restrain them from transitioning to a hybrid or cloud-native application deployment model.

In spite of the obvious difficulties, the majority of enterprises want to take advantage of cloud-native capabilities. Today, most entities are considering or evaluating cloud-native to enhance their customer’s experience. In some cases, it is the ability to draw richer customer market analytics or to provide operational excellence.

Cloud-native is a key strategic agenda that allows customers to take advantage of many new capabilities and frameworks. It enables organizations to build and evolve going forward to gain an edge over their competitors.”

“Domain name system (DNS) over transport layer security (TLS) adds an extra layer of encryption, but in what way does it impact your IP network traffic? The additional layer of encryption indicates controlling what’s happening over the network is likely to become challenging.

Most noticeably it will prevent ISPs and enterprises from monitoring the user’s site activity and will also have negative implications for both; the wide area network (WAN) optimization and SD-WAN vendors.

During a recent call with Sorell Slaymaker, we rolled back in time and discussed how we got here, to a world that will soon be fully encrypted. We started with SSL1.0, which was the original version of HTTPS as opposed to the non-secure HTTP. As an aftermath of evolution, it had many security vulnerabilities. Consequently, we then evolved from SSL 1.1 to TLS 1.2.”

“Delivering global SD-WAN is very different from delivering local networks. Local networks offer complete control to the end-to-end design, enabling low-latency and predictable connections. There might still be blackouts and brownouts but you’re in control and can troubleshoot accordingly with appropriate visibility.

With global SD-WANs, though, managing the middle-mile/backbone performance and managing the last-mile are, well shall we say, more challenging. Most SD-WAN vendors don’t have control over these two segments, which affects application performance and service agility.

In particular, an issue that SD-WAN appliance vendors often overlook is the management of the last-mile. With multiprotocol label switching (MPLS), the provider assumes the responsibility, but this is no longer the case with SD-WAN. Getting the last-mile right is challenging for many global SD-WANs.”

“Today’s threat landscape consists of skilled, organized and well-funded bad actors. They have many goals including exfiltrating sensitive data for political or economic motives. To combat these multiple threats, the cybersecurity market is required to expand at an even greater rate.

The IT leaders must evolve their security framework if they want to stay ahead of the cyber threats. The evolution in security we are witnessing has a tilt towards the Zero-Trust model and the software-defined perimeter (SDP), also called a “Black Cloud”. The principle of its design is based on the need-to-know model.

The Zero-Trust model says that anyone attempting to access a resource must be authenticated and be authorized first. Users cannot connect to anything since unauthorized resources are invisible, left in the dark. For additional protection, the Zero-Trust model can be combined with machine learning (ML) to discover the risky user behavior. Besides, it can be applied for conditional access.”

“There are three types of applications; applications that manage the business, applications that run the business and miscellaneous apps.

A security breach or performance related issue for an application that runs the business would undoubtedly impact the top-line revenue. For example, an issue in a hotel booking system would directly affect the top-line revenue as opposed to an outage in Office 365.

It is a general assumption that cloud deployments would suffer from business-impacting performance issues due to the network. The objective is to have applications within 25ms (one-way) of the users who use them. However, too many network architectures backhaul the traffic to traverse from a private to the public internetwork.”

“Back in the early 2000s, I was the sole network engineer at a startup. By morning, my role included managing four floors and 22 European locations packed with different vendors and servers between three companies. In the evenings, I administered the largest enterprise streaming networking in Europe with a group of highly skilled staff.

Since we were an early startup, combined roles were the norm. I’m sure that most of you who joined as young engineers in such situations could understand how I felt back then. However, it was a good experience, so I battled through it. To keep my evening’s stress-free and without any IT calls, I had to design in as much high-availability (HA) as I possibly could. After all, all the interesting technological learning was in the second part of my day working with content delivery mechanisms and complex routing. All of which came back to me when I read a recent post on Cato network’s self-healing SD-WAN for global enterprises networks.

Cato is enriching the self-healing capabilities of Cato Cloud. Rather than the enterprise having the skill and knowledge to think about every type of failure in an HA design, the Cato Cloud now heals itself end-to-end, ensuring service continuity.”

While computing, storage, and programming have dramatically changed and become simpler and cheaper over the last 20 years, IP networking has not. It is still stuck in the era of the mid-1990s.

Realistically, when I look at ways to upgrade or improve a network, the approach falls into two separate buckets. One is the tactical move and the other is strategic. For example, when I look at IPv6, I see this as a tactical move. There aren’t many business value-adds.

In fact, there are opposites such as additional overheads and minimal internetworking QoS between IPv4 & v6 with zero application awareness and still a lack of security. Here, I do not intend to say that one should not upgrade to IPv6, it does give you more IP addresses (if you need them) and better multicast capabilities but it’s a tactical move.

It was about 20 years ago when I plugged my first Ethernet cable into a switch. It was for our new chief executive officer. Little did she know that she was about to share her traffic with most others on the first floor. At that time, as a network engineer, I had five floors to look after.

Having a few virtual LANs (VLANs) per floor was a common design practice in those traditional days. Essentially, a couple of broadcast domains per floor were deemed OK. With the VLAN-based approach, we used to give access to different people on the same subnet. Even though people worked at different levels, if they were in the same subnet, they were all treated the same.

The web application firewall (WAF) issue didn’t seem to me like a big deal until I actually started to dig deeper into the ongoing discussion in this field. It generally seems that vendors are trying to convince customers and themselves that everything is going smoothly and that there is no problem. In reality, however, customers don’t buy it anymore, and the WAF industry is under major pressure as it constantly fails from the customer quality perspective.

There have also been red flags raised over the use of runtime application self-protection (RASP) technology. There is now a trend to move the mitigation/defense side into the application and compile it within the code. Runtime application self-protection is considered a shortcut to securing software, and it is compounded by performance problems. It seems to be a desperate solution to replace the WAF, as no one really likes to mix a “security appliance” inside the application code, which is exactly what the RASP vendors are currently offering to their customers. However, some vendors are adopting the RASP technology.

“John Kindervag, a former analyst from Forrester Research, was the first to introduce the Zero-Trust model back in 2010. The focus then was more on the application layer. However, once I heard that Sorell Slaymaker from Techvision Research was pushing the topic at the network level, I couldn’t resist giving him a call to discuss the generals on Zero Trust Networking (ZTN). During the conversation, he shone a light on numerous known and unknown facts about Zero Trust Networking that could prove useful to anyone.

The traditional world of networking started with static domains. The classical network model divided clients and users into two groups – trusted and untrusted. The trusted are those inside the internal network, the untrusted are external to the network, which could be either mobile users or partner networks. To recast the untrusted to become trusted, one would typically use a virtual private network (VPN) to access the internal network.”

“Over the last few years, I have been sprawled in so many technologies that I have forgotten where my roots began in the world of the data center. Therefore, I decided to delve deeper into what’s prevalent and headed straight to Ivan Pepelnjak’s Ethernet VPN (EVPN) webinar hosted by Dinesh Dutt. I knew of the distinguished Dinesh since he was the chief scientist at Cumulus Networks, and for me, he is a leader in this field. Before reading his book on EVPN, I decided to give Dinesh a call to exchange our views about the beginning of EVPN. We talked about the practicalities and limitations of the data center. Here is an excerpt from our discussion.”

“If you still live in a world of the script-driven approach for both service provider and enterprise networks, you are going to reach limits. There is only so far you can go alone. It creates a gap that lacks modeling and database at a higher layer. Production-grade service provider and enterprise networks require a production-grade automation framework.

In today’s environment, the network infrastructure acts as the core centerpiece, providing critical connection points. Over time, the role of infrastructure has expanded substantially. In the present day, it largely influences the critical business functions for both the service provider and enterprise environments”

“At the present time, there is a remarkable trend for application modularization that splits the large hard-to-change monolith into a focused microservices cloud-native architecture. The monolith keeps much of the state in memory and replicates between the instances, which makes them hard to split and scale. Scaling up can be expensive and scaling out requires replicating the state and the entire application, rather than the parts that need to be replicated.

In comparison to microservices, which provide separation of the logic from the state, the separation enables the application to be broken apart into a number of smaller more manageable units, making them easier to scale. Therefore, a microservices environment consists of multiple services communicating with each other. All the communication between services is initiated and carried out with network calls, and services exposed via application programming interfaces (APIs). Each service comes with its own purpose that serves a unique business value.”

“When I stepped into the field of networking, everything was static and security was based on perimeter-level firewalling. It was common to have two perimeter-based firewalls; internal and external to the wide area network (WAN). Such layout was good enough in those days.

I remember the time when connected devices were corporate-owned. Everything was hard-wired and I used to define the access control policies on a port-by-port and VLAN-by-VLAN basis. There were numerous manual end-to-end policy configurations, which were not only time consuming but also error-prone.

There was a complete lack of visibility and global policy throughout the network and every morning, I relied on the Multi Router Traffic Grapher (MRTG) to manually inspect the traffic spikes indicating variations from baselines. Once something was plugged in, it was “there for life”. Have you ever heard of the 20-year-old PC that no one knows where it is but it still replies to ping? In contrast, we now live in an entirely different world. The perimeter has dissolved, and perimeter-level firewalling alone is insufficient.”

“Recently, I was reading a blog post by Ivan Pepelnjak on intent-based networking. He discusses that the definition of intent is “a usually clearly formulated or planned intention” and that the word “intention” is defined as “what one intends to do or bring about.” I started to ponder over his submission that the definition is confusing as there are many variations.

To guide my understanding, I decided to delve deeper into the building blocks of intent-based networking, which led me to a variety of closed-loop automation solutions. After extensive research, my view is that closed-loop automation is a prerequisite for intent-based networking. Keeping in mind the current requirements, it’s a solution that the businesses can deploy.

Now that I have examined different vendors, I would recommend gazing from a bird’s eye view, to make sure the solution overcomes today’s business and technical challenges. The outputs should drive a future-proof solution”

“What keeps me awake at night is the thought of artificial intelligence lying in wait in the hands of bad actors. Artificial intelligence combined with the powers of IoT-based attacks will create an environment tapped for mayhem. It is easy to write about, but it is hard for security professionals to combat. AI has more force, severity, and fatality which can change the face of a network and application in seconds.

When I think of the capabilities artificial intelligence has in the world of cybersecurity I know that unless we prepare well we will be like Bambi walking in the woods. The time is now to prepare for the unknown. Security professionals must examine the classical defense mechanisms in place to determine if they can withstand an attack based on artificial intelligence.”

“When I began my journey in 2015 with SD-WAN, the implementation requirements were different to what they are today. Initially, I deployed pilot sites for internal reachability. This was not a design flaw, but a solution requirement set by the options available to SD-WAN at that time. The initial requirement when designing SD-WAN was to replace multiprotocol label switching (MPLS) and connect the internal resources together.

Our projects gained the benefits of SD-WAN deployments. It certainly added value, but there were compelling constraints. In particular, we were limited to internal resources and users, yet our architecture consisted of remote partners and mobile workers. The real challenge for SD-WAN vendors is not solely to satisfy internal reachability. The wide area network (WAN) must support a range of different entities that require network access from multiple locations.”

“Applications have become a key driver of revenue, rather than their previous role as merely a tool to support the business process. What acts as the heart for all applications is the network providing the connection points. Due to the new, critical importance of the application layer, IT professionals are looking for ways to improve the architecture of their network.

A new era of campus network design is required, one that enforces policy-based automation from the edge of the network to public and private clouds using an intent-based paradigm.

SD-Access is an example of an intent-based network within the campus. It is broken down into three major elements

  1. Control-Plane based on Locator/ID separation protocol (LISP),
  2. Data-Plane based on Virtual Extensible LAN (VXLAN) and
  3. Policy-Plane based on Cisco TrustSec.”

“When it comes to technology, nothing is static, everything is evolving. Either we keep inventing mechanisms that dig out new security holes, or we are forced to implement existing kludges to cover up the inadequacies in security on which our web applications depend.

The assault on the changing digital landscape with all its new requirements has created a black hole that needs attention. The shift in technology, while creating opportunities, has a bias to create security threats. Unfortunately, with the passage of time, these trends will continue to escalate, putting web application security at center stage.

Business relies on web applications. Loss of service to business-focused web applications not only affects the brand but also results in financial loss. The web application acts as the front door to valuable assets. If you don’t efficiently lock the door or at least know when it has been opened, valuable revenue-generating web applications are left compromised.”

“When I started my journey in the technology sector back in the early 2000’s the world of networking comprised of simple structures. I remember configuring several standard branch sites that would connect to a central headquarters. There was only a handful of remote warriors who were assigned, and usually just a few high-ranking officials.

As the dependence on networking increased, so did the complexity of network designs. The standard single site became dual-based with redundant connectivity to different providers, advanced failover techniques, and high-availability designs became the norm. The number of remote workers increased, and eventually, security holes began to open in my network design.

Unfortunately, the advances in network connectivity were not in conjunction with appropriate advances in security, forcing everyone back to the drawing board. Without adequate security, the network connectivity that is left to defaults is completely insecure and is unable to validate the source or secure individual packets. If you can’t trust the network, you have to somehow secure it. We secured connections over unsecured mediums, which led to the implementation of IPSec-based VPNs along with all their complex baggage.”

“Over the years, we have embraced new technologies to find improved ways to build systems.  As a result, today’s infrastructures have undergone significant evolution. To keep pace with the arrival of new technologies, legacy is often combined with the new, but they do not always mesh well. Such a fusion between ultra-modern and conventional has created drag in the overall solution, thereby, spawning tension between past and future in how things are secured.

The multi-tenant shared infrastructure of the cloud, container technologies like Docker and Kubernetes, and new architectures like microservices and serverless, while technically remarkable, increase complexity. Complexity is the number one enemy of security. Therefore, to be effectively aligned with the adoption of these technologies, a new approach to security is required that does not depend on shifting infrastructure as the control point.”

“Throughout my early years as a consultant, when asynchronous transfer mode (ATM) was the rage and multiprotocol label switching (MPLS) was still at the outset, I handled numerous roles as a network architect alongside various carriers. During that period, I experienced first-hand problems that the new technologies posed to them.

The lack of true end-to-end automation made our daily tasks run into the night. Bespoke network designs due to the shortfall of appropriate documentation resulted in one that person knows all. The provisioning teams never fully understood the design. The copy-and-paste implementation approach is error-prone, leaving teams blindfolded when something went wrong.

Designs were stitched together and with so much variation, that limited troubleshooting to a personalized approach. That previous experience surfaced in mind when I heard about carriers delivering SD-WAN services. I started to question if they could have made the adequate changes to provide such an agile service.”

Tech Brief Video Series – Enterprise Networking

Hello,

I have created an “Enterprise Networking Tech Brief” Series. Kindly click on the link to view the video. I’m trying out a few videos styles.

Enterprise Networking A –  LISP Components & DEMO – > https://youtu.be/PBYvIhxwrSc

Enterprise Networking B – SD-Access & Intent-based networking – > https://youtu.be/WKoGSBw5_tc

“In campus networking, there are a number of different trends impacting the way networks will be built in the future. Mobility: pretty much every user getting onto the campus has a mobile device. It used to be only company-owned devices; now it is about BYOD and wearables. It is believed that the average user will bring about 2.7 devices to the workplace – a watch and other intelligent wearables, for example. These users expect access to printers and collaboration systems, and they also expect the same type of access to cloud workloads and application workloads in the private DC.

All of this needs to be seamless across all devices. Then there is IoT: the corporate IoT within a campus network, such as connected lighting, card readers, and all the other things you would find in an office building. How do you make sure these cannot compromise your network? Every attack we have seen between 2012 and 2019 has involved an insecure IoT device that is not managed or produced by I.T. In some cases, such an IoT device has access to both the Internet and the company network, causing issues with malware and hacks.” (Source: Matt Conran, Network World)

Enterprise Networking C – Hands-on configuration for LISP introduction – > https://youtu.be/T1AZKK5p9PY

Enterprise Networking D – Introducing load balancing – > https://youtu.be/znhdUOFzEoM

“Load balancers operate at different Open Systems Interconnection (OSI) layers from one data center to another; common operation is between Layer 4 and Layer 7. This is because each data center hosts unique applications with different requirements. Every application is unique with respect to the number of sockets, TCP connections (short-lived or long-lived), idle time-out, and activity in each session in terms of packets per second. One of the most important elements of designing a load-balancing solution is to fully understand the application structure and protocols.”

Enterprise Networking E –  Hand-on configuration for LISP Debugging – > https://youtu.be/h7axIhyu1Bs

Enterprise Networking F – Types of load balancing – > https://youtu.be/ThCX03JYoL8

“Application-Level Load Balancing: Load balancing is implemented between tiers in the application stack and is carried out within the application. It is used in scenarios where applications are coded correctly, making it possible to configure load balancing in the application. Designers can use open source tools with DNS or some other method to track flows between tiers of the application stack. Network-Level Load Balancing: Network-level load balancing includes DNS round-robin, Anycast, and L4 – L7 load balancers. Web browser clients do not usually have built-in application layer redundancy, which pushes designers to look at the network layer for load balancing services. If applications were designed correctly, load balancing would not be a network-layer function.”
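As a simple illustration of the network-level schemes described above, the sketch below implements plain round-robin selection over a pool of hypothetical backend servers, which is essentially what DNS round-robin does with a set of A records.

```python
# Minimal sketch of round-robin selection: each new connection is handed to the
# next server in the pool. The server names are hypothetical examples.
import itertools

servers = ["web-1.example.internal", "web-2.example.internal", "web-3.example.internal"]
rotation = itertools.cycle(servers)

def next_backend() -> str:
    return next(rotation)

for _ in range(5):
    print(next_backend())  # web-1, web-2, web-3, web-1, web-2
```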

Enterprise Networking H – Introducing application performance and buffer sizes – > https://youtu.be/d36fPso1rZg

“Today’s data centers have a mixture of applications and workloads, all with different consistency requirements. Some applications require predictable latency, while others require sustained throughput. It’s usually the case that the slowest flow is the ultimate determining factor affecting end-to-end performance. So, to satisfy varied conditions and achieve predictable application performance, we must focus on consistent bandwidth and unified latency for ALL flow types and workloads.”

Enterprise Networking I – Application performance: small vs large buffer sizes – > https://youtu.be/JJxjlWTJbQU

“Both small and large buffer sizes have different effects on application flow types. Some sources claim that small buffer sizes optimize performance, while others claim that larger buffers are better. Many of the web giants, including Facebook, Amazon, and Microsoft, use small-buffer switches. It depends on your environment. Understanding your application traffic pattern and testing optimization techniques are essential to finding the sweet spot. Most out-of-the-box applications are not going to be fine-tuned for your environment, and the only rule of thumb is to lab test.

Complications arise when the congestion control behavior of TCP interacts with the network device buffer. The two have different purposes. TCP congestion control continuously monitors available network bandwidth by using packet drops as the metric. Buffering, on the other hand, is used to avoid packet loss. In a congestion scenario, the traffic is buffered, so the sender and receiver have no way of knowing there is congestion and TCP’s congestion behavior is never initiated. So the two mechanisms that are used to improve application performance don’t complement each other and require careful testing in your environment.”

Enterprise Networking J – TCP Congestion Control – > https://youtu.be/ycPTlTksszs

“The discrepancy and uneven bandwidth allocation for flows boils down to the natural behavior of how TCP reacts and interacts with insufficient packet buffers and the resulting packet drops. This behavior is known as the TCP/IP bandwidth capture effect. It does not affect the overall bandwidth so much as the individual Query Completion Times (QCT) and Flow Completion Times (FCT) of applications. QCT and FCT are prime metrics for measuring TCP-based application performance. A TCP stream’s pace of transmission is based on a built-in feedback mechanism. The ACK packets from the receiver adjust the sender’s bandwidth to match the available network bandwidth. With each ACK received, the sender’s TCP incrementally increases the pace of sending packets to use all available bandwidth. On the other hand, it takes 3 duplicate ACK messages for TCP to conclude there is packet loss on the connection and start the process of retransmission.”
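The feedback loop described above can be captured in a toy model: the congestion window grows with each clean ACK and is halved when three duplicate ACKs arrive. The numbers are illustrative, and this is not a faithful TCP implementation.

```python
# Toy model of the TCP feedback loop described above: the congestion window
# (cwnd) grows as ACKs arrive and is cut when three duplicate ACKs signal loss.
def react(cwnd: float, ack_events) -> float:
    dup_acks = 0
    for event in ack_events:
        if event == "ack":
            cwnd += 1                     # additive increase while the path looks clean
            dup_acks = 0
        elif event == "dup_ack":
            dup_acks += 1
            if dup_acks == 3:             # fast-retransmit threshold
                cwnd = max(cwnd / 2, 1)   # multiplicative decrease
                dup_acks = 0
    return cwnd

print(react(10, ["ack"] * 5 + ["dup_ack"] * 3 + ["ack"] * 2))  # grows, halves on loss, grows again
```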

Enterprise Networking K – Mice and Elephant flows – > https://youtu.be/vCB_JH2o1nk

“There are two types of flows in data center environments: large elephant flows and smaller mice flows. Elephant flows might represent only a small proportion of the number of flows but consume most of the total data volume. Mice flows are, for example, control and alarm messages, and they are usually pretty significant. As a result, they should be given priority over larger elephant flows, but this is sometimes not the case with simple buffer types that don’t distinguish between flow types. Priority can be given by regulating the elephant flows with intelligent switch buffers. Mice flows are often bursty flows where one query is sent to many servers, resulting in many small responses being sent back to the single originating host. These messages are often small, requiring only 3 to 5 TCP packets. As a result, the TCP congestion control mechanism may not even be invoked, as the congestion mechanism requires 3 duplicate ACK messages. Due to their size, elephant flows will invoke the TCP congestion control mechanism (mice flows don’t, as they are too small).”
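A crude sketch of the mice/elephant distinction: classify flows by the bytes they move against an arbitrary threshold, so that the small, latency-sensitive mice can be prioritized ahead of the bandwidth-heavy elephants. The threshold and flow names are made up.

```python
# Sketch of a crude flow classifier: flows that move more than a threshold of
# bytes are treated as elephants, the rest as mice. Values are illustrative.
ELEPHANT_THRESHOLD_BYTES = 10 * 1024 * 1024  # 10 MB, an arbitrary example

def classify(flow_bytes: int) -> str:
    return "elephant" if flow_bytes >= ELEPHANT_THRESHOLD_BYTES else "mouse"

flows = {"alarm-msg": 4_500, "backup-job": 2_000_000_000, "db-query": 120_000}
for name, size in flows.items():
    print(name, classify(size))  # mice could then be queued ahead of elephants
```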

Enterprise Networking L – Multipath TCP – > https://youtu.be/Dfykc40oWzI

“Transmission Control Protocol (TCP) applications offer a reliable byte stream with congestion control mechanisms adjusting flows to the current network load. Designed in the 70s, TCP is the most widely used protocol and remains largely unchanged, unlike the networks it operates within. Back in those days, the designers understood there could be link failures and decided to decouple the network layer (IP) from the transport layer (TCP). This enables IP to route around link failures without breaking the end-to-end TCP connection. Dynamic routing protocols do this automatically without the need for transport-layer knowledge. Even though it has wide adoption, TCP does not fully align with the multipath characteristics of today’s networks. Its main drawback is that it is a single-path-per-connection protocol. A single path means that once the stream is placed on a path (the endpoints of the connection), it cannot be moved to another path even though multiple paths may exist between peers. This characteristic is suboptimal, as the majority of today’s networks and end hosts have multipath characteristics for better performance and robustness.”

Enterprise Networking M – Multipath TCP use cases – > https://youtu.be/KkL_yLNhK_E

“Multipath TCP is particularly useful in multipath data center and mobile phone environments. All mobiles allow you to connect via both WiFi and a 3G network. MPTCP enables either combined throughput or the switching of interfaces (WiFi/3G) without disrupting the end-to-end TCP connection. For example, if you are currently on a 3G network with an active TCP stream, the TCP stream is bound to that interface. If you want to move to the WiFi network, you need to reset the connection, and all ongoing TCP connections will therefore get reset. With MPTCP, the swapping of interfaces is transparent. Next-generation leaf-and-spine data center networks are built with Equal-Cost Multipath (ECMP). Within the data center, any two endpoints are equidistant. For one endpoint to communicate with another, a TCP flow is placed on a single link, not spread over multiple links. As a result, single-path TCP collisions may occur, reducing the throughput available to that flow. This is commonly seen for large flows rather than small mice flows.”
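
The collision issue can be pictured with a simplified sketch of hash-based ECMP path selection, where a 5-tuple hash pins each flow to one uplink for its lifetime; two elephant flows that hash to the same uplink end up sharing that link's bandwidth. The uplink names and addresses below are illustrative assumptions.

```python
# A simplified sketch of hash-based ECMP path selection: a hash of the
# 5-tuple pins each flow to one uplink for its lifetime. Two elephant
# flows that hash to the same uplink collide and share that link.
# Uplink names and addresses are illustrative assumptions.

import hashlib

UPLINKS = ["spine-1", "spine-2", "spine-3", "spine-4"]

def ecmp_pick(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return UPLINKS[digest % len(UPLINKS)]

flow_a = ecmp_pick("10.0.1.10", "10.0.2.20", 40001, 443)
flow_b = ecmp_pick("10.0.1.11", "10.0.2.21", 40002, 443)
print(flow_a, flow_b)  # if these match, both flows share one uplink
```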

Enterprise Networking N – Multipath TCP connection setup – > https://youtu.be/ALAPKcOouAA

“The aim is to have a single TCP connection with many subflows. The two endpoints using MPTCP are synchronized and have connection identifiers for each of the subflows. MPTCP starts the same as regular TCP. If additional paths are available, additional TCP subflow sessions are combined into the existing TCP session. The original TCP session and the other subflow sessions appear as one to the application, and the main Multipath TCP connection looks like a regular TCP connection. The identification of additional paths boils down to the number of IP addresses on the hosts. The TCP handshake starts as normal, but the first SYN carries a new MP_CAPABLE option (value 0x0) and a unique connection identifier. This allows the client to indicate that it wants to do MPTCP. At this stage, the application layer just creates a standard TCP socket with additional variables indicating that it wants to do MPTCP. If the receiving server is MP_CAPABLE, it will reply with a SYN/ACK carrying MP_CAPABLE along with its own connection identifier. Once the connection is agreed, the client and server set up state. Inside the kernel, this creates a meta socket acting as the layer between the application and all the TCP subflows.”
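
On recent Linux kernels (5.6 and later), an application can request MPTCP explicitly at socket creation, and the kernel then handles the MP_CAPABLE exchange and subflow management. A minimal Python sketch, assuming such a kernel and a hypothetical server address, might look like this:

```python
# A minimal sketch, assuming a Linux kernel with MPTCP support (5.6+)
# and a hypothetical server address. If the peer replies MP_CAPABLE in
# the handshake, the kernel manages subflows transparently.

import socket

# socket.IPPROTO_MPTCP is exposed by newer Python releases; 262 is the
# Linux protocol number, used here as a fallback.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
except OSError:
    # Kernel without MPTCP: fall back to a regular single-path TCP socket.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s.connect(("192.0.2.10", 8080))  # hypothetical MPTCP-capable server
s.close()
```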

More Videos to come!

Additional Enterprise Networking information can be found at the following:

Tech Brief Video Series – Cloud Computing

Hello, I have created a “Cloud Computing Tech Brief” Series. Below, we have videos that can assist you in the learning process of cloud computing. Kindly click on the link to view the video. I’m trying out a few video styles.

Cloud Computing A – Cloud – Introducing Immutable Server Infrastructure – > https://youtu.be/Ogtt2bETNZM

“Traditionally, physical servers were costly and difficult to maintain, and workflows were time-consuming. Administrators wanted to abstract many of these challenges using virtualization so they could focus more on the application. The birth of virtualization gave rise to virtual servers and the ability to instantiate workloads within a shorter period of time. Similar to how virtualization brought new ways to improve server infrastructure, immutable server infrastructure takes us one step further. Mutable server infrastructure consists of servers that require additional care once they have been deployed. This may include upgrading, downgrading, or tweaking configuration files for specific optimizations. Usually, this is done on a server-by-server basis.”

Cloud Computing B – Cloud – Introducing Blockchain PaaS – > https://youtu.be/3MdkvOR9TGk

“Blockchain technology is a secure, replicated digital ledger of transactions. It is shared among a distributed set of computers, as opposed to being held by a single provider. A transaction can be anything of value in the blockchain world, not solely a financial transaction. For example, it may be used to record the movement of physical or digital assets in a blockchain ledger. However, the most common use is to record financial transactions. The blockchain ecosystem is growing rapidly, and we are seeing the introduction of many new solutions ranging from open-source blockchains, mobile wallets, authentication, and trading with cryptocurrencies like Bitcoin (which can even be traded automatically by trading bots) to Blockchain PaaS. A technology that was previously seen as on-premise is now becoming part of the public cloud providers’ platform-as-a-service (PaaS) offerings.”

Cloud Computing C – Cloud – Introducing Multicloud – > https://youtu.be/AnMQH_noNDo

“Many things are evolving as the cloud moves into its second decade of existence. It has gone beyond IT and now affects the way an organization operates, becoming a critical component for new technologies. The biggest concern about public cloud enablement is not actually security; it is application portability among multiple cloud providers. You can’t rely on a single provider anymore. Organizations do not want to get locked into specific cloud frameworks, unable to move an application from one cloud provider to another. As a result, we are seeing the introduction of multi-cloud application strategies, as opposed to simply having a public, private, or hybrid cloud infrastructure model. What differentiates the hybrid cloud from the public and private cloud is that there is a flow of data between public and private resources, and multi-cloud is a special case of hybrid cloud computing.”

Cloud Computing D – Cloud – Introducing Hyperscale Computing – > https://youtu.be/cIrC2zpBNrM

“We have transitioned from the client/server model to complex mega-scale applications within a short space of time. Batch computing requires high performance and large amounts of capacity on demand. IoT applications change the paradigm, typically combining the traits of cloud-native applications with those of big data apps. Machine learning, autonomous driving, and heavy analytics form a new era of applications that need to be supported by hyperscale infrastructures. Hyperscale is the ability to scale resources such as compute, memory, networking, and storage appropriately with demand to facilitate distributed computing environments.”

Cloud Computing E – Cloud – Introducing Cloud Service Brokerage – > https://youtu.be/qpfmSdygg2M

“The majority of customers do not rely on just a few cloud services; more often than not, they want to run a large number of different services. These cloud adoption characteristics create challenges when you want to adopt multiple services from one provider or pursue a multi-cloud strategy. The variety brings about cloud sprawl, giving management many pain points. The multi-cloud environment is complex, and cloud service brokerage can help with automation, bringing services together and optimizing cloud-to-cloud and on-prem-to-cloud environments. CSBs are subject matter experts sitting in the middle, assisting with a wide range of cloud enablement challenges. They broker relationships between the cloud and the consumer, applying to both public and private clouds and serving all cloud service models – IaaS, PaaS, and SaaS.”

Cloud Computing F – Cloud – Introducing Edge Computing – > https://youtu.be/5mbPiKd_TFc

“By the year 2020, the Internet is expected to reach 1.5 Gigabytes of traffic per day per person. However, the Internet of Things, driven by connected objects, will by far supersede these data rates. For example, a connected airplane will generate around 5 Terabytes of data per day. This amount of data is impossible to analyze in a timely fashion in one central location. You simply can’t send everything to the cloud. Even if you had infinite bandwidth (which you don’t), latency will always get you. Edge computing moves certain types of processing as close as possible to the source of the information. It is the point where the physical world interacts with the digital world.”

Cloud Computing G – Cloud – Introducing Cloudbursting – > https://youtu.be/OFJbWMGB6lQ

“Cloudbursting is a fairly simple concept. It entails the ability to add or subtract compute capacity between on-premise and public or private clouds, or to support a multi-cloud environment, all to handle traffic peaks. Many companies use cloud bursting to construct a hybrid cloud model. The idea seems straightforward, as holding spare infrastructure equipment on-premise to support high traffic loads at ad-hoc times can be expensive, especially when you have the option to use the on-demand elasticity of the cloud.”

More Videos to come!

Correlate Disparate Data Points

 

 

Correlate Disparate Data Points

Businesses and organizations have access to vast amounts of data in today’s data-driven world. However, making sense of this data can be a challenging task. One effective way to gain valuable insights is by correlating disparate data points. By finding connections and patterns between seemingly unrelated data, we can unlock hidden knowledge and make informed decisions. In this blog post, we will delve into the concept of correlating disparate data points and explore its significance in various fields.

Understanding Correlation:

Correlation refers to the statistical relationship between two or more variables. It allows us to determine whether there is a connection between different data points and to what extent they influence each other. By analyzing correlation, we can uncover meaningful insights and make predictions based on observed patterns.
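
As a minimal illustration with made-up numbers, the Python sketch below computes a Pearson correlation coefficient between two variables using only the standard library:

```python
# A minimal sketch with made-up numbers: compute a Pearson correlation
# coefficient between two variables using the standard library
# (statistics.correlation requires Python 3.10+).

from statistics import correlation

daily_temperature_c = [18, 21, 24, 27, 30, 33]
iced_coffee_sales   = [120, 135, 160, 190, 230, 260]

r = correlation(daily_temperature_c, iced_coffee_sales)
print(f"Pearson r = {r:.2f}")  # close to +1.0 -> strong positive relationship
```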

 

Highlights: Correlate Disparate Data Points

  • The Required Monitoring Solution

Digital transformation intensifies the touch points between businesses, customers, and prospects. While it expands workflow agility, it also introduces a significant level of complexity, as it requires a more agile Information Technology (IT) architecture along with an increase in data correlation. This diminishes network and application visibility, creating a substantial volume of data and data points that require monitoring. A monitoring solution is required to correlate these disparate data points.

The Role of Observability

The SASE definition overcomes this by combining data points and offering them as a cloud solution that brings network security components into one coherent offering. The traditional monitoring use case can also be supplemented with the new Observability practices. To understand the differences between monitoring and Observability, see this recent post: Observability vs monitoring. All of this will improve network visibility.

 

Before you proceed, you may find the following posts helpful:

  1. Ansible Tower
  2. Network Stretch
  3. IPFIX Big Data
  4. Microservices Observability
  5. Software Defined Internet Exchange

 



Correlate Disparate Data Points.

Key Correlate Disparate Data Points Discussion points:


  • The rise of data and data points.

  • Technology transformation.

  • Growing data points and volumes.

  • Troubleshooting challenges.

  • A final note on economic value.

 

Back to basics with Correlate Disparate Data Points

Data observability

Over the last while, data has transformed almost everything we do, starting as a strategic asset and evolving into the core of strategy. However, managing data quality is the most critical barrier for organizations scaling their data strategies, due to the need to identify and remediate issues appropriately. Therefore, we need an approach to quickly detect, troubleshoot, and prevent a wide range of data issues through data observability, a set of best practices that enable data teams to gain greater visibility of data and its usage.

Identifying Disparate Data Points:

Disparate data points refer to information that appears unrelated or disconnected at first glance. They can be derived from multiple sources, such as customer behavior, market trends, social media interactions, or environmental factors. The challenge lies in recognizing the potential relationships between these seemingly unrelated data points and understanding the value they can bring when combined.

Unveiling Hidden Patterns:

Correlating disparate data points reveals hidden patterns that would otherwise remain unnoticed. For example, in the retail industry, correlating sales data with weather patterns may help identify the impact of weather conditions on consumer behavior. Similarly, correlating customer feedback with product features can provide insights into areas for improvement or potential new product ideas.
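
As a hypothetical sketch of the retail example above, the snippet below joins two otherwise separate sources (daily sales and weather observations) on date and checks how strongly they move together; the column names and figures are assumptions for illustration only.

```python
# A hypothetical sketch: join daily sales and weather observations on date,
# then check how strongly they move together. Column names and values are
# assumptions for illustration only.

import pandas as pd

sales = pd.DataFrame({
    "date": pd.to_datetime(["2024-06-01", "2024-06-02", "2024-06-03"]),
    "units_sold": [340, 295, 410],
})
weather = pd.DataFrame({
    "date": pd.to_datetime(["2024-06-01", "2024-06-02", "2024-06-03"]),
    "rainfall_mm": [0.0, 12.5, 1.2],
})

merged = sales.merge(weather, on="date")
print(merged["units_sold"].corr(merged["rainfall_mm"]))
```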

Benefits in Various Fields:

The ability to correlate disparate data points has significant implications across different domains. Analyzing patient data alongside environmental factors in healthcare can help identify potential triggers for certain diseases or conditions. In finance, correlating market data with social media sentiment can provide valuable insights for investment decisions. In transportation, correlating traffic data with weather conditions can optimize route planning and improve efficiency.

Tools and Techniques:

Advanced data analysis techniques and tools are essential to correlate disparate data points effectively. Machine learning algorithms, data visualization tools, and statistical models can help identify correlations and patterns within complex datasets. Additionally, data integration and cleaning processes are crucial in ensuring accurate and reliable results.

Challenges and Considerations:

Correlating disparate data points is not without its challenges. Combining data from different sources often involves data quality issues, inconsistencies, and compatibility challenges. Additionally, ethical considerations regarding data privacy and security must be considered when working with sensitive information.

 

Getting Started: Correlate Disparate Data Points

Many businesses feel overwhelmed by the amount of data they’re collecting and don’t know what to do with it. The digital world swells the volume of data and the number of data points a business has access to. Apart from impacting network and server resources, staff are also taxed in their attempts to manually analyze the data while resolving the root cause of an application or network performance problem. Furthermore, IT teams operate in silos, making it difficult to process data from all the IT domains – this severely limits business velocity.

Digital Transformation

Diagram: Digital innovation and data correlation.

 

Data Correlation: Technology Transformation

Conventional systems, while easy to troubleshoot and manage, do not meet today’s requirements, which has led to the introduction of an array of new technologies. The technological transformation umbrella includes virtualization, hybrid cloud, hyper-convergence, and containers.

While technically remarkable, introducing these technologies posed an array of operationally complex monitoring tasks and increased the volume of data and the need to correlate disparate data points. Today’s infrastructures comprise complex technologies and architectures.

They entail a variety of sophisticated control planes consisting of next-generation routing and new principles such as software-defined networking (SDN), network function virtualization (NFV), service chaining, and virtualization solutions.

Virtualization and service chaining introduce new layers of complexity that don’t follow the traditional monitoring rules. Service chaining does not adhere to the standard packet forwarding paradigms, while virtualization hides layers of valuable information.

Micro-segmentation changes the security paradigm, while virtual machine (VM) mobility introduces north-to-south and east-to-west traffic trombones. The VM on which the application sits now has mobility requirements and may move instantly to a different on-premise data center topology or externally to the hybrid cloud.

The hybrid cloud dissolves the traditional network perimeter and triggers disparate data points in multiple locations. Containers and microservices introduce a new wave of application complexity and data volume. Individual microservices require cross-communication, potentially located in geographically dispersed data centers.

All these waves of new technologies increase the number of data points and volume of data by an order of magnitude. Therefore, an IT organization must compute millions of data points to correlate information from business transactions to infrastructures such as invoices and orders.

 

Growing Data Points & Volumes

The need to correlate disparate data points

As part of the digital transformation, organizations are launching more applications. More applications require additional infrastructure. As a result, the infrastructure is always snowballing; therefore, the number of data points you need to monitor increases.

Breaking up a monolithic system into smaller, fine-grained microservices adds complexity when monitoring the system in production. With a monolithic application, we have well-known and prominent investigation starting points.

But the world of microservices introduces multiple data points to monitor, and it’s harder to pinpoint latency or other performance-related problems. Human capacity hasn’t changed – a person can correlate at most around 100 data points per hour. The real challenge surfaces because these data points are monitored in silos.

Containers are deployed to run software more reliably when it is moved from one computing environment to another and are often used to increase business agility. However, the increase in agility comes at a high cost – containers can generate 18x more data than traditional environments. Conventional systems may have a manageable set of data points, while a full-fledged container architecture could have millions.

Diagram: Secure digital transformation.

 

The amount of data to be correlated to support digital transformation far exceeds human capabilities. It’s just too much for the human brain to handle. Traditional monitoring methods are not prepared to meet the demands of what is known as “big data.” This is why some businesses use big data analytics software such as Kyligence, which uses an AI-augmented engine to manage and optimize the data, allowing businesses to surface their most valuable data and make better decisions.

While data volumes grow to an unprecedented level, visibility is decreasing due to the complexity of the new application style and the underlying infrastructure. All this is compounded by ineffective troubleshooting and team collaboration.

 

Ineffective Troubleshooting Team Collaboration

The application rides on various complex infrastructures and, at some stage, requires troubleshooting. There should be a science to troubleshooting, but most departments stick with manual methods. This causes challenges with cross-team collaboration during an application troubleshooting event that spans multiple data center segments – network, storage, database, and application.

IT workflows are complex, and a single response/request query will touch all supporting infrastructure elements: routers, servers, storage, database, etc. For example, an application request may traverse the web front ends in one segment to be processed by database and storage modules on different segments. This may require firewalling or load-balancing services in different on and off-premise data centers.

IT departments will never have a single team overlooking all areas of the network, server, storage, database, and other infrastructure modules. The technical skill sets required are far too broad for any individual to handle efficiently.

Multiple technical teams are often distributed to support various technical skill levels at various locations, time zones, and cultures. Troubleshooting workflows between teams should be automated, although they are not because monitoring and troubleshooting are carried out in silos, completely lacking any data point correlation. The natural assumption is to add more people, which is nothing less than fueling the fire. An efficient monitoring solution is a winning formula.

There is also a vast lack of collaboration due to silo boundaries that don’t even allow teams to look at each other’s environments. By the very design of the silos, engineers blame each other, as collaboration is not built into how the different technical teams communicate.

Engineers say bluntly, “It’s not my problem; it’s not my environment.” In reality, no one knows how to drill down and pinpoint the root cause. Mean Time to Innocence becomes the de facto working practice when the application faces downtime. It’s all about how you can save yourself. Compounding application complexity, the lack of efficient collaboration and troubleshooting science creates a bleak picture.

 

How to Win the Race with Growing Data Points and Data Volumes?

How do we resolve this mess and ensure the application meets the service level agreement (SLA) and operates at peak performance levels? The first thing you need to do is collect the data. Not just from one domain but all domains at the same time. Data must be collected from various data points from all infrastructure modules, no matter how complicated.

Once the data is collected, application flows are detected, and the application path is computed in real time. The data is extracted from all data center points and correlated to determine the exact path and timing. The path visually presents the correct application route and the devices the application traverses.

For example, the application path can instantly show application A flowing over a particular switch, router, firewall, load balancer, web frontend, and database server.
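
As a hypothetical sketch of that correlation step, the snippet below groups flow records exported by different devices by their 5-tuple and orders them by timestamp to reconstruct the path; the device names, record fields, and addresses are assumptions for illustration.

```python
# A hypothetical sketch of the correlation step: group flow records
# exported by different devices by their 5-tuple, then order each group
# by timestamp to reconstruct the application path. Device names, fields,
# and addresses are assumptions for illustration.

from collections import defaultdict

flow_records = [
    {"device": "core-router",   "ts": 2, "flow": ("10.1.1.5", "10.2.2.8", 51512, 443)},
    {"device": "edge-switch",   "ts": 1, "flow": ("10.1.1.5", "10.2.2.8", 51512, 443)},
    {"device": "firewall",      "ts": 3, "flow": ("10.1.1.5", "10.2.2.8", 51512, 443)},
    {"device": "load-balancer", "ts": 4, "flow": ("10.1.1.5", "10.2.2.8", 51512, 443)},
]

paths = defaultdict(list)
for record in flow_records:
    paths[record["flow"]].append(record)

for flow, hops in paths.items():
    ordered = [h["device"] for h in sorted(hops, key=lambda h: h["ts"])]
    print(flow, "->", " -> ".join(ordered))
```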

 

It’s An Application World

The application path defines what infrastructure components are being used and will change dynamically in today’s environment. The application that rides over the infrastructure uses every element in the data center, including interconnects to the cloud and other off-premise physical or virtual locations.

Customers are well informed about products and services, as they have all the information at their fingertips. This makes it complex for applications to deliver excellent results. To comprehend the business’s top priorities and work towards them, having the right Objectives and Key Results (OKRs) is essential. You can review some examples of OKRs by Profit if you want to learn more about this topic.

That said, it is essential to note that an issue with critical application performance can happen in any compartment or domain on which the application depends. In a world that monitors everything but monitors in a silo, it’s difficult to understand the cause of the application problem quickly. The majority of time is spent isolating and identifying rather than fixing the problem.

Imagine a monitoring solution helping customers select the best coffee shop to order a cup from. The customer has a variety of coffee shops to choose from, and there are several lanes in each. One lane could be blocked due to a spillage, while another could be slow due to a cashier in training. Wouldn’t it be great to have all this information upfront before leaving your house?

 

Economic Value

Time is money in two ways. First is the cost, and the other is damage to the company brand due to poor application performance. Each device requires several essential data points to monitor. These data points contribute to determining the overall health of the infrastructure.

Fifteen data points per device aren’t too bad to monitor, but what about a million data points? These points must be observed and correlated across teams to draw conclusions about application performance. Unfortunately, the traditional siloed monitoring approach carries a high time cost.

With traditional monitoring methods, in the face of application downtime, the process of elimination is slow and answers are not easily placed in front of the engineer. That time has a cost. Given the amount of data today, it takes on average 4 hours to repair an outage, and an outage costs $300K.

If there is lost revenue, the cost to the enterprise is, on average, $5.6M. How long will repair take, and what cost will a company incur, if the amount of data increases 18x? A recent report states that only 21% of organizations can successfully troubleshoot within the first hour. That’s an expensive hour that could have been saved with the right monitoring solution.
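
As a back-of-the-envelope sketch using the figures above, the snippet below works out the hourly cost of an outage and what a hypothetical 18x longer repair would cost; the linear scaling is an assumption for illustration, not a measured result.

```python
# A back-of-the-envelope sketch using the figures quoted above ($300K per
# outage, 4-hour average repair). Scaling repair time linearly with an 18x
# increase in data points is an assumption for illustration, not a
# measured result.

COST_PER_OUTAGE = 300_000   # USD, average quoted above
HOURS_TO_REPAIR = 4         # average quoted above

cost_per_hour = COST_PER_OUTAGE / HOURS_TO_REPAIR
scaled_hours = HOURS_TO_REPAIR * 18

print(f"Cost per hour of outage: ${cost_per_hour:,.0f}")
print(f"Hypothetical 18x outage: {scaled_hours} hours, "
      f"${scaled_hours * cost_per_hour:,.0f}")
```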

There is real economic value in applying the correct monitoring solution to the problem and adequately correlating between silos. What if a solution does all the correlation? The time value is now shortened because, algorithmically, the system is carrying out the heavy-duty manual work for you.

Conclusion:

In a world inundated with data, correlating disparate data points is a powerful skill. By uncovering hidden patterns and connections, we can gain valuable insights and make informed decisions. Whether in business, healthcare, finance, or any other field, leveraging the potential of correlating disparate data points can lead to innovative solutions and improved outcomes. Embracing this approach can propel organizations forward and unlock new opportunities for growth and success.