Virtual Data Center Design

In today’s rapidly evolving digital landscape, data centers support the growing demand for storage, computing power, and connectivity. However, traditional data centers face challenges like limited scalability, high operating costs, and physical space constraints. Many organizations are turning to virtual data centers as a more efficient and flexible solution to address these issues. In this blog post, we will delve into the concept of virtual data centers, exploring their benefits, key components, and significance in shaping the future of data management.

A virtual data center (VDC) is a software-defined infrastructure that leverages virtualization technologies to pool and abstract computing resources, storage, and networking capabilities. Unlike traditional data centers, which rely on physical servers and hardware, VDCs enable organizations to create, manage, and provision virtual machines (VMs) and applications on demand.

 

  • Gaining Efficiency

Deploying multiple tenants on a shared infrastructure is far more efficient than dedicating a physical device to each tenant. With a virtualized infrastructure, however, each tenant must be isolated from all other tenants sharing the same physical infrastructure.

For a data center network design, each network container requires path isolation, for example, 802.1Q tagging on a shared Ethernet link between two switches, and device virtualization at the different network layers, for example, Cisco Application Control Engine (ACE) or Cisco Firewall Services Module (FWSM) virtual contexts. To implement independent paths with this type of data center design, you can create a Virtual Routing and Forwarding (VRF) instance per tenant and map the VRF to Layer 2 segments.
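To make this concrete, here is a minimal IOS-style sketch of the idea, with hypothetical tenant names, VLAN IDs, and addressing (not taken from any specific design): an 802.1Q subinterface provides path isolation on the shared link, and the per-tenant VRF keeps that tenant's routes separate.

```
! Hypothetical tenant VRF (name and RD are illustrative only)
ip vrf TENANT-A
 rd 65000:10
!
! 802.1Q subinterface on the shared inter-switch link = path isolation
interface GigabitEthernet1/1.10
 encapsulation dot1Q 10
 ip vrf forwarding TENANT-A
 ip address 10.0.10.1 255.255.255.0
!
! Map the tenant's Layer 2 segment (SVI) into the same VRF
interface Vlan110
 ip vrf forwarding TENANT-A
 ip address 10.1.10.1 255.255.255.0
```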

Diagram: Virtual data center design (Cisco).

More recently, the Cisco ACI network introduced segmentation based on logical security zones known as endpoint groups (EPGs); for traffic to pass between endpoint groups, security constructs known as contracts are required. Cisco ACI still uses VRFs, but in a different role. Ansible, driven by Ansible variables, can then automate the deployment of these network and security constructs for the virtual data center, bringing consistency and helping to eliminate human error.

 

Before you proceed, you may find the following posts helpful for pre-information:

  1. Context Firewall
  2. Virtual Device Context
  3. Dynamic Workload Scaling
  4. ASA Failover
  5. Data Center Design Guide

 

Data Center Network Design

Key Virtual Data Center Design Discussion Points:


  • Introduction to Virtual Data Center Design and what is involved.

  • Highlighting the details of VRF-lite and how it works.

  • Critical points on the use of virtual contexts and how to implement them.

  • A final note on load distribution and application tier separation.

 

Back to basics with data center types.

Numerous kinds of data centers and service models are available. How a data center is categorized depends on several critical criteria: whether one or many organizations own it, how it fits into the topology of other data centers, and what technologies it uses for computing and storage. The main types of data centers include:

  • Enterprise data centers.
  • Managed services data centers.
  • Colocation data centers.
  • Cloud data centers.

You may build and maintain your own hybrid cloud data centers, lease space within colocation facilities, also known as colos, consume shared compute and storage services, or even use public cloud-based services.

 

Benefits of Virtual Data Centers:

1. Scalability: Virtual data centers offer unparalleled scalability, allowing businesses to quickly expand or contract their infrastructure based on their evolving needs. With the ability to provision additional resources in real time, organizations can quickly adapt to changing workloads, ensuring optimal performance and reducing downtime.

2. Cost Efficiency: Virtual data centers significantly reduce operating costs by eliminating the need for physical servers and reducing power consumption. Consolidating multiple VMs onto a single physical server optimizes resource utilization, improving cost efficiency and lowering hardware requirements.

3. Flexibility: Virtual data centers allow organizations to deploy and manage applications across multiple cloud platforms or on-premises infrastructure. This hybrid cloud approach enables seamless workload migration, disaster recovery, and improved business continuity.

Critical Components of Virtual Data Centers:

1. Hypervisor: At the core of a virtual data center lies the hypervisor, a software layer that partitions physical servers into multiple VMs, each running its own operating system and applications. Hypervisors enable the efficient utilization of hardware resources and facilitate VM management.

2. Software-Defined Networking (SDN): SDN allows organizations to define and manage their network infrastructure through software, decoupling network control from physical devices. This technology enhances flexibility, simplifies network management, and enables greater security and agility within virtual data centers.

3. Virtual Storage: Virtual storage technologies, such as software-defined storage (SDS), enable the pooling and abstraction of storage resources. This approach allows for centralized management, improved data protection, and simplified storage provisioning in virtual data centers.

 

Data center network design: VRF-lite

With VRF-lite, VRF information from a static or dynamic routing protocol is carried hop by hop across the Layer 3 domain, and multiple VLANs in the Layer 2 domain are mapped to the corresponding VRFs. VRF-lite is therefore known as a hop-by-hop virtualization technique. The VRF instance logically separates tenants on the same physical device from a control plane perspective.

From a data plane perspective, the VLAN tags provide path isolation on each point-to-point Ethernet link that connects to the Layer 3 network. VRFs provide per-tenant routing and forwarding tables and ensure that no server-to-server traffic is permitted unless explicitly allowed.
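As a sketch of the control-plane side (the process ID and prefixes are hypothetical), a separate routing protocol instance can run per VRF so that each tenant keeps its own routing and forwarding table hop by hop:

```
! One OSPF process per tenant VRF keeps that tenant's routes isolated
router ospf 10 vrf TENANT-A
 network 10.0.10.0 0.0.0.255 area 0
 network 10.1.10.0 0.0.0.255 area 0
!
! Inspect the per-tenant routing table:
! show ip route vrf TENANT-A
```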

Diagram: Virtual routing and forwarding (VRF).

 

Service Modules in Active/Active Mode

Multiple virtual contexts

The service layer must also be virtualized for tenant separation. The network services layer can be designed with a dedicated Data Center Services Node (DSN) or with external physical appliances connected to the core/aggregation layer. Cisco DSN data center designs use virtual device contexts (VDCs), virtual PortChannels (vPCs), the Virtual Switching System (VSS), VRFs, and Cisco FWSM and Cisco ACE virtualization.

This post will look at a DSN as a self-contained Catalyst 6500 series with ACE and firewall service modules. Virtualization at the services layer can be accomplished by creating separate contexts representing separate virtual devices. Multiple contexts are similar to having multiple standalone devices.

The Cisco Firewall Services Module ( FWSM ) provides a stateful inspection firewall service within a Catalyst 6500. In addition, it offers separation through a virtual security context that can be transparently implemented as Layer 2 or as a router “hop” at Layer 3. The Cisco Application Control Engine ( ACE ) module also provides a range of load-balancing capabilities within a Catalyst 6500.
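As an illustrative sketch of this service-layer virtualization, the following FWSM/ASA-style configuration (context names, VLANs, and file names are hypothetical) creates two virtual contexts, each behaving like a standalone firewall with its own allocated interfaces and configuration file:

```
! Enable multiple-context mode
mode multiple
!
! Each context is a separate virtual firewall
context TENANT-A
 allocate-interface Vlan10
 allocate-interface Vlan20
 config-url disk:/tenant-a.cfg
!
context TENANT-B
 allocate-interface Vlan30
 allocate-interface Vlan40
 config-url disk:/tenant-b.cfg
```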

 

FWSM features:

  • Route health injection (RHI)
  • Virtualization (context and resource allocation)
  • Application inspection
  • Redundancy (active-active context failover)
  • Security and inspection
  • Network Address Translation (NAT) and Port Address Translation (PAT)
  • URL filtering
  • Layer 2 and Layer 3 firewalling

ACE features:

  • Route health injection (RHI)
  • Virtualization (context and resource allocation)
  • Probes and server farms (service health checks and load-balancing predictor)
  • Stickiness (source IP and cookie insert)
  • Load balancing (protocols, stickiness, FTP inspection, and SSL termination)
  • NAT
  • Redundancy (active-active context failover)
  • Protocol inspection

 

You can offer high availability and efficient load distribution with a context design. The first FWSM and ACE are primary for the first context and standby for the second context; the second FWSM and ACE are primary for the second context and standby for the first. Traffic is not automatically load-balanced equally across the contexts; additional configuration is needed to place different subnets in specific contexts.
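A minimal ASA/FWSM-style sketch of this active/active arrangement (context names and group assignments are hypothetical): each physical module is active for one failover group and standby for the other, and each tenant context joins one group.

```
! Failover group 1 is active on the primary unit, group 2 on the secondary
failover group 1
 primary
 preempt
failover group 2
 secondary
 preempt
!
! First context active on the first module, second on the second
context TENANT-A
 join-failover-group 1
context TENANT-B
 join-failover-group 2
```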

 

Diagram: Virtual firewall and load balancing.

 

Compute separation

Traditional security architecture placed the security device in a central position, either in “transparent” or “routed” mode. All inter-host traffic had to be routed and filtered by the firewall device at the aggregation layer before communication could occur. This works well in lightly virtualized environments with few VMs, but a high-density model (a heavily virtualized environment) forces us to reconsider firewall scale requirements at the aggregation layer.

To address the challenge of VM density and the need to move VMs while preserving their security policies, it is recommended to deploy virtual firewalls at the access layer. This creates intra- and inter-tenant zones and enables finer security granularity within single or multiple VLANs.

 

Application tier separation

The network-centric model relies on VLAN separation for three-tier application deployments: each tier has its own VLAN within one VRF instance. If VLAN-to-VLAN communication needs to occur, traffic must be routed via a default gateway, where security policies can enforce traffic inspection or redirection.
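As a sketch of this network-centric layout (VLAN IDs and addressing are hypothetical), each tier gets its own SVI inside a single tenant VRF, so all inter-tier traffic crosses the default gateway, where it can be inspected or redirected:

```
! One VLAN/SVI per application tier, all in the same tenant VRF
interface Vlan10
 description WEB tier
 ip vrf forwarding TENANT-A
 ip address 10.1.10.1 255.255.255.0
!
interface Vlan20
 description APP tier
 ip vrf forwarding TENANT-A
 ip address 10.1.20.1 255.255.255.0
!
interface Vlan30
 description DB tier
 ip vrf forwarding TENANT-A
 ip address 10.1.30.1 255.255.255.0
```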

The vShield (vApp) virtual appliance can inspect inter-VM traffic among ESX hosts, with Layer 2, 3, 4, and 7 filters supported. A drawback of this approach is that the firewall can become a choke point. In contrast to the network-centric model, the server-centric model uses separate VM vNICs to daisy-chain tiers together.

 

Data center network design with Security Groups

The concept of security groups replaces subnet-level firewalls with per-VM firewalls/ACLs. With this approach, there is no traffic tromboning and there are no single choke points. Security groups can be implemented with CloudStack, OpenStack (a Neutron plugin extension), and VMware vShield Edge. They are elementary to use: you assign VMs to groups and specify filters between the groups.

Security groups are suitable for policy-based filtering but do not maintain the data-plane state required to defend against, for example, replay attacks. They give you echo-based functionality, which should be good enough for current TCP stacks that have been hardened over the last 30 years. But if you require full stateful inspection, or you do not regularly patch your servers, you should implement a complete stateful firewall.
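As an illustration, here is a minimal OpenStack CLI sketch (group names, ports, and image/flavor values are hypothetical, not from the original text): groups are created per tier, and filters reference groups rather than subnets.

```
# Create one security group per tier
openstack security group create web
openstack security group create app

# Allow HTTP from anywhere to the web group
openstack security group rule create --protocol tcp --dst-port 80 web

# Allow the app tier to be reached only from members of the web group
openstack security group rule create --protocol tcp --dst-port 8080 \
  --remote-group web app

# Attach a VM to a group at boot time
openstack server create --image ubuntu --flavor m1.small \
  --security-group web web-vm-1
```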

 

Dynamic Workload Scaling (DWS)

In today’s fast-paced digital landscape, businesses strive to deliver high-quality services while minimizing costs and maximizing efficiency. To achieve this, organizations are increasingly adopting dynamic workload scaling techniques. This blog post will explore the concept of dynamic workload scaling, its benefits, and how it can help businesses optimize their operations.

  • Adjustment of resources

Dynamic workload scaling refers to the automated adjustment of computing resources to match the changing demands of a workload. This technique allows organizations to scale their infrastructure up or down in real time based on the workload requirements. By dynamically allocating resources, businesses can ensure that their systems operate optimally, regardless of varying workloads.

  • Defined Thresholds

Dynamic workload scaling is all about monitoring and distributing traffic at user-defined thresholds. Data centers are under pressure to burst new transactions to available virtual machines (VMs). In some cases, the VMs used to handle the additional load will be geographically dispersed, with the data centers connected by a Data Center Interconnect (DCI) link. The ability to migrate workloads within an enterprise hybrid cloud, or in a hybrid cloud solution between enterprise and service provider, is critical for business continuity during planned and unplanned outages.

 

Before you proceed, you may find the following posts helpful:

  1. Network Security Components
  2. Virtual Data Center Design
  3. How To Scale Load Balancer
  4. Distributed Systems Observability
  5. Active Active Data Center Design
  6. Cisco Secure Firewall

 

Dynamic Workloads

Key Dynamic Workload Scaling Discussion Points:


  • Introduction to Dynamic Workload Scaling and what is involved.

  • Highlighting the details of dynamic workloads and how they can be implemented.

  • Critical points on how Cisco approaches Dynamic Workload Scaling.

  • A final note on design considerations.

 

Back to basics with OTV.

Overlay Transport Virtualization (OTV) is an IP-based technology that provides a Layer 2 extension between data centers. OTV is transport agnostic, meaning that the transport infrastructure between data centers can be dark fiber, MPLS, an IP-routed WAN, ATM, Frame Relay, and so on.

The sole prerequisite is that the data centers must have IP reachability between them. OTV permits multipoint services for Layer 2 extension and separated Layer 2 domains between data centers, maintaining an IP-based interconnection’s fault-isolation, resiliency, and load-balancing benefits.

Unlike traditional Layer 2 extension technologies, OTV introduces the concept of Layer 2 MAC routing, in which a control-plane protocol advertises the reachability of Layer 2 MAC addresses. As a result, MAC routing has enormous advantages over traditional Layer 2 extension technologies, which rely on data-plane learning and flood Layer 2 traffic across the transport infrastructure.
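For reference, a minimal NX-OS-style OTV sketch for one site (the interface names, multicast groups, site values, and VLAN ranges are hypothetical):

```
feature otv
!
! Site-local values (illustrative only)
otv site-vlan 99
otv site-identifier 0x1
!
interface Overlay1
 otv join-interface Ethernet1/1
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/28
 otv extend-vlan 100-110
 no shutdown
```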

 

Cisco and Dynamic Workloads

A new technology introduced by Cisco, called Dynamic Workload Scaling (DWS), satisfies the requirement of dynamically bursting workloads to available resource pools (VMs) based on user-defined thresholds. It is tightly integrated with the Cisco Application Control Engine (ACE) and Cisco’s dynamic MAC-in-IP encapsulation technology, Overlay Transport Virtualization (OTV), enabling resource distribution across data center sites. OTV provides the LAN extension that preserves a virtual machine’s state as it moves between locations, and ACE delivers the load-balancing functionality.

 

Diagram: Dynamic workload and dynamic workload scaling.

 

Dynamic workload scaling: How does it work?  

  • DWS monitors the VM capacity for an application and expands that application to another resource pool during periods of peak usage. This provides an ideal solution for applications distributed among geographically dispersed data centers.
  • DWS uses the ACE and OTV technologies to build a MAC table. It monitors the local MAC entries and those located via the OTV link to determine if a MAC entry is considered “Local” or “Remote.”
  • The ACE monitors the utilization of the “local” VM. From these values, the ACE can compute the average load of the local Data Center.
  • DWS uses two APIs: one polls server load information from VMware vCenter, and the other polls OTV information from the Nexus 7000.
  • During normal load conditions, when the data center is experiencing low utilization, the ACE load-balances incoming traffic to the local VMs.
  • However, when the data center experiences high utilization and crosses the predefined thresholds, the ACE adds the “remote” VM to its load-balancing mechanism, as sketched below.
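To illustrate the load-balancing side, here is an ACE-style sketch with hypothetical names and addresses. Both the local and the remote VM sit behind the same server farm; the DWS logic, not this static configuration, decides when the remote entry actually receives traffic.

```
! Health probe shared by all real servers
probe http HTTP-PROBE
  interval 10
  expect status 200 200
!
! Local VM in the primary data center
rserver host VM-LOCAL-1
  ip address 10.1.1.10
  inservice
!
! Remote VM, reachable over the OTV-extended VLAN
rserver host VM-REMOTE-1
  ip address 10.1.1.20
  inservice
!
serverfarm host WEB-FARM
  probe HTTP-PROBE
  rserver VM-LOCAL-1
    inservice
  rserver VM-REMOTE-1
    inservice
```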
Diagram: Workload scaling and its operations.

 

Dynamic workload scaling: Design considerations

During congestion, the ACE adds the “remote” VM to its load-balancing algorithm. The remote VM, placed in the secondary data center, could add load on the DCI link, essentially hairpinning traffic for some time, because ingress traffic for the “remote” VM continues to flow via the primary data center. DWS should therefore be used with the Locator/ID Separation Protocol (LISP) to enable automatic move detection and optimal ingress path selection.

 

Benefits of Dynamic Workload Scaling:

1. Improved Efficiency:

Dynamic workload scaling enables businesses to allocate resources precisely as needed, eliminating the inefficiencies associated with over-provisioning or under-utilization. Organizations can optimize resource utilization and reduce operational costs by automatically scaling resources up during periods of high demand and scaling them down during periods of low demand.

2. Enhanced Performance:

With dynamic workload scaling, businesses can effectively handle sudden spikes in workload without compromising performance. Organizations can maintain consistent service levels and ensure smooth operations during peak times by automatically provisioning additional resources when required. This leads to improved customer satisfaction and retention.

3. Cost Optimization:

Traditional static infrastructure requires businesses to provision resources based on anticipated peak workloads, often leading to over-provisioning and unnecessary costs. Dynamic workload scaling allows organizations to provision resources on demand, resulting in cost savings by paying only for the resources utilized. Additionally, by scaling down resources during periods of low demand, businesses can further reduce operational expenses.

4. Scalability and Flexibility:

Dynamic workload scaling allows businesses to scale their operations as needed. Whether expanding to accommodate business growth or handling seasonal fluctuations, organizations can easily adjust their resources to match the workload demands. This scalability and flexibility enable businesses to respond quickly to changing market conditions and stay competitive.

Dynamic workload scaling has emerged as a crucial technique for optimizing efficiency and performance in today’s digital landscape. By dynamically allocating computing resources based on workload requirements, businesses can improve efficiency, enhance performance, optimize costs, and achieve scalability. Implementing robust monitoring systems, embracing automation, and leveraging cloud computing services are critical steps toward successful dynamic workload scaling. By adopting this approach, organizations can stay agile and competitive and deliver exceptional customer service.

Key Features of Cisco Dynamic Workload Scaling:

Intelligent Automation:

Cisco’s dynamic workload scaling solutions leverage intelligent automation capabilities to monitor real-time workload demands. By analyzing historical data and utilizing machine learning algorithms, Cisco’s automation tools can accurately predict future workload requirements and proactively scale resources accordingly.

Application-Aware Scaling:

Cisco’s dynamic workload scaling solutions are designed to understand the unique requirements of different applications. By utilizing application-aware scaling, Cisco can allocate resources based on the specific needs of each workload, ensuring optimal performance and minimizing resource wastage.

Seamless Integration:

Cisco’s dynamic workload scaling solutions seamlessly integrate with existing IT infrastructures, allowing businesses to leverage their current investments. This ensures a smooth transition to dynamic workload scaling without extensive infrastructure overhauls.

Conclusion:

In today’s dynamic business environment, efficiently managing and scaling workloads is critical for organizational success. Cisco’s dynamic workload scaling solutions provide businesses with the flexibility, performance optimization, and cost savings necessary to thrive in an ever-changing landscape. By leveraging intelligent automation, application-aware scaling, and seamless integration, Cisco empowers organizations to adapt and scale their workloads effortlessly. Embrace Cisco’s dynamic workload scaling and unlock the full potential of your business operations.