Dynamic Workload Scaling (DWS)

Dynamic Workload Scaling (DWS) – Monitor and distribute traffic at user-defined thresholds

Data centers are under pressure to support the ability to burst new transactions to available virtual machines (VMs). In some cases, the VMs used to handle the additional load are geographically dispersed, with the data centers connected by a Data Center Interconnect (DCI) link. The ability to migrate workloads within an enterprise hybrid cloud, or in a hybrid cloud solution between an enterprise and a service provider, is key to business continuity during both planned and unplanned outages.

A new technology introduced by Cisco, called Dynamic Workload Scaling (DWS), satisfies the requirement of dynamically bursting workloads to available resource pools (VMs) based on user-defined thresholds. It is tightly integrated with the Cisco Application Control Engine (ACE) and Cisco's dynamic MAC-in-IP encapsulation technology, Overlay Transport Virtualization (OTV), enabling resource distribution across data center sites. OTV provides the LAN extension used to preserve the state of a virtual machine as it moves between locations, while the ACE provides the load-balancing functionality.

 

Figure: Dynamic Workload Scaling

 

How does it work? Key points

- DWS monitors VM capacity for an application and expands that application to another resource pool during periods of peak usage, providing an ideal solution for distributed applications spread among geographically dispersed data centers.

- DWS uses both the ACE and OTV technologies to build a MAC table. It compares the locally learned MAC entries with those learned via the OTV link to determine whether a MAC entry is considered "local" or "remote" (a minimal sketch of this classification follows below).
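
The exact ACE and OTV data structures are not exposed at this level of detail, so the following is only a minimal Python sketch of the local/remote classification idea. It assumes we already hold the locally learned MAC addresses and those learned across the OTV overlay; the names local_macs and otv_macs are hypothetical, not real product structures.

```python
# Simplified illustration: classify a MAC entry as "local" or "remote"
# by comparing the locally learned MAC table with entries learned
# across the OTV overlay (the DCI link).

def classify_macs(local_macs: set[str], otv_macs: set[str]) -> dict[str, str]:
    """Return a per-MAC label: 'local' if learned on a local interface,
    'remote' if learned only via the OTV link."""
    classification = {}
    for mac in sorted(local_macs | otv_macs):
        is_remote = mac in otv_macs and mac not in local_macs
        classification[mac] = "remote" if is_remote else "local"
    return classification

# Example: two VMs in the primary data center, one reachable only across OTV.
local_macs = {"0050.56aa.0001", "0050.56aa.0002"}
otv_macs = {"0050.56bb.0003"}
print(classify_macs(local_macs, otv_macs))
# {'0050.56aa.0001': 'local', '0050.56aa.0002': 'local', '0050.56bb.0003': 'remote'}
```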

 

Figure: DWS Flow

 

- The ACE monitors the utilization of the "local" VMs. From these values, the ACE can compute the average load of the local data center.

- DWS uses two APIs: one to poll server load information from VMware vCenter, and another to poll OTV information from the Nexus 7000.

- During normal load conditions, when the data center is experiencing low utilization, the ACE load balances incoming traffic to the local VMs.

- However, when the data center experiences high utilization and crosses the predefined thresholds, the ACE adds the "remote" VMs to its load-balancing mechanism. A simplified sketch of this logic follows below.
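
As a rough illustration of the decision just described, the sketch below combines per-VM load (polled in practice from VMware vCenter) with MAC locality (polled in practice from the Nexus 7000 over OTV) and widens the load-balancing pool only when the average local utilization crosses the user-defined threshold. The function names, inputs, and the 80% threshold are assumptions for illustration, not the ACE's actual API or defaults.

```python
# Simplified sketch of the DWS burst decision: keep traffic on local VMs
# under normal load, add remote VMs once the local average crosses the
# user-defined threshold.

def select_pool(vm_load: dict[str, float],
                vm_locality: dict[str, str],
                burst_threshold: float = 80.0) -> list[str]:
    """Return the list of VMs the load balancer should use."""
    local_vms = [vm for vm, loc in vm_locality.items() if loc == "local"]
    remote_vms = [vm for vm, loc in vm_locality.items() if loc == "remote"]

    # Average utilization of the local data center only.
    avg_local_load = sum(vm_load[vm] for vm in local_vms) / len(local_vms)

    if avg_local_load < burst_threshold:
        return local_vms                  # normal conditions: keep traffic local
    return local_vms + remote_vms         # congestion: burst to the remote pool

# Example: local VMs averaging 85% pushes the remote VM into the pool.
load = {"vm1": 90.0, "vm2": 80.0, "vm3": 20.0}
locality = {"vm1": "local", "vm2": "local", "vm3": "remote"}
print(select_pool(load, locality))        # ['vm1', 'vm2', 'vm3']
```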

 

DWS design considerations

During congestion, the ACE adds the "remote" VM to its load-balancing algorithm. The remote VM, which is placed in the secondary data center, can add additional load on the DCI, essentially hairpinning traffic for a period of time as ingress traffic destined for the "remote" VM continues to flow via the primary data center. DWS should therefore be used in conjunction with the Locator/ID Separation Protocol (LISP) to take advantage of its automatic move detection and optimal ingress path selection.


2 Comments

  • Ivan Pepelnjak

    Hey, isn’t ACE EOL’d for a few years? And DWS seemed to be all the rage in 2011 (at least based on the dates I found on docs on Cisco.com). Is there something new that I missed?

  • Matt Conran

    Hi Ivan, thanks for the comment!
    So what would be your recommendation here? A different vendor, like F5 (ScaleN) or SDX SLB?
