
Data Center Design Principles – Modular building blocks

 

“Huge domains fail for a reason” – Russ White

The ability to scale the data center rests on a modular design built from repeatable building blocks. In the virtual data center, these building blocks are referred to as “Points of Delivery” (PODs) and “Integrated Compute Stacks” (ICSs), such as VCE Vblock and FlexPod.

A POD can be defined as a modular unit of data center components that supports incremental build-out of the data center. PODs are the basis for modularity within the cloud data center. To scale a POD and expand incrementally, designers add Integrated Compute Stacks (ICS) within it; the ICS is a second, smaller unit that is also added as a repeatable building block.

The general idea behind these two forms of modularity is to have consistent, predictable configurations with supporting implementation plans that can be rolled out when a predefined performance limit is reached. For example, if POD-A reaches 70% capacity, a new POD called POD-B is implemented in exactly the same fashion. The key point is that the modular architecture provides a predictable set of resource characteristics that are added as the need arises. This brings numerous benefits for fault isolation, capacity planning and ease of new technology adoption.
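The 70% rule above can be captured in a few lines of logic. The sketch below is a minimal Python illustration of threshold-driven build-out; the POD names, capacity figure and helper function are assumptions for illustration only.

# Minimal sketch of threshold-driven POD build-out (hypothetical names and capacity).
BUILD_OUT_THRESHOLD = 0.70  # roll out a new POD once an existing one hits 70% capacity

def needs_new_pod(used_workloads: int, pod_capacity: int) -> bool:
    """Return True when a POD crosses the predefined capacity limit."""
    return used_workloads / pod_capacity >= BUILD_OUT_THRESHOLD

pods = {"POD-A": {"capacity": 10_000, "used": 7_100}}

if needs_new_pod(pods["POD-A"]["used"], pods["POD-A"]["capacity"]):
    # POD-B is rolled out from the same implementation plan as POD-A
    pods["POD-B"] = {"capacity": 10_000, "used": 0}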

Special service PODs can be used for specific security and management functions.

 

POD Concept

 

Design Considerations

The size of the POD is bounded by the number of MAC addresses supported at the aggregation layer. Each vNIC requires a unique MAC address, which usually equates to 4 MAC addresses per VM. The Nexus 7000 series supports up to 128,000 MAC addresses, so a large POD design can enable 11,472 workloads, which translates to 11,472 VMs – 45,888 MAC addresses.
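As a quick sanity check, the sizing above reduces to simple arithmetic. The Python sketch below uses the figures quoted in this section (4 MAC addresses per VM, a 128,000-entry MAC table, 11,472 VMs); the variable names are illustrative.

# POD sizing check using the figures from the text.
MAC_TABLE_LIMIT = 128_000   # Nexus 7000 aggregation-layer MAC table size
MACS_PER_VM = 4             # one unique MAC per vNIC, roughly 4 vNICs per VM

vms_in_pod = 11_472
macs_consumed = vms_in_pod * MACS_PER_VM   # 45,888 MAC addresses

assert macs_consumed <= MAC_TABLE_LIMIT, "POD exceeds the aggregation MAC table"
print(f"{vms_in_pod} VMs consume {macs_consumed:,} of {MAC_TABLE_LIMIT:,} MAC entries")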

Sharing VLANs among different PODs is not recommended; filter VLANs on trunk ports to stop unnecessary MAC address flooding. Spanning VLANs across PODs would result in an end-to-end spanning tree, which should be avoided at all costs.

 

Multi-Tenancy in the Cloud Data Center

Within each of these PODs and ICS stacks, multi-tenancy and tenant separation are key. A tenant is essentially an entity that subscribes to cloud services, and it can be defined in two ways depending on where it sits. A tenant in the enterprise private cloud could be a department or business unit, whereas a tenant in the public cloud could be an individual customer or an organization.

A tenant, whether an individual or a business unit, can then deploy an application on the cloud infrastructure, and each tenant can have a different level of resource allocation within the cloud. Common service offerings fall into four tiers: Premium, Gold, Silver and Bronze. More recent tiers, such as Copper and Palladium, will be discussed in later posts.

The tenant does this by first selecting a network container that provides a virtual dedicated network (within a shared infrastructure). The customer then works through a VM sizing model, storage allocation and protection, and finally a disaster recovery tier.
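That selection process can be pictured as a simple record per tenant. The sketch below is a hypothetical Python representation of one tenant's order; every name and value is an assumption used purely to illustrate the four choices.

# Hypothetical tenant service selection (all values assumed).
tenant_order = {
    "tenant": "example-business-unit",
    "network_container": "Gold",                 # virtual dedicated network within shared infrastructure
    "vm_size": "Large",                          # from the compute sizing model
    "storage_protection": "Clone",               # SP - storage protection tier
    "disaster_recovery": "Remote replication",   # DR tier
}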

 

Virtual Data Center Service Tiers

SP – Storage Protection, DR – Disaster Recovery.

 

Example of Tiered Service Model

                    Gold                  Silver                Bronze
Services            FW and LB service     LB service            None
Bandwidth           40%                   30%                   20%
Segmentation        Single VRF            Single VRF            Single VRF
VLANs               Multiple VLANs        Multiple VLANs        Single VLAN
Data Protection     Clone                 Snap                  None
Data Recovery       Remote replication    Remote replication    None

 

Network Container

The type of service selected in the network container will vary depending on application requirements. In some cases, applications may require several tiers. For example, a Gold tier could require a three-tier application layout (front end, application and database), with each tier placed on a separate VLAN and requiring stateful services (dedicated virtual firewall and load-balancing instances). Other tiers may simply require a shared VLAN with front-end firewalling to restrict inbound traffic flows.

Usually, a tier will use a single VRF (VRF-lite), but the number of VLANs will vary depending on the service-level offering. For example, a cloud provider offering simple web hosting may provide a single VRF and a single VLAN. On the other hand, an enterprise customer with a multi-layer architecture may want multiple VLANs and services (load balancer, firewall, security groups, cache) for its application stack.
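A network container can be thought of as a VRF plus its VLANs and attached services. The sketch below is a hypothetical Python encoding of the two examples above; the VRF names, VLAN IDs and service labels are assumptions.

# Hypothetical network container definitions (names and VLAN IDs assumed).
web_hosting_container = {
    "vrf": "tenant-web-vrf",            # single VRF (VRF-lite)
    "vlans": [110],                     # single shared VLAN
    "services": ["front-end firewall"],
}

enterprise_container = {
    "vrf": "tenant-ent-vrf",            # still a single VRF per tenant
    "vlans": [210, 220, 230],           # front end, application, database
    "services": ["virtual firewall", "load balancer"],
}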

 

Compute Layer

The compute layer relates to the virtual servers and the resources available to the virtual machines. Service profiles can vary depending on VM attributes such as CPU, memory and storage capacity. At the compute layer, service tiers usually offer three compute workload sizes, as depicted in the table below.

 

Example of Compute Resources

                         Large              Medium             Small
vCPU per VM              1 vCPU             0.5 vCPU           0.25 vCPU
Cores per CPU            4                  4                  4
VMs per CPU              4 VMs              16 VMs             32 VMs
vCPU oversubscription    1:1 (1)            2:1 (0.5)          4:1 (0.25)
RAM allocation           16 GB dedicated    8 GB dedicated     4 GB shared
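One way to read the oversubscription row is that the vCPUs scheduled per physical core equal VMs per CPU multiplied by vCPU per VM, divided by cores per CPU. The Python sketch below applies that interpretation to the Medium profile from the table; it is an illustrative reading of the numbers, not a definitive sizing model.

# Sanity check of vCPU oversubscription for a compute profile (Medium column values).
def oversubscription_ratio(vms_per_cpu: int, vcpu_per_vm: float, cores_per_cpu: int) -> float:
    """vCPUs scheduled per physical core."""
    return (vms_per_cpu * vcpu_per_vm) / cores_per_cpu

medium = oversubscription_ratio(vms_per_cpu=16, vcpu_per_vm=0.5, cores_per_cpu=4)
print(f"Medium profile: {medium:.0f}:1 oversubscription")   # prints 2:1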

Compute profiles can also be associated with VMware Distributed Resource Scheduler (DRS) profiles to prioritize specific classes of VMs.

 

Storage Layer

This layer relates to storage allocation and the type of storage protection. For example, a Gold tier could offer three tiers of RAID-10 storage using 15K rpm FC, 10K rpm FC, and SATA drives, while a Bronze tier could offer just a single RAID-5 tier with SATA drives.
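As with the network and compute layers, the storage offering can be expressed as a simple catalogue. The sketch below is a hypothetical Python encoding of the Gold and Bronze examples above.

# Hypothetical storage tier catalogue based on the example above.
storage_tiers = {
    "Gold": [
        {"raid": "RAID-10", "drives": "15K rpm FC"},
        {"raid": "RAID-10", "drives": "10K rpm FC"},
        {"raid": "RAID-10", "drives": "SATA"},
    ],
    "Bronze": [
        {"raid": "RAID-5", "drives": "SATA"},
    ],
}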

 

 

 
