
Data Centre Failure – Series 2

[stextbox id=”black” color=”000000″ ccolor=”000000″ bgcolor=”F7F7F7″ cbgcolor=”F7F7F7″ bgcolorto=”F5F5F5″ cbgcolorto=”F5F5F5″ image=”null”]This blog is the second in a series discussing the tale of active-active data centres and data centre failover. The first post focused on GTM DNS-based load balancing and introduced the failover challenges. This post addresses database challenges, the third covers storage best practices, and the final post will focus on ingress and egress traffic flows. Ivan also has a great webinar on active-active data centres.[/stextbox]

A Database Management System (DBMS) is a software application that interacts with users and other applications. It sits behind other elements known as "middleware", between the application and the storage tier, connecting the software components. Not all environments use databases; some store data in files, such as MS Excel, and the processing of data is not always done via query languages. Hadoop, for example, provides a framework to access data stored in files. Popular DBMSs include MySQL, Oracle, PostgreSQL, Sybase and IBM DB2. Database storage differs depending on the needs of the system.

What does a DBMS provide for applications? It provides a means to access massive amounts of persistent data. Databases handle terabytes of data every day, usually far more than fits into the memory of a standard operating system. The size of the data and the number of connections mean the performance of the database directly affects application performance. Databases carry out thousands of complex queries per second over terabytes of data. The data is persistent, meaning it outlives the application's execution: it does not go away and is still there after the program stops running. Many users or applications access the data concurrently, and measures known as concurrency controls ensure that data is not overwritten by concurrent users and that concurrent operations produce correct results. Concurrency control does not mean exclusive access to the database; the control occurs on individual data items, allowing many users to access the database as long as they touch different data items.
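Persistence is easy to see in a few lines of code. The sketch below uses Python's built-in sqlite3 module (SQLite stands in for a full DBMS here, and the file path and table are illustrative): data committed through one connection survives after that connection, and the "application", goes away.

```python
import os
import sqlite3
import tempfile

# An on-disk database file (not ":memory:") so the data persists.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()
conn.close()  # the "application" stops running

# A second, completely independent connection still sees the data.
conn2 = sqlite3.connect(path)
balance = conn2.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100
conn2.close()
```

Concurrency control works the same way in miniature: two connections to the same file can both read, but SQLite locks the data being written so concurrent writers cannot silently overwrite each other.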

Database Concepts

The Data Model refers to how the data is structured in the database. There are a number of options, such as the relational data model, where data is a set of records; XML, where data is a hierarchical structure of labeled values; and the graph model, where all data is represented by nodes and edges.

The Schema sets up the structure of the database. You have to structure the data before you build the application; the database designers establish the schema, and all data is stored within it. The schema doesn't change much, but the data changes quickly and constantly. The Data Definition Language (DDL) is what sets up the schema; it is a standard set of commands that defines the data structures. Once the schema is set up and the data is loaded, you start querying and modifying the data with the Data Manipulation Language (DML), the statements used to retrieve and work with data.
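The DDL/DML split is visible in any SQL session. A minimal sketch, again using Python's sqlite3 with an invented `products` table: the schema is defined once with DDL, then the data underneath it churns through DML statements.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the schema once, up front.
conn.execute("""
    CREATE TABLE products (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        price REAL
    )
""")

# DML: the data changes constantly while the schema stays put.
conn.execute("INSERT INTO products (name, price) VALUES ('widget', 9.99)")
conn.execute("UPDATE products SET price = 8.99 WHERE name = 'widget'")
rows = conn.execute("SELECT name, price FROM products").fetchall()
print(rows)  # [('widget', 8.99)]
```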

The SQL query language is a standardized language based on relational algebra. It is a programming language used to manage data in a relational database and is supported by all major database systems. It can be used through a GUI, a command prompt, or embedded in programs. A SQL query engages the database's query optimizer, which takes the query and determines the optimum way to execute it against the database. There are two parts to the language: the Data Definition Language (DDL) and the Data Manipulation Language (DML). DDL creates and drops tables, while the DML (already mentioned) queries and modifies the database with SELECT, INSERT, DELETE and UPDATE statements. The SELECT statement is the most commonly used and performs the database query.
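You can ask the optimizer directly how it plans to run a query. A sketch using SQLite's EXPLAIN QUERY PLAN (the table and index names are made up for illustration): with an index on the filtered column, the optimizer chooses an index search rather than a full table scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("CREATE INDEX idx_customer ON orders (customer)")

# The optimizer reports its chosen strategy without executing the query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'"
).fetchall()
detail = plan[0][3]
print(detail)  # e.g. SEARCH orders USING INDEX idx_customer (customer=?)
```

Drop the index and the same plan degrades to a SCAN of the whole table, which is exactly the kind of difference that matters at thousands of queries per second.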

Database challenges

The database design and failover appetite is a business, non-technical decision. The business must decide on acceptable RTO (recovery time objective) and RPO (recovery point objective) values. How accurate does your data need to be, and how long can a client application remain in a read-only state?

There are three main options: (a) distributed databases with a two-phase commit, (b) log shipping, and (c) read-only and read-write copies with synchronous replication.

With distributed databases and a two-phase commit, you have multiple synchronized copies of the database. It's very complex, and latency can be a real problem affecting application performance; many people skip it and go for log shipping instead.

Log shipping maintains a separate copy of the database on a standby server: two copies of a single database on different computers (or on the same computer with separate instances), the primary database and the secondary database. Only one copy is available at any given time. Any changes to the primary database are log shipped, or propagated, to the other copy. Some environments add a third instance, known as a monitor server, which records history, checks status, and tracks the details of log shipping. The drawback to log shipping is a non-zero RPO: a transaction written just before the failure may be lost. Log shipping cannot guarantee zero data loss.

An enhancement to log shipping is read-only and read-write copies of the database with synchronous replication between the two. With this method there is no data loss, and it's not as complicated as distributed databases with a two-phase commit.
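The non-zero RPO falls straight out of the mechanics. A toy simulation (deliberately simplified, not any vendor's implementation): the primary commits transactions to its log, the log ships to the standby in batches, and anything committed after the last shipment is gone on failover.

```python
SHIP_EVERY = 3  # ship the log to the standby after every 3 commits

primary_log = []
standby_log = []

def commit(txn):
    """Commit on the primary; ship the log in batches."""
    primary_log.append(txn)
    if len(primary_log) % SHIP_EVERY == 0:
        standby_log[:] = primary_log  # batch shipment to the standby

for i in range(1, 8):  # transactions t1..t7
    commit(f"t{i}")

# Primary fails here. The last shipment happened after t6,
# so t7 was committed on the primary but never reached the standby.
lost = [t for t in primary_log if t not in standby_log]
print(lost)  # ['t7']
```

Shrinking the shipping interval shrinks the window but never closes it; only synchronous replication gets the RPO to zero.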


If you have a transactional database and all the data is in one data centre, then when you start the VM in the other DC you will have a problem with latency between the write database and the database client. There is not much you can do about latency other than shorten the link. Some believe that WAN optimization will decrease latency, but many of these solutions just add to it. How badly the VM is affected depends on how well written the application is: with a very badly written application, a few milliseconds can destroy performance. How quickly can you send a SQL query across the WAN link? How many queries per transaction does the application make? Badly written applications require transactions encompassing many queries.
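The arithmetic is unforgiving because each query costs at least one round trip. A back-of-the-envelope sketch (the query counts and RTTs are illustrative assumptions, not measurements):

```python
def txn_time_ms(queries_per_txn, rtt_ms):
    """Lower bound on transaction time: one round trip per query."""
    return queries_per_txn * rtt_ms

# A chatty application issuing 200 queries per transaction:
lan = txn_time_ms(200, 0.5)   # client and database in the same DC
wan = txn_time_ms(200, 10.0)  # client moved to the other DC
print(lan, wan)  # 100.0 2000.0
```

The same transaction goes from 100 ms to 2 seconds purely from round-trip latency; bandwidth, and therefore WAN optimization, never enters the equation.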

"Swim Lanes" and Ivan Pepelnjak

A better approach is to use multiple application stacks in different data centres, with load balancing used to forward traffic to each instance. It is better to have multiple application stacks (known as swim lanes) that are completely independent. Multiple instances of the same application allow you to take an instance offline without affecting the others. Swim lanes is a term I picked up from Ivan Pepelnjak's webinars. In this case you could use a distributed database with a two-phase commit, but those issues were discussed above.


A better approach is to have a single read-write database server and ship the changes to a read-only database server. With a two-application-stack example, one application stack is read-only and eventually consistent, and the other is read-write. So if the client needs to make a change, for example submit an order, how do you do this from the read-only data centre? There are a number of ways. One is client software: the application knows about the transaction, uses a different hostname, and redirects the request to the read-write database. The hostname can be used in conjunction with a load balancer to redirect queries to the correct database. Another method is for the application to hold two instances of the database, a read-only and a read-write instance. Every transaction knows whether it is read-only or read-write and uses the appropriate instance: carrying out a purchase would trigger the read-write instance, and browsing products would trigger the read-only one.
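The two-instance method amounts to read/write splitting in the client. A minimal sketch under stated assumptions (the class, connection names and verb list are all invented for illustration): the application holds both connections and picks one per statement based on whether it mutates data.

```python
# Statements starting with these verbs must go to the read-write primary.
WRITE_VERBS = ("insert", "update", "delete", "create", "drop")

class SplitRouter:
    """Route each SQL statement to the read-write or read-only instance."""

    def __init__(self, rw_conn, ro_conn):
        self.rw = rw_conn  # read-write primary (e.g. in DC 1)
        self.ro = ro_conn  # read-only, eventually consistent replica (DC 2)

    def route(self, sql):
        verb = sql.lstrip().split()[0].lower()
        return self.rw if verb in WRITE_VERBS else self.ro

# Connection handles are just labels here for demonstration.
router = SplitRouter(rw_conn="primary-dc1", ro_conn="replica-dc2")
print(router.route("SELECT * FROM products"))         # replica-dc2
print(router.route("INSERT INTO orders VALUES (1)"))  # primary-dc1
```

Browsing products stays local to the read-only data centre, while the occasional purchase crosses to the primary, which is the traffic pattern the whole design is trying to achieve.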

Most things we do are eventually consistent at the user-facing level. If you buy something online, even if it's in the shopping cart, it's not guaranteed until you select the buy button. Exceptions that cannot be fulfilled are handled manually by sending the user an email. For more information, kindly visit my previous post on data centre failover with DNS-based load balancing. Series 3 and Series 4 are coming soon. Ivan also has a great webinar on active-active data centres.
