Data Centre Failure – Series 2
A Database Management System (DBMS) is a software application that interacts with users and other applications. It sits behind other elements, known as “middleware”, between the application and the storage tier, connecting the software components. Not all environments use databases; some store data in files, such as MS Excel spreadsheets. Nor is data always processed via query languages. For example, Hadoop has a framework for accessing data stored in files. Popular DBMS products include MySQL, Oracle, PostgreSQL, Sybase and IBM DB2.
What does a DBMS provide for applications? It provides access to massive amounts of persistent data. Databases handle terabytes of data every day, usually far more than can fit into the memory of a standard server. The size of the data and the number of connections mean that database performance directly affects application performance: databases carry out thousands of complex queries per second over terabytes of data. The data is persistent, meaning it outlives the time span of application execution; it does not go away when the program stops running. Many users and applications access the data concurrently, and measures are in place so that data is not overwritten by concurrent users. These measures are known as concurrency controls, and they ensure correct results for concurrent operations. Concurrency controls do not mean exclusive access to the whole database; the control occurs on individual data items, allowing many users to access the database as long as they are working on different data items.
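The idea of concurrency control at the level of individual data items can be sketched with a toy lock manager. This is an illustrative sketch, not how any particular DBMS implements it; the class and item names are made up for the example.

```python
import threading
from collections import defaultdict

# Toy per-item lock manager: access is serialized per data item, not for
# the whole database, so writers on different items proceed concurrently.
class ItemLockManager:
    def __init__(self):
        self._locks = defaultdict(threading.Lock)
        self._guard = threading.Lock()  # protects the lock table itself

    def lock(self, item):
        with self._guard:
            lk = self._locks[item]
        lk.acquire()

    def unlock(self, item):
        self._locks[item].release()

store = {"a": 0, "b": 0}
mgr = ItemLockManager()

def increment(item, times):
    for _ in range(times):
        mgr.lock(item)       # exclusive access to this item only
        store[item] += 1
        mgr.unlock(item)

# Two writers on item "a" are serialized; the writer on "b" runs alongside.
threads = [threading.Thread(target=increment, args=(k, 1000))
           for k in ("a", "b", "a")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store)  # {'a': 2000, 'b': 1000} - no updates lost
```

A real DBMS adds deadlock detection, lock modes (shared vs exclusive) and transaction scoping on top of this basic idea.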
The data model refers to how the data is structured in the database. There are a number of options, such as the relational data model, which is a set of records. Another option is XML, a hierarchical structure of labelled values. A third is the graph model, where data is represented by nodes and the edges connecting them.
The schema sets up the structure of the database. You have to structure the data before you build the application, and it is the database designers who establish the schema. All data is stored within that structure. The schema doesn’t change much, but the data changes quickly and constantly. The Data Definition Language (DDL) is what sets up the schema; it is a standard set of commands that defines the data structures. Once the schema is set up and the data is loaded, you start querying and modifying the data. This is done with the Data Manipulation Language (DML), the statements used to retrieve and work with data.
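The DDL-then-DML workflow can be shown with SQLite from Python. The table and column names here are illustrative, not from any real system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the schema once, before any data is loaded
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# DML: load and then modify the data, which changes constantly
conn.execute("INSERT INTO customers (name) VALUES (?)", ("Alice",))
conn.execute("UPDATE customers SET name = ? WHERE id = ?", ("Alice B.", 1))
conn.commit()

rows = conn.execute("SELECT name FROM customers").fetchall()
print(rows)  # [('Alice B.',)]
```

Note how the schema (the CREATE TABLE) is written once, while the INSERT/UPDATE/SELECT statements are the day-to-day traffic.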
SQL is a standardized query language based on relational algebra. It is used to manage data in a relational database and is supported by all major database systems. It can be used through a GUI, at a prompt, or embedded in programs. An SQL query engages the database’s query optimizer, which takes the query and determines the optimum way to execute it. There are two parts to the language: the Data Definition Language (DDL) and the Data Manipulation Language (DML). DDL creates and drops tables, and the DML (already mentioned) is used to query and modify the database with SELECT, INSERT, DELETE and UPDATE statements. The SELECT statement is one of the most commonly used and performs the database query.
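The query optimizer’s role can be made visible with SQLite’s EXPLAIN QUERY PLAN, which reports the execution strategy the optimizer picked. The exact plan wording varies between SQLite versions; the table and index names below are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")

# Without an index, the optimizer has no choice but a full table scan
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'Alice'"
).fetchall()
print(plan_scan)  # plan mentions a SCAN of the orders table

# With an index available, the optimizer switches to an index search
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
plan_index = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'Alice'"
).fetchall()
print(plan_index)  # plan mentions a SEARCH using idx_customer
```

The same SELECT text produces two different execution plans; that choice, invisible to the application, is the optimizer at work.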
The database design and failover appetite are business decisions, not technical ones. The business must decide on acceptable RTO (recovery time objective) and RPO (recovery point objective) values. How current does your data need to be, and how long can a client application stay in a read-only state?
There are three main options: a) distributed databases with two-phase commit, b) log shipping, and c) read-only and read-write copies with synchronous replication.
With distributed databases and two-phase commit, you have multiple synchronized copies of the database. It is very complex, and latency can be a real problem for application performance. Many people skip this and go for log shipping instead. Log shipping maintains a separate copy of the database on a standby server: two copies of a single database, on different computers or on the same computer as separate instances, known as the primary database and the secondary database. Only one copy is available at any given time. Any changes to the primary database are “log shipped”, or propagated, to the other copy. Some environments have a third instance, known as a monitor server, which records history, checks status and tracks the details of log shipping. The drawback of log shipping is that its RPO is non-zero: a transaction written just before the failure may be lost, so log shipping cannot guarantee zero data loss. An enhancement to log shipping is read-only and read-write copies of the database with synchronous replication between the two. With this method there is no data loss, and it is not as complicated as distributed databases with two-phase commit.
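The non-zero RPO of log shipping can be demonstrated with a toy sketch. This is not a real DBMS, just two dictionaries and a change log; the key names are made up for the example.

```python
# Toy log shipping: the primary appends every change to a log, and a
# shipping job periodically replays unshipped entries on the standby.
primary, standby = {}, {}
log, shipped = [], 0

def write(key, value):
    primary[key] = value
    log.append((key, value))  # every change is recorded in the log

def ship():
    global shipped
    for key, value in log[shipped:]:
        standby[key] = value  # replay the log on the standby copy
    shipped = len(log)

write("order:1", "paid")
ship()                     # standby is now in sync with the primary
write("order:2", "paid")   # written after the last shipment...
# ...primary fails here: order:2 never reached the standby
print(standby)  # {'order:1': 'paid'} - the last transaction is lost
```

Whatever lands between two shipments is at risk, which is exactly why the business has to sign off on a non-zero RPO before choosing this design.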
If you have a transactional database and all the data is in one data centre, then when you start the VM in the other DC you will have a problem with latency between the write database and the database client. There is not much you can do about latency other than shorten the link. Some believe that WAN optimization will decrease latency, but many of these solutions just add to it. How well written the application is determines how badly the VM is affected; with very badly written applications, a few milliseconds can destroy performance. How quickly can you send an SQL query across the WAN link? How many queries per transaction does the application make? Badly written applications require transactions encompassing many queries.
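The effect of queries-per-transaction can be put into rough numbers. This is a back-of-the-envelope sketch, ignoring server-side processing time; the round-trip figure is an assumed example, not a measurement.

```python
# Per-transaction network cost is roughly: round trips * WAN round-trip time.
def transaction_time_ms(queries_per_txn, rtt_ms):
    return queries_per_txn * rtt_ms

rtt = 5  # assume 5 ms round trip between the two data centres

# A well-written app batching its work into a couple of queries:
print(transaction_time_ms(2, rtt))    # 10 ms of network time per transaction

# A chatty, badly written app issuing 200 queries per transaction:
print(transaction_time_ms(200, rtt))  # 1000 ms - a full second lost to the WAN
```

The same few milliseconds of link latency are harmless to one application and fatal to the other; the multiplier is entirely in the application’s hands.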
“Swim lanes” and Ivan Pepelnjak
A better approach is to use multiple application stacks in different data centres, with load balancing used to forward traffic to each instance. It is better to have multiple application stacks (known as “swim lanes”) that are completely independent. Multiple instances of the same application allow you to take one instance offline without affecting the others. Swim lanes is a term I found in Ivan Pepelnjak’s webinars. In this case you could use a distributed database with two-phase commit, but its issues were discussed above.
The better approach is to have a single read-write database server and ship the changes to a read-only database server. In the example of a two-stack application, one application stack is read-only and eventually consistent, and the other is read-write. So if the client needs to make a change, for example submit an order, how do you do this from the read-only data centre? There are a number of ways. One way is in the client software: the application knows it is making a transaction, uses a different host name and redirects the request to the read-write database. The hostname can be used in conjunction with a load balancer to redirect queries to the correct database. Another method is for the application to hold two database instances, one read-only and one read-write. Every transaction then knows whether it is read-only or read-write and uses the appropriate instance: carrying out a purchase would use the read-write instance, while browsing products would use the read-only one.
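The second method, where each transaction picks the appropriate database instance, can be sketched as a small router in the application. The hostnames and operation names here are hypothetical placeholders, not real endpoints.

```python
# Sketch: the application holds two database endpoints and each transaction
# declares whether it is read-only or read-write.
class DatabaseRouter:
    def __init__(self, read_only_host, read_write_host):
        self.read_only_host = read_only_host
        self.read_write_host = read_write_host

    def host_for(self, operation):
        # Transactions that modify data go to the read-write primary;
        # everything else can be served by the read-only replica.
        write_operations = {"submit_order", "purchase", "update_cart"}
        if operation in write_operations:
            return self.read_write_host
        return self.read_only_host

router = DatabaseRouter("db-ro.dc2.example.com", "db-rw.dc1.example.com")
print(router.host_for("browse_products"))  # db-ro.dc2.example.com
print(router.host_for("submit_order"))     # db-rw.dc1.example.com
```

In practice the same classification can also be done one layer down, by a load balancer keyed on hostname, which is the first method described above.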
Most of what we do is eventually consistent at the user-facing level. If you buy something online, even once it is in the shopping cart it is not guaranteed until you select the buy button. Exceptions that cannot be fulfilled are handled manually by sending the user an email. For more information, kindly visit my previous post on data centre failover with DNS-based load balancing. Series 3 and Series 4 are coming soon. Ivan also has a great webinar on active-active data centres.