Category Archives: SQL Server 2012

Data Partitioning – Scaling-Out (Part 1: Separating Operational and Historical data)

Before we start talking about data partitioning within the same database, I’d like to discuss a couple of methods of partitioning by scaling out our data and servers. So, first of all, let’s think about the situation when we have all of our data in the same database/table.

This is a fairly typical multi-tier client architecture nowadays. Obviously, we can have client software (especially legacy applications) that connects to the database directly.

As we already discussed, almost every system stores two different kinds of data – operational and historical. So what if we do something like this:

In this schema, we keep all operational data and catalog entities in the main database and move historical data to separate SQL Server(s). We define linked servers, which allow us to join historical data with catalog entities as well as union data from the different servers in some cases. Data placement is transparent to the clients, which access the data through the application server – a “must have” in such an architecture. The application server knows how the data is stored, queries the data from the multiple databases, merges the streams and returns the final dataset to the clients. Technically speaking, we can even gain a performance advantage by querying the data in parallel – each database connection can be handled by a separate thread.

Legacy software that queries the database directly is usually OK too – as long as it does not need access to historical data. Even if historical data is needed, it can sometimes be worked around by creating distributed partitioned views.
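For illustration, here is a minimal sketch of such a view. The table and server names (dbo.Orders, a linked server called HistoricalServer) are made up, and the column list and the CHECK constraints that would normally back the view are simplified:

-- A distributed partitioned view combining local operational data
-- with historical data hosted on a linked server.
-- CHECK constraints on OrderDate in both underlying tables would let
-- the optimizer skip the server that cannot contain the requested rows.
CREATE VIEW dbo.OrdersAll
AS
    SELECT OrderId, CustomerId, OrderDate, Amount
    FROM dbo.Orders                                  -- operational data (local server)

    UNION ALL

    SELECT OrderId, CustomerId, OrderDate, Amount
    FROM HistoricalServer.OrderStore.dbo.Orders;     -- historical data (linked server)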

The biggest drawback of this design is the additional complexity and overhead introduced by developing the code that combines data from multiple sources – application server code and, sometimes, database code. Everything depends on what we need to do with the data. Some cases are very simple – think about code like this running against the single database/table solution:
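(A hypothetical sketch – the Orders and Customers tables and parameter values are made up, not the code from the original post.)

DECLARE
    @CustomerId INT = 123,
    @DateFrom DATETIME = '2013-01-01';

-- Recent orders for a customer, joined with the catalog entity
SELECT o.OrderId, o.OrderDate, o.Amount, c.CustomerName
FROM dbo.Orders o
    JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
WHERE o.CustomerId = @CustomerId
    AND o.OrderDate >= @DateFrom;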

With multiple servers, we will need to open multiple database connections and run those statements against each of the servers:
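(Again a sketch with made-up names – the point is that the same statement runs on every connection.)

-- The application server runs essentially the same statement on each
-- connection (operational database and every historical database)
-- and then merges the rows.
DECLARE @CustomerId INT = 123;   -- parameter supplied by the application server

SELECT OrderId, OrderDate, Amount
FROM dbo.Orders
WHERE CustomerId = @CustomerId;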

And finally merge the data for the client. We can even have the Customers list cached on the application server somewhere and avoid cross-server joins. Simple enough. But what if we are looking for something like this – find the most expensive orders, page the results and return the second page to the client? (Small disclaimer – it’s not the best way of doing paging. Check this post for more information.)
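A single-database version of such a query could look something like this (hypothetical schema; ROW_NUMBER-based paging is used purely for illustration):

DECLARE
    @PageSize INT = 20,
    @PageNum INT = 2;   -- we want the second page

;WITH Paged
AS
(
    SELECT OrderId, CustomerId, OrderDate, Amount,
        ROW_NUMBER() OVER (ORDER BY Amount DESC) AS RowNum
    FROM dbo.Orders
)
SELECT OrderId, CustomerId, OrderDate, Amount
FROM Paged
WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize
ORDER BY RowNum;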

As you can imagine, there is no easy way to implement it. We either need to write a query that collects the data from the multiple databases via linked servers and sorts/pages it after that, or do it on the application server side, which can lead to complex code that does not perform well enough. It’s also worth mentioning that linked servers can introduce their own set of issues from both performance and development standpoints.

The biggest problem I see with such a design is that requirements are rarely static – things tend to change. I have this architecture implemented in one of my systems, and while things were relatively simple at the beginning, newer requirements led to huge development overhead. But everything, of course, is system-specific and falls into the “It depends” category. This design has its own purpose and use cases, and we will talk about them later.

Next: Scaling-Out by Data Sharding

Data Partitioning – When to partition the data

Last week we discussed why we want to partition the data. I hope at this point we all agree that data partitioning can help us with large tables. Which leads to the question – when does a table become “big enough” for partitioning?

This is a somewhat theoretical discussion. I have heard a lot of different opinions, varying from hundreds of megabytes to terabytes. We can spend a lot of time discussing it and never end up with the right answer. Or, better said, all of the answers would be right. Obviously your mileage may vary – a lot depends on your system, environment and hardware. But my personal take is very simple. If I expect my system to collect a large amount of data, the right time (for me) to partition the data is at the very beginning – during initial development and testing.

Even though data partitioning introduces some overhead in development and testing, there are two particular reasons why I’d suggest doing it early. Most importantly, data partitioning changes execution plans. As a result, when you finally implement it, you need to re-test the entire system, and that can be very expensive and time consuming – not to mention the possibility of another round of performance tuning. It’s much easier to deal with potential issues during initial development and testing, especially when you don’t have a lot of data and the impact of sub-optimal execution plans is smaller.

The second reason is that it’s very hard (if at all possible) to partition the data while keeping the system online. In most cases we are going to deal with exclusive schema modification locks that prevent any access to the table, even in the read uncommitted transaction isolation level. And if you think about how long the physical movement of hundreds of gigabytes or terabytes of data could take… Well, you get the idea.
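As a rough illustration (hypothetical object names, simplified boundary values), converting an existing table usually means rebuilding its clustered index on a partition scheme – an offline operation that holds a schema modification (Sch-M) lock while it physically moves every row:

CREATE PARTITION FUNCTION pfOrders(DATETIME)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

CREATE PARTITION SCHEME psOrders
AS PARTITION pfOrders ALL TO ([PRIMARY]);

-- Rebuild the existing clustered index (assumed to be named IDX_Orders_OrderDate)
-- on the partition scheme. The offline rebuild moves all of the data and
-- blocks access to dbo.Orders for its duration.
CREATE CLUSTERED INDEX IDX_Orders_OrderDate
ON dbo.Orders(OrderDate)
WITH (DROP_EXISTING = ON)
ON psOrders(OrderDate);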

Again, everything depends on the system and requirements. There is no need to spend time and effort on data partitioning if you don’t expect to have a lot of data. But remember that the situation and requirements can change. There is always the possibility that the project will be more successful than we expect it to be 🙂

Next: Scaling-Out by Separating Operational and Historical Data

Data Partitioning – Why do we want to partition the data?

Even though we spoke about table partitioning a long time ago, I’d like to circle back to this question from a different, architectural standpoint. In the next few posts we will discuss why we want to partition data and what options we have for doing it.

Data partitioning is a complex and time-consuming process. So, as the first step, I’d like to explain why we would want to go through all the effort of doing it.

Reason #1 – Without partitioning everything is in the same place.

Quite obvious, isn’t it? And it’s not necessarily bad. One of the advantages of having data in the same place is that it makes development and optimization very easy. Execution plans are predictable – but that’s about it. The biggest problem is that when everything is in the same table (or, better said, the same partition), it is stored on the same filegroup, the same set of files and the same disk array. Technically speaking, we can separate our clustered and non-clustered indexes between different filegroups, but in the end that introduces its own set of issues, especially in disaster recovery situations.

The problem here is that in every system we have data that belongs to different “groups”/types – operational and historical. The duration of the operational period (which is basically the customer-facing OLTP portion of the system) varies – it could be a day, a week, a month, maybe even year(s) – but in a large system there is always additional historical data, and often there is a lot of it.

Obviously we care more about operational (customer-facing) performance and would like to put operational data on the fastest disk array possible. But when everything is in the same place we don’t have a lot of choices – we either need to buy a disk array big enough to store the historical data, which could be terribly expensive and, generally speaking, a waste of money, or compromise and go with slower disks due to budget constraints.
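Once the table is partitioned, nothing stops us from mapping different partitions to different storage. A minimal sketch, assuming hypothetical filegroups FG_HISTORICAL (cheap disks) and FG_OPERATIONAL (fast array) and a partition function split by year:

CREATE PARTITION FUNCTION pfOrderDate(DATETIME)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

-- Older partitions go to the slow (cheap) storage;
-- the current, operational partition goes to the fast disk array.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate
TO ([FG_HISTORICAL], [FG_HISTORICAL], [FG_OPERATIONAL]);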

Reason #2 – The same schema, indexes and compression

Operational and historical data usually have different access patterns. As I mentioned above, operational data is usually OLTP-related, so we have a lot of short, optimized queries. Historical data is often used for analysis and reporting – i.e. we are talking about Data Warehouse-type access. Different access patterns require different index structures. Again, when we have everything in one table we don’t have a lot of choices. We either create additional indexes to support queries against the historical data and hurt the performance of the OLTP part of the system, or we do without those indexes and, as a result, get bad performance for queries against the historical portion of the data.

Another thing is that in some cases we want slightly different schemas for those portions of the data. One example – operational data can have additional columns used for processing. There is a good chance that the historical data does not need them, and when we have a lot of records, every byte counts.

Lastly, we cannot partially compress our data. Compression can help a lot with historical data, which is relatively static – we increase performance by reducing the number of pages and, as a result, the amount of IO. On the other hand, for operational data compression usually hurts performance because the data changes quite often.
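With a partitioned table, compression can be applied to the historical partitions only. A sketch, assuming the hypothetical dbo.Orders table from above where partitions 1 and 2 hold historical data:

-- Page-compress only the historical partitions,
-- leaving the operational partition uncompressed.
ALTER TABLE dbo.Orders
REBUILD PARTITION = 1
WITH (DATA_COMPRESSION = PAGE);

ALTER TABLE dbo.Orders
REBUILD PARTITION = 2
WITH (DATA_COMPRESSION = PAGE);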

Reason #3 – Index maintenance

Same thing as above. Generally speaking, we don’t need to rebuild or reorganize indexes on historical data, which is static. But there is no way to rebuild/reorganize only part of the data.
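With partitioning, maintenance can target just the partition that actually gets fragmented – for example (hypothetical index and table names, operational data assumed to be in partition 3):

-- Rebuild only the operational partition...
ALTER INDEX IDX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 3;

-- ...or reorganize it instead, if a lighter-weight operation is enough.
ALTER INDEX IDX_Orders_OrderDate ON dbo.Orders
REORGANIZE PARTITION = 3;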

Reason #4 – Backup strategy

Same thing again. All the data is in the same filegroup. We cannot exclude the historical portion from the backups. As a result, it increases backup time and the size of the backup file, and introduces additional overhead during the process.
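If operational and historical data lived in separate filegroups, we could back up only the filegroups that actually change. A sketch with hypothetical database, filegroup and path names:

-- Back up the primary and operational filegroups only;
-- static historical filegroups can be backed up far less frequently.
BACKUP DATABASE OrderStore
    FILEGROUP = 'PRIMARY',
    FILEGROUP = 'FG_OPERATIONAL'
TO DISK = 'D:\Backups\OrderStore_Operational.bak';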

Reason #5 – Disaster recovery

One of the biggest benefits of the Enterprise Edition of SQL Server is piecemeal restore. It allows us to bring the system partially online on a filegroup-by-filegroup basis. If we had operational and historical data separated into different filegroups, we could restore the operational part first and make the system available to the customers while we work on the historical part. With everything in the same non-partitioned table this is impossible – we need to restore everything before the system becomes available and online.
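A simplified sketch of such a piecemeal restore, again with hypothetical names (log handling condensed; additional log backups would normally be applied):

-- Bring PRIMARY and the operational filegroup online first.
RESTORE DATABASE OrderStore
    FILEGROUP = 'PRIMARY',
    FILEGROUP = 'FG_OPERATIONAL'
FROM DISK = 'D:\Backups\OrderStore_Operational.bak'
WITH PARTIAL, NORECOVERY;

RESTORE LOG OrderStore
FROM DISK = 'D:\Backups\OrderStore_Log.trn'
WITH RECOVERY;
-- At this point the operational part of the system is online for customers.

-- Restore the historical filegroup later (plus log backups,
-- unless the filegroup is read-only).
RESTORE DATABASE OrderStore
    FILEGROUP = 'FG_HISTORICAL'
FROM DISK = 'D:\Backups\OrderStore_Historical.bak'
WITH RECOVERY;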

Reason #6 – Statistics

SQL Server needs to estimate the number of rows at every step of the execution plan in order to make it efficient. To do so, SQL Server uses statistics and, in particular, the histogram. The histogram contains some values from the key as well as information about the data distribution in the intervals between those values. The problem is that the histogram contains at most 200 steps/values. So the more data we have in the table, the bigger the intervals are. Estimations are made within each interval and, as a result, our estimates become less accurate.

Another thing worth mentioning is that, by default, SQL Server updates statistics only after 20% of the key values have been modified. So if we have 1 million rows in the table, we can insert 200,000 new rows, delete 200,000 rows or update them before a statistics update is triggered. Obviously, the more rows we have, the less often statistics are updated.
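You can see both effects on a hypothetical dbo.Orders table with the statements below (sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 / SQL Server 2012 SP1 or later):

-- The histogram – never more than 200 steps, no matter how large the table is.
DBCC SHOW_STATISTICS ('dbo.Orders', 'IDX_Orders_OrderDate') WITH HISTOGRAM;

-- Modifications accumulated since the last statistics update.
SELECT s.name, sp.last_updated, sp.rows, sp.steps, sp.modification_counter
FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID(N'dbo.Orders');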

That list is not complete by any means, but each of those reasons is big enough by itself to start thinking about data partitioning.

Next: When to partition the data

Sunday T-SQL Tip: Application Locks

There is a set of tasks where you need to serialize access to some T-SQL code. Assume you have multiple instances of a data processing application running simultaneously and do not want them to load the same data for processing. Or you want to perform some bulk operation and want to stop clients from inserting new data during that period.

Obviously, one of the options is to use transactions in the serializable isolation level. The problem with that is that it could be more restrictive than you want. Suppose you don’t want to block access to the data, but rather introduce something similar to the critical section you have in client development.

Fortunately, SQL Server has a pair of stored procedures you can use for such purposes: sp_getapplock and sp_releaseapplock. sp_getapplock allows you to obtain a shared or exclusive “user” lock in the transaction or session context. If you run this SP in a transaction scope, the lock is released at the end of the transaction; otherwise, it is released when the session ends.

Let’s see it in action and run the following statement from two different sessions:
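(A sketch along the lines of the original example – the resource name, timeout and delay are made up.)

BEGIN TRAN
    DECLARE @ResultCode INT;

    -- Try to acquire an exclusive application lock, waiting up to 15 seconds.
    EXEC @ResultCode = sp_getapplock
        @Resource = 'MyCriticalSection',
        @LockMode = 'Exclusive',
        @LockOwner = 'Transaction',
        @LockTimeout = 15000;

    SELECT @ResultCode AS LockResult, SYSDATETIME() AS LockAcquiredAt;

    -- Simulate the serialized work.
    WAITFOR DELAY '00:00:10';
COMMIT -- the lock is released automatically when the transaction ends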

With something along those lines, the second session cannot acquire the lock – it waits in sp_getapplock until the first session commits its transaction and releases the lock.

As you can see, it does the trick.

Update (2013-05-08): This post provides more details about the different methods of serialization available in SQL Server.