Monthly Archives: December 2012

Data Partitioning – Scaling-Out (Part 1: Separating Operational and Historical data)

Before we start talking about Data Partitioning within the same database, I'd like us to discuss a couple of methods of partitioning by scaling out our data and servers. So, first of all, let's think about the situation when we have all our data in the same database/table.

This is a fairly typical multi-tier client architecture nowadays. Obviously, we can have client software (especially legacy software) that connects to the database directly.

As we already discussed, almost every system stores two different kinds of data – operational and historical. So what if we do something like this:

In this schema, we keep all operational data and catalog entities in the main database and move historical data to separate SQL Server(s). We define linked servers that allow us to join historical data with catalog entities, as well as union data from the different servers in some cases. Data placement is transparent to the clients, which access the data through the application server – a "must have" in such an architecture. The application server knows how the data is stored, queries the data from the multiple databases, merges the streams and returns the final result set to the clients. Technically speaking, we can even gain a performance advantage by querying the data in parallel – each database connection can be handled by a separate thread.

Legacy software that queries the database directly is usually fine too – as long as it does not need access to historical data. Even when historical data is needed, this can sometimes be worked around by creating distributed partitioned views.
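To give you an idea of what such a view could look like – this is just a sketch with hypothetical names (a linked server called HistoricalServer and Orders/OrdersHistory tables), not code from an actual system:

CREATE VIEW dbo.Orders_All
AS
    SELECT OrderId, CustomerId, OrderDate, Amount
    FROM dbo.Orders -- operational data on the main server
    UNION ALL
    SELECT OrderId, CustomerId, OrderDate, Amount
    FROM HistoricalServer.OrderStore.dbo.OrdersHistory; -- historical data via linked server
GO

Keep in mind that a real distributed partitioned view also needs CHECK constraints on the partitioning column (OrderDate in this example) so SQL Server can skip the remote server for queries that touch only operational data.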

The biggest drawback of this design is the additional complexity and overhead of developing the code that combines data from multiple sources – application server code and, sometimes, database code. Everything depends on what we need to do with the data. Some cases are very simple (think about code like this running against the single database/table solution):
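Something along these lines, for example (hypothetical table, column and parameter names – purely an illustration):

DECLARE @CustomerId INT = 123; -- hypothetical customer

SELECT o.OrderId, o.OrderDate, o.Amount, c.CustomerName
FROM dbo.Orders o
    JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
WHERE o.CustomerId = @CustomerId;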

With multiple servers, we need to open multiple database connections and run similar statements against each server:
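For example, something like this, keeping the same hypothetical names as above (@CustomerId is a parameter supplied by the application server):

-- Against the main (operational) server:
SELECT o.OrderId, o.OrderDate, o.Amount, c.CustomerName
FROM dbo.Orders o
    JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
WHERE o.CustomerId = @CustomerId;

-- Against the historical server:
SELECT o.OrderId, o.OrderDate, o.Amount
FROM dbo.OrdersHistory o
WHERE o.CustomerId = @CustomerId;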

And finally we merge the data for the client. We can even have the Customers list cached on the application server somewhere and avoid cross-server joins. Simple enough. But what if we need something like this – find the most expensive orders, page the results and return the second page to the client? (Small disclaimer – this is not the best way of doing paging. Check this post for more information.)
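Just to illustrate the shape of such a query in the single-table case (again, hypothetical names; ROW_NUMBER-based paging, one of several possible approaches):

;WITH Paging AS
(
    SELECT OrderId, CustomerId, OrderDate, Amount,
        ROW_NUMBER() OVER (ORDER BY Amount DESC) AS RowNum
    FROM dbo.Orders
)
SELECT OrderId, CustomerId, OrderDate, Amount
FROM Paging
WHERE RowNum BETWEEN 21 AND 40 -- second page, assuming 20 rows per page
ORDER BY RowNum;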

As you can imagine, there is no easy way to implement it. We either need to write a query that collects the data from the multiple databases via linked servers and sorts/pages it afterwards, or we can do it on the application server side, which can lead to complex code that does not perform well enough. It's also worth mentioning that linked servers can introduce their own set of issues, from both performance and development standpoints.

The biggest problem I see with such a design is that requirements are rarely static – things tend to change. I have this architecture implemented in one of my systems and, while things were relatively simple at the beginning, newer requirements led to huge development overhead. But everything, of course, is system-specific and falls into the "It depends" category. This design has its own purpose and use cases, and we will talk about them later.

Next: Scaling-Out by Data Sharding

Data Partitioning – When to partition the data

Last week we discussed why we want to partition the data. I hope at this point we all agree that data partitioning can help us with large tables. Which leads to the question – when does a table become "big enough" for partitioning?

This is a somewhat theoretical discussion. I have heard a lot of different opinions, varying from hundreds of MBs to TBs. We can spend a lot of time discussing it and never end up with the right answer. Or, better said, all answers would be right ones. Obviously your mileage may vary – a lot depends on your system, environment and hardware. But my personal take is very simple. If I expect my system to collect a large amount of data, the right time (for me) to partition the data is right at the beginning – during initial design and development.

Even though data partitioning introduces some overhead in development and testing, there are two particular reasons why I'd suggest doing it early. Most importantly, data partitioning changes execution plans. As a result, when you finally implement it, you need to re-test the entire system, which can be very expensive and time consuming – not to mention the possibility of another round of performance tuning. It's much easier to deal with potential issues during initial development and testing, especially when you don't have a lot of data and the impact of sub-optimal execution plans is smaller.

The second reason is that it's very hard (if possible at all) to partition the data while keeping the system online. In most cases we are going to deal with exclusive schema modification locks that prevent any access to the table, even in the READ UNCOMMITTED transaction isolation level. And if you think about how long the physical movement of hundreds of GBs or TBs of data could take... well, you get the idea.

Again, everything depends on the system and requirements. There is no need to spend time and effort on data partitioning if you don't expect to have a lot of data. But remember that the situation and requirements can change. There is always the possibility that the project will be more successful than we expect it to be 🙂

Next: Scaling-Out by Separating Operational and Historical Data

SQL Server MCM – It was a long journey

It was the day when I did two presentations at TechEd Russia in Moscow. I came back from that event completely exhausted and found the email saying that I had passed the Microsoft Certified Master lab exam. I had obviously been expecting the email – it came about a month after I took the lab in Seattle during the PASS Summit. What I did not expect was that I had passed. I had already started thinking about how to adjust my home lab and practice for a retake. But perhaps I'm getting a bit ahead of myself with the story.

I passed my first MCP exam in 2007. Originally I started pursuing the certificates with a simple purpose – to improve my resume. But that changed quickly. It's strange – you don't appreciate learning much when you are young; there are always way too many other things to be excited about. And, as with many other things, you miss those challenges as you get older. I really enjoyed the process of learning and was happy and proud of myself after each test I passed. Last but not least, it also helped with my day-to-day job. Obviously it's impossible to retain all the information from the study materials, but you at least know what is available and where to look when needed. No need to re-invent the wheel. Anyway, to make a long story short – within six months I got my MCPD and MCITP certificates.

And then I was stuck. Not in terms of learning, of course, but in terms of certifications – there was nowhere else to go. As much as I wanted to attend the MCM training in Redmond, I knew that my company would never approve it – neither from a cost nor from a time perspective. So when Joe Sack announced during PASS Summit 2010 that the training was no longer required, I was quite excited. And quite lazy, of course. There were other priorities I had to take care of, so I lost almost a year before I even started to prepare for the exams.

I took the Knowledge exam in July 2012. It was a very interesting experience. Needless to say, the exam content was much more challenging than the MCITP exams. The main difference, from my point of view, is that the questions require you to think rather than simply know the answer. And of course, there were a lot of questions and not much time. I particularly appreciated that there were very few questions from the "It depends" category without enough information to make a decision. There were still some, but far fewer compared to the MCITP/MCTS exams.

I left the Prometric center with very mixed impressions – I gave myself 50/50 odds. But a month later I found out that I had passed with a relatively high score – higher than I expected. And I started to prepare for the lab. I took it in Seattle during the PASS Summit last month. boB Taylor has posted some clarifications about the format. I, of course, cannot comment on the content, but to give you a tip – your main enemy is not the complexity of the scenarios but time. You run out of time, you are under stress, and you make mistakes. I'd probably disagree with boB's advice about reading all the scenarios first, though – I spent 20 minutes on that, time I desperately needed later. What I would do instead is scan through the scenarios and immediately take care of the ones you feel comfortable with. Reduce the scope as fast as you can. And don't kill yourself on testing. Do the basic tests, of course, but don't spend any more time on them than absolutely needed.

How do you prepare yourself for the exams? First, and most importantly, you need to understand how SQL Server works internally. It's not enough to know, for example, how to restore a database from a backup. You also need to understand what happens under the hood when you run the RESTORE command. Next, practice, practice and practice. One of my challenges, for example, is that I rarely do low-level, day-to-day DBA work. I usually design a backup strategy, but I rarely implement it myself. Of course, I know how to do that, but sometimes it took me a couple of extra minutes to find the exact syntax in BOL. And those minutes are precious.

Speaking of specific resources – my first piece of advice would be to start with Kalen Delaney's books about SQL Server internals. After that, I'd focus on the MCM training videos (huge thanks to the entire SQLSkills crew). And, of course, work through the MCM reading list. It is huge, but the material clicks together over time – the further along you are, the easier it gets.

In any case, good luck! As for me – there is still a long way to go. Once you start, you won't be able to stop. 🙂

Data Partitioning – Why do we want to partition the data?

Even though we spoke about table partitioning a long time ago, I'd like to circle back to this question from a different, architectural standpoint. In the next few posts we will discuss why we want to partition data and what options we have to do that.

Data Partitioning is a complex and time-consuming process, so as the first step I'd like to explain why we want to go through all the effort of doing it.

Reason #1 – Without partitioning, everything is in the same place

Quite obvious, isn't it? And it's not necessarily bad. One advantage of keeping data in the same place is that it makes development and optimization very easy, and execution plans are predictable – but that's about it. The biggest problem is that when everything is in the same table (or, better said, the same partition), it is stored in the same filegroup, the same set of files and the same disk array. Well, technically speaking, we can separate our clustered and non-clustered indexes between different filegroups, but in the end that introduces its own set of issues, especially in disaster recovery situations.

The problem here is that in every system we have data that belongs to different "groups", or types – operational and historical. The duration of the operational period (which is basically the customer-facing OLTP portion of the system) varies – it could be a day, a week, a month, maybe even year(s) – but in a large system there is always additional historical data, and often there is a lot of it.

Obviously we care more about operational (customer-facing) performance and would like to put operational data on the fastest disk array possible. But when everything is in the same place, we don't have a lot of choices – we either need to buy a disk array big enough to store the historical data as well, which can be terribly expensive and, generally speaking, a waste of money, or compromise and go with slower disks due to budget constraints.

Reason #2 – The same schema, indexes and compression

Operational and historical data usually have different access patterns. As I mentioned above, operational data is usually OLTP-related, so we have a lot of short, optimized queries. Historical data is often used for analysis and reporting – that is, Data Warehouse-type access. Different access patterns require different index structures. Again, when we have everything in one table, we don't have a lot of choices. We either create additional indexes to support queries against the historical data and hurt the performance of the OLTP part of the system or, alternatively, skip those indexes and, as a result, get bad performance for the queries against the historical portion of the data.

Another thing is that in some cases we want slightly different schemas for those portions of the data. One example – operational data can have some additional columns used for processing. There is a good chance that historical data does not need them, and when we have a lot of records, every byte counts.

Lastly, we cannot partially compress our data. Compression can help a lot with historical data, which is relatively static – we increase performance by reducing the number of pages and, as a result, the amount of I/O. On the other hand, compression usually hurts the performance of operational data because that data changes quite often.
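For comparison – if the data were partitioned, we could compress just the historical partitions and leave the operational one alone. A sketch with hypothetical names and partition numbers:

ALTER TABLE dbo.Orders
REBUILD PARTITION = 1 -- assuming partition 1 holds the historical data
WITH (DATA_COMPRESSION = PAGE);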

Reason #3 – Index maintenance

Same thing as above. Generally speaking, we don't need to rebuild or reorganize the indexes for historical data, which is static. But there is no way we can rebuild/reorganize just part of the data.
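With a partitioned index, on the other hand, we could target the maintenance at the operational partition only – again a sketch with hypothetical names:

ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 2; -- assuming partition 2 holds the operational data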

Reason #4 – Backup strategy

Same thing again. All the data is in the same filegroup, so we cannot exclude the historical portion from the backups. As a result, it increases backup time and the size of the backup file, and introduces additional overhead during the process.
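If the operational data lived in its own filegroup, we would be able to back it up separately – a sketch with hypothetical database and filegroup names:

BACKUP DATABASE OrderStore
    FILEGROUP = N'Operational'
TO DISK = N'D:\Backups\OrderStore_Operational.bak';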

Reason #5 – Disaster recovery

One of the biggest benefits of the Enterprise Edition of SQL Server is piecemeal restore. It allows us to bring the system partially online on a filegroup-by-filegroup basis. If we had the operational and historical data separated into different filegroups, we could restore the operational part first and make the system available to the customers while we are still working on the historical part. With everything in the same non-partitioned table, that is impossible – we need to restore everything before the system becomes available and online.
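For reference, a piecemeal restore sequence looks roughly like this (hypothetical names; the historical filegroup would be restored later, after the system is already online):

RESTORE DATABASE OrderStore
    FILEGROUP = N'PRIMARY', FILEGROUP = N'Operational'
FROM DISK = N'D:\Backups\OrderStore_Full.bak'
WITH PARTIAL, NORECOVERY;

RESTORE LOG OrderStore
FROM DISK = N'D:\Backups\OrderStore_Log.trn'
WITH RECOVERY;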

Reason #6 – Statistics

SQL Server needs to estimate the number of rows at every step of the execution plan in order to make it efficient. To do so, SQL Server uses statistics, and the histogram in particular. The histogram contains some of the key values as well as information about the data distribution in the intervals between those values. The problem is that the histogram contains at most 200 steps/values. So the more data we have in the table, the bigger the intervals become. Estimates are approximated within each interval and, as a result, become less accurate.
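If you are curious how coarse the histogram actually is for one of your tables, you can look at it directly (hypothetical table and statistics names):

DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate') WITH HISTOGRAM;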

Another thing that’s worth to mention is that by default SQL Server updates statistics only after 20% of the key values are updated. So if we have 1 million rows in the table, we can insert 200,000 new rows, delete 200,000 rows or update them before statistics update is triggered. Obviously, more rows we have,
less often statistics would be updated.

That list is not complete by any means, but each of those reasons is big enough by itself to start thinking about data partitioning.

Next: When to partition the data