
Sunday T-SQL tip: Cross joins and empty rowsets

As we know, a cross join without a WHERE clause produces the Cartesian product of the tables. So, for example, if the first table has 5 rows and the second table has 3 rows, the result set will have 15 rows with all possible combinations of values from both tables.

It has quite limited real-life usage though – I believe I used it maybe 2 times during the last 5 years. The first time, the task was to combine account-specific configuration settings and system configuration settings into one row. Something like:


select
a.ConfigValue1
,a.ConfigValue2
,c.GlobalConfigValue1
,c.GlobalConfigValue2
from
dbo.Accounts a cross join dbo.Config c

The second time it was a similar task – combining information from a master record with the list of detail record IDs in XML format. It could easily be accomplished with a function, of course.


;with Details(UserList)
as
(
-- FOR XML is not allowed directly in a CTE, so wrap it in a scalar subquery
select
(
select UserId
from dbo.Users
where AccountId = @AccountID
for xml raw('User'), root('Users')
)
)
select a.AccountId, a.AccountNumber, d.UserList
from dbo.Accounts a cross join Details d
where a.AccountId = @AccountID

 

The only problem with this approach: if one of the rowsets in the join is empty, it behaves like an inner join and does not return any rows. This is absolutely correct behavior if you think about it, although you typically don't expect it to work like that when you write it. You can see it with the following example:


;with CTE1(field1)
as
(
select 'field1' where 1 = 1
)
,CTE2(field2)
as
(
select 'field2' where 1 = 0
)
select CTE1.Field1, CTE2.Field2
from CTE1 cross join CTE2

So if you want to handle it properly, you need to change the cross join to a full outer join with an always-true predicate:

;with CTE1(field1)
as
(
select 'field1' where 1 = 1
)
,CTE2(field2)
as
(
select 'field2' where 1 = 0
)
select CTE1.Field1, CTE2.Field2
from CTE1 full outer join CTE2 on
1 = 1

It will do the trick.

How SQL Server stores data (Extents, Data Pages, Data Row for In-Row data)

UPDATE (2013/10/16): It is time to refresh the content. I am writing a set of posts about the SQL Server storage engine – how it stores data and what happens in the database files under the hood. Please check it here.

This is still a work in progress and the old content is available below.

I know, I spent too much time on boring talk about "key decisions", "performance problems", etc. Let's finally start talking about practical and useful things. I know we're ready to write the first "create table" statement, but before we do that, let's take a look at how SQL Server stores data. All information below applies to both SQL Server 2005 and SQL Server 2008.

Everything is stored on 8K data pages (8,060 bytes are technically available). 8 data pages (64K) combine into an extent. There are 2 types of extents – mixed extents, which store data pages belonging to different objects, and uniform extents, which store data pages belonging to a single object. The first 8 pages of an object are stored in mixed extents; after that only uniform extents are used. All space allocation is done in extents – 64K blocks regardless of the type (mixed or uniform).

There are a few special data page types SQL Server uses to track extent allocation. Those pages are basically bitmaps – every bit covers one extent. So one page can cover about 64,000 extents, or almost 4Gb of data. Let's take a quick look at them:

GAM – Global Allocation Map – tracks whether extents are available for allocation or already in use.
SGAM – Shared Global Allocation Map – tracks whether extents are mixed extents with at least one data page available for use.
IAM – Index Allocation Map – tracks which extents are used by a specific table/index.

There is also another special page type, PFS (Page Free Space), which tracks the approximate amount of free space on a page, as well as a few other things. One PFS page covers 8,088 pages, or about 64Mb of data.
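If you want to see these structures with your own eyes, the undocumented DBCC commands can help. A quick sketch (the database and table names below are just placeholders):

dbcc traceon(3604);                          -- send DBCC output to the client
dbcc ind('MyDatabase', 'dbo.MyTable', -1);   -- lists IAM, data and index pages allocated to the table
dbcc page('MyDatabase', 1, 153, 3);          -- dumps the header and rows of page 153 in file 1 (take the page id from DBCC IND output)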

Let's dive one level deeper and take a look at the structure of the data page.

As you can see, the first 96 bytes are the header. After that, the page contains the set of data rows and ends with the offset array. Two things are worth mentioning. First, each row uses 2 extra bytes for offset storage. Second, data on the page is not sorted.

Let’s dive one level deeper and take a look at the classical data row structure for in-row data.

The first 2 bytes contain header information. The next 2 bytes store the length of the fixed-width data, followed by the data itself. Next are 2 bytes for the number of columns, followed by the null bitmap (1 byte per 8 nullable columns). It is followed by 2 bytes storing the number of variable-width columns, the variable-width column offset array (2 bytes per variable column) and the variable-width data. And finally there is an optional 14-byte pointer to the version store. This one is used for optimistic isolation levels (snapshot, read committed snapshot), MARS, etc.

So what is important? First of all, fixed-width data always uses space, even when null. Variable-width data uses 2 extra bytes per value for offset storage. Null values of variable-width data are not stored, although there are still 2 bytes in the offset array unless the null values are the last columns in the row. Sounds confusing? A little bit 🙂
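Here is a small sketch that illustrates the point about fixed-width nulls (the table names are mine, and exact sizes vary a little with version and metadata):

create table dbo.FixedSample (Col char(1000) null);
create table dbo.VarSample (Col varchar(1000) null);

insert into dbo.FixedSample(Col) values(null);
insert into dbo.VarSample(Col) values(null);

-- avg_record_size_in_bytes: roughly 1,000+ bytes for the fixed-width row,
-- just the row overhead (around a dozen bytes) for the variable-width one
select object_name(object_id) as table_name, avg_record_size_in_bytes
from sys.dm_db_index_physical_stats(db_id(), object_id('dbo.FixedSample'), null, null, 'DETAILED')
union all
select object_name(object_id), avg_record_size_in_bytes
from sys.dm_db_index_physical_stats(db_id(), object_id('dbo.VarSample'), null, null, 'DETAILED');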

So where does it lead us? Let's think about a table which accepts some transactional data:
create table dbo.TranData
(
...
TranDate datetime not null,
Amount float not null,
IsApproved int not null,
IsPending int not null,
Created datetime not null constraint DEF_TranData_Created default (getDate())
...
)

The 5 fields we have in this table require 8 + 8 + 4 + 4 + 8 = 32 bytes. Now let's think for a minute. Do we really need to store the transaction date/time with precision of about 3 milliseconds? Would 1-minute precision be OK? The same goes for the Created column. Can we use 1-second precision? What about Amount? Can we use smallmoney or maybe decimal(9,3)?

Let’s modify the table a little bit:
create table dbo.TranData
(
...
TranDate smalldatetime not null,
Amount decimal(9,3) not null,
IsApproved bit not null,
IsPending bit not null,
Created datetime2(0) not null constraint DEF_TranData_Created default (getDate())
...
)

Now it requires 4 + 5 + 1 + 0 (up to 8 bit columns share 1 byte of storage) + 6 = 16 bytes. We ended up with 16 bytes of savings per row. Not much. On the other hand, this is about 16K per 1,000 rows, or about 16Mb per 1M rows. And what if your system collects 1M rows per day? That would be ~5.8Gb per year. What if you have 50M rows per day? Finally, it's not only about storage size. It greatly affects the performance of the system because of the extra IO operations and other things (we will talk about it later).

So always use the correct data types. But don't be cheap either – 65,000 customers seems like a lot when you start the project, but a year from now you may spend hundreds of hours altering your code and changing CustomerId from smallint to int. It is not worth it.

Sunday T-SQL tip: How to save large data packet in SQL Server 2005

Last time we saw that TVP is the clear winner compared with separate insert calls. Obviously this is not an option with SQL Server 2005. So let's see what we can do there.

We're trying to accomplish 2 things:

  1. Make the save process faster
  2. Minimize transaction duration

XQuery

Let's start with element-centric XML and create the following procedure:

Element-centric insert
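The original procedure was posted as a screenshot; below is a rough sketch of what an element-centric version might look like (the table and column names are made up):

create procedure dbo.InsertDataElementCentric
(
    @Data xml
)
as
begin
    -- element-centric: every column is a child element of <Row>
    insert into dbo.Data(ID, Col1, Col2)
    select
        t.c.value('(ID/text())[1]', 'int')
        ,t.c.value('(Col1/text())[1]', 'varchar(20)')
        ,t.c.value('(Col2/text())[1]', 'varchar(20)')
    from @Data.nodes('/Rows/Row') t(c);
end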

Now let's run our tests again. For 5,000 records the average time is about 8 seconds. For 50,000 records the average time is about 82-84 seconds. Quite surprising results. Let's take a look at the execution plan.

As you can see, SQL Server basically does a join for every element/column in the XML packet. Obviously, the more elements/columns you have, the more joins it produces and the slower the performance becomes.

Let's check attribute-centric XML. The procedure would look almost the same:
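Again, the original code was an image; here is a sketch along the same lines, now reading attributes instead of elements (same made-up names):

create procedure dbo.InsertDataAttributeCentric
(
    @Data xml
)
as
begin
    -- attribute-centric: every column is an attribute of <Row>
    insert into dbo.Data(ID, Col1, Col2)
    select
        t.c.value('@ID', 'int')
        ,t.c.value('@Col1', 'varchar(20)')
        ,t.c.value('@Col2', 'varchar(20)')
    from @Data.nodes('/Rows/Row') t(c);
end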

 

For 5,000 records the average time is about 2.7 seconds. For 50,000 records the average time is about 27-28 seconds. Better, but still slower than classical inserts. The plan also looks slightly better, but it still uses the same approach with joins.

 

So as we can see, XQuery performance greatly depends on the number of columns/joins.

OpenXML

Well, the only other option left with XML is OpenXML. Let's modify our procedure and run our tests.
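A possible OpenXML-based variant (again a sketch with made-up names, not the original code):

create procedure dbo.InsertDataOpenXML
(
    @Data xml
)
as
begin
    declare @DocHandle int;
    exec sp_xml_preparedocument @DocHandle output, @Data;

    insert into dbo.Data(ID, Col1, Col2)
    select ID, Col1, Col2
    from openxml(@DocHandle, '/Rows/Row', 1) -- 1 = attribute-centric mapping
    with
    (
        ID int
        ,Col1 varchar(20)
        ,Col2 varchar(20)
    );

    exec sp_xml_removedocument @DocHandle; -- always release the document handle
end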

 

For 5,000 records the average time is about 1.4 seconds. For 50,000 records the average time is about 14-15 seconds. As you can see, the results are much better – even better than the classical approach. The only problem with this approach is memory: the MSXML parser can use one-eighth of the total memory available to SQL Server. Huh?

Temporary table approach

So it looks like we're out of luck. The only option which is faster than classical inserts is OpenXML, and we cannot use it unless memory is not an issue at all.

Although, let's think about what can be done if our main problem is not performance but concurrency. With the classical approach and multiple inserts in one transaction, we place the first exclusive lock on a new row with the first insert. In some cases we simply cannot afford to keep locks on the rows for a few seconds.

If this is the main issue, we can insert the data into a temporary table first and then move it to the main table in one short transaction. If we update our attribute-centric insert stored procedure to insert the data into a temporary table variable first, it increases execution time by 200-300 milliseconds (for 5,000 records), although the transaction duration would be just 30-40 milliseconds. I'm not posting the code here but it's available for download.

If we have a lot of columns and XQuery is not an option, we can use a temporary table created at the connection level. Temporary tables are visible through the execution stack – so if you create a temporary table at the connection level (after the SqlConnection.Open() call, for example), you have access to it from any stored procedure called in the same connection context. As a result, the implementation could include 2 steps (a rough sketch follows the list):

  1. Save data to the temporary table with separate insert statements
  2. Move data from the temporary to permanent tables in one transaction
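A rough sketch of the idea (object names are mine):

-- executed by the client right after SqlConnection.Open()
create table #TranData
(
    ID int not null
    ,Col1 varchar(20) null
    ,Col2 varchar(20) null
);

-- step 1: the client saves data into #TranData with separate, short insert statements
-- step 2: one short transaction moves it into the permanent table
begin tran;
    insert into dbo.Data(ID, Col1, Col2)
    select ID, Col1, Col2 from #TranData;
commit;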

Script and sources are available for download – “Refactoring for Performance” presentation from SQL Saturday #62

Sources of the performance problems

So all key decisions have been made. But before we go any further let’s think what are the common sources of the performance problems.

SQL Server is bad

When you hear that from someone, it means one of two things: this person is either completely green or a seasoned professional with a lot of experience in specific areas. Yes, there are some areas where SQL Server (or, for that matter, any relational database) is not the best choice. For example, let's assume you want to create a session store for a web portal. There is no critical data involved, and if it crashes, customers just need to log in to the system again. They would probably be upset, but in the long run nothing bad happened. Would you trade relational database features and transactional consistency for speed in such circumstances?

Oh, forgot to mention – there is the 3rd reason. Oracle DBAs are typically better paid 🙂

Bad hardware

I would say it's the most popular complaint I hear when speaking with customers. Yes, for sure, there are cases when the system outgrows the infrastructure. It would be silly to expect a single-CPU box with 4Gb of RAM to handle a database of hundreds of gigabytes. On the other hand, in most cases the server load is produced by bad system design and implementation. For example, the IO subsystem can be overloaded because of missing indexes and table scans, which at the same time flush the buffer cache and cause memory pressure. CPU load can be driven by excessive recompilations, and so on, and so on.

Yes, you can improve the situation by upgrading the server, but first you need to spend a ton of money, and you would achieve much better results simply by optimizing your system.

At the same time, let me play Devil's advocate. Sometimes it's cheaper to upgrade the hardware than to pay a team of developers for a system redesign. It always depends on the goals.

Bad design

This is a good one. It's so broad and unspecific that everything fits there. The system is working slowly – bad database design. You have concurrency problems – bad database design. You're not the person who designed the system – granted, this system is badly designed.

In any case, what is bad database design? I'm not sure I can define it. There are some patterns and practices that need to be avoided. There are some architectural decisions which cannot scale well. But, generally, if people knew what "bad design" was, it would already be documented, taught in universities, and nobody would do it.

Honestly, I would have the same problem if you asked me what good design is. Fortunately or unfortunately, there is no such thing as a silver bullet. But at a bare minimum you need to invest in it (hello, Agile!).

You obviously should not over-architect. If you design a system that will be used by a 50-person department on a LAN, you don't need the goal of handling the load from all 50,000 corporate employees accessing it over the WAN. Unless, of course, you have unlimited time and budget. On the other hand, you should consider that one day the department could be twice as big as it is now. Or even 10 times bigger.

Bad T-SQL Code

As an application developer, you have spent your entire life telling the compiler how to do things and how to achieve the goal. This approach simply does not work with database development. You should tell the database server what you want to achieve instead.

This is a much more complicated task than it sounds. You need to shift your mind in this direction. And it takes time.. A lot of time and practice.

Concurrency issues

Locking, blocking, deadlocks.. Those make your life miserable. A system which works perfectly during development becomes completely unresponsive after deployment. Fortunately, it's not as complicated as it sounds. In most cases, concurrency issues can be resolved by optimizing the queries and selecting the correct locking strategy and transaction isolation level. We will dive into it later.

Sunday T-SQL tip: A few words about Table-Valued Parameters (TVP)

During my presentation at the SQL Saturday #40 event, I mentioned that one of the biggest improvements in SQL Server 2008 from a development standpoint is table-valued parameter (TVP) support. It not only simplifies development – think about passing a temporary table variable as a parameter from within T-SQL code as well as from client applications. It also gives a great performance improvement, especially if you need to save a large data packet. Think about data processing tasks where a client application loads a large record set, processes it and saves it back in one transaction.

Let’s do some tests. Let’s create the table:

Table creation statement
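The actual statement was posted as an image; something along these lines would do for the test (the column names are made up, roughly 250 bytes per row):

create table dbo.Data
(
    ID int not null
    ,Col1 varchar(20) null
    ,Col2 varchar(20) null
    ,Col3 varchar(20) null
    ,Placeholder char(200) null
);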

First, let's use the classical approach – SqlCommand and inserting records in a loop:

Classical approach

Now let’s use TVP. First of all we need to create a type:

TVP
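A minimal sketch of the type, matching the assumed table above (the original definition was an image):

create type dbo.tvpData as table
(
    ID int not null
    ,Col1 varchar(20) null
    ,Col2 varchar(20) null
    ,Col3 varchar(20) null
    ,Placeholder char(200) null
);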

The next step is creating the stored procedure which we will use to insert new records:

Insert procedure
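Again a sketch, assuming the table and type above:

create procedure dbo.InsertDataTVP
(
    @Data dbo.tvpData readonly -- TVPs must be declared readonly
)
as
begin
    insert into dbo.Data(ID, Col1, Col2, Col3, Placeholder)
    select ID, Col1, Col2, Col3, Placeholder
    from @Data;
end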

Next, let's modify the client code to use the TVP and the stored procedure:

TVP approach
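The client code was shown as a screenshot; on the .NET side it builds a DataTable and passes it as a parameter of type SqlDbType.Structured. From the T-SQL side the call is simply (a sketch, using the objects above):

declare @Data dbo.tvpData;

insert into @Data(ID, Col1, Col2, Col3, Placeholder)
values
(1, 'A', 'B', 'C', 'Filler')
,(2, 'D', 'E', 'F', 'Filler');

exec dbo.InsertDataTVP @Data = @Data;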

It is worth mentioning that the client code needs to create and populate a DataTable object to assign it to the parameter. Obviously this requires some time; on the other hand, this is client code – the transaction has not been started yet.

Finally, let’s run a few tests:

For the package of 5,000 records, the average execution time of the classical method on my laptop is around 2,600-2,700 milliseconds. The TVP approach takes only about 180 milliseconds, of which 60 milliseconds is DataTable creation. So the actual transaction takes only around 130 milliseconds – about 20 times faster than the classical method.

For the package of 50,000 records, the average execution times are respectively 20,500 ms and 1,300 ms, with 300 ms of that for DataTable creation.

As you can see, it's not only about speed – the transaction duration is about 20 times shorter. A huge difference.

Obviously, if you're on SQL Server 2005, TVP is not an option. Although there are a couple of methods which could help you improve the classical approach. I will show them to you in a few days.

 

Key decision made – time for key decisions

So the big decision has been made – your team decided to use SQL Server. But stop. There are a few other things that need to be finalized before the actual design process begins.

SQL Server Version

SQL Server 2005 or 2008? If you can use SQL Server 2008 – use it. There are quite a few things there which will make you happy. I would mention only one of them – Table-Valued Parameters ( http://msdn.microsoft.com/en-us/library/bb510489.aspx ). They not only simplify your life, they also increase performance. We've tested the pattern of inserting 5,000 records (about 250 bytes each) with a TVP versus individual inserts from the client side; the TVP approach was about 7 times faster.

SQL Server Edition

Obviously, in most cases the decision is not driven by developers. There is the project budget, which can include the cost of licenses; existing SQL Server instances; legacy issues and a dozen other factors. It also depends on whether you develop a hosted solution or a boxed product where you should provide the customer a clear upgrade path from SQL Express to Data Center edition. In any case, let's assume that you have a choice.

There are a ton of useful features in Enterprise edition which make DBAs happy, although from a development standpoint there are only 2 major benefits: table partitioning and data compression (SQL Server 2008 only). Their usefulness depends on the nature of the system. You need those features if you plan to have millions of records in transactional tables and plan to implement sliding-window-type functionality and/or data archiving. Of course, you can always do those things manually, although it greatly simplifies your life if you can use Enterprise edition for those tasks.

Data Access Layer

Everybody agrees that a good system needs it. Surprisingly, a lot of teams do not implement one, especially if there are no dedicated database and/or middle-tier developers. It's time consuming, especially at the initial stage. It introduces additional rules and restricts "the creativity" of the developers, requires better collaboration between team members, and so on, and so on..

In any case, think about the price of database refactoring in the future. It's a complicated process by itself. If you don't have the data access code localized in one place, you will also spend a lot of time looking through thousands of lines of code.

Architect the DAL up front. It will save you a lot of time later.

SQL generators/ORM

A great thing.. It really simplifies the life of middle-tier developers. Everything is simpler, development is faster, stakeholders are happier. The only downside – you often don't have any control over the SQL, nor even understand what happens under the hood.

I don't want to say that those frameworks are bad. In any case, how many people are using assembler or even plain C today? We're living in the era of commercial development. But for the future's sake, do the research. Check how good the generated SQL is. And make sure that you can plug in custom, highly optimized code when needed. In practice you rarely need to do that, but when you do, you should have the ability. There are dozens of frameworks and solutions available on the market – you will be able to find the right one that suits your needs.

How Agile is your team?

Agile is popular. Agile is beneficial. Stakeholders love it. Developers love it. The only problem is that a major Agile principle – "do not over-architect and do not spend time on features you don't need now" – does not work very well with database design and development. Especially if you deploy the system to production after each iteration/sprint. The price of refactoring at the database level is far higher than at the application level. Especially if the system is already in production.

So if your team is using an Agile-based development process and you expect to have a complex database backend with a lot of data and concurrent users – reserve a good amount of time for database design. Create user stories like: "As a user, I want to be able to use the system with 50 concurrent users querying the orders table". Put those stories in the backlog with high priority. And convince the stakeholders that it is worth it.

At a bare minimum, create a data access layer. Otherwise you will spend entire iterations just to accomplish basic refactoring.

Intro

My first acquaintance with SQL Server was in 2002. Granted, I had a couple of projects where we utilized MSSQL 6.5, but I did not pay much attention then. So in 2002 we decided to use SQL Server 2000 as the backend database for our GPS tracking system. Why SQL Server? Frankly, I don't know. We did not care much – we thought about Oracle, but it was more expensive, harder to maintain and, last but not least, neither of us had any PL/SQL knowledge. So we made the decision and our journey began.

As I said, we did not know PL/SQL, but the problem was that we did not know T-SQL either. Granted, we knew how to write select/insert/update/delete statements and how to create tables, but I would say that's almost everything we knew. Surprisingly, our initial design worked. Well, at least while we had fewer than 50 customers. On the day when every 5th customer called and asked why the system was so slow, we spent a couple of days in research and decided that we needed to change "a few things". It was our first refactoring.

I think one of the biggest benefits, and at the same time disadvantages, of SQL Server is its simplicity. Everybody can install it and build a database that will work. It does not force you to think ahead. Everything looks simple, but you pay a great price and suffer huge pain later. You will have to refactor the system; then refactor it again, and again, and again..

We are in the same boat. Now we are handling 2,000 TPS with the servers working at 1/10th of capacity, but in order to achieve that we went through 5 major refactorings in the last 8 years. And I believe it's not the final number.

My goal is to share experience. I'm not going to talk about analysis or system architecture in general, nor tell you how to correctly map business objects to database tables. I will try to uncover some of SQL Server's internals so you will be able to predict how SQL Server behaves. I'll show you some tricks for designing the database and writing queries and T-SQL code in a way which makes your system more efficient. I will try to be practical, even if it's not always possible.

If you find anything useful in this blog, I would be happy.