Wrapping up… The only remaining question is “What isolation level is right for me?” And there is no single right answer to that question. As usual, it depends. I’ll try to summarize a few things, but again, think about the specific details of your system and base your decision on them, not on the generic advice below.
When you start to think about which isolation level you should use in your system, there are two questions you need to answer. First, what is the blueprint of your system – is it an OLTP or a Data Warehouse type system? OLTP systems usually handle the operational activity of the company and serve a high volume of short, similar queries in short transactions. The Data Warehouse blueprint describes reporting-type systems with a low volume of long transactions and complex queries. In OLTP systems the data is constantly changing; in Data Warehouse systems that is rarely the case – the data is usually updated in batches on some schedule.
As an example, let’s think about an online store. The database that handles the customer-facing shopping cart web site is OLTP. There are always new orders coming in, existing orders being updated, customers checking the status of their orders, searching the product catalog, etc. – all done with short transactions and simple queries. At the same time, company executives want to run reports, see trends and do other analysis – and there is a good chance a separate database exists for that purpose. That database would be optimized for complex reporting, and its data could be refreshed every night or perhaps on a weekly basis. That database is the Data Warehouse.
In real life, of course, the split is rarely that clean. There is always some reporting activity against the OLTP database, but in any case you can see which blueprint fits better. And that leads to the second question you need to answer: “How much data consistency do I really need?” Even though the answer “I don’t need any consistency” is quite popular, it is rarely true in real life. So let’s dig a little deeper into the “no-consistency” mode.
No consistency, in terms of transaction isolation levels, means read uncommitted – either directly via “set transaction isolation level” or with (NOLOCK) hints. There are cases when you can decide to use that isolation level. For example, in our system we have a few transactional entities where data is inserted into the system and never updated or deleted after that. At the same time, clients are constantly downloading that data, and we don’t really care if a client gets data from an uncommitted transaction. This is a case where we can downgrade to the read uncommitted isolation level to reduce blocking for the client download sessions.
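To make it concrete, here is a minimal sketch of both ways to allow dirty reads (the dbo.ClientDownloads table is a hypothetical example, not from the original system):

-- Option 1: downgrade the whole session
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT DownloadId, Payload FROM dbo.ClientDownloads;

-- Option 2: downgrade a single table reference with a hint
SELECT DownloadId, Payload FROM dbo.ClientDownloads WITH (NOLOCK);

Both forms behave the same for that table: the reader does not request shared locks, so it is not blocked by concurrent writers – and it may see uncommitted rows.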
But if you think about that example, it is the exception rather than the rule. In most systems you want to have consistency. Think about reporting for executives – how happy would they be if you gave them incorrect data or, even better, if they got different results running the same report twice? The real problem is that, in real life, people often switch to the read uncommitted isolation level to solve blocking issues. True, it can help when select queries (readers) are blocked by update queries (writers), but it masks the problem rather than solving it. As we already know, in most cases locking and blocking issues are triggered by non-optimized queries, so if you have the chance, you’d be better off spending some time on query optimization. Of course, that advice won’t help if you are in fire-drill mode and need to fix blocking in a production system right now – but in that case there is another option that could work even better: use read committed snapshot instead of read uncommitted. In that mode writers also don’t block readers, but at the same time you get statement-level consistency. There are, of course, some performance implications, but in most cases you can live with them.
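Read committed snapshot is a database-level option rather than a session setting. A sketch of enabling it, assuming a hypothetical database named OrderStore (the WITH ROLLBACK IMMEDIATE clause kills open transactions, so schedule it during a quiet window):

-- Enable statement-level row versioning for all read committed sessions
ALTER DATABASE OrderStore SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

After that, every session that runs under read committed automatically reads the last committed version of the rows instead of waiting on writers – no application changes required.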
Read committed, which is also the default isolation level, is in most cases (again, assuming the system is more or less optimized) a good choice, especially for OLTP systems. It gives you an acceptable compromise between consistency and concurrency. There is still blocking involved, but when the system is optimized it is minimal. As for the higher isolation levels – repeatable read and serializable – those are typically a bad choice for OLTP. For reporting and data warehouse systems they could be acceptable, although for such systems the optimistic isolation levels (read committed snapshot and snapshot) are better.
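To see the compromise in action, here is the classic two-session sketch (the dbo.Orders table is hypothetical, and the database is assumed to run with the default locking flavor of read committed):

-- Session 1 (writer): holds an exclusive row lock until commit
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 123;
-- ... transaction stays open ...

-- Session 2 (reader, read committed): blocked until session 1 commits
SELECT Status FROM dbo.Orders WHERE OrderId = 123;

Under repeatable read and serializable the reverse also happens – readers keep their shared locks until the end of the transaction – which is exactly why those levels are usually a bad fit for OLTP.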
And speaking of optimistic isolation levels: for Data Warehouse type systems, use them. I cannot think of a single reason why you would want to avoid them there. For OLTP, consider them. If you can live with the performance overhead, they could be a good choice. Be careful, though – don’t forget about the extra 14 bytes of row-versioning information added to each modified row, and don’t use fillfactor = 100.
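A sketch of the pieces involved, assuming a hypothetical database ReportStore and index IX_Orders_OrderDate (the fillfactor value of 90 is just an example; pick one based on your update patterns):

-- Enable the snapshot isolation level at the database level
ALTER DATABASE ReportStore SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Each session then opts in explicitly
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- Leave free space on index pages so the 14-byte version pointer
-- added to updated rows does not immediately cause page splits
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);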
So, the bottom line. For Data Warehouse systems, use optimistic isolation levels whenever possible. The only case where I would suggest considering other options is when the data in the system is updated in real time – and even then, give the optimistic isolation levels a try. For OLTP, if you can use optimistic isolation levels, use them (start with read committed snapshot). If not, use read committed and optimize the queries. Don’t use read uncommitted and (NOLOCK) hints unless you truly don’t care about consistency at all. And definitely don’t use read uncommitted just to reduce blocking – that is the wrong tool for the job.
Last, but not least, don’t forget that you can, and often need to, use multiple different isolation levels in the same system. Use them wisely! 🙂
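For example, reusing the hypothetical objects from the sketches above, the insert-only download sessions can run under read uncommitted while the order-entry sessions stay on the default:

-- Download session: insert-only data, dirty reads are acceptable
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT DownloadId, Payload FROM dbo.ClientDownloads;

-- Order-entry session: keeps the default read committed guarantees
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
UPDATE dbo.Orders SET Status = 'Paid' WHERE OrderId = 123;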