Microsoft SQL Database storage

by Brad   Last Updated August 30, 2017 18:06 PM

I'm building a new Microsoft SQL database server which will use a SAN for storage. For SAN storage I have:

  • A small number of SSD disks
  • A larger number of 10K disks

My SAN supports storage tiering.

I'm on the fence about how to use the SSD disks. Here are the options that come to mind:

Option 1: Create tiered storage with the SSDs and 10K disks and use auto tiering to move highly accessed data to the SSDs. Then break that storage pool up into several different drives.

Option 2: Place key (i/o intensive) data on the SSDs and throw everything else on the 10K disks.

I don't have enough SSDs for everything to reside on them. So what's most important, performance-wise, to place on SSD storage? tempdb? LDF files? MDF files? My database is more write-intensive than read-intensive (if that makes a difference).

Secondarily, are there any best practices I should follow when setting up the drives? Should I have separate drives for tempdb, LDF files, and MDF files? What about the system databases like master and model? Should those go on their own dedicated drives, or maybe the C: drive?

Any advice would be appreciated.

Thanks, Brad

By the way, here is a screenshot of our existing database server's disk activity. It seems the tempdbs are the most read/write intensive, followed by our MDF file; surprisingly, the LDF file seems to have the least I/O.


2 Answers

Well, generally:

  • LDF files are latency sensitive but written sequentially. A SAN write-back cache (or a local RAID controller with a BBU and write-back enabled) is good enough to handle the latency, and the throughput can be handled by a decent HDD.
  • Tempdb CAN be latency AND throughput sensitive. Generally I would NOT put it on a SAN at all - that makes zero sense. Use local M.2 NVMe disks; grab a PCIe riser card for them if you have to. The point is that tempdb is recreated on every restart, so nothing is "lost" when the machine fails over. And a decent M.2 SSD has a ton of bandwidth for those workloads (unless you use 128 Gb fiber). That said, most tempdbs do not need this, as they are never challenged. Depending on what you do with the data, tempdb may lead quite a quiet life.
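As a minimal sketch of the tempdb move (the drive letter and path are hypothetical), relocating tempdb to a local NVMe volume is just a matter of pointing its files at the new location; tempdb is recreated at startup, so the change takes effect on the next service restart:

```sql
-- Relocate tempdb files to a hypothetical local NVMe volume (T:).
-- tempdev/templog are the default logical file names; the move
-- only takes effect after the SQL Server service restarts.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
```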

For the rest I would go with storage tiering IF that is sensible. Alternatively, use multiple filegroups and put the hot ones on SSD if the hot data splits cleanly by table (using partitioning will get clumsy). A lot depends on the data - but tiering may be the best solution (also from an ease-of-use point of view).
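If you go the filegroup route, a minimal sketch looks like this (the database name, file path, and table are hypothetical placeholders):

```sql
-- Add a filegroup backed by a file on the SSD volume (S:) and
-- place a hot table on it; everything else stays on the default
-- filegroup on the 10K disks.
ALTER DATABASE MyDb ADD FILEGROUP SSD_HOT;

ALTER DATABASE MyDb
ADD FILE (NAME = MyDb_hot1, FILENAME = 'S:\Data\MyDb_hot1.ndf', SIZE = 10GB)
TO FILEGROUP SSD_HOT;

-- A new hot table created directly on the SSD filegroup.
CREATE TABLE dbo.OrderEvents (
    EventId BIGINT IDENTITY PRIMARY KEY,
    OrderId INT NOT NULL,
    Payload VARCHAR(4000) NULL
) ON SSD_HOT;
```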

August 30, 2017 17:17 PM

Well, I would say an obvious thing first: test both scenarios and see what the performance difference is between tiered storage and dedicated SSDs.

Then think about the downsides of putting key data on SSDs when SSD storage is limited. How limited is the SSD space? When do you expect to run out of space, based on the current growth of the "key data"? What are you going to do when you run out of space on the SSDs? Compare all that potential headache to the performance difference.

In general, SSDs are good for random-access operations; you get the most benefit from random reads. If your application is write-intensive and does a lot of bulk writes, you might see minimal performance improvement on SSDs, but if it does a lot of random writes it might be a different story. So you need to test with your production workload.
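One way to compare the scenarios (and to reproduce the per-file numbers from the screenshot in the question) is the `sys.dm_io_virtual_file_stats` DMV, which exposes cumulative I/O counts and stall times per database file. A sketch:

```sql
-- Per-file read/write volume and average latency since the last
-- SQL Server restart (stall values are cumulative milliseconds).
SELECT
    DB_NAME(vfs.database_id)                              AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY vfs.num_of_writes DESC;
```

Run it before and after a representative workload on each layout and compare the average latencies rather than raw counts.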

Alexander Sharovarov
August 30, 2017 18:05 PM
