Two years ago, I spoke at TechED in the United States and in Europe. My presentation involved a complicated lab setup of eight servers. At the time, I was travelling from Australia with a high-powered laptop running Windows Server 2008 R2. I had a copy of the lab’s virtual machine (VM) files on the local volume, as well as a custom-built enclosure that I could connect to. This enclosure had two solid state disks (SSDs) in a mirrored configuration. I actually ran the demo off this enclosure because I could boot the lab in about 90 seconds that way, as opposed to needing almost 30 minutes to boot the same lab when running off the laptop’s internal hard disk.
Granted, this example of running a workload on SSD versus on a magnetic disk is fairly extreme. And I ran the lab off an external enclosure because SSDs for laptops were small and expensive. But at that time, the Windows Server OS didn’t support native deduplication, so I could only squeeze my lab onto disks that didn’t host the OS and applications.
To get the speed I needed for a particular workload at a reasonable cost, I had to come up with a crazy hybrid configuration. The server OS ran off one type of slower spinning disk media; the workload I hosted ran off solid state media. I could have run everything off SSD, but the cost of the storage would have exceeded the cost of the already expensive laptop.
An interesting and generally unknown fact about application data, be it virtual hard disk (VHD) files, SQL Server database files, files hosted on file shares, or Exchange Server mailbox database files: Only a small fraction of the data that is stored on a volume is accessed frequently. The size of that small fraction depends on the workload. But in some cases, 90 percent of a volume’s read/write operations might be made to only 10 percent of the data stored on that volume.
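To see why that skew matters, here is a back-of-the-envelope sketch of the average access time you get when the "hot" data lives on fast storage. The latency figures are assumptions chosen for illustration, not measurements:

```python
# Illustrative arithmetic only: the latency values below are assumed,
# not measured figures for any particular drive.
SSD_LATENCY_MS = 0.1   # assumed SSD access latency
HDD_LATENCY_MS = 10.0  # assumed spinning-disk access latency
HOT_IO_FRACTION = 0.9  # share of I/O that targets the "hot" 10% of data


def effective_latency(hot_fraction, fast_ms, slow_ms):
    """Weighted-average latency when hot I/O lands on the fast tier
    and everything else falls through to the slow tier."""
    return hot_fraction * fast_ms + (1 - hot_fraction) * slow_ms


tiered = effective_latency(HOT_IO_FRACTION, SSD_LATENCY_MS, HDD_LATENCY_MS)
print(f"All-HDD average access: {HDD_LATENCY_MS:.2f} ms")
print(f"Tiered average access:  {tiered:.2f} ms")  # 0.9*0.1 + 0.1*10 = 1.09 ms
```

With these assumed numbers, putting just the hot 10 percent of data on SSD cuts the average access time from 10 ms to about 1.09 ms, close to a 10x improvement while most of the capacity stays on cheap disks.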
That’s what makes hybrid storage exciting. This functionality, available as part of Windows Server 2012 R2 Storage Spaces, allows you to configure local tiered storage so that frequently accessed data is stored on local SSD storage, where it can be retrieved rapidly. Data that is accessed less frequently is pushed out to cheaper, slower storage.
For example, rather than hosting a large, expensive volume on a storage pool made up entirely of SSDs, you can use the tiered storage functionality of Windows Server 2012 R2 Storage Spaces to create a storage pool that mixes expensive SSDs and cheaper spinning disks. The built-in logic of tiered storage in Storage Spaces automatically shifts frequently accessed data to the part of the volume that is hosted on the fast SSD storage. The remaining data, which is accessed far less often, is stored on the cheaper spinning disks. You can even override this logic and pin certain files to the faster storage, should you think it necessary.

You have additional options when using Storage Spaces with storage tiering. If you are interested in further redundancy, you can configure the Storage Spaces volume to use two-way or three-way mirroring. It’s also possible to configure Storage Spaces so that data is striped across disks to increase performance. Striping isn’t necessary with storage tiering, but it might provide an extra performance boost.
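The placement logic described above can be sketched in a few lines. This is a simplified model, not the actual Storage Spaces implementation (the real feature tracks heat at sub-file granularity and moves data via a scheduled optimization task); the class and method names here are illustrative:

```python
# Simplified sketch of heat-based tier placement with pinning.
# Not the Storage Spaces API; names and structure are illustrative.
from collections import Counter


class TieredVolume:
    def __init__(self, ssd_slots):
        self.ssd_slots = ssd_slots  # how many extents fit on the SSD tier
        self.heat = Counter()       # access count per extent
        self.pinned = set()         # extents forced onto the fast tier

    def access(self, extent):
        """Record one read/write against an extent."""
        self.heat[extent] += 1

    def pin(self, extent):
        """Override the heat-based logic, like pinning a file to SSD."""
        self.pinned.add(extent)

    def optimize(self):
        """Return the extents placed on the SSD tier: pinned extents
        first, then the hottest remaining ones until the tier is full."""
        fast = set(self.pinned)
        by_heat = [e for e, _ in self.heat.most_common() if e not in fast]
        fast.update(by_heat[: max(0, self.ssd_slots - len(self.pinned))])
        return fast


vol = TieredVolume(ssd_slots=2)
for extent in ["a", "a", "a", "b", "b", "c"]:
    vol.access(extent)
vol.pin("d")           # "d" always lives on the fast tier
print(vol.optimize())  # pinned "d" plus the hottest extent, "a"
```

The key point the sketch captures is that pinned data consumes fast-tier capacity before any heat-based placement happens, which is exactly why pinning is worth using sparingly.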
By moving frequently accessed data to fast storage, you’ll get most of the performance gains of SSDs without the costs associated with using SSDs to host all your application data. To find out more about Storage Spaces and other Microsoft Windows Server 2012 R2 storage solutions, visit the Enterprise Storage page at http://www.microsoft.com/en-us/server-cloud/solutions/storage.aspx.