Configure your virtual disks for optimal performance and usability
Creating virtual machines (VMs) in Microsoft Hyper-V is a snap: You simply right-click a host in Hyper-V Manager and select New, Virtual Machine. In a few short moments, you can be working with a brand-new VM. After you install an OS and some applications, you have a new production server ready for use.
However, simply creating VMs inside Hyper-V and immediately going to work isn’t the best approach. You need to consider the repercussions of the decisions you make during VM creation, because your actions can introduce problems down the road—particularly performance problems. One of the most important decisions you must make is your choice of Hyper-V disk format.
You’re surely familiar with Virtual Hard Disks (VHDs), the virtual disk format that Hyper-V (as well as other hypervisors) uses. But you might not be aware that VHDs are only one of the disk types Hyper-V can use. Even VHDs themselves have different configurations, each with capacity and performance implications. Understanding the differences between disk formats will help you determine which type of disk to use in your various Hyper-V implementations.
By default, new VMs are created with an attached VHD. These disks represent Microsoft’s open format for virtual disks, and they have some very useful benefits. Hyper-V, even in its R2 version, requires boot disks to be IDE. All other disks can be SCSI. But before you consider this a performance bottleneck, you should know that both IDE and SCSI disks in Hyper-V leverage the same VM bus, which results in functionally similar performance between the two disk types.
Nevertheless, SCSI VHDs include some additional features that make them a better choice for all but your OS drive. SCSI VHDs can be hot-added in Hyper-V R2. In addition, SCSI supports far more disks per VM: IDE is limited to four disks, whereas each synthetic SCSI controller supports up to 64 disks (and VHDs of either type can be as large as 2TB). For these reasons alone, it’s considered a best practice to use Hyper-V’s SCSI disks for everything except storing your core OS.
Another important consideration with Hyper-V disks is managing storage capacity. Hyper-V has three options for creating new VHDs: fixed size, dynamically expanding, and differencing. As you can probably guess, fixed-size VHDs provision the entire disk size as the disk is created. Dynamically expanding disks consume only as much space as is actually used by data on the disk.
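On Windows Server 2008 R2 (and Windows 7), you can create both fixed and dynamically expanding VHDs from the command line with diskpart's vdisk commands. The sketch below uses hypothetical file paths and sizes; `maximum` is specified in megabytes.

```shell
rem vhds.txt -- diskpart script; run as: diskpart /s vhds.txt (elevated prompt)
rem Fixed-size VHD: the full 40GB is allocated on disk at creation time.
create vdisk file="D:\VHDs\fixed.vhd" maximum=40960 type=fixed
rem Dynamically expanding VHD: the file starts small and grows as data is written.
create vdisk file="D:\VHDs\dynamic.vhd" maximum=40960 type=expandable
```

Creating the fixed disk takes noticeably longer because the space is zeroed and provisioned up front; the dynamic disk is created almost instantly.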
In the first version of Hyper-V, the performance difference between fixed and dynamic disks was fairly significant—enough so that Microsoft recommended using fixed disks for all production workloads. Hyper-V R2 reduces this difference somewhat, with Microsoft now reporting that dynamic disks see between 85 percent and 94 percent of fixed disk performance. (This 9-percent span in performance has much to do with the type of workload for which the disks are being used.)
In the white paper “Virtual Hard Disk Performance: Windows Server 2008 / Windows Server 2008 R2 / Windows 7”, Microsoft reports on the performance of applications tested across a series of workloads. According to Microsoft’s findings, fixed VHDs performed better than their dynamic counterparts in every workload; however, the difference in many cases was exceptionally slight.
Although performance should be a key factor in your fixed versus dynamic decision, remember that fixed disks also increase storage costs. When you use fixed disks, you consume storage for what amounts to empty space. If your organization has limited storage space or simply doesn’t want to spend money on wasted storage, you should consider trading a slight performance degradation for higher storage utilization.
Differencing VHDs let you link multiple VHDs to one another. A child VHD begins its life with the same set of data as its parent disk and increases in size only as its data diverges from the parent. Although Microsoft’s performance tests found that differencing disks perform at about the same level as dynamic disks, these disks demand special care because of their dependencies: multiple differencing disks can be linked together to create a chain of disks. As a result, although the storage consumption of these disks can be less than that of other disk types, this benefit must be weighed against the risk of inadvertently breaking the disk linkage.
Microsoft doesn’t officially recommend against differencing disks; however, these disks are most often relegated to nonproduction scenarios. One production implementation in which they’re often used is Virtual Desktop Infrastructure (VDI) architectures, where child VHDs are provisioned from a master reference image (sometimes called a golden image). Because virtualized desktops tend to remain very similar to their parents and can be easily discarded when users are finished with them, this pairing can be a good idea if you’re considering VDI desktop deployment.
Yet another type of disk, called a pass-through disk, isn’t a VHD at all. These disks are created by attaching a disk volume to a Hyper-V host, typically through either an iSCSI or Fibre Channel connection. After the disk volume is attached to the Hyper-V host, the disk is then passed through to an awaiting VM—hence the name.
Unlike VHDs, pass-through disks don’t encapsulate data into a virtual disk format. Their raw format lets data remain in the standard NTFS format on the SAN. Keeping data in its native format can improve the backup and restore process, as well as eliminate the 2TB limitation of VHDs.
Further results from Microsoft’s white paper “Virtual Hard Disk Performance: Windows Server 2008 / Windows Server 2008 R2 / Windows 7” suggest that pass-through disks experience better performance than fixed VHDs across every scenario, although the performance difference is slight. This might be because a lower level of CPU utilization is required for addressing data on a pass-through disk. Another reason might be improved sector alignment on the SAN (i.e., the sectors that make up the disk volume are correctly aligned with those recognized by the SAN hardware). Misalignment between the volume and the SAN is often a cause of poor disk performance.
Although pass-through disks are first connected to the Hyper-V host, they do support Live Migration. In fact, pass-through disks have been reported to see better Live Migration performance than VHDs because the VM doesn’t need to mount the disk’s file system during a Hyper-V Live Migration.
Pass-through disks have their own disadvantages, the greatest of which is that they don’t enjoy the typical benefits of disk virtualization, such as portability and support for Hyper-V snapshots. Another drawback is that pass-through disks’ initial connection to the Hyper-V host can require additional management and due diligence, particularly for VMs that participate in a host cluster. Proper masking and zoning to Hyper-V cluster hosts is required but can cause administrative headaches because each pass-through disk can be used by only one VM. Finally, pass-through disks can’t be backed up through the Hyper-V VSS writer, which rules out host-based backup solutions. Your backup and restore tactics will instead require either backing up the disk’s LUN on the SAN or using a backup agent installed in the VM.
Hyper-V offers a wide range of options for configuring VM disks: IDE or SCSI, fixed or dynamic, VHD or pass-through. However, there’s no absolutely correct answer regarding which options you should choose. Some disk configurations work well in certain circumstances but not in others. Other configurations fix some limitations but add other caveats. The moral of this story is to carefully plan your disk configuration before you ever click New, Virtual Machine. A little extra thought in the beginning can save you a substantial headache down the road. (For information about creating virtual disks on VMware ESX VMs, see "VMware ESX Disk Configuration Options.")