Prior to the release of Windows Server 2012, one of the companies I worked with wanted to move its Tier 1 SQL Server database workloads over to Hyper-V. Unfortunately, the workload was highly resource-intensive: the application ran on a Windows Server 2008 cluster in which each node was a two-socket system with quad-core processors and 64GB of RAM. Windows Server 2008 R2 Hyper-V could barely have handled the memory requirements, and the processor requirements put it completely out of reach.

Response time and availability were critical. Understandably, the company didn’t want to scale back on the CPUs available to the applications, and the fact that there was no headroom for memory made them too wary to move the application. This issue kept that application running on physical systems until Windows Server 2012 was released.

Microsoft made many fundamental changes to Hyper-V, starting in Windows Server 2012 and continuing in the Windows Server 2012 R2 release. These changes significantly increase the performance and scalability of the Hyper-V hypervisor. The Windows Server 2012 release boosted the basic virtual machine (VM) scalability maximums; Windows Server 2012 R2 adds several underlying architectural enhancements:

  • Non-uniform memory access (NUMA) support
  • Dynamic Virtual Machine Queue (DVMQ)
  • Virtual Receive Side Scaling (vRSS)

These changes eliminate several major VM performance limitations. With Windows Server 2012 R2, almost no physical workload is too intensive to run on Hyper-V.

Hyper-V Scalability

First, let’s take a quick look at Windows Server 2012 R2 Hyper-V scalability. As the following table shows, Windows Server 2012 R2 provides significantly more scalability than the Windows Server 2008 R2 release. Windows Server 2012 R2 Hyper-V also meets or exceeds VMware vSphere 5.5 in all these scalability categories.

Scalability Capability           Windows Server 2012 R2   Windows Server 2008 R2   vSphere 5.5
Maximum logical processors       320                      256                      320
Maximum physical RAM per host    4TB                      1TB                      4TB
Maximum active VMs               1024                     384                      512
Maximum virtual CPUs per VM      64                       32                       64
Maximum virtual RAM per VM       1TB                      64GB                     1TB
Nodes per cluster                64                       16                       32

The underlying architectural changes in Windows Server 2012 R2 take this scalability one step further by eliminating some virtualization bottlenecks that can prevent VMs from matching the performance that physical systems provide. These changes include NUMA support for VMs, DVMQ, and virtual RSS.

Important Enhancements

NUMA support in Windows Server 2012 R2 Hyper-V eliminates an important performance advantage that physical systems previously had over VMs. NUMA is a multiprocessor memory architecture that groups processors and their local memory into nodes; access to memory that is local to a processor's node is faster than access to memory attached to a remote node. Some applications, such as SQL Server, can recognize and take advantage of NUMA for improved performance. With Windows Server 2012 R2, the default virtual NUMA topology that Hyper-V VMs use matches the host's NUMA topology, so NUMA-aware applications running in VMs can realize the same NUMA performance advantages they would on physical hardware.
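As a rough sketch of how this looks in practice, you can inspect the host's NUMA topology and tune a VM's virtual NUMA settings with the standard Hyper-V PowerShell cmdlets (the VM name "SQL01" and the node sizes here are hypothetical):

```powershell
# Inspect the host's NUMA topology: node IDs, memory, and logical processors per node
Get-VMHostNumaNode

# By default a VM's virtual NUMA topology mirrors the host's; these settings
# cap how large a virtual NUMA node can grow for the VM "SQL01"
Set-VMProcessor -VMName "SQL01" `
    -MaximumCountPerNumaNode 8 `
    -MaximumCountPerNumaSocket 1
```

Leaving the defaults in place is usually the right choice; explicit caps matter mainly when you plan to migrate the VM between hosts with different NUMA topologies.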

DVMQ is another important improvement in the Hyper-V hypervisor. DVMQ gives VMs in a server-consolidation environment significantly improved network performance. Virtual Machine Queue (VMQ) is a hardware feature that enables the host's physical network adapter to create a dedicated receive queue for each virtual network adapter. Without VMQ, the Hyper-V Virtual Switch is responsible for routing all VM traffic, and that processing must be handled by a single host processor (CPU0), which can become a bottleneck. VMQ support in Windows Server 2012 offloads queue assignment to the physical NIC, reducing the load on the host CPU. In Windows Server 2012 R2, DVMQ goes further by dynamically distributing incoming network-traffic processing across multiple host processors, based on their current load. The result is a better match of network load to processor use and increased network performance.
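A minimal sketch of checking and configuring VMQ on a host, using the built-in NetAdapter cmdlets (the adapter name "Ethernet 2" and the processor numbers are hypothetical and depend on your hardware):

```powershell
# List physical adapters and whether VMQ is supported and enabled on each
Get-NetAdapterVmq

# Enable VMQ on the adapter bound to the Hyper-V virtual switch
Enable-NetAdapterVmq -Name "Ethernet 2"

# Optionally constrain which host processors service this adapter's VMQ queues,
# reserving the first cores for other host work
Set-NetAdapterVmq -Name "Ethernet 2" -BaseProcessorNumber 2 -MaxProcessors 8
```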

Windows Server 2012 R2 support for virtual RSS is another enhancement that enables VMs to deliver the same kind of network performance as physical systems. RSS enables network adapters to distribute the kernel-mode network-processing load across multiple processor cores on multicore computers. Prior to Windows Server 2012 R2, RSS was available only for physical processors. Windows Server 2012 R2 virtual RSS extends RSS to virtual processors, scaling a VM's receive-side traffic across multiple virtual processors and eliminating the bottleneck of any single virtual processor.
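Virtual RSS is enabled from inside the guest VM (it also requires VMQ on the host and multiple virtual processors assigned to the VM). A sketch, assuming the guest's virtual adapter is named "Ethernet":

```powershell
# Run inside the guest VM: check the current RSS state of the virtual adapter
Get-NetAdapterRss -Name "Ethernet"

# Enable RSS so receive-side traffic is spread across the VM's virtual processors
Enable-NetAdapterRss -Name "Ethernet"
```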

Other important Windows Server 2012 R2 enhancements that help support mission-critical applications include the improved mobility and availability that live migration and storage live migration provide. These features enable VMs, and optionally all their assets, to be moved between Hyper-V hosts with no end-user downtime or interruption of services. IT can therefore reduce planned downtime by moving all the VMs off a Hyper-V host before performing any planned hardware or software maintenance on it.

Unlike with previous versions of Hyper-V, you don't need a SAN or shared storage to perform a live migration. Windows Server 2012 R2 Hyper-V supports SMB live migration, in which the VM assets are stored on an SMB file share, as well as shared-nothing live migration, in which there is no shared storage at all between the Hyper-V hosts. For protection from unplanned downtime, Windows Server 2012 R2 provides failover clustering at both the Hyper-V host and VM guest levels. Windows Server 2012 R2 clusters can consist of up to 64 nodes and protect against Hyper-V host failure by automatically restarting the VMs on another host. At the host level, failover clusters can span multiple geographic sites; for guest VMs, failover clusters can span multiple Hyper-V hosts.
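As a sketch of a shared-nothing live migration using the Hyper-V cmdlets (the VM name, host name, and destination path here are all hypothetical):

```powershell
# On each host: allow live migrations and choose an authentication mode
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Shared-nothing live migration: move the running VM and its storage
# to another Hyper-V host with no shared storage between them
Move-VM -Name "SQL01" -DestinationHost "HV02" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\SQL01"
```

Kerberos authentication requires constrained delegation to be configured for the hosts; the default CredSSP mode works without that but requires you to initiate the move from the source host.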

Dynamic memory management is core to Windows Server 2012 R2 Hyper-V, and with extensions from System Center (including Dynamic Optimization and Power Optimization), these memory-management features help create a dynamic, automated IT infrastructure. The dynamic memory feature allows VM memory to be automatically increased and decreased as the VM's workload requires, so resource-intensive workloads such as SQL Server can gain resources as usage increases and scale back when the workload decreases. Dynamic Optimization takes this automation a step further by monitoring virtualization hosts for CPU, memory, disk-space consumption, disk I/O, and network I/O levels; if performance falls outside predefined boundaries, it can automatically initiate live migrations to rebalance workloads between Hyper-V hosts. Likewise, Power Optimization works within IT-defined policies, using live migration to move workloads off Hyper-V hosts with low utilization and then using out-of-band (OOB) management to power those hosts down. Power Optimization can power the hosts back on later, when demand increases.
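Configuring dynamic memory on a VM is a one-line operation with the Hyper-V cmdlets. A sketch, assuming a hypothetical SQL Server VM named "SQL01" and illustrative sizes (the VM must be powered off to change this setting):

```powershell
# Enable Dynamic Memory: the VM starts with 16GB and Hyper-V then floats its
# allocation between the minimum and maximum based on guest memory demand
Set-VMMemory -VMName "SQL01" `
    -DynamicMemoryEnabled $true `
    -MinimumBytes 8GB `
    -StartupBytes 16GB `
    -MaximumBytes 64GB
```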

Needed Performance

With the release of Windows Server 2012, the company I mentioned was able to build a new Windows Server 2012 cluster to run Hyper-V. We moved the SQL Server database workload onto a VM that was part of a Hyper-V guest cluster. After the migration, the company got the performance it needed—plus the availability benefits provided by Windows Failover Clustering and live migration.

For more information about Hyper-V and Windows Server 2012 R2 virtualization capabilities, visit http://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx#fbid=-2s7iNZnD88.