I generally agree with the common sentiment that you can never have too much RAM. Most people in the IT world know about Moore's Law, and many are becoming aware of Koomey's Law, which states that the energy efficiency of computing has roughly doubled every 18 months. The fact that I could put 8GB of RAM in my laptop for a mere $70 is a product of both of these laws in action.

Of course, certain applications can require more RAM. Chief among them: virtualization, one of the biggest drivers of the need for hardware capacity. Virtualizing Microsoft Exchange servers offers the possibility of lowering cost by getting better use out of hardware, simply by keeping it busier. Putting multiple virtual machines (VMs) onto a single physical server is supposed to increase efficiency, and it does . . . provided you don't run out of critical resources, such as RAM.

Of course, like virtualization itself, RAM shortage isn't a new problem. Computer scientists working on the Atlas project at the University of Manchester came up with the answer in the early 1960s in the form of virtual memory. You could allocate a large virtual address space to an application, and the OS would swap data in and out of physical RAM as needed to maintain the illusion that the application actually had that much physical RAM. Windows has incorporated virtual memory for years, and every Windows administrator is familiar with how you create and manage page files (and the occasional error Windows throws when you run low on virtual memory and it enlarges the page file for you).

The advent of virtualization means that there's a new wrinkle in virtual memory management. In ordinary operation, each VM on a host is allocated a set amount of the host's physical RAM. The VMs themselves can use virtual memory in their OSs, but you have to choose the amount of physical RAM you want to allocate to each VM. The VMs get that amount to work with—no more and no less.

Although that approach is reasonable, VMware (followed later by Microsoft) decided to add a complication in the form of what's known as memory overcommitment; Microsoft calls it dynamic memory. No matter what you call it, this feature lets you give your VMs more RAM than you actually have in your server. For example, let's say you have a VM host with 32GB of RAM running four Exchange 2010 Mailbox servers. In normal use, you'd allocate a few gigabytes to Windows itself, then divide the remaining RAM among your Mailbox servers, giving each one, say, 7GB of RAM. With overcommitment, you could give each Mailbox server 8GB, or more, of RAM, with the hypervisor making up the difference.
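The arithmetic behind that scenario is simple enough to sketch. Here's a minimal Python illustration using the hypothetical figures above (these are example numbers, not a sizing formula):

```python
# Hypothetical host from the example above: 32GB of physical RAM,
# four Exchange 2010 Mailbox server VMs, ~4GB reserved for the host OS.
host_ram_gb = 32
host_os_reserve_gb = 4
vm_count = 4

# Without overcommitment: divide the remaining physical RAM evenly.
per_vm_gb = (host_ram_gb - host_os_reserve_gb) // vm_count
print(per_vm_gb)  # 7 -- every gigabyte is backed by physical RAM

# With overcommitment: promise each VM more than the host can back.
overcommitted_per_vm_gb = 8
total_promised_gb = host_os_reserve_gb + vm_count * overcommitted_per_vm_gb
shortfall_gb = total_promised_gb - host_ram_gb
print(total_promised_gb, shortfall_gb)  # 36 4 -- 4GB the hypervisor must invent
```

The 4GB shortfall is what the hypervisor papers over with its own tricks; the VMs never see it directly.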

VMware and Microsoft use different techniques to accomplish this magic, but that doesn't change the fact that there's no such thing as a free lunch. This capability might seem like a great way to get extra RAM for free, until you remember the way that Exchange Mailbox servers work with RAM: They consume as much as possible for buffering and caching, releasing RAM only if the page fault rate on the server indicates contention between Exchange and other applications.

This approach is fine, but remember that by definition Exchange running in a VM has no idea what's happening on its VM host server. If you allocate 12GB to each of our four hypothetical Mailbox servers, each of them will do its best to use 12GB (less overhead for Windows) to cache mailbox data—but they won't have nearly that amount of RAM available, and so your VM host is headed for swap city.
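Continuing the same back-of-the-envelope sketch in Python, the 12GB-per-VM case makes the deficit obvious (again, illustrative numbers only):

```python
# Same hypothetical 32GB host, four Mailbox VMs now allocated 12GB each.
host_ram_gb = 32
host_os_reserve_gb = 4
vm_count = 4
per_vm_allocation_gb = 12

# Each Exchange Mailbox VM will try to fill its allocation with cache,
# so aggregate demand is simply the sum of the allocations.
demand_gb = vm_count * per_vm_allocation_gb
available_gb = host_ram_gb - host_os_reserve_gb
deficit_gb = demand_gb - available_gb
print(demand_gb, available_gb, deficit_gb)  # 48 28 20
```

Roughly 20GB of "RAM" that the guests are actively trying to use simply doesn't exist, and the only place for it to go is the host's page file.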

One of the big advantages we gained by moving to 64-bit Windows for Exchange 2007 and later was that the potential amount of RAM available for caching was greatly expanded. However, Microsoft's design doesn't, and can't, take into account the possibility that some of the available RAM doesn't actually exist. For that reason, Microsoft recommends against using Hyper-V dynamic memory or VMware overcommit on Mailbox servers. For other server roles, these technologies still aren't recommended, but the potential performance impact is considerably less because only the Mailbox role is typically memory-constrained.

In practical terms, this restriction is little more than an inconvenience. Sizing RAM for Mailbox servers has always been a simple matter of allocating as much RAM as you can afford, within the capacity limits spelled out by the sizing calculator. The availability of hypervisor-based magic pixie dust doesn't change the basic fact that Mailbox server performance benefits from having lots of physical RAM. You ignore that fact at your peril.