Microsoft's enterprise-class clustering solutions have evolved like a fine wine, getting better with age. Unfortunately, also like fine wine, the cost of implementing and testing Microsoft clustering solutions puts them out of reach for many. I've worked with several organizations that have Microsoft Exchange Server and Microsoft SQL Server clusters in production but don't have the funds to maintain test server clusters in their labs.

To teach clustering in classrooms, I've had to purchase a dozen SCSI host bus adapters (HBAs), cables, and external SCSI drives. Even then, I was limited in what I could show the students because each practice cluster contained only one shared drive. In my travels, I've spoken with many people who wish that clustering were more portable. Administrators and systems engineers would like to be able to practice server cluster configurations outside of production. Microsoft partners and resellers would like to be able to demonstrate on their laptops a clustering solution for clients. I have the answer.

VMware—A Cluster Geek's Best Friend
I discovered VMware a couple of years ago, and I immediately liked the product. Within a month, I had Novell NetWare, Linux, and Microsoft virtual machines (VMs) running on a virtual network on my laptop, and my classroom demonstrations reached a new level. In addition to letting VMs share a network on the same system, VMware lets you configure shareable resources. (This article assumes that you have a fundamental understanding of VMware concepts. If you don't, you can find documentation on the VMware Web site.)

The fact that VMware lets you share virtual hard disks between two virtual computers filled my imagination with countless virtual-clustering possibilities. I soon had a documented method to build virtual clusters on one laptop (or desktop) system. Here are the basic ingredients of my VMware clustering recipe:

  • one laptop or desktop PC with 512MB of RAM and 6GB of free disk space
  • one VMware Workstation 3.0 (or later) license
  • two Windows 2000 Advanced Server VMs on a virtual TCP/IP network, each configured to use 128MB of RAM
  • one or more shared nonpersistent virtual SCSI disks (for cluster storage)
  • one 2GB IDE local virtual disk per VM (for OS data)
  • one or more virtual network cards on each VM (for cluster communication)
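
To see the recipe in configuration terms, the core of each node's VMware configuration (.vmx) file looks roughly like the sketch below. The file names, display name, and guest OS identifier are hypothetical examples, and exact parameter names vary between VMware versions, so treat this as orientation rather than a file to copy verbatim:

```
# Node1.vmx -- hypothetical sketch; parameter names vary by VMware version
memsize = "128"                 # 128MB of RAM for this VM
displayName = "Node1"
guestOS = "win2000advserv"      # Win2K Advanced Server guest

# 2GB local IDE virtual disk for the OS
ide0:0.present = "TRUE"
ide0:0.fileName = "Node1.vmdk"

# virtual NIC for cluster communication
ethernet0.present = "TRUE"
```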

Building the Virtual Cluster
As with any good recipe, the ingredients alone aren't enough. When building a virtual cluster, the order of preparation is key. To begin, you need two instances of Win2K AS running as VMs on your system. For this article, I built a virtual cluster on a Dell Inspiron 8100 with a 1GHz processor and 512MB of RAM and with Windows XP Professional Edition as the host OS. I used VMware Workstation 3.2 to create the VMs. With this horsepower, I've successfully demonstrated the installation and configuration of both Exchange 2000 and SQL Server 2000 server clusters.

To build the two Win2K AS VMs, create a VM (VM1) and install and configure it as a standalone server running Win2K AS. When the installation has finished, power down VM1, then copy the contents of VM1's folder on the host system to a second folder, named VM2 (or any name you choose). Open VM2 in the VMware console. Open Configuration Editor for VM2, go to the Options tab, and give the VM a unique name. Then go to the Hardware tab and change the path for VM2's virtual disk file to point to the virtual disk file in the VM2 folder. After you configure VM2, boot it up and change its host name and IP address so that they're different from those of VM1 (I usually use names such as Node1 and Node2). This shortcut gives you a second functioning server in significantly less time than reinstalling Win2K AS from scratch on a second VM.
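
In .vmx terms, the changes you make through Configuration Editor for VM2 amount to something like the following (the folder and file names here are hypothetical; yours will reflect wherever you copied VM1's folder):

```
# VM2's configuration file after copying -- hypothetical values
displayName = "Node2"                       # unique name on the Options tab
ide0:0.present = "TRUE"
ide0:0.fileName = "C:\VMs\VM2\Node2.vmdk"   # repointed to the copied disk file
```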

Now that you have two VMs, take the following steps to prepare the two servers to be part of a cluster:

  1. Run dcpromo.exe to promote VM1 to be a domain controller (DC) and forest root. Use whatever domain name you want.
  2. After the Active Directory Installation Wizard finishes and VM1 reboots, join VM2 to the domain. At this point, both VMs should be powered up.
  3. On the DC (VM1), open the Microsoft Management Console (MMC) Active Directory Users and Computers snap-in and create a cluster domain user account. I typically call the account Cluster. Set the account so that the user can't change the password and the password never expires, then add the account to the Domain Admins user group.
  4. Shut down and power off both VMs.
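
If you prefer the command line for Step 3, you can create the account on the DC with commands such as these (the asterisk makes net user prompt you for a password; the "user can't change password" and "password never expires" flags still need to be set in the snap-in or through a script):

```
net user Cluster * /add /domain
net group "Domain Admins" Cluster /add /domain
```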

Sharing the Storage
Now that you have a domain and a cluster user account, you need to prepare the shared storage. To have the flexibility to later cluster a database application such as Exchange or SQL Server, I prepare three virtual SCSI disks. Follow these steps:

  1. In the VMware main window, right-click the VM1 object and select Settings.
  2. On the Hardware tab, click Add.
  3. Select Hard Disk and click Next.
  4. Leave the Create a New Virtual Disk option selected, and click Next.
  5. Set the disk size to 0.5GB (used for cluster quorum data), and click Next.
  6. Click Browse and create a new folder to store the shared virtual disks. Name the disk Shared1.vmdk and click Open.
  7. Verify that the Disk File field shows the proper filename and path, then click Advanced.
  8. Select SCSI 0:0 as the virtual device node, and click Finish.
  9. Repeat Steps 2 through 8 to create two more virtual SCSI disks, named Shared2 (1GB) and Shared3 (0.5GB), and have the disks use the SCSI 0:1 and SCSI 0:2 virtual device nodes, respectively.
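
After Step 9, VM1's configuration file should contain entries along these lines for the shared storage. The paths are hypothetical, and parameter names can vary by VMware version:

```
# shared SCSI storage in VM1's configuration file -- paths hypothetical
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "C:\VMs\Shared\Shared1.vmdk"   # 0.5GB quorum disk
scsi0:1.present = "TRUE"
scsi0:1.fileName = "C:\VMs\Shared\Shared2.vmdk"   # 1GB data disk
scsi0:2.present = "TRUE"
scsi0:2.fileName = "C:\VMs\Shared\Shared3.vmdk"   # 0.5GB log disk
```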

Figure 1 shows Configuration Editor's Hardware tab with the three SCSI disks listed. If you have limited disk space, you can configure just one disk—that's all clustering requires.

Earlier, I listed the shared disks as nonpersistent, but the steps above create them as persistent disks. For now, leave the disks persistent. That way, when each VM boots, the OS will see the disks as new disks and will write a signature to them—and both nodes must be aware of the disks before they can share them.

So far, you've configured VM1 to share the disks. Now you must configure VM2 to see the shared disks. To do so, follow these steps:

  1. Open a second instance of VMware.
  2. In the VMware main window, right-click the second VM and select Settings.
  3. On the Hardware tab, click Add.
  4. Select Hard Disk and click Next.
  5. Select Use an Existing Virtual Disk and click Next.
  6. Click Browse, locate and select the Shared1.vmdk file, and click Open.
  7. Verify that the path to the file is correct and click Next.
  8. Select Persistent as the Mode Type and click Finish.
  9. Repeat Steps 2 through 8 for the Shared2 and Shared3 files.

The disk configuration of VM2 should now match that of VM1, with the exception that each VM's local IDE disk is unique.

Writing Disk Signatures
Now that you've configured the disks for both VMs, the disks need signatures. To begin, perform the following steps on VM1:

  1. Boot the system and log on.
  2. Right-click the My Computer icon and select Manage.
  3. Click the Disk Management folder.
  4. When the Write Signature and Upgrade Disk Wizard appears, click Next.
  5. When prompted to select the disks to write a signature to, select the check box for each new virtual SCSI disk and click Next.
  6. Clear the check boxes for all three disks so that none of them are upgraded to dynamic disks, then click Next.
  7. Click Finish to close the wizard.
  8. Now that all three disks have signatures, create a partition on each disk and format it as NTFS. To create a partition, right-click the unused space on a disk and select Create Partition. I like to give the first volume the name Quorum and the letter Q, the second volume the name Data and the letter R, and the third volume the name Logs and the letter S. Figure 2 shows the three formatted disks in Disk Management.
  9. Close Computer Management.
  10. Shut down and power off VM1.

Now that the disks have signatures, you must verify that VM2 can see the disks. Power up and log on to VM2. Open Computer Management and verify that the disks are listed in Disk Management. The drive letters at this point won't match the drive letters you assigned in VM1, but they will match after you join VM2 to the cluster. Now that you've verified the presence of the disks, power down VM2.

Your final task before installing Microsoft Cluster service on the nodes is to convert the disks from persistent to nonpersistent on each VM so that the VMs can share the disks simultaneously. To change the disks, perform the following steps:

  1. In the VMware main window, right-click VM1 and select Settings.
  2. On Configuration Editor's Hardware tab, select the first virtual SCSI disk, then select the Nonpersistent: Discard changes after powering off option.
  3. Repeat Step 2 for the second and third virtual disks.
  4. Click OK to close the Configuration Editor.
  5. Repeat Steps 1 through 4 for VM2.
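
In each VM's configuration file, this change corresponds to mode entries such as the following. The exact mode keyword depends on your VMware version, so make the change through Configuration Editor rather than by hand if you're unsure:

```
# nonpersistent mode -- changes are discarded when the VM powers off
scsi0:0.mode = "nonpersistent"
scsi0:1.mode = "nonpersistent"
scsi0:2.mode = "nonpersistent"
```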

The disks don't lose their signatures when converted from persistent to nonpersistent, as you'll see after you boot up VM1 and install Cluster service. Note, however, that nonpersistent disks lose their data when the VM is powered off. I show you how to get around the data-loss problem shortly.

Installing Cluster Service
Now that the virtual disks are ready to go, all you need to do is perform a standard Cluster service installation. Let's begin with VM1:

  1. Power up VM1 and log on as Domain Administrator.
  2. Open the Control Panel Add/Remove Programs applet.
  3. Click Add/Remove Windows Components.
  4. Select the Cluster Service component and click Next.
  5. If prompted to do so, insert the Win2K AS CD-ROM and click OK.
  6. At the first Cluster Service Configuration Wizard panel, click Next.
  7. Click I Understand to acknowledge that Microsoft doesn't support cluster hardware that isn't on the Hardware Compatibility List (HCL), then click Next.
  8. Select the option that installs the first node in the cluster, and click Next.
  9. Enter a name for the cluster and click Next.
  10. Provide the cluster domain username and password (which you created earlier), and click Next.
  11. Make sure that the three disks are selected as Managed Disks, and click Next.
  12. Select the 500MB Q drive as the Quorum disk and click Next.
  13. Click Next to configure the cluster network.
  14. Enter a network name and select the Enable this network for cluster use check box, as Figure 3 shows. My VMs have only one virtual NIC each, so I select the All communications (mixed network) option and click Next. If you had two virtual NICs, you could configure separate public and private networks.
  15. When notified that only one adapter is configured, click OK.
  16. Provide an IP address and subnet mask that are on the same virtual network as the two VMs, then click Next.
  17. Click Finish to close the Cluster Service Configuration Wizard.
  18. When notified that Cluster service has started successfully, click OK.
  19. Close the Add/Remove Programs applet.

Leave VM1 running and power on VM2. Use the Domain Administrator account to log on to VM2, then perform the following steps:

  1. Open the Add/Remove Programs applet.
  2. Select Add/Remove Windows Components.
  3. Select the Cluster Service component and click Next.
  4. In the first Cluster Service Configuration Wizard panel, click Next.
  5. Click I Understand to acknowledge that Microsoft doesn't support cluster hardware that isn't on the HCL, then click Next.
  6. Select the option that installs the second node in the cluster, and click Next.
  7. Enter the name of the cluster that you specified during the cluster installation on the first node, then click Next.
  8. When notified that only one adapter is configured, click OK.
  9. Enter the password for the cluster user account, and click Next.
  10. Click Finish to join VM2 to the cluster.

You now have a working two-node cluster on your system. To verify that the clustering works, initiate a failover:

  1. On either VM, click Start, Programs, Administrative Tools, Cluster Administrator.
  2. Expand the cluster's Groups folder.
  3. Right-click Cluster Group and select Move Group.
  4. As Figure 4 shows, you should see the cluster resources residing on the second node.

After you initiate a failover for the Cluster Group and for each disk group, the drive letters for the shared disks on the second node will match those on the first node. If Chkdsk runs during the first failover, just let it run to completion.
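
You can also drive the failover test from the command line with the cluster.exe utility that Cluster service installs. For example, from either node (MyCluster is a placeholder for the cluster name you chose; Node2 is whatever you named your second node):

```
cluster /cluster:MyCluster group "Cluster Group" /moveto:Node2
cluster /cluster:MyCluster group
```

The second command lists the cluster's groups along with each group's current owner node, so you can confirm that the move succeeded.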

So What's the Catch?
I've encountered one small problem with clustered VMs and sharing nonpersistent virtual SCSI disks: The disks lose their data when you power off their VM. (The problem occurs only when you power off a system, so you can reboot without any problems.) To avoid this problem when you want to shut down your system, click the Suspend button in the VM's window, then close the window and shut down the host system. This technique causes the VM to pause rather than shut down, so the next time you turn on your system, you can resume where you left off.

I've successfully run a virtual cluster on my laptop for more than 6 months, uninstalling and reinstalling both Exchange and SQL Server countless times without incident. The $299 cost of VMware Workstation 3.2 makes it possible for anyone to practice clustering and take a cluster on the road without the cost and inconvenience of extra equipment.