A. In Server 2008, if no preferences have been configured, the placement of VMs among the nodes of a cluster is essentially arbitrary. A VM is placed on a node and attempts to start. If the node lacks sufficient memory or other resources, the start fails and the VM moves to another node; no check is made before the move to confirm that the target node actually has the resources to start the VM. If every node has been tried and none has enough resources, the VM goes into a failed state for an hour and then tries again. We do have some options for controlling which nodes are used. First, as with all resources in a cluster, you can set preferred owners for a resource group, which places the preferred nodes at the start of the node list and so controls the order in which resources fail over between nodes in the cluster. The default node list order can be seen under HKLM\Cluster\Nodes: select each numbered key and inspect its NodeName value, as the figure shows.
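As a sketch of those two steps, you could set preferred owners with cluster.exe and dump the default node list from the registry. The group and node names here ("VM1 Group", NodeA, NodeC) are placeholders for your own environment:

```shell
:: Set NodeA and NodeC as the preferred owners of the group hosting the VM
:: ("VM1 Group", NodeA, and NodeC are example names, not real objects)
cluster group "VM1 Group" /setowners:NodeA,NodeC

:: Show the default node list order recorded in the cluster registry hive
reg query HKLM\Cluster\Nodes /s /v NodeName
```

Run these in an elevated command prompt on a cluster node; the registry query simply reports what the figure shows in Registry Editor.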

Note that the preferred owners list just places the preferred nodes at the start of the node list; it doesn't mean the resource can't run on a non-preferred node. For example, imagine you have four nodes: A, B, C, and D. Nodes A and C are made preferred, so the node list for the resource group would be A, C, B, D. If the resource is currently running on C and it fails, it moves to the next node in the list, which is B, not back to A. This behavior is explained in detail at http://support.microsoft.com/kb/299631. The next configuration item is Possible Owners, which lets you specify which nodes in the cluster can host a resource. The resource group containing the resource will try all the Possible Owners first and move to a non-possible owner only as a last resort; even then, the resource itself will not come online on that node. Possible Owners are set on a resource, such as a disk or a network name, not on a resource group. By default, all nodes are Possible Owners, as the figure shows.
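Possible Owners can also be inspected and set from the command line with cluster.exe. The resource and node names below are examples only, and the exact /setowners argument format can vary by OS version, so check cluster res /? on your build:

```shell
:: List the nodes currently allowed to host a resource
cluster resource "Virtual Machine VM1" /listowners

:: Restrict the resource to two nodes
:: ("Virtual Machine VM1", NodeA, and NodeC are example names)
cluster resource "Virtual Machine VM1" /setowners:NodeA,NodeC
```

Note that this targets a resource, matching the behavior described above: Possible Owners apply at the resource level, not the group level.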

There is another factor: some resource groups should never run on the same node as certain other resource groups. A property, AntiAffinityClassNames, can be defined on a resource group. During a failover, a node hosting no resource group that shares an AntiAffinityClassNames value with the group being moved is chosen ahead of all other nodes, even those defined as preferred. Essentially, this lets us keep resource groups separated on different nodes. Let's say you virtualize two domain controllers (DCs). You wouldn't want them running on the same node, so you could set AntiAffinityClassNames on each resource group hosting a DC VM to "DCVM", which would ensure the two VMs never run on the same node unless there is no other option. To set AntiAffinityClassNames, use the command below:

cluster group "<group name>" /prop AntiAffinityClassNames="<value1>","<value2>"
This is shown in the figure below. Note that AntiAffinityClassNames is a multi-string value stored at HKLM\Cluster\Groups\<GUID of group>\AntiAffinityClassNames, so you can set multiple values on each resource group to define multiple anti-affinity relationships.
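For the two-DC scenario above, the command would be run once per group. The group names "DC1 Group" and "DC2 Group" are examples; the "DCVM" string itself is arbitrary, as only matching values matter:

```shell
:: Tag both DC groups with the same anti-affinity string so the cluster
:: keeps them on different nodes whenever possible
cluster group "DC1 Group" /prop AntiAffinityClassNames="DCVM"
cluster group "DC2 Group" /prop AntiAffinityClassNames="DCVM"

:: Confirm the property is set on a group
cluster group "DC1 Group" /prop
```

Because the value is a multi-string, a group that must stay apart from several different classes of groups can simply list several strings, one per anti-affinity relationship.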