Windows Server 2012 has so many new features that it's tough to keep track of them all. However, some of the most important new IT infrastructure building blocks are found in the improvements for failover clustering. Failover clustering originated as a technology that was designed to protect mission-critical applications such as Microsoft SQL Server and Microsoft Exchange, but since that time failover clustering has evolved into a high availability platform for a number of different Windows services and applications. Failover clustering is part of the foundation for Dynamic Datacenter and technologies such as live migration. With Server 2012 and the improvements in the new Server Message Block (SMB) 3.0 protocol, failover clustering has been further expanded to enable continuously available file shares. For an overview of all the features in Server 2012 failover clustering, you can check out "New Features of Windows Server 2012 Failover Clustering."

I'll show you how to build a two-node Server 2012 failover cluster. First, I'll cover some of the prerequisites and provide you with an overview of how the hardware environment, network, and storage are set up. Then, I'll dive into the details of how to add the Failover Clustering feature to Server 2012 and use Failover Cluster Manager to configure a two-node cluster.

Understanding the Failover Clustering Prerequisites

To build a two-node Server 2012 failover cluster, you need two systems running either the Datacenter or Standard edition of Server 2012. They can be physical systems or virtual machines (VMs). You can create clusters with VM nodes using either Microsoft Hyper-V or VMware vSphere. I'll be creating the cluster using two physical servers, but the cluster configuration steps are the same regardless of whether the cluster nodes are physical or virtual. However, a key point is that the nodes should be similarly configured so that the backup node can handle the workloads it might need to support in the event of a failover or live migration. You can see an overview of the components I used for my Server 2012 failover cluster in Figure 1.

Figure 1: Reviewing the Cluster Components
A Server 2012 failover cluster requires shared storage, which can be an iSCSI, Serial Attached SCSI (SAS), or Fibre Channel SAN. In this example, I'm using an iSCSI SAN. When using this type of storage, you need to be aware of the following:

  • Each server must be equipped with at least three NICs: one NIC dedicated to iSCSI storage connectivity, one NIC dedicated for cluster node communication, and one NIC for external network connections. If you're planning to use the cluster for live migration, you should consider having a fourth NIC dedicated to it. However, live migration can also occur over the external network connection—it'll just be slower. If you're using your servers for Hyper-V virtualization and server consolidation, you'll definitely want additional NICs to handle the VMs' network traffic.
  • Faster is always better with networking, so the iSCSI connection should be running at a minimum of 1Gbps.
  • The iSCSI target must support the SCSI-3 specifications, which include the ability to create persistent reservations. Persistent reservations are required by failover clustering and live migration. The SCSI-3 standard is supported by almost all hardware storage vendors. If you're trying to implement a cluster in an inexpensive lab environment, you should make sure the iSCSI target software you're using supports SCSI-3 and persistent reservations. Older versions of Openfiler didn't support this standard, but the newer version of Openfiler with the Advanced iSCSI Target Plugin does support it. In addition, StarWind Software's StarWind iSCSI SAN Free Edition is fully compatible with Hyper-V and live migration. Certain versions of Windows Server can also act as an iSCSI target that's compatible with the SCSI-3 standards. Server 2012 includes an iSCSI target. Windows Storage Server 2008 R2 includes support for iSCSI target software. Plus, you can download Microsoft iSCSI Software Target 3.3, which runs on Windows Server 2008 R2.

You can find more details about how I configured the iSCSI storage for my failover cluster in the sidebar "An Example of How to Configure iSCSI Storage." For more information about the requirements for failover clustering, you can check out "Failover Clustering Hardware Requirements and Storage Options."

Adding the Failover Clustering Feature

The first step in creating a two-node Server 2012 failover cluster is to add the Failover Clustering feature using Server Manager. Server Manager automatically opens when you log on to Server 2012. To add the Failover Clustering feature, select Local Server and scroll down to the ROLES AND FEATURES section. From the TASKS drop-down list, select Add Roles and Features, as shown in Figure 2. This will start the Add Roles and Features wizard.

Figure 2: Starting the Add Roles and Features Wizard
The wizard opens with the Before you begin welcome page. Click Next to go to the Select installation type page, which basically asks whether you're installing a feature on the local computer or performing a Remote Desktop Services installation. For this example, select the Role-based or feature-based installation option and click Next.

On the Select destination server page, select the server on which you want to install the Failover Clustering feature. In my case, it was a local server named WS2012-N1. After selecting your local server, click Next to go to the Select server roles page. For this example, you won't be installing a server role, so click Next. Alternatively, you can click the Features link in the left menu.

On the Select features page, scroll through the Features list until you see Failover Clustering. When you click the box in front of Failover Clustering, the wizard displays a dialog box listing all the different components that will be installed as part of this feature. As you can see in Figure 3, the wizard will install the Failover Cluster Management Tools and the Failover Cluster Module for Windows PowerShell by default. Click the Add Features button to return to the Select features page. Click Next.

Figure 3: Adding the Failover Clustering Feature and Tools

The Confirm installation selections page will list the Failover Clustering feature along with the management tools and PowerShell module. This page gives you a chance to go back and make any changes if needed. Clicking the Install button will begin the actual feature installation. After the installation completes, the wizard will end and Failover Clustering will be displayed in the ROLES AND FEATURES section of Server Manager. This process must be completed on both nodes.
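If you prefer the command line, the same feature can be installed with PowerShell. The following is a minimal sketch, assuming you're running an elevated PowerShell session and that the second node is named WS2012-N2, as in this example:

```powershell
# Install the Failover Clustering feature plus its management tools on the local node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Install the same feature on the second node remotely
# (assumes WS2012-N2 is reachable and you have administrative rights on it)
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName WS2012-N2
```

Either way, the end result is the same: the feature, the Failover Cluster Management Tools, and the Failover Cluster Module for Windows PowerShell are installed on both nodes.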

Validating the Failover Cluster Configuration

After adding the Failover Clustering feature, the next step is to validate the configuration of the environment in which you'll create your cluster. To do this, you can use the Validate a Configuration wizard in Failover Cluster Manager. This wizard checks the hardware and software configuration of all the cluster nodes and reports on any issues that might prevent the cluster from being created.

To open Failover Cluster Manager, select the Failover Cluster Manager option on the Tools menu in Server Manager. In the Management pane, click the Validate Configuration link shown in Figure 4 to run the Validate a Configuration wizard.

Figure 4: Starting the Validate a Configuration Wizard

The wizard first displays a welcome page. Click Next to go to the Select Servers or a Cluster page. On this page, enter the names of the cluster nodes that you want to validate. I entered WS2012-N1 and WS2012-N2. Click Next to display the Testing Options page, where you can select the tests that you want to run. You have the option to select specific sets of tests or to run all the tests. For at least the first time, I recommend that you select the option to run all the tests. Click Next to go to the Confirmation page, which shows the tests that will be run. Click Next to start the cluster validation testing process. The tests will check the OS level, network configuration, and storage of all the cluster nodes. A summary of the results is displayed when the testing is finished.

If the validation tests succeed, you can create the cluster. Figure 5 shows the Summary screen for a successfully validated cluster. If errors are encountered during the validation tests, the validation report will display a yellow triangle for warning errors and a red X for severe errors. Warning errors should be reviewed, but they won't prevent the cluster from being created. Severe errors must be corrected before the cluster can be created.

Figure 5: Reviewing the Validation Report
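The same validation can be run from the Failover Cluster Module for Windows PowerShell. A minimal sketch, using the node names from this example:

```powershell
# Run the full set of cluster validation tests against both nodes
# (equivalent to choosing "Run all tests" in the Validate a Configuration wizard;
# the cmdlet writes an HTML validation report and returns its location)
Test-Cluster -Node WS2012-N1, WS2012-N2
```

As with the wizard, review the generated report for warnings and correct any severe errors before creating the cluster.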


Creating the Failover Cluster

At this point, you can create the cluster from either of the cluster nodes. I created the cluster on the first node (WS2012-N1).

To create a new cluster, select the Create Cluster link in either the Management pane or Actions pane, as Figure 6 shows.

Figure 6: Starting the Create Cluster Wizard

This will start the Create Cluster wizard, which begins with a welcome page. Click Next to go to the Select Servers page shown in Figure 7. On this page, enter the names of all the cluster nodes, then click Next.

Figure 7: Selecting the Servers for the Cluster

On the Access Point for Administering the Cluster page, you specify your cluster's name and IP address, both of which must be unique in the network. In Figure 8, you can see that I named my cluster WS2012-CL01 and gave it an IP address of 192.168.100.200. With Server 2012, you can have the IP address of the cluster assigned by DHCP, but I prefer to use a statically assigned IP address for my server systems.

Figure 8: Configuring the Cluster Access Point

After you enter the name and IP address, click Next to display the Confirmation page shown in Figure 9. This page lets you verify your cluster creation choices. If needed, you can go back and make changes.

Figure 9: Confirming the Cluster Creation Selections

Clicking Next on the Confirmation page creates the cluster on all of the selected cluster nodes. A progress page is displayed as the Create Cluster wizard goes through the steps of creating a new cluster. When it finishes, the wizard will display a Summary page that shows the configuration of the new cluster.
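The entire cluster creation step can also be collapsed into a single PowerShell command. A sketch using the names and static IP address from this example:

```powershell
# Create the cluster with both nodes and a statically assigned IP address
# (the name and address must be unique in the network, as noted above)
New-Cluster -Name WS2012-CL01 -Node WS2012-N1, WS2012-N2 -StaticAddress 192.168.100.200
```

If you'd rather have DHCP assign the cluster's IP address, you can omit the -StaticAddress parameter.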

Although the Create Cluster wizard will automatically select the storage for your quorum, it often doesn't choose the quorum drive that you want. To check which disk is being used by the quorum, open the Failover Cluster Manager and expand the cluster. Then expand the Storage node and click the Disks node. The disks available to the cluster will be displayed in the Disks pane. The disk that the wizard selected for the cluster quorum will be listed under Disk Witness in Quorum.

In my example, I used Cluster Disk 4 for the quorum. It was sized at 520MB, which is slightly larger than the quorum minimum of 512MB. If you want to use a different disk as the cluster quorum, you can change the quorum configuration by right-clicking the name of the cluster in Failover Cluster Manager, selecting More Actions, and choosing Configure Cluster Quorum Settings. This will display the Select Quorum Configuration wizard, which will let you change the cluster quorum.
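You can inspect and change the quorum configuration from PowerShell as well. A sketch, assuming the cluster and disk names used in this example:

```powershell
# Check which resource is currently acting as the quorum witness
Get-ClusterQuorum -Cluster WS2012-CL01

# Switch the witness to a different cluster disk
# ("Cluster Disk 4" is the disk name from this example; substitute your own)
Set-ClusterQuorum -Cluster WS2012-CL01 -NodeAndDiskMajority "Cluster Disk 4"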

Configuring Cluster Shared Volumes and the VM Role

Both nodes in my cluster have the Hyper-V role installed because I want to use the cluster for high-availability VMs supporting live migration. To help with live migration, the next step is to configure Cluster Shared Volumes (CSVs). Unlike Server 2008 R2 CSVs, Server 2012 CSVs are enabled by default. However, you still need to tell the cluster which storage should be used for the CSVs. To enable a CSV on an available disk, expand the Storage node and select the Disks node. Next, select the cluster disk that you want to use as a CSV and click the Add to Cluster Shared Volumes link in Failover Cluster Manager's Actions pane, as you see in Figure 10. That cluster disk's Assigned To field will then change from Available Storage to Cluster Shared Volume.

Figure 10: Adding a CSV

Behind the scenes, Failover Cluster Manager configures the cluster disk's storage for CSV, which includes adding a mount point in the system drive. In my example, I enabled CSVs on both Cluster Disk 1 and Cluster Disk 3, which added the following mount points:

  • C:\ClusterStorage\Volume1
  • C:\ClusterStorage\Volume2
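The same CSV configuration can be done from PowerShell. A sketch using the disk names from this example:

```powershell
# Add the example cluster disks to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 3"

# Verify the resulting mount points on the system drive
Get-ChildItem C:\ClusterStorage
```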

At this point, the two-node Server 2012 cluster has been built and CSVs have been enabled. Next, you can install clustered applications or add roles to the cluster. In my case, I'm building the cluster for virtualization support, so my next step is to add the Virtual Machine role to the cluster.

To add a new role, select the cluster name in Failover Cluster Manager's navigation pane and click the Configure Roles link in the Actions pane to launch the High Availability wizard. Click Next on the welcome page to go to the Select Role page. Scroll through the list of roles until you see the Virtual Machine role, as you see in Figure 11. Select that role and click Next.

Figure 11: Adding a Virtual Machine Role

On the Select Virtual Machine page, all the VMs on all the cluster nodes will be listed, as shown in Figure 12. Scroll through the list and select the VMs that you want to be highly available. Click Next. After confirming your selections, click Next to add the Virtual Machine roles to the cluster.

Figure 12: Selecting the VMs that You Want to Make Highly Available
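Making a VM highly available can also be done with a single cmdlet. A sketch, where "TestVM" is a hypothetical VM name used only for illustration:

```powershell
# Configure an existing Hyper-V VM as a highly available clustered role
# ("TestVM" is a placeholder; substitute the name of one of your own VMs)
Add-ClusterVirtualMachineRole -VirtualMachine "TestVM"
```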

See This Process in Action

In this article, you learned how to create and configure a basic two-node Server 2012 cluster. In addition, you learned how to add CSVs to the cluster and make a VM highly available. To see this process in action, check out the video "Windows Server 2012: Creating a Two-Node Cluster" that accompanies this article.