If a cluster becomes partitioned for any reason, such as a network failure, the quorum model is designed to ensure that only one partition can have quorum. Which partition has quorum is determined by the number of voting elements present in the partition, or by which partition has access to a specific resource, such as a disk witness. The partition without quorum shuts down all of its services.
Voting-element status can be assigned to:
- nodes in the cluster
- a shared disk witness (formerly known as the quorum drive), of which each cluster may have only one
- a file share witness, of which each cluster may likewise have only one
You choose which resources can vote. Once votes have been assigned, the number of nodes required for quorum is based on the following formula:
Total Voters/2 + 1 = majority (rounding down)
For example, if I have a five-node cluster, the formula would read
5/2 + 1 = 3.5
The formula rounds down when there is a fractional result, so in this example, I would need three nodes present to make quorum.
If I had a four-node cluster with a disk witness, I would still have five votes (four nodes plus the disk witness's vote), and would still need three votes to make quorum (which could be three nodes, or two nodes and the disk witness).
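The majority calculation in both examples can be sketched in a few lines of Python; the function name and vote counts here are illustrative, not part of any clustering API:

```python
def quorum_majority(total_voters: int) -> int:
    """Votes needed for quorum: total voters / 2 + 1, rounded down.

    Integer (floor) division gives the same result as computing the
    fractional value and then rounding down.
    """
    return total_voters // 2 + 1

# Five-node cluster: five voters, so 5/2 + 1 = 3.5, rounded down to 3.
print(quorum_majority(5))        # 3

# Four nodes plus a disk witness: still five voters, still 3 needed.
print(quorum_majority(4 + 1))    # 3
```

Note that adding the disk witness to an even-numbered cluster is what keeps the voter count odd, so a clean majority always exists.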
This quorum model's advantage is that there isn't a single point of failure, which existed in previous models that relied on the quorum drive. As my first example showed, quorum can be made even if the disk witness is unavailable.