The concept of storage virtualization has superseded the traditional notion of RAID volumes. In the past, when configuring storage volumes for a server, you'd allocate x spindles for a database volume and y spindles for transaction log volumes. Messaging system designers therefore needed to know the actual number of disk spindles in use: in general, the more spindles over which you spread I/O load, the better the I/O subsystem performance.

With the advent of large disks, a pleasant side effect of many spindles was huge capacity. However, that capacity was often wasted: when you allocated six 36GB disks in a RAID 5 set for a 40GB database volume, most of the resulting 180GB of usable capacity simply wasn't necessary. Furthermore, despite the constantly improving mean time between failures (MTBF) for disks, dedicating many spare disks is common, even though MTBF statistics suggest that a smaller number would suffice, particularly in RAID 0+1 configurations.
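The waste in the six-disk example above follows from simple RAID 5 arithmetic: one disk's worth of capacity goes to parity, and everything beyond the volume you actually need is stranded. A quick sketch (the disk and volume sizes come from the text; the helper function name is mine):

```python
def raid5_usable_gb(disk_count: int, disk_gb: int) -> int:
    """RAID 5 stores one disk's worth of parity across the set,
    so usable capacity is (n - 1) * disk size."""
    return (disk_count - 1) * disk_gb

usable = raid5_usable_gb(6, 36)   # six 36GB disks -> 180GB usable
wasted = usable - 40              # but only a 40GB database volume is needed
print(usable, wasted)             # 180 140
```

So 140GB of the 180GB usable capacity sits idle, purchased only to get six spindles' worth of I/O performance.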

Although I don't describe storage virtualization in great detail, I want to provide a high-level view. Storage virtualization eliminates several problems: you no longer need to worry about the exact disk configurations required, the "burden" of excess capacity, or the inefficiency of spare disks. Essentially, you provide many disks (typically hundreds) in a storage array, and you specify the level of redundancy that you require (e.g., "I'll tolerate the loss of 15 disks out of 100"). The controller then configures the disks into one huge RAID set, manages the set to achieve the desired redundancy, and works out which data goes where. You simply define the volumes that you want to host from the SAN and present to the servers.

From a storage consumer's point of view, you get the best of many worlds: logical volumes striped across hundreds of disks (and thus high I/O performance), efficient use of raw capacity versus usable space (no wasted capacity), and a sensible balance of spare disks against disks' MTBF.
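The division of labor described above can be sketched as a toy model. This is purely illustrative and assumes a simplified policy in which the controller reserves one disk's worth of capacity per tolerated failure; the class and method names (`StoragePool`, `provision`) are my own, not any real SAN controller's API:

```python
class StoragePool:
    """Toy model of a virtualized storage array: the administrator states
    raw disks and a redundancy requirement; the controller owns layout."""

    def __init__(self, disk_count: int, disk_gb: int, failures_tolerated: int):
        # Simplifying assumption: one disk's worth of capacity is reserved
        # per tolerated failure, however the controller actually lays
        # out redundancy internally.
        self.usable_gb = (disk_count - failures_tolerated) * disk_gb
        self.allocated_gb = 0

    def provision(self, volume_gb: int) -> bool:
        """Carve a logical volume from the pool; which physical disks
        back it is the controller's decision, not the administrator's."""
        if self.allocated_gb + volume_gb > self.usable_gb:
            return False
        self.allocated_gb += volume_gb
        return True

# "I'll tolerate the loss of 15 disks out of 100" (100 x 36GB disks here).
pool = StoragePool(disk_count=100, disk_gb=36, failures_tolerated=15)
print(pool.usable_gb)        # (100 - 15) * 36 = 3060GB usable
print(pool.provision(40))    # the 40GB database volume fits: True
```

Note what the administrator no longer specifies: which spindles back which volume. Every logical volume is implicitly striped across the whole pool, which is where the performance and capacity-efficiency benefits come from.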