As deployments of Storage Area Networks (SANs) have increased, storage professionals have quickly recognized that the real costs accrue not from buying and implementing SANs but from managing them. In fact, according to a recent Gartner study, managing a SAN costs four to seven times as much as buying the hardware.

Consequently, administrators are constantly looking for strategies to control costs and make the best use of their resources. Storage virtualization is one such strategy: by developing a logical rather than a physical view of their infrastructures, administrators can use capacity more effectively. Although storage virtualization is relatively immature (major storage vendors remain tentative about what kind of metadata their systems will share and which services will remain proprietary), companies have begun looking beyond storage virtualization for other opportunities to increase management control.

Fabric virtualization applies the concepts of storage virtualization to the fabric, or network, itself. That is, fabric virtualization abstracts the physical attributes of the fabric to a logical layer from which administrators manage the storage network, letting them assess and control all the data-fabric attributes: quality of service, cost, and latency. McDATA, a global leader in open storage-networking solutions, heavily promotes fabric virtualization and uses the telephone network to describe the concept. When companies place calls from Baltimore to Beijing, they care primarily about three attributes: Quality of Service (QoS, i.e., how clear the call is), latency (i.e., how long the call takes to connect and the lag in sound transmission), and cost. Companies don't care about the 10 technologies needed to support the call; they just want to complete the call quickly, efficiently, and inexpensively.

Fabric virtualization can apply those same metrics to the data fabric. If storage administrators can abstract the physical attributes of the underlying subnetworks, they can determine the attributes associated with each data path and develop policies that route different tasks over different paths. For example, they can route data from low-priority applications along different paths than data from high-priority applications. If a specific data path can't deliver the QoS an application requires, administrators receive an alert and can either make the necessary changes or notify their users that the expected service level can't be met.
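The policy idea above can be sketched in a few lines of code. This is a purely illustrative model, not any vendor's API: the `FabricPath` attributes, path names, and thresholds are invented to show how a policy engine might pick the cheapest path that satisfies an application's QoS and latency requirements, and fall back to an alert when no path qualifies.

```python
# Hypothetical sketch of policy-based path selection in a virtualized
# fabric. All names and numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FabricPath:
    name: str
    qos: int           # QoS class; higher means a stronger guarantee
    latency_ms: float  # end-to-end latency along this path
    cost: float        # relative cost of using this path

def select_path(paths, min_qos, max_latency_ms):
    """Return the cheapest path meeting the QoS/latency policy,
    or None so the caller can raise an alert to administrators."""
    candidates = [p for p in paths
                  if p.qos >= min_qos and p.latency_ms <= max_latency_ms]
    return min(candidates, key=lambda p: p.cost) if candidates else None

paths = [
    FabricPath("fast-fc", qos=3, latency_ms=0.5, cost=10.0),
    FabricPath("bulk-ip", qos=1, latency_ms=5.0, cost=1.0),
]

# High-priority application: needs QoS class 3 and sub-millisecond latency.
print(select_path(paths, min_qos=3, max_latency_ms=1.0).name)   # fast-fc
# Low-priority application: any path will do, so the cheap one wins.
print(select_path(paths, min_qos=1, max_latency_ms=10.0).name)  # bulk-ip
```

A `None` result is the alert case from the text: no available path can deliver the required service level, so the administrator either reconfigures the fabric or informs the users.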

Although several different approaches to storage virtualization exist, lodging the virtualization intelligence in the network is perhaps the most promising. Network-based virtualization comes in two forms. In one, known as "out-of-band" or asymmetric virtualization, a separate data path delivers the metadata that describes the virtual-to-physical translation. In the alternative, known as "in-band" or symmetric virtualization, the metadata uses the same path as the data itself. According to a recent study by ITCentrix (which specializes in developing IT benchmarks), asymmetric virtualization—when compared with symmetric virtualization—can reduce the total cost of ownership (TCO) for a storage network managing 1TB of data on five hosts by 18 percent over 1 year and by 6 percent over 5 years. The same study showed that asymmetric virtualization can raise the capacity usage of such a network by as much as 30 percent. When compared with a centralized storage infrastructure, asymmetric virtualization can reduce the TCO of such a storage network by 75 percent over 1 year and 16 percent over 5 years.
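The translation that both approaches perform can be sketched minimally. The table below is the virtualization metadata: in out-of-band (asymmetric) virtualization it travels on a separate control path to agents on the hosts, while in in-band (symmetric) virtualization an appliance in the data path consults it for every I/O. The device names and extent size are invented for illustration.

```python
# Minimal sketch of the virtual-to-physical block translation at the
# heart of network-based virtualization. Layout details are assumptions.
EXTENT_BLOCKS = 1024  # blocks per extent in this toy layout

# Metadata map: virtual extent index -> (physical device, starting block).
extent_map = {
    0: ("array-A", 0),
    1: ("array-B", 0),
    2: ("array-A", EXTENT_BLOCKS),
}

def translate(virtual_block):
    """Translate a virtual block address to (device, physical block)."""
    extent, offset = divmod(virtual_block, EXTENT_BLOCKS)
    device, base = extent_map[extent]
    return device, base + offset

print(translate(0))     # ('array-A', 0)
print(translate(1500))  # ('array-B', 476)
print(translate(2100))  # ('array-A', 1076)
```

The cost difference the ITCentrix numbers point to follows from where this lookup runs: out-of-band keeps the appliance out of the data path, so the metadata service can be smaller and cheaper than an in-band device that must handle the full I/O stream.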

Fabric virtualization goes the next step and virtualizes the network itself. Most observers agree that even the idea of fabric virtualization is still in the visionary stage. In the long run, however, automated path management and provisioning could be as important a tool in storage-network management as storage virtualization.