Storage virtualization, once branded as a mainframe application, is moving rapidly to the mainstream of storage management. In fact, Tony Prigmore, a senior analyst at the Enterprise Storage Group in Milford, Massachusetts, argues that virtualization will emerge as one of the five key elements of storage management in 2002.

As with many emerging technologies—most virtualization products have been on the market for less than a year—the definition of storage virtualization is a moving target. Simply put, virtualization lets IT managers pool all their managed storage devices with little regard to where stored information physically resides. In this way, companies can increase their storage capacity by adding inexpensive commodity disk and tape drives and can allocate those resources as needed.
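The pooling idea is easiest to see in miniature. The following sketch is purely illustrative (it is not any vendor's API): a pool aggregates heterogeneous devices, and a caller asks for a logical volume without knowing, or caring, which physical devices supply the capacity.

```python
class Device:
    """A physical disk or tape unit contributing capacity to the pool."""

    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class StoragePool:
    """Presents many devices as one undifferentiated capacity pool."""

    def __init__(self):
        self.devices = []

    def add(self, device):
        # Adding an inexpensive commodity drive grows the pool transparently.
        self.devices.append(device)

    def total_free_gb(self):
        return sum(d.free_gb() for d in self.devices)

    def allocate(self, size_gb):
        """Carve a logical volume out of whichever devices have room."""
        extents = []
        remaining = size_gb
        for d in self.devices:
            take = min(d.free_gb(), remaining)
            if take > 0:
                d.used_gb += take
                extents.append((d.name, take))
                remaining -= take
            if remaining == 0:
                return extents  # the caller sees one volume, not the devices
        raise RuntimeError("pool exhausted")


pool = StoragePool()
pool.add(Device("emc-array-1", 500))
pool.add(Device("commodity-jbod", 200))
vol = pool.allocate(600)  # spans both devices transparently
```

Here a 600-GB request is satisfied across two dissimilar devices; the requester never sees the boundary between them, which is exactly the abstraction virtualization provides.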

Storage vendors (e.g., EMC, Network Appliance) have offered virtualization at the hardware level for a long time, and software companies (e.g., VERITAS Software) have provided virtualization at the host level. A new group of companies (e.g., DataCore, FalconStor, StorageApps) is now offering virtualization at the network level.

Storage virtualization addresses three key challenges confronting storage-management professionals. First, the amount of data that needs to be stored continues to increase exponentially. As Nick Allen, vice president and research director at Gartner, pointed out at that company's Symposium/ITExpo 2001, storage demands increase 50 to 100 percent per year and even faster in companies with extensive Internet applications.

Second, because storage growth has been so dynamic, companies have resorted to what industry consultant Jon Toigo calls "panic buying." These companies have installed a wide array of storage systems from different vendors. Now they find themselves with a heterogeneous storage platform that doesn't follow any plan.

Third, storage is too tightly coupled to application servers. Too frequently, as companies install new, sophisticated applications, they generate and capture data in storage stovepipes that can't easily be accessed for other uses. Virtualization addresses this by decoupling storage from the application servers.

The bottom line, Allen said, is that many companies can afford to add more storage capacity but can't effectively manage their storage infrastructure. Storage virtualization addresses those management issues.

But Dan Tanner, a senior analyst at the Aberdeen Group, argues that storage virtualization solutions can force users to forgo some of the features and benefits they associate with the underlying disks and subsystems. As Tanner sees it, storage devices vary widely in capacity, transfer rates, access performance, and other Quality of Service (QoS) metrics, and those characteristics are useful for deciding how to distribute data across the storage infrastructure.

Unfortunately, most virtualization solutions don't store this information. Why? According to Tanner, the problem is that accepted QoS normalization standards, measurement metrics, and calibration benchmarks simply don't exist. Consequently, users who commit themselves to using a specific virtualization package can use only the services available in that package, even if the underlying platform offers additional features. In short, storage-virtualization solutions treat all storage as if it were the same, even though it isn't. Although users have the option of using the services from a specific vendor's product when they conduct operations solely on those products (the virtualization platform looks like another host in that scenario), that approach raises additional management and support issues.
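To make Tanner's point concrete, here is a hypothetical sketch of the kind of per-device QoS metadata he argues a virtualization layer should preserve but today typically discards. The field names and the routing rule are illustrative, not any existing standard:

```python
from dataclasses import dataclass


@dataclass
class DeviceQoS:
    """Per-device QoS characteristics (illustrative fields, not a standard)."""
    name: str
    capacity_gb: int
    transfer_mb_s: int   # sustained transfer rate
    access_ms: float     # average access latency


def pick_device(devices, workload):
    """Route an allocation by QoS instead of treating all storage alike."""
    if workload == "latency":  # e.g., a transactional database
        return min(devices, key=lambda d: d.access_ms)
    return max(devices, key=lambda d: d.capacity_gb)  # bulk/archive data


devices = [
    DeviceQoS("fc-array", 500, 180, 5.0),
    DeviceQoS("ide-jbod", 1000, 40, 12.0),
]

fast = pick_device(devices, "latency")  # latency-sensitive data on the array
bulk = pick_device(devices, "bulk")     # archive data on the big, slow JBOD
```

A QoS-blind pool would place both workloads wherever free space happened to be; with the metadata retained, the allocator can match data to the device class it suits, which is the capability Tanner says current products give up.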

Tanner's solution is for the industry to develop a storage-virtualization standard that would specify the key QoS information from all the suppliers active in this arena. That way, administrators would have the information they need to maximize their systems' performance while reducing their administrative costs. Administrators also would no longer need to trade off performance features of the underlying infrastructure to realize the advantages of virtualization.