If you ask several technology devotees about InfiniBand, you're likely to elicit blank looks because the name and the technology are new. InfiniBand is the server bus standard that will replace the aging PCI bus, the result of the so-called Bus Wars between two competing standards, Next Generation I/O (NGIO) and Future I/O. The two sides merged the technologies (and their organizations) and named the result InfiniBand. The second InfiniBand Developers Conference in Orlando, Florida, gave me a chance to learn more about this developing standard and better gauge InfiniBand's prospects for success.

InfiniBand is a fabric architecture that provides three types of channel-based transport: 1X, 4X, and 12X. A 1X link has four wires (two running in each direction); 4X and 12X links have, respectively, 16 and 48 wires and correspondingly higher throughput. (Sun's first offering, for example, is 4X.) InfiniBand is a very high-throughput bus that could replace Fibre Channel if the industry delivers the technology at a competitive price. Most people at the conference believe that InfiniBand will coexist with other buses for some time and have minimal impact on the storage industry for the first 2 to 3 years.
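The link widths scale linearly with lane count. A quick sketch makes the arithmetic concrete (the 2.5Gbps per-lane signaling rate and 8b/10b line encoding are assumptions drawn from the 1.0 specification, not figures stated in this article):

```python
# Sketch of InfiniBand link widths: 1X, 4X, and 12X.
# Assumed figures (not from the article): 2.5 Gbps per-lane signaling
# and 8b/10b encoding, leaving 2.0 Gbps of usable data per lane.

LANE_WIRES = 4                      # 2 differential pairs: one per direction
SIGNAL_GBPS = 2.5                   # per-lane signaling rate
DATA_GBPS = SIGNAL_GBPS * 8 / 10    # usable rate after 8b/10b encoding

def link_stats(width):
    """Return (wire count, usable Gbps) for a 1X, 4X, or 12X link."""
    return width * LANE_WIRES, width * DATA_GBPS

for w in (1, 4, 12):
    wires, gbps = link_stats(w)
    print(f"{w}X: {wires} wires, {gbps:.0f} Gbps usable")
```

So a 4X link's 16 wires carry roughly four times the data of a 1X link's 4 wires, which is the scaling the width names advertise.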

InfiniBand isn't often in the news these days; the technology is too new, and few products are available. Most vendors I spoke to will start shipping products in the fourth quarter of 2001. However, you can buy Host Channel Adapters (HCAs) and Target Channel Adapters (TCAs) as part of software development kits such as the one Vieo offers. Many Fibre Channel switch vendors are developing InfiniBand switches. (QLogic is one of the first out of the gate with an InfiniBand switch.) From what I understand, switch vendors expect to price their InfiniBand products at or near the per-port price of current high-speed Fibre Channel switches, which is disappointing; the pricing needs to be lower for InfiniBand to capture the marketplace.

InfiniBand technology has a logical management framework. Each link supports up to 15 virtual lanes for data traffic (VL0 through VL14), with a sixteenth lane, VL15, dedicated to management traffic. We'll see new InfiniBand consoles from companies such as Lane15 and Prisa Networks, and in about a year, we should see mainstream products such as SANPoint Control auto-discover and map InfiniBand fabric. What we won't see is InfiniBand-enabled hard disks; even though you can put up to 64,000 identification numbers on an InfiniBand subnet, port prices make this approach impractical. Expect instead to see vendors use InfiniBand in controllers for new intelligent IDE drives, with the controller fanning out to the disks.
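The virtual-lane split is easy to picture as a dispatch rule: management packets always ride the dedicated lane, and everything else shares the data lanes. The sketch below is purely illustrative (the function and its parameters are my own invention, not any real HCA API); only the lane numbering reflects the specification:

```python
# Illustrative sketch of InfiniBand virtual-lane assignment.
# VL0-VL14 carry data traffic; VL15 is reserved for subnet management.
# The function name and parameters are hypothetical, not a real API.

MGMT_VL = 15          # lane dedicated to management traffic
DATA_VLS = 15         # VL0 through VL14

def assign_vl(is_mgmt, service_level=0):
    """Map a packet to a virtual lane; management always gets VL15."""
    if is_mgmt:
        return MGMT_VL
    # Data packets map their service level onto the data lanes.
    return service_level % DATA_VLS

print(assign_vl(True))                    # management packet -> VL15
print(assign_vl(False, service_level=3))  # data packet -> VL3
```

Keeping management on its own lane is what lets consoles and subnet managers keep working even when the data lanes are saturated.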

During the conference, I spoke with Mitch Shults, vice president of business development at ExaNet Corporation. Shults was the point person for Intel's NGIO efforts and joined ExaNet when he learned about the company's new InfiniBand product. ExaNet is building a storage server that uses InfiniBand as a pipe to perform massive parallel processing, using a DPS and distributed caches to create a very high-performance file and block storage server. Shults said that although ExaNet's system can't compete on a single-processor basis with offerings from BlueArc and Cereva, taken as a whole, ExaNet's InfiniBand product will offer breakthrough performance for both files and blocks. ExaNet's system scales from 1TB to 40PB in capacity and from 150MBps to 600GBps in bandwidth from system to client. Shults also told me to watch for ExaNet's Standard Performance Evaluation Corporation (SPEC) testing results to come out later this year; ExaNet expects that it can blow the doors off current SPEC numbers. ExaNet is a perfect example of how InfiniBand can make a difference in system architecture.

The InfiniBand standard has benefited from previous standards. Its "verb" implementation for commands and bus management will let vendors implement proprietary APIs, a compromise that probably will bring Microsoft to the table when the bus starts to achieve market acceptance.

InfiniBand lets you separate server and storage assets, which in turn lets you size servers, storage, and bus to your needs. InfiniBand's ability to increase density, provide a fat server I/O pipe, and remotely locate storage is an asset; with fibre-optic links, you can support metropolitan area network (MAN) installations a kilometer away. Additionally, InfiniBand's ability to service chassis containing multiple I/O boards might lead to some interesting new server types, such as print servers or high-speed scanners. Over the InfiniBand bus, you can control the amount of pipe that applications have and, as a result, provide outstanding Service Level Agreement (SLA) control and Quality of Service (QoS).
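The SLA idea amounts to carving a link's bandwidth into per-application allocations. A minimal sketch of that bookkeeping follows; it is purely illustrative (in the real architecture, QoS is enforced through service levels and virtual-lane arbitration, not an allocation call like this, and the application names are made up):

```python
# Hedged sketch of SLA-style bandwidth carving on a shared link.
# Purely illustrative: real InfiniBand QoS works through service
# levels and VL arbitration tables, not an API like this one.

def allocate(link_gbps, shares):
    """Split a link's bandwidth proportionally among named applications."""
    total = sum(shares.values())
    return {app: link_gbps * s / total for app, s in shares.items()}

# Hypothetical example: a database gets three shares, backup gets one.
pipes = allocate(8.0, {"database": 3, "backup": 1})
print(pipes)  # database gets 6.0 Gbps, backup gets 2.0 Gbps
```

The point of the sketch is the guarantee itself: because the fabric can enforce how much pipe each application gets, an administrator can promise (and audit) a service level rather than hope for one.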

To learn more about InfiniBand, read the analyst white papers on the InfiniBand Trade Association Web site. For an introduction to the management features specifically, read Volume 1, Chapter 3, of the specification on the site.