Your network cabling is one of the first things you should scrutinize when considering an upgrade to Gigabit Ethernet over copper. Although the Gigabit Ethernet standard specifies that Category 5 cable is adequate, don’t assume your current cable is up to the task. The Lab has a variety of Cat 5 and Cat 5 Enhanced (Cat 5e) cables. Even though we regularly inspect and test the cables that we use in the Lab, I found several that couldn’t pass muster for Gigabit Ethernet. Because Gigabit Ethernet uses all four pairs of conductors in a standard Cat 5 cable, any cables with fewer than eight conductors won’t work. A couple of Lab cables with only two pairs of conductors had been working well for Fast Ethernet. A couple of cables had one bad conductor that went unnoticed because it wasn’t part of the transmit/receive pairs that Fast Ethernet uses. Patch panels and wall jacks are frequently at fault when a cable doesn’t pass Cat 5 testing. You can often solve problems with near- and far-end crosstalk by repairing the connections at these locations. If you do have to pull new cables, use nothing less than Cat 5e.
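The point about pairs is easy to see from the standard TIA-568 pin assignments: Fast Ethernet (100BASE-TX) needs only pins 1-2 and 3-6, whereas Gigabit Ethernet (1000BASE-T) needs all eight pins wired end to end. Here's a minimal sketch of that check; the function and variable names are illustrative, not part of any real tool.

```python
# Pin usage per the TIA-568 wiring standards: 100BASE-TX uses two
# pairs (pins 1-2 transmit, pins 3-6 receive); 1000BASE-T signals
# on all four pairs, so all eight pins must be connected.
FAST_ETHERNET_PINS = {1, 2, 3, 6}
GIGABIT_PINS = {1, 2, 3, 4, 5, 6, 7, 8}

def cable_supports(wired_pins, required_pins):
    """A cable can carry a standard only if every pin that
    standard uses is wired straight through."""
    return required_pins <= set(wired_pins)

# A two-pair cable: fine for Fast Ethernet, useless for Gigabit.
two_pair = [1, 2, 3, 6]
print(cable_supports(two_pair, FAST_ETHERNET_PINS))  # True
print(cable_supports(two_pair, GIGABIT_PINS))        # False
```

The same check explains the cables with one bad conductor: a break on pin 4, 5, 7, or 8 is invisible to Fast Ethernet but fatal to Gigabit Ethernet.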

Another thing to consider before upgrading your host segments to Gigabit Ethernet is whether your hosts can use the bandwidth. Many good books are devoted to performance tuning and the elimination of bottlenecks, but you can look at a server's PCI bus width and speed to get a general idea of its throughput capabilities. Gigabit throughput is beyond what you can reasonably expect from a standard 32-bit, 33MHz PCI bus, whose theoretical peak of roughly 1Gbps is shared among every device on the bus. Gigabit Ethernet NICs are designed for a 64-bit bus, and although the NICs I tested can work in a 32-bit bus, you're wasting a good deal of the NICs' potential as well as the money you spent on them. A host that doesn't have a 64-bit bus probably isn't worth the expense of upgrading beyond Fast Ethernet. In the Lab, I was able to push a high-end workstation with a 32-bit, 66MHz bus to a throughput of 200Mbps over Gigabit Ethernet. That speed was well beyond what I could achieve on a Fast Ethernet link, but you'll need to decide whether that limited boost in performance justifies the cost.
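The bus arithmetic behind this advice is simple: theoretical peak PCI bandwidth is bus width times clock rate. A quick sketch (the helper function is mine, purely for illustration):

```python
def pci_peak_mbps(width_bits, clock_mhz):
    """Theoretical peak PCI transfer rate in megabits per second.
    Real-world throughput runs far lower: the bus is shared among
    all installed devices, and arbitration and protocol overhead
    cut into the burst rate."""
    return width_bits * clock_mhz

print(pci_peak_mbps(32, 33))  # 1056 -- barely above gigabit line rate
print(pci_peak_mbps(64, 66))  # 4224 -- comfortable headroom
```

With a 32-bit, 33MHz bus topping out around 1056Mbps in theory, a Gigabit Ethernet NIC alone could consume the entire bus even before disk controllers and other devices get their share.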

If you’re looking into bringing workstations online with Gigabit Ethernet, be aware that certain Intel motherboards have a RAIDPort socket that looks identical to a 64-bit PCI socket. Don’t rely on a visual inspection of the sockets; check the motherboard specifications. Fortunately, the RAIDPort socket is keyed differently from a standard 64-bit socket, so you can’t mistakenly install a 64-bit NIC in it.

A PCI bus runs at the speed of its slowest installed device. Thus, a server with a 64-bit, 66MHz PCI bus will run at only half speed if one of its sockets is populated with a 33MHz PCI card. Scrutinize your hardware to be sure you can use a worthwhile portion of the bandwidth that Gigabit Ethernet provides.
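The slowest-device rule can be folded into the same back-of-envelope math: the bus clocks down to the minimum of its own rating and every installed card's rating. A hypothetical sketch:

```python
def effective_bus_mbps(width_bits, bus_clock_mhz, card_clocks_mhz):
    """Theoretical peak bandwidth once the PCI bus clocks down to
    the slowest device on the segment."""
    clock = min([bus_clock_mhz] + list(card_clocks_mhz))
    return width_bits * clock

# A 64-bit, 66MHz bus with one 33MHz card drops to half its peak.
print(effective_bus_mbps(64, 66, [66, 33]))  # 2112
print(effective_bus_mbps(64, 66, [66]))      # 4224
```

The practical upshot: keep slower cards off the bus segment that carries your Gigabit Ethernet NIC, or the NIC pays the penalty too.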

The disk subsystem is another area you’ll want to investigate for a potential performance bottleneck. The raw output of the fastest SCSI drives currently available is about 28MBps (approximately .22Gbps). Host adapters, RAID configurations, communications overhead, the number of drives, and PCI bus width and speed all come together to influence how much data you can realistically push through a Gigabit Ethernet interface.
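A rough sense of scale helps here: at 28MBps raw (224Mbps), a single drive can't come close to filling a gigabit link, so several spindles have to gang up before the network becomes the bottleneck. A minimal sketch of that arithmetic, ignoring all the RAID, adapter, and protocol overhead the text mentions (the helper is hypothetical):

```python
import math

DRIVE_MBPS = 28 * 8  # 28MBps per drive, about 224Mbps raw

def drives_to_fill(link_mbps, per_drive_mbps=DRIVE_MBPS):
    """Minimum number of drives whose combined raw output matches
    the link rate, with no allowance for RAID or protocol overhead."""
    return math.ceil(link_mbps / per_drive_mbps)

print(drives_to_fill(1000))  # 5 drives just to match gigabit line rate
print(drives_to_fill(100))   # 1 -- one drive outruns Fast Ethernet
```

In practice, overhead pushes the real number higher, which is exactly why the drive count belongs on the checklist alongside bus width and speed.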