Most business network configurations consist of a variety of components. A company's users might require thin-client applications, dial-up communications, groupware, gateways, file and print services, and fault-tolerant servers. A typical Windows NT network includes numerous PDCs and BDCs and Internet Information Server (IIS), SQL Server, and Exchange Server systems. The network administrator's challenge is to deploy multiple servers and keep space requirements and power costs to a minimum. In many cities with high office costs, housing multiple servers can cost thousands of dollars every month. Even in less-costly areas, the need to get the most out of limited space can be pressing. Cubix designed the fault-tolerant Density System 1100 to help cost-conscious companies centralize their servers and minimize the space those servers require.
Under the Hood
To begin my examination of the Density system, I opened the test system's top-mounted cover to gain access to the backplane, which contained server boards, I/O boards, and ISA and PCI slots for each of four server groups. (Cubix refers to the combination of a server board and an I/O board as a server group.) The Density system has three backplane options: It can house four dual-processor SMP server boards, eight single-processor server boards, or two dual-processor SMP server boards plus four single-processor server boards. The system I tested had the four-server-board option and included two DP6200 server boards (each of which ran dual 333MHz Pentium Pro OverDrive processors) and two SP7300 server boards (each of which ran one 300MHz Pentium II processor). Three of the server boards had 256MB of RAM each, and the remaining board had 1GB of RAM.

The Density system includes room for as many as 12 third-height vertically mounted drives that you can mount in single-, dual-, or triple-drive mounting assemblies. The Lab's test system included four removable 4GB hard disks—one for each server card—although the system had enough room for four more disks. The disks easily slid into the chassis and plugged into I/O boards on the backplane. The bottom of the chassis held three hot-swappable fault-tolerant power supplies. Cubix configured the power supplies to provide constant power to all the electronic parts in an N+1 configuration in which at least two power supplies must run to support system operations.
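The N+1 arrangement is easy to reason about: with three supplies, the chassis can lose any one supply and keep running as long as the remaining two can carry the load. The sketch below illustrates that criterion; the per-supply wattage ratings are hypothetical, because Cubix doesn't publish them.

```python
def survives_single_failure(supply_watts, load_watts):
    """Return True if the remaining supplies can carry the load after
    any one supply fails (the N+1 redundancy criterion)."""
    return all(
        sum(supply_watts) - failed >= load_watts
        for failed in supply_watts
    )

# Three identical supplies, as in the Density chassis; ratings are
# hypothetical illustration values, not Cubix specifications.
supplies = [350, 350, 350]
print(survives_single_failure(supplies, 600))  # True: any two supplies cover 600W
print(survives_single_failure(supplies, 750))  # False: two supplies top out at 700W
```

The same check generalizes to any supply count: a configuration is N+1 redundant only if the load fits within the total capacity minus the largest single supply.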
To save space in its standard rack-mountable 19" x 12.25" x 23.25" chassis, the Density system delegates mass storage to external RAID hardware via a Mylex DAC960 controller card. Every Density system chassis includes a multiplexor (MUX) card that lets you interconnect multiple chassis and control the chassis' servers with one monitor, keyboard, and mouse. You can manage multiple Density systems with Cubix's GlobalVision software—you simply connect a cable to the GlobalVision adapters in each chassis. You can manage as many as 16 chassis from one console.
The Next Group Select button on the Density system's front panel lets you control which server group the monitor displays; each time you push the button, you move to the next server group in sequence. The chassis' front contains three recessed buttons. One button lets you power each server group on and off individually after you've powered on the system. Another button lets you reset each server group, and the third button lets you select which server group the CD-ROM drive serves. You must use a pointed object (e.g., a ballpoint pen) to push the buttons, which are recessed a quarter-inch inside the chassis. The benefit of this design is that you can't power off or reset a server group or change the CD-ROM drive's group accidentally. LEDs on the system's front cover signal the working status of the Density system's power supplies and fans and the system's internal temperature.
I pressed the Density system's main power button on the rear of the chassis and waited for the four server groups to completely power up. I pushed the Next Group Select button to switch to group 1, then I pushed the recessed CD-ROM drive button to enable group 1 to use the drive. One chassis idiosyncrasy I noticed was that unless the system's LEDs are at eye level, determining which server group you've selected can be difficult. Cubix reports that this problem results from light spillover. The company says it's fixing the problem.
I installed NT Server 4.0 on group 1's 4GB hard disk. Cubix included 3.5" disks that contained all the video, Ethernet, and SCSI drivers I needed. After the installation completed, I received a prompt to restart the group 1 server, so I rebooted it. Then, I pushed the Next Group Select button to move to group 4, which contained one of the dual-processor DP6200 server boards. Group 4's I/O board included two DEC Fast Ethernet 21140 network adapters and a SCSI-2 connection for use with an external RAID cabinet. I installed NT Server 4.0 on group 4's hard disk and restarted the group 4 server when I received a prompt to do so. I pushed the Next Group Select button to return to group 1. I was amazed by how seamlessly the Density system let me move between the servers.
Show Me the Performance
Saving space is important but shouldn't hamper server performance. I tested the Density system's servers on the Lab's benchmarking network, which consists of client machines on a 100Mbps Ethernet network. (For details about the Lab's test setup, go to the Windows NT Magazine Web site at http://www.winntmag.com. Click WNT Magazine Labs under the Hardware and Software heading on the site's navigation bar, then click Lab Testing Environment.) I installed Adaptec ANA-6944A/TX four-port adapters in the free PCI slot of each server group I tested. I used Bluecurve's Dynameasure 1.5 as the workload engine. (To read a Lab review of this product, see Carlos Bernal, "Dynameasure Enterprise 1.5," September 1997.) Running Dynameasure in the Lab's test environment simulates typical user workloads and provides quantitative benchmarks by which to compare hardware and software performance.
I used Dynameasure's Copy All Bi-directional tests because these tests randomly order 16 different transactions that copy compressed and uncompressed data, binary files, text files, and image files between the server and clients. I tested a range of 5 to 80 users. The Density SP7300 server board measured a peak throughput of 1.28MBps at 66 users and an average response time of 1.8 seconds. I ran the same test against the Density DP6200 server board, which measured a peak throughput of 2.18MBps at 66 users and an average response time of 0.94 seconds.
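A quick back-of-the-envelope comparison puts these numbers in perspective. The throughput and response figures come from the tests above; the per-user calculation is my own illustration, not a Dynameasure metric.

```python
# Peak Dynameasure results at 66 users, as reported in the review.
sp7300 = {"throughput_mbps": 1.28, "response_s": 1.8}
dp6200 = {"throughput_mbps": 2.18, "response_s": 0.94}

# Relative advantage of the dual-processor DP6200 board.
speedup = dp6200["throughput_mbps"] / sp7300["throughput_mbps"]
print(f"DP6200 throughput advantage: {speedup:.2f}x")  # prints 1.70x

# Rough per-user share of peak throughput (illustrative only).
per_user_kbps = dp6200["throughput_mbps"] * 1024 / 66
print(f"DP6200 per-user share at peak: {per_user_kbps:.1f}KBps")
```

The dual-processor board thus delivered roughly 70 percent more throughput while nearly halving response time, a useful data point when deciding which backplane option to order.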
Making the Grade
The Density system makes you reevaluate why you need a separate CD-ROM drive, 3.5" disk drive, and power supply for each of your servers. You can interconnect as many as 16 Density chassis and manage them from one console using GlobalVision, and each chassis' servers can connect to external RAID devices for additional storage. Cubix manufactures the system's chassis and backplanes, so the company can custom-build a Density system to fit almost any need.
At first glance, $24,400 seems like a lot to spend on a small chassis, until you remember that you're really talking about four servers with SCSI and network support built in. If you consider that the Density system centralizes four servers to effect immediate reductions in space and power costs, you realize that your potential long-term savings can easily offset the product's price.
|Density System 1100|
|Contact: Cubix * 702-888-1000|
|System Configuration: Two DP6200 server boards (dual 333MHz Pentium Pro OverDrive processors); two SP7300 server boards (one 300MHz Pentium II processor each); 256MB to 1GB of RAM per server; four 4GB hard disks; two DEC dual-Fast Ethernet 21140 adapters; two DEC single-Fast Ethernet 21140 adapters; Mylex DAC960 disk array controller card; GlobalVision management processor adapter|