Keep your Web site's service consistent and uninterrupted

All load balancing distributes a common workload among multiple machines, but the term is difficult to define more specifically in a Web context. The definition of load varies among load-balancing products for Web servers: One product defines load as the retrieval of static HTML pages, and another defines load as the back-end processing that dynamic Web content requires. Vendors don't use a common definition of balance, either. Web-based load-balancing products differ significantly in their methods for distributing servers' loads.

Load-Balancing Techniques
You can achieve load balancing on a Windows NT Web server using the Domain Name System. DNS's round-robin function spreads incoming TCP/IP requests evenly among several servers. This load-balancing technique is easy to implement and doesn't require software other than NT. But DNS doesn't keep track of which servers are active and which are down, so it sends TCP/IP requests to servers whether they are online or offline. To distribute your Web site's load without interrupting service, you need a load-balancing solution that works seamlessly when a server fails. (For more detailed information about using round-robin DNS, see Douglas Toombs, "Load Sharing for Your NT Web Server," April 1998.)
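
A rough way to picture round-robin DNS's blind spot is the rotation below, a minimal Python sketch rather than anything in NT's DNS service; the server addresses and the offline flag are hypothetical.

```python
from itertools import cycle

# Hypothetical A records that round-robin DNS hands out in rotation.
servers = ["222.222.231.1", "222.222.231.2", "222.222.231.3"]
offline = {"222.222.231.2"}  # DNS has no way of knowing this server is down

rotation = cycle(servers)
for request in range(6):
    target = next(rotation)
    status = "request lost" if target in offline else "request served"
    print(f"Request {request + 1} -> {target}: {status}")
```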

To get a handle on what load balancing can do for a Web server running NT, I tested four products: Resonate's Central Dispatch 2.1B3, Bright Tiger Technologies' ClusterCATS 1.1, Valence Research's Convoy Cluster 2.03, and Tandem's NonStop WebCharger 1.0A+. Each product uses one of three techniques to achieve load balancing: virtual IP addressing, HTTP redirecting, or application load balancing.

Virtual IP addressing.
Virtual IP addressing lets multiple servers respond to requests for one IP address. For example, suppose you have three NT servers with the IP addresses 222.222.231.1, 222.222.231.2, and 222.222.231.3. You can use virtual IP addressing software to configure all three servers to use IP address 222.222.231.10. You designate one server as the routing, or scheduling, server. The scheduling server receives all inbound traffic and routes requests for Web content to other servers based on load-balancing parameters you set. If the scheduling server fails, virtual IP addressing software assigns routing responsibilities to another server.
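
The sketch below is a simplified Python illustration of that division of labor, not any vendor's actual code: every node answers to the shared address, and the scheduling node forwards each request to the live server reporting the lightest load. The node table and load figures are hypothetical.

```python
# Illustrative only: three nodes share a virtual IP; the scheduler forwards each
# request to the node currently reporting the fewest open connections.
VIRTUAL_IP = "222.222.231.10"

nodes = {
    "222.222.231.1": {"alive": True, "connections": 12},
    "222.222.231.2": {"alive": True, "connections": 4},
    "222.222.231.3": {"alive": False, "connections": 0},  # failed node is skipped
}

def schedule(nodes):
    """Return the least-loaded node that is still answering."""
    live = {ip: n for ip, n in nodes.items() if n["alive"]}
    if not live:
        raise RuntimeError("all Web servers have failed")
    return min(live, key=lambda ip: live[ip]["connections"])

print(f"Traffic for {VIRTUAL_IP} goes to {schedule(nodes)}")
```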

Because the Web interface uses one IP address, virtual IP addressing achieves uninterrupted service unless all your Web servers fail. The disadvantage of virtual IP addressing is that you can't easily watch the flow of Web traffic from an external network monitor because all the servers use one IP address.

HTTP redirecting.
HTTP redirecting distributes a Web site's load among multiple servers by connecting users' browsers directly to the servers. When you select a Web site's URL, you usually connect directly to the computer servicing that URL. For example, type http://www.winntmag.com, and the server designated to respond to requests for that HTTP address will provide the Windows NT Magazine Web site. However, if the Windows NT Magazine site has a replica on a server with the URL http://www.winntmag2.com, an HTTP-redirecting program can redirect users' browsers to http://www.winntmag2.com to balance the Web site's load. HTTP-redirecting software automatically directs browsers to a Web site replica if the primary URL's server fails. HTTP redirecting's main disadvantage is that it doesn't work with all Web browsers.
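
Mechanically, this kind of redirection is an ordinary HTTP 302 response. The Python sketch below is a minimal stand-in for an HTTP-redirecting product, not any vendor's implementation; the replica URL comes from the example above, and the health check is a placeholder.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

REPLICA = "http://www.winntmag2.com"  # replica of the primary site (example)

def primary_is_overloaded_or_down():
    # Placeholder for whatever health or load check a real product performs.
    return True

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        if primary_is_overloaded_or_down():
            # A 302 response tells the browser to retry the request at the replica.
            self.send_response(302)
            self.send_header("Location", REPLICA + self.path)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>Served by the primary site</body></html>")

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()
```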

Application load balancing.
Application load-balancing software distributes a Web site's workload among servers according to the content browsers request. A primary Web server accepts all incoming Web traffic and performs tasks such as static HTML file transmissions. The primary server redirects back-end applications, such as Active Server Pages (ASP) and Common Gateway Interface (CGI) programs, to other computers. Servers process back-end applications more efficiently when those applications aren't intermingled with HTML file processing, so application load balancing reduces response time for back-end applications.
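
In essence, the primary server makes a routing decision based on the content each request asks for, along these lines (an illustrative Python sketch; the back-end server names are hypothetical):

```python
# Illustrative sketch: the primary server keeps static HTML for itself and hands
# ASP and CGI requests to dedicated back-end processing servers.
BACKENDS = {".asp": "asp-server.example.com", ".cgi": "cgi-server.example.com"}

def route(path):
    for extension, backend in BACKENDS.items():
        if path.lower().endswith(extension):
            return f"forward {path} to {backend}"
    return f"serve {path} locally as static content"

for request in ("/index.html", "/order.asp", "/cgi-bin/search.cgi"):
    print(route(request))
```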

The Tests
To evaluate the four load-balancing products I selected, I compared their load-balancing techniques and examined their features. I determined whether the products include built-in alert options such as email, Simple Network Management Protocol (SNMP) messaging, or paging. I tested whether the products let a node re-enter a load-balancing group when the node reboots. I looked at whether each product supports load balancing of other TCP/IP services, including FTP and Telnet. I checked whether each product works on multiple Web server products, and whether each product includes a built-in monitoring tool or replication capability. I considered the hardware the products require (their overhead) and each product's price. Finally, I examined each product's installation process, documentation, online Help, and technical support. Table 1, page 69, shows my analysis of the products' features.

After I evaluated the products' features, I used two tools to test their load-balancing capabilities. First, I used the Web Capacity Analysis Tool (WCAT), which comes in Microsoft Windows NT Server 4.0 Resource Kit, Supplement One, to test the products that use virtual IP addressing and application load balancing. WCAT ran scripts that generated a variety of workloads on two test servers and measured the products' ability to balance each workload. (For more information about WCAT, see T.J. Harty, "Testing Your Web Environment," December 1997.) Second, because WCAT can't test HTTP redirecting, I used Technovations' Websizr to generate HTTP-redirect scripts and test the load-balancing capability of ClusterCATS. (For more information about Websizr, go to http://www.technovations.com.)
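
If you want a rough, informal check of how evenly requests land, you can generate concurrent requests yourself. The Python sketch below is a crude stand-in for tools such as WCAT or Websizr, not a replacement; the target URL is hypothetical, and the sketch assumes each server identifies itself in a response header.

```python
# Crude load generator: send a batch of concurrent GET requests and count how
# many each back-end server answered (assuming the servers identify themselves,
# for example via the Server header or a custom response header).
import collections
import concurrent.futures
import urllib.request

URL = "http://222.222.231.10/"  # hypothetical virtual IP of the test group
REQUESTS = 200

def fetch(_):
    with urllib.request.urlopen(URL, timeout=10) as response:
        return response.headers.get("Server", "unknown")

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        results = collections.Counter(pool.map(fetch, range(REQUESTS)))
    for server, hits in results.items():
        print(f"{server}: {hits} responses")
```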

Central Dispatch 2.1B3
Resonate's Central Dispatch uses virtual IP addressing. The product uses intelligent scheduling to direct browser requests to the server it deems most available according to preset rules. Central Dispatch 2.1 was still in beta when I conducted my reviews; I tested version 2.1B3.

Central Dispatch comes on a CD-ROM. The package includes an installation manual and user's guide. Central Dispatch also has an online Help file. Before installing Central Dispatch, I had to manually create manager and monitor user accounts on every computer I installed the software on. The accounts' logon names and passwords had to be identical on each computer. Having to create these accounts manually was frustrating; Resonate could automate the account creation in the installation wizard, which would make the process much faster and easier. The software also required me to install the MS Loopback Adapter (from the Adapters list in the Network applet in Control Panel) and bind a virtual IP address to the Loopback Adapter. The installation guide includes step-by-step instructions for all these preinstallation tasks.

To install Central Dispatch, I accessed the setup file in the CD-ROM's WINNT subdirectory. The installation wizard offers three installation options: Central Dispatch Node Installation, Dispatch Manager/Java Development Kit (JDK), and Full Central Dispatch Installation. Central Dispatch Node Installation installs the node components that Central Dispatch requires to run on a server. Dispatch Manager/JDK runs Central Dispatch's monitor tools, which track each node's CPU load, open connections, number of hits per second, and network latency statistics. Full Central Dispatch Installation installs the server node components and manager components, including the JDK. When I completed the Central Dispatch installation, the setup wizard created a virtual adapter. The system then prompted me to reboot the computer. The installation went seamlessly, and the GUI made the software easy to configure.

The first time you access Central Dispatch, the program opens to the System Nodes tab in the Site Setup dialog box. On this tab, you add nodes to the load-balancing group and assign weights to nodes for load balancing. By assigning load-balancing weights to individual nodes, you can configure the load-balancing software to take into account each node's processor speed and the other applications each node services. Also on the System Nodes tab, you must select the Enable nodes as servers check box for every server in your load-balancing group. And, you'll want to enable nodes to become servers when a system reboots after a failure so that you don't have to reselect the Enable nodes as servers check box every time a server comes back online.
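
The effect of those weights is easy to picture: a node with three times the weight should receive roughly three times the requests. The Python sketch below illustrates the idea in the abstract; it is not Central Dispatch's algorithm, and the node names and weights are made up.

```python
import random

# Hypothetical weights reflecting each node's processor speed and other duties.
weights = {"NODE1": 3, "NODE2": 1}

counts = {node: 0 for node in weights}
for _ in range(10_000):
    node = random.choices(list(weights), weights=list(weights.values()))[0]
    counts[node] += 1

print(counts)  # NODE1 should land near 7,500 requests, NODE2 near 2,500
```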

I like Central Dispatch's resource-based scheduling option, which you can select on the Site Setup dialog box's Schedulers tab, as Screen 1, page 70, shows. The resource-based scheduling option instructs the scheduler to send browser requests to eligible servers based on each server's processor load and the scheduling rules you establish for particular content. For example, if I have two nodes named CGI1 and CGI2, I can establish a port rule that requires the scheduler to send all requests for CGI programs to one of these two nodes. I can even specify that the scheduler send all requests that include the program name neatstuff.cgi to the CGI2 node.
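
Conceptually, each scheduling rule maps a content pattern to the nodes eligible to serve it. The Python sketch below mirrors the neatstuff.cgi example; it is illustrative only and does not reflect Resonate's rule syntax.

```python
import fnmatch

# Illustrative rules: most CGI requests may go to either CGI node, but
# neatstuff.cgi is pinned to CGI2. The first matching rule wins.
RULES = [
    ("*/neatstuff.cgi", ["CGI2"]),
    ("*.cgi", ["CGI1", "CGI2"]),
    ("*", ["WEB1"]),  # everything else stays on the general Web node
]

def eligible_nodes(path):
    for pattern, nodes in RULES:
        if fnmatch.fnmatch(path, pattern):
            return nodes
    return []

print(eligible_nodes("/cgi-bin/neatstuff.cgi"))  # ['CGI2']
print(eligible_nodes("/cgi-bin/other.cgi"))      # ['CGI1', 'CGI2']
print(eligible_nodes("/index.html"))             # ['WEB1']
```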

To test Central Dispatch, I used the software to create a virtual IP address for my two test servers. Next, I used WCAT to send scripts and transactions to the IP address. Central Dispatch did an excellent job of balancing the load between my test servers according to the servers' workloads. Finally, I shut off power to the primary server to simulate server failure. The remaining server continued to process requests without incident.

I was pleased with Central Dispatch's performance, but the software's protection against failures is limited. A primary and secondary scheduler handle all scheduling. If both fail, you lose all load-balancing capability until at least one of the two machines restarts or you manually configure another node as a scheduler. In addition, I was disappointed to find that when a node has a power failure, Central Dispatch does not restart automatically. Central Dispatch doesn't currently provide a comprehensive solution to system failure.

Central Dispatch 2.1B3
Contact: Resonate * 650-967-6500, Web: http://www.resonate.com
Price: Starts at $10,000 (2 hosts)
System Requirements: Pentium processor or better, Windows NT Server 4.0 or NT Workstation 4.0, Service Pack 2 or later, 20MB of hard disk space, 48MB of RAM

ClusterCATS 1.1
Bright Tiger Technologies' ClusterCATS uses HTTP redirecting for load balancing. ClusterCATS is an Internet Information Server (IIS) filter and runs as a service on NT.

ClusterCATS comes on one CD-ROM. The package includes printed and online QuickStart and Administrator's Guide manuals. The software has three components: Server, Explorer, and Observer. The Server component includes the files that every Web server needs to run ClusterCATS software. You must install the ClusterCATS Explorer component, which creates and configures load-balancing groups, on one computer in your group's domain. The ClusterCATS Observer component supports a standalone Web server by letting a group of ClusterCATS servers redirect and answer user requests if that server fails. I didn't install this component for my tests.

I installed the Server and Explorer components on one test computer and the Server component on my other test computer. The product's installation wizard helps you install the components. The wizard worked in my tests, but its explanations are meager and it doesn't clearly identify where you need to install each component. ClusterCATS' installation would be better if it were more user friendly.

Licensing is a factor during installation. The license for individual load-balancing groups limits groups to two servers and requires you to provide Bright Tiger with the group name before the company generates your license key. The enterprise-wide license lets you create larger groups and doesn't require predetermined group names. It automatically reports your load-balancing groups to Bright Tiger via email. I used the enterprise-wide license. To create a load-balancing group, I clicked Cluster Manager on the ClusterCATS Explorer menu bar and selected New Cluster. I followed the prompts, and soon my new load-balancing group (called Winnlab) appeared, as Screen 2 shows. I added a second computer to the group through a similar process. ClusterCATS includes a replication tool and automatic content discovery, so it adds a server's Web site hierarchy to the group's directory structure when you add the server to the group.

ClusterCATS performed adequately in my Websizr tests, and NT's Performance Monitor showed how the software redirected requests to the second computer in the group. ClusterCATS uses IP aliasing to provide continued service when one server fails. For this functionality, you need at least two servers, each with two NICs, that are running ClusterCATS on the same subnet. Each server sends out heartbeats that other servers in its group detect. If a server fails to produce a heartbeat on schedule, ClusterCATS assigns the failed server's IP address to another server, called an alias server. When the failed server comes back online, the alias server returns the IP address to the first server. In my tests, I shut down my primary computer and directed requests to that machine. ClusterCATS redirected my requests to my other server, which recognized and processed the requests. I had to reboot the system twice when the failed server was reinitializing. The ClusterCATS documentation says that groups are supposed to automatically reboot twice.
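
The heartbeat and IP-aliasing mechanism boils down to a simple rule: if a server misses its expected heartbeats, a peer claims its address until it returns. The Python sketch below only illustrates that logic with made-up timing values; it does not reflect ClusterCATS' internals.

```python
import time

HEARTBEAT_INTERVAL = 2.0   # seconds between expected heartbeats (hypothetical)
FAILURE_THRESHOLD = 3      # missed intervals before a peer takes over

# last_heartbeat maps each server to the time its last heartbeat was seen.
last_heartbeat = {"WEB1": time.time(), "WEB2": time.time()}
alias_for = {}  # failed server -> peer currently answering for its IP address

def check_heartbeats(now):
    for server, seen in last_heartbeat.items():
        missed = (now - seen) / HEARTBEAT_INTERVAL
        if missed >= FAILURE_THRESHOLD and server not in alias_for:
            peer = next(s for s in last_heartbeat if s != server)
            alias_for[server] = peer
            print(f"{server} missed {int(missed)} heartbeats; {peer} aliases its IP")
        elif missed < FAILURE_THRESHOLD and server in alias_for:
            print(f"{server} is back; {alias_for.pop(server)} releases its IP")

# Simulate WEB1 going silent for a while, then returning.
last_heartbeat["WEB1"] -= 10
check_heartbeats(time.time())
last_heartbeat["WEB1"] = time.time()
check_heartbeats(time.time())
```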

The ClusterCATS replication tool is useful, but the installation and configuration processes are confusing. The requirement that users on your Web site have browsers capable of HTTP redirecting potentially limits your Web traffic. Finally, the product's requirements that every server in a group have a 133MHz Pentium processor, 64MB of RAM, and 35MB of hard disk space and that every alias server have two NICs means that a large load-balancing group requires a substantial investment.

ClusterCATS 1.1
Contact: Bright Tiger Technologies * 978-263-5455, Web: http://www.brighttiger.com
Price: Starts at $10,000 (2 hosts using 256MB of RAM or less)
System Requirements: 133MHz Pentium processor or better, Windows NT Server 4.0, Service Pack 3 or later, 35MB of hard disk space, 64MB of RAM

Convoy Cluster 2.03
Valence Research's Convoy Cluster uses virtual IP addressing to achieve load balancing. I downloaded the 250KB zipped Convoy Cluster files from Valence Research's Web site. I unzipped the files and immediately accessed the product's Help file, which contains two sets of installation instructions: QuickStart instructions for experienced NT Server users and more detailed instructions for less-experienced users. The Convoy Cluster setup files create a virtual network adapter. Installing the virtual adapter is similar to installing a NIC, except that you don't have a physical network card. The installation process is fairly easy.

After you install the adapter, the Setup dialog box (which Screen 3 shows) appears. (After your initial installation, you can access the Setup dialog box via the network adapter's Properties tab.) In the Setup dialog box, you establish the load-balancing group's shared IP number and shared media access control (MAC) address (in the Network address box), the computer's priority in relation to other servers in the group, the computer's dedicated IP address, and whether you want multicast support. If you use the multicast support option, you need only one network card for each computer. Multicast support lets a server's network adapter use the group's virtual IP address to communicate with the group and use the server's dedicated IP address to communicate with individual computers.
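
It may help to see the per-node settings side by side. The Python sketch below simply groups the values the Setup dialog box asks for; the values shown are hypothetical, and this is not Convoy Cluster's configuration format.

```python
from dataclasses import dataclass

@dataclass
class ConvoyNodeSettings:
    """Per-node values entered in the Setup dialog box (hypothetical example)."""
    shared_ip: str      # virtual IP address the whole group answers to
    shared_mac: str     # shared media access control (MAC) address
    priority: int       # this computer's priority within the group
    dedicated_ip: str   # address for reaching this computer individually
    multicast: bool     # True lets one network card carry both addresses

node1 = ConvoyNodeSettings(
    shared_ip="222.222.231.10",
    shared_mac="02-BF-DE-AD-BE-EF",
    priority=1,
    dedicated_ip="222.222.231.1",
    multicast=True,
)
print(node1)
```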

Convoy Cluster's final configuration step requires you to bind the Convoy Driver to the virtual adapter and disable the TCP/IP protocol and Windows Internet Naming Service (WINS) client bindings to the physical network adapter. The installation and configuration procedures are identical for every computer in a load-balancing group. The installation on my first server took about 30 minutes. Subsequent installations took me only 15 minutes each.

I used Performance Monitor to watch Convoy Cluster's load balancing. I added my two computers to Performance Monitor and applied several WCAT files to the load-balancing group. Convoy Cluster demonstrated a steady pattern of load balancing between the computers, regardless of the type of load WCAT sent them. Convoy Cluster continued to work perfectly when I unplugged one computer. When my failed machine rebooted, it joined the load-balancing group automatically and immediately participated in load balancing.

Convoy Cluster's system requirements are minimal. I found that the product requires less than 2MB of hard disk space and less than 1MB of RAM. It works with IIS and other Web server software, and it costs less than the other products I reviewed.

Convoy Cluster is small and relatively cheap, but its performance was impressive. A few changes could make it even better. First, it could include an installation wizard to save users the manual configuration it now requires. Second, it could include a performance monitor tool to aid in tracking load-balancing activities. Finally, it could include a replication tool to help users keep content up-to-date across the load-balancing group.

Convoy Cluster 2.03
Contact: Valence Research * 503-531-8718, Web: http://www.valence.com
Price: $2000 (2 hosts)
System Requirements: 386 processor or better, Windows NT Server 4.0 or NT Workstation 4.0, 1.2MB of hard disk space, 4MB of RAM

NonStop WebCharger 1.0A+
Resource-intensive applications can quickly tie up your system resources if you don't use load-balancing software. Tandem's NonStop WebCharger, which is an add-on to IIS, provides load balancing for resource-intensive ASP and CGI programs. On a system that uses WebCharger, IIS handles browser requests for static Web pages, but WebCharger intercepts requests for ASP and CGI programs and routes them to other servers for processing.

WebCharger comes on one CD-ROM with an online Help file that offers overviews of how WebCharger components work. The package led me to believe that installing the program is easy. It's not. The installation wizard appeared after I double-clicked the setup file on the CD-ROM, but the wizard didn't provide much information. For example, the wizard didn't tell me the proper order for installing the WebCharger components, and it didn't tell me which computers to install the components on.

The wizard offers four setup choices: Administrative Tool, Configuration Repository, Distributed Processing Server, and WebCharger Server. Because I didn't know which component to install first, I began installing the Administrative Tool. Only when the software prompted me to enter the name of the computer that held the Configuration Repository did I realize I needed to install that component first. I had to restart the installation. During my second installation attempt, I continued to struggle with the software's inadequate documentation, so I contacted Tandem for assistance. I found out that you need to install the Configuration Repository and WebCharger Server on one computer, preferably the Primary Domain Controller (PDC) or Backup Domain Controller (BDC). You need to install the Distributed Processing Server component on the computers that you want to share the back-end processing. My installation would have been much smoother if the wizard had included this information.

After installing the WebCharger components, I opened WebCharger, and the Administrator tool appeared. You assign ASP and CGI programs to distributed processing servers through the Administrator tool. I opened Network Neighborhood, selected each computer I wanted to install as a distributed processing server, and dragged the computers to the Administrator tool's left pane. I then opened NT Explorer and dragged sample CGI and ASP applications onto the All Applications entry in the left pane. (You can set the number of instances a CGI program can run, but each ASP application can run only one instance.) The final step in WebCharger configuration involved dragging each ASP and CGI icon in the Administrator program's right pane onto the server in the left pane that I wanted to process the application. Screen 4 shows the results of assigning applications to servers.
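
The drag-and-drop steps amount to building a table that maps each application to a processing server, with an instance limit per application. The Python sketch below captures that mapping in the abstract; it is not WebCharger's data model, and the application and server names are made up.

```python
# Illustrative assignment table: application -> (processing server, max instances).
# CGI programs can run several instances; each ASP application runs only one.
assignments = {
    "search.cgi":  {"server": "DPS1", "max_instances": 4},
    "orders.asp":  {"server": "DPS2", "max_instances": 1},
    "reports.asp": {"server": "DPS1", "max_instances": 1},
}

def dispatch(application, running):
    """Return the server to run an application on, honoring its instance limit."""
    entry = assignments[application]
    if running.get(application, 0) >= entry["max_instances"]:
        return None  # the request must wait; the instance limit is reached
    return entry["server"]

print(dispatch("search.cgi", running={"search.cgi": 2}))  # DPS1
print(dispatch("orders.asp", running={"orders.asp": 1}))  # None: ASP limit of 1
```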

After completing the WebCharger configuration, I ran simulated tests that accessed static Web pages and ASP and CGI programs. IIS handled the requests for static Web pages and redirected the ASP and CGI requests to the distributed processing servers, as it was supposed to.

The product worked well after I installed and configured it, but the poor installation and configuration documentation was a problem. In addition, WebCharger's inability to provide load balancing for any programs other than ASP and CGI limits the product's scope. Finally, WebCharger's future is uncertain due to recent corporate acquisitions. These three problems make WebCharger an unattractive load-balancing solution.

The Final Verdict
When I tabulated the results of my tests, Convoy Cluster stood apart as the clear winner of the Editor's Choice award. Convoy Cluster balanced the network load well, and when one server failed, the second server answered browser requests flawlessly. Convoy Cluster was easier and faster to install and configure than the other products I tested, and it requires less hardware than the other products. Who can complain about a product that requires about 2MB of hard disk space and 1MB of RAM per machine?

In addition, you can reintroduce a failed server to a Convoy Cluster load-balancing group without rebooting the system. And if an entire Convoy Cluster load-balancing group goes down because of a premature reboot or short-term power failure, the machines will begin load balancing again as soon as they come back online. Convoy Cluster is affordable for businesses of almost any size, and my tests show it's well worth the low price.

NonStop WebCharger 1.0A+
Contact: Tandem * 408-285-6000, Web address not available at press time.
Price: Starts at $1000 per distributed processing server
System Requirements: 486 processor or better, Service Pack 3, Internet Information Server 3.0 or later, Active Server Pages installed, 10MB of hard disk space, 32MB of RAM