The best way to analyze any performance data is to establish a baseline, or reference point, against which you can compare the results of other tests. The baseline test needs to take the system beyond the stress point, where an increase in system load causes a decrease in system performance. The stress point identifies the maximum throughput for a particular system under the given conditions. A typical throughput graph for a Dynameasure for File Services test has the shape shown in Figure 1; in this example, the stress point occurs in Step 4.

Performance Test Measures
The File Services test measures for performance are Bytes Per Second (BPS), Average Response Time (ART), and Motors Per Step (MPS), which you can view with Dynameasure's Analyzer. BPS reports the total number of bytes all the motors copied during the measure phase of a step, divided by the duration of the measure phase. BPS measures system capacity. The type of transaction, the number of motors, and the system's hardware capacity influence BPS. ART is the average time in seconds to complete a transaction during the measure phase of each step. ART measures system speed. The same three factors that influence BPS also influence ART. MPS is the number of motors that reported results for each step of a test; it measures how many of the motors assigned to a step completed their transactions. MPS is a direct measure of load on the system.
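To make the three measures concrete, here is a minimal sketch of how you might compute them from per-motor results. The data layout and field names are purely illustrative, not Dynameasure's actual output format.

```python
# Hypothetical per-motor results for one step of a test:
# (bytes_copied, transactions_completed). Numbers are made up.
results = [
    (2_000_000, 40),
    (1_800_000, 36),
    (2_200_000, 44),
]
measure_seconds = 60.0   # length of the step's measure phase
assigned_motors = 4      # motors assigned to this step

# BPS: total bytes copied by all motors, divided by the measure time.
bps = sum(b for b, _ in results) / measure_seconds

# ART: average seconds per completed transaction. Each reporting motor
# ran for the full measure phase, so total motor-time is
# measure_seconds * number of reporting motors.
total_transactions = sum(t for _, t in results)
art = (measure_seconds * len(results)) / total_transactions

# MPS: number of motors that actually reported results for the step.
mps = len(results)

print(f"BPS={bps:.0f} bytes/sec, ART={art:.2f} sec, MPS={mps} of {assigned_motors}")
```

Note that MPS falling below the assigned-motor count (here, 3 of 4) is exactly the dropout symptom the article goes on to diagnose.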

Figures 2, 3, and 4 represent the BPS, ART, and MPS results from running the Copy All Bi-directional File Services test on the Windows NT Magazine Lab's network (for details about the Lab's test environment, see the sidebar, "The Lab's Testbed"). The test consists of 16 different transactions in which Dynameasure copies compressed and uncompressed data, text, and image files between the server and the clients. I configured the Lab's test for a 5.6MB (scale 0.01) data set, 5-second Think Time, and six steps (with 10, 20, 40, 60, 80, and 100 motors).

From these figures, you can analyze how the network performed during the test. The stress point occurred during the initial steps of the test. Figure 2 shows that throughput peaked at roughly 4100KBps in Step 3. Figure 3 shows that the ART was low (i.e., the network was fast) through Step 3, but then it increased considerably. Figure 4 shows that starting with Step 3, the number of motors able to execute the test transactions fell increasingly short of the number of assigned motors. In Step 6, only 56 of the assigned 100 motors completed the test transactions. The ART decreased in Step 6, which indicates that the network got faster; however, the BPS also decreased in Step 6, which implies that the increased speed resulted from the considerable drop in network capacity. From this information, you can deduce that the system bottleneck is a network problem: The network can support only an average of 27 users before system performance degrades.
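The stress-point reasoning above boils down to finding the step where throughput peaks: beyond that step, added load only hurts performance. A minimal sketch, using made-up throughput numbers loosely shaped like Figure 2 (not the Lab's actual results):

```python
# Illustrative per-step throughput in KBps; the values are hypothetical.
throughput_kbps = {1: 1800, 2: 3300, 3: 4100, 4: 3900, 5: 3600, 6: 3000}

# The stress point is the step with peak throughput: every step after it
# adds load but decreases measured capacity.
stress_step = max(throughput_kbps, key=throughput_kbps.get)

print(f"Stress point at Step {stress_step}: "
      f"{throughput_kbps[stress_step]} KBps peak throughput")
```

In practice you would also check, as the article does, that ART rises and MPS falls after the same step before trusting the conclusion.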

Unfortunately, I incorrectly assumed that the problem was with the Dynameasure software and not the Lab's network because the same network configuration had handled a load exceeding 100 users in previous SQL tests. With help from Bluecurve's technical support team and the software's user manual, I was able to troubleshoot and fix my network problem.

Troubleshooting the Problem
To establish a baseline, I ran a low-load test (Copy All Bi-directional, with only one motor per test client, minimal file size, 5.6MB data set, 10-second Think Time, and one step). On subsequent tests, I tuned and refined the test setup based on my evaluation. Figure 5 shows the number of motors that executed transactions and the number of assigned motors for six tests; Figure 6 depicts network throughput. The figures do not represent all the testing iterations, only the significant steps in the baseline tuning.

As you can see, motors dropped out during the first four tests. I reviewed Dynameasure's Test Summary information (which I'll describe below) and found that the motors that dropped out were on test clients connected to one Cogent repeater. I turned off each workstation connected to the suspect hub and then ran another test; every remaining workstation generated its one motor. In Tests 5 and 6, I added more motors to the remaining workstations. In Test 5, all assigned motors reported to the Manager agent and completed transactions. I then swapped out the Cogent repeater, turned on all the workstations, and ran the same tests. Test 6 displays the impressive results: All the workstations were running, and the load reached the license limit of 100 motors. I also found that network throughput more than doubled. In the previous testing iterations, I had turned off the workstations attached to the faulty hub, but I had never disconnected the hub from the network. The hub was causing the throughput bottleneck.

Other Analysis Tools
After establishing a baseline, you can run individual tests tailored for specific applications. Through the Analyzer, you can easily select and compare any of the test iterations. You can display test results in Table format and easily export them to Microsoft Word, Excel, and Access.

Dynameasure offers several performance tools that let you view data while a test is in progress. To monitor individual workstations, you can view the Operator Detail window (shown in Screen 1) to evaluate operator-to-manager communications. For each motor, you can monitor the test process on the Test Parameters tab (shown in Screen 2) and the Results tab (shown in Screen 3) of the Motor Details window.

After you complete a test, you can generate and print a variety of reports, such as Test Summary and Result Details. The Test Summary report for a File Services test includes BPS, ART, MPS, Result Details, Machine Attributes, Test Environment Attributes, Transaction Details, Test Specifications, and individual Motor Information. The Result Details information is especially useful: For each transaction in a test, you can look at the test client, step, file size, and any errors. The end of the Test Summary contains helpful motor information, such as the step in which each motor becomes active, the last step the motor successfully completes, and the last phase the motor completes. If a motor drops out of a test, the final phase column displays the reason. I used this information to discover which motors were dropping out of the test at a specific step and why. With this information, I determined that the motors that dropped out came from the same test clients, which in turn led me to the faulty repeater.
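The diagnostic step described above, tallying dropouts by test client to find shared hardware, can be sketched as follows. The dropout records and client names are hypothetical, standing in for data you would read out of a Test Summary report.

```python
from collections import Counter

# Hypothetical dropout records gleaned from a Test Summary report:
# (motor_id, test_client, last_completed_step). All names are illustrative.
dropouts = [
    ("m12", "client-03", 3),
    ("m13", "client-03", 3),
    ("m27", "client-07", 4),
    ("m14", "client-03", 4),
]

# Count dropouts per test client. A client that dominates the tally points
# at hardware those motors share, such as the hub or repeater it hangs off.
by_client = Counter(client for _, client, _ in dropouts)
suspect, count = by_client.most_common(1)[0]

print(f"Most dropouts on {suspect} ({count} motors); "
      f"check the hub/repeater serving that client")
```

Clustering failures by what they have in common (client, hub, cable segment) is the same reasoning that led from dropped motors to the faulty Cogent repeater.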