How do you know where to start when you need to add storage capacity to your existing Windows NT solution? You can't simply add more disk space and expect to improve performance. As you increase your storage capacity in an enterprise environment, you need to be able to detect bottlenecks in your RAID subsystems, know which RAID levels to consider, and know how to size your RAID arrays according to current and future performance requirements.
If you're new to RAID or just need to brush up on your technology, see Table 1, page 186, for a comparison of RAID levels and definitions or go to the RAID Advisory Board Web site (http://www.raid-advisory.com) for an extensive RAID review. The disk subsystem is one of the most flexible resources you can configure in NT. How well you design your disk subsystem can drastically influence NT's overall performance.
Getting the Big Picture
Before you can detect a disk subsystem bottleneck, you need to determine whether your system is suffering from other bottlenecks associated with the CPU, memory, network, applications, clients, and NT resources. (For information about tuning NT to improve performance, see "The Beginner's Guide to Optimizing Windows NT Server," part 1 and part 2, June and August 1997.) If you add resources to an area of NT that isn't throttling your system's performance, you won't improve NT's overall performance. Tuning a resource or purchasing additional hardware only to find that your efforts were in vain can be frustrating. Assuming your disk subsystem is causing the only bottleneck on your NT system, you can take several steps to detect and correct the bottleneck and improve your system's disk performance.
Detecting Single-Disk Bottlenecks
Detecting a bottleneck in your disk subsystem is an important first step in helping you determine how much additional disk space and disk performance capacity you need. On NT systems with one hard disk, the disk becomes a bottleneck that throttles the system when the disk can't keep up with the requested workload. As a result, the disk's response time for processing application requests becomes unacceptable. This delay forces applications to wait on disk service.
NT's Performance Monitor is an excellent tool for detecting disk bottlenecks (for an explanation of Performance Monitor, see John Savill, "Troubleshooting NT Performance Monitoring," April 1998). To collect disk subsystem statistics for use with Performance Monitor, you must type

diskperf -ye

at the NT command prompt and reboot the server; otherwise, the performance counters will all report zero. The -y option tells NT to start the disk counters when you restart NT, and the -e option enables the disk counters you need to measure the performance of physical disks in striped disk sets. (You might not have a striped disk set now, but turning on these counters will save you from having to reboot later.)
Selecting Disk Counters
The number of disk-related counters that Performance Monitor provides can be overwhelming. A good counter to watch is %Disk Time, which is available under Performance Monitor's LogicalDisk object. The %Disk Time counter reports the percentage of elapsed time that the selected disk is busy servicing read or write requests. If %Disk Time averages 60 to 80 percent, the disk is not yet causing a bottleneck, but this level of activity warrants a closer look at the disk in question. When %Disk Time exceeds 80 percent, the disk is approaching saturation. At this level of activity, the time the disk requires to service each request increases, and you need to closely monitor several other disk-related counters that are also available under Performance Monitor's LogicalDisk object.
The first of these additional counters is Avg. Disk Queue Length. This counter measures the average number of read and write requests that NT queued for the selected disk during a sample interval. A hard disk becomes a serious bottleneck when the Avg. Disk Queue Length exceeds 2 for a sustained period. When this delay occurs, applications are waiting to access the disk.
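This queue-length rule of thumb is easy to encode. The following Python sketch (the function name and sample values are mine for illustration, not part of Performance Monitor's interface) flags a disk as bottlenecked when its sustained per-disk queue depth exceeds 2:

```python
def disk_is_bottlenecked(avg_disk_queue_length, num_disks=1, threshold=2.0):
    """Apply the rule of thumb: a sustained queue deeper than
    `threshold` outstanding requests per disk signals a bottleneck."""
    return (avg_disk_queue_length / num_disks) > threshold

# A single disk sustaining 3 outstanding requests is bottlenecked.
print(disk_is_bottlenecked(3))    # True
# A queue of 1.5 on one disk is acceptable.
print(disk_is_bottlenecked(1.5))  # False
```

The same check extends to RAID arrays later in the article by passing the number of disks in the array as `num_disks`.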
Another counter to watch when %Disk Time climbs into the 60 to 80 percent range is Avg. Disk sec/Transfer. This counter measures the average time, in seconds, of each disk transfer (i.e., the time the disk needs to service each request). A disk can complete only so much work before its service begins to degrade. When disk performance begins to degrade, Avg. Disk sec/Transfer increases dramatically, and this increase affects NT's overall performance.
You will want to review the Disk Transfers/sec counter to determine the amount of work a disk is completing. This counter measures the rate of read and write operations (also known as the rate of input/output per second) on the selected disk. The amount of work a disk can support depends on the disk technology and the I/O workload the disk encounters. In my experience, an Ultra Fast/Wide SCSI 7200rpm disk encountering a mixed I/O workload (random, sequential, write, and read operations) supports approximately 50 to 100 disk transfers per second before its performance degrades. Monitoring the Avg. Disk sec/Transfer counter lets you observe this performance degradation.
Detecting RAID Bottlenecks
RAID technology lets you group multiple hard disks and present them to NT as one logical disk device. To detect a RAID bottleneck, you use the single-disk bottleneck detection techniques I just described, but with a twist. The %Disk Time counter uncovers problems that are brewing in any RAID device. When you're attempting to detect a RAID bottleneck, RAID 0 (disk striping) is the easiest RAID level to work with. RAID 0 takes advantage of all the disks in the array equally. Thus, a three-disk RAID 0 array can support three times as much workload (i.e., disk requests) and three times as many outstanding disk requests (Avg. Disk Queue Length) as a one-disk configuration before becoming a bottleneck.
In a RAID 1 mirror with two disks, the array uses both disks for all write activities. To determine the workload that a RAID 1 mirror can support (i.e., the number of transfers per second per disk), use the following equation: (disk reads/sec + [2 * disk writes/sec]) / (number of disks in the RAID array). Today's RAID 1 mirrors use a two-disk configuration. Because every write goes to both disks, RAID 1 arrays support a slightly lower workload in a write-intensive environment than systems with one hard disk, despite their greater availability. If your Avg. Disk Queue Length divided by the number of disks in the array exceeds 2, you have a serious bottleneck in a RAID 1 mirror.
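As a sketch of that arithmetic (the function name and sample rates are illustrative, not from the article's measurements), the RAID 1 calculation doubles the write component before dividing by the disk count:

```python
def raid1_transfers_per_disk(reads_per_sec, writes_per_sec, num_disks=2):
    """Each write hits every member of the mirror, so writes count twice."""
    return (reads_per_sec + 2 * writes_per_sec) / num_disks

# 100 reads/sec and 50 writes/sec on a two-disk mirror:
# (100 + 2 * 50) / 2 = 100 transfers per second per disk.
print(raid1_transfers_per_disk(100, 50))
```

Note that the mirror's reads can be split across both disks, which is why a read-heavy workload fares much better on RAID 1 than a write-heavy one.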
The RAID 5 disk stripe with parity environment is similar to a RAID 0 stripe set for read-intensive environments. A RAID 5 array with five disks supports almost five times as much workload and up to five times as many outstanding disk requests as a one-disk system before becoming a bottleneck. To calculate how many disk requests a RAID 5 array can support per disk, use the following formula: (disk reads/sec + [4 * disk writes/sec]) / (number of disks in the RAID array). A RAID 5 array's performance differs from a RAID 0 stripe set's performance because of the additional disk activity associated with parity generation. In a RAID 5 array, parity information is spread across all the disks in the array for fault tolerance. To maintain this parity information, each RAID 5 write operation reads the data block, reads the parity block, XORs the data to compute the new parity, writes the data block, and writes the parity block. Thus, each write request in a RAID 5 array incurs four disk operations. This parity generation slows write operations in RAID 5 environments compared with RAID 0. However, the parity information lets you continue operations if one of the disks in the RAID 5 array fails: you can replace the failed disk and reconstruct its data on the new disk using the data and parity information from the other disks in the array.
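The RAID 5 formula can be sketched the same way, using the measured rates from the example later in this article (the function name is mine; the four-fold write multiplier comes from the read-data, read-parity, write-data, write-parity sequence described above):

```python
def raid5_transfers_per_disk(reads_per_sec, writes_per_sec, num_disks):
    """Each logical write costs four disk I/Os: read data, read parity,
    write data, write parity."""
    return (reads_per_sec + 4 * writes_per_sec) / num_disks

# The article's three-disk example: 126 reads/sec and 73 writes/sec
# works out to (126 + 292) / 3, or about 139 transfers/sec per disk.
print(round(raid5_transfers_per_disk(126, 73, 3)))
```

Because only the writes carry the four-fold penalty, a read-mostly workload on RAID 5 behaves much like RAID 0, exactly as the text describes.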
You can use hardware-based RAID solutions to avoid the performance pitfall associated with generating this parity information. Hardware-based RAID controllers generate parity information using their own CPU, not the system's CPU. As a result, a system using a hardware-based RAID solution can handle more disk I/O operations than a software-based solution. An additional benefit of offloading parity generation to a hardware-based RAID solution is that you can recover and use processing power elsewhere on your system that might otherwise be wasted on disk I/O parity.
Sizing Additional Disk Capacity for RAID Arrays
If you evaluate your RAID array's performance and determine that the array is causing a bottleneck in your system, you can intelligently size additional storage capacity. Without the information that the Performance Monitor counters provide, you can only guess how much disk space you need to add to improve performance.
Adding a RAID-based disk subsystem to NT can improve a system's performance, availability, and manageability. However, you need to consider fault tolerance, cost, capacity, and performance when sizing RAID subsystems.
Determining How Many Disks to Add
How do you know how many disks you need to meet your performance requirements? The primary performance requirements for a RAID array are adequate throughput and response time. The workload you place on the array and the amount of work the RAID array can support (i.e., transfers per second) influence both requirements.
To help you know what steps you need to take when adding storage capacity to NT, let's look at an example. Imagine that you have a server with a RAID 5 array composed of three 4GB Ultra Wide SCSI 7200rpm hard disks. Having historical information to work from when adding storage capacity is helpful, so imagine that you've stress tested your NT file server using Bluecurve's Bi-Directional copy workload to simulate a file server workload (for information about Bluecurve, see Carlos Bernal, "Dynameasure Enterprise 1.5," September 1997).
From the Bluecurve stress test results, you learn that the maximum throughput that this configuration (configuration 1) provides at the 20-user level is 3.8MB per second (MBps) with a response time of 13.9 seconds. When you review the corresponding Performance Monitor log to determine what's happening inside NT during the tests, you see that the %Disk Time stays at 100 percent. (I omitted this counter from the chart in Screen 1, page 187, to ease viewing.) As the Disk Transfers/sec increases against the disk array, the Avg. Disk Queue Length grows to almost 16 and the average RAID array response time (i.e., Avg. Disk sec/Transfer) increases to 0.121 second, which is slow. This information indicates that this RAID array is causing a bottleneck. Now that you know a bottleneck is occurring, you can use this information to determine the most economical way to remove the bottleneck and increase the usable disk capacity.
Estimating Required Additional RAID Performance Capacity
The Avg. Disk Queue Length for configuration 1 is 16, which exceeds the maximum recommended rating of 6 (3 disks * 2 outstanding requests each). Also, the workload is 139 transfers per second per disk ([126 + (4 * 73)] / 3), which exceeds the roughly 100 transfers per second that one disk can support. The combination of long queues and excessive transfer rates slows the Avg. Disk sec/Transfer response time to 0.121 second.
You want to limit each disk in the array to no more than two outstanding requests at a time, so you need a minimum of eight disks to remove the bottleneck. I recommend you replace the three-disk RAID 5 array with a 10-disk RAID 5 array. Adding two more disks than the system requires gives you some room for possible surges in workload and room to accommodate future requirements. This configuration removes the disk bottleneck and provides 36GB of usable storage capacity.
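The sizing logic reduces to two constraints, with the larger answer winning (the helper names and the 100-transfers-per-second ceiling are assumptions drawn from the figures earlier in the article, not a formal sizing method):

```python
import math

def min_disks_for_queue(avg_queue_length, max_queue_per_disk=2):
    """Enough disks that each sustains no more than 2 outstanding requests."""
    return math.ceil(avg_queue_length / max_queue_per_disk)

def min_disks_for_raid5_workload(reads_per_sec, writes_per_sec,
                                 max_transfers_per_disk=100):
    """Enough disks to absorb the RAID 5 write penalty (4 I/Os per write)."""
    return math.ceil((reads_per_sec + 4 * writes_per_sec)
                     / max_transfers_per_disk)

queue_min = min_disks_for_queue(16)                   # ceil(16 / 2) = 8
workload_min = min_disks_for_raid5_workload(126, 73)  # ceil(418 / 100) = 5
print(max(queue_min, workload_min))                   # 8 disks minimum
```

The queue constraint dominates here, which is why eight disks is the floor; the two extra disks in the ten-disk recommendation are headroom for workload surges and growth.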
Graph 1 shows how the average response times of the RAID array in configuration 1 compare with those of the new configuration (configuration 2). Graph 2 shows how the throughput levels of the RAID array for configuration 1 compare with those of configuration 2. Configuration 2 lowered the average response time from 13.9 seconds to 9.2 seconds and improved the throughput from 3.8MBps to 4.9MBps at the 20-client level. The Avg. Disk Queue Length dropped from 16 to 12, and Avg. Disk sec/Transfer dropped from 0.119 second to 0.049 second. These results provide insight into the reason why the throughput and response time reported by the Bluecurve clients improved significantly. In addition, Performance Monitor reported that the RAID array provided greater than 7.34MBps of disk throughput while supporting a workload of 68 ([147 + (4 * 117)]/9) transfers per second per disk. This sizing solution provides improved performance with room to grow.
Disk Storage Capacity vs. Disk Performance Capacity
In the example in this article, you learned how to determine the number of disks you need to add to a RAID array to remove a disk bottleneck and provide the necessary storage capacity. This example provides 36GB of usable disk storage capacity. So why did I suggest you create a RAID array using ten 4GB disks instead of five 9GB disks to provide 36GB of usable storage capacity? The answer has to do with the supported disk workload. The workload each disk can support doesn't increase just because its capacity grows from 4GB to 9GB; disks from the same family (e.g., Ultra Wide SCSI 7200rpm) support the same transfer rate regardless of capacity. Each 7200rpm disk can support only about 100 transfers per second. Thus, if you use five 9GB disks instead of ten 4GB disks, you meet the storage capacity goal of 36GB, but the RAID array is still a bottleneck because the measured queue of 16 outstanding requests spread across five disks still exceeds 2 requests per disk. You could use nineteen 2GB disks to provide even better performance, but that solution is prohibitively expensive.
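The trade-off can be laid out side by side. This sketch (helper name mine) compares the three candidate arrays from the text against the measured queue of 16 outstanding requests, with 2 requests per disk as the comfortable maximum:

```python
def raid5_usable_gb(num_disks, disk_gb):
    """RAID 5 spends one disk's worth of capacity on parity."""
    return (num_disks - 1) * disk_gb

# Three ways to reach 36GB usable, under a measured queue of 16
# outstanding requests.
for disks, size_gb in [(10, 4), (5, 9), (19, 2)]:
    usable = raid5_usable_gb(disks, size_gb)
    queue_per_disk = 16 / disks
    print(f"{disks} x {size_gb}GB: {usable}GB usable, "
          f"{queue_per_disk:.1f} queued requests per disk")
```

All three options deliver 36GB, but only the ten-disk and nineteen-disk arrays keep the per-disk queue at or below 2, and the ten-disk array does so at a sensible cost.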
Meeting Your Storage and Performance Needs
Understanding how to use and evaluate NT's built-in metrics and distinguishing between storage capacity and disk performance capacity are important skills. After you understand these concepts and the relationships among the information that Performance Monitor provides, you can remove the guesswork associated with sizing your RAID array and meet your storage and performance needs. In a future article, I'll show you how you can tune your NT RAID solution for maximum performance.