Don't let your disks go to pieces

Recently, I began researching defragmentation for a worldwide deployment of 34 Windows NT servers. Having spent a decade with Digital's VMS operating system (OS), I'm familiar with the performance problems associated with fragmented disks. I remember one case in which logon time decreased from 75 seconds to less than 10 seconds after I defragmented several VMS disks that had not been maintained for 2 to 3 years. Fortunately, maximizing disk performance with NT is an achievable goal.

In the US, the UK, France, and Germany, my company deployed HP NetServers configured with standard RAID hardware. Because my company will manage these remote systems from Denver, Colorado, reliability and redundancy are critical. The servers come in two disk configurations, each with four to six 4GB-to-9GB hard disks. The high-end configuration uses RAID 1 (mirroring) for the system disk and RAID 5 (stripe sets with parity) for the remaining disks. The low-end configuration consists of four disks in a RAID 5 set. As I worked on the NT configuration details, one question kept resurfacing: Does defragmenting a RAID set make sense? Because many users believe RAID hardware doesn't require defragmentation, I decided to research the subject.

What Is Disk Fragmentation?

Executive Software, the company that makes Diskeeper 3.0 for Windows NT Server (the software defragmentation tool I used in my research), defines disk fragmentation as a condition in which pieces of individual files on a partition are noncontiguous, or scattered around the partition, and the free space on a partition is in many small pieces rather than a few large ones. Under ideal circumstances, a file system physically stores a file in a contiguous section of the disk. Finding contiguous space for a file is easy when a disk is new and contains only a few files. However, as you delete and add files over time and the amount of free space declines, the number of large blocks of contiguous free space also declines. The file system must write a file in multiple segments across different portions of the disk. The file system manages these fragments, so they are transparent to applications and end users.

Fragmentation tends to increase when files are smaller than 20MB and the number of deletions is high in proportion to the number of writes. Database files are especially susceptible to fragmentation. If your database has an automatic compaction utility, schedule it to run at regular intervals.

Retrieving a file stored in a contiguous section requires only one seek operation and at least one transfer. Retrieving a fragmented file requires multiple seeks and transfers. The performance cost is in the seek time associated with each segment. When the Master File Table (MFT) fragments, the file system must perform multiple seeks and reads just to locate the file and its starting block number. Directory fragmentation has an even worse effect on I/O performance, and the combination of directory and data-file fragmentation can make a high-performance disk subsystem operate at a snail's pace.

RAID 1 maintains a full online copy of the system disk. Every write to the system disk is duplicated on the mirror disk. If the primary disk fails, you can access the OS on the mirror disk. RAID 1 can provide up to 70 percent performance improvement for read operations because the read can occur on either the original or the mirrored disk. However, writes are less efficient because they must be done on both disks. Typically, a system disk is read far more often than it is written, especially if the pagefile is on another disk, so mirroring a system disk can be a highly effective performance technique.

The goal of a striped disk array is to spread one file across multiple disks so that all the disks can transfer their portions of the file concurrently. RAID 5 distributes parity information across the disks in the set; in the event of a disk failure, you can reconstruct the missing disk's contents from the data and parity on the remaining disks.
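The reconstruction RAID 5 relies on is plain XOR arithmetic: the parity block is the byte-wise XOR of the data blocks, so XORing the survivors with the parity regenerates a lost block. Here is a minimal sketch using a hypothetical three-data-disk stripe (an illustration of the principle, not any particular controller's implementation):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical stripe: three data blocks plus one parity block.
data = [b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"]
parity = xor_blocks(data)

# Simulate losing disk 1: XOR the surviving data blocks with the
# parity block to reconstruct the missing block.
surviving = [data[0], data[2], parity]
reconstructed = xor_blocks(surviving)
assert reconstructed == data[1]
```

Because parity is just the XOR of the data, any single missing block can be rebuilt this way, which is why a RAID 5 set survives one disk failure.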

Example of Performance Degradation
Suppose you want to read a 40MB file that's written contiguously on one disk. Searching and transferring the file takes 1000 milliseconds (ms): The seek operation takes 9ms, and the transfer takes 991ms. However, if you store this 40MB file on a four-disk stripe set, each disk stores 10MB (or a quarter) of the data. Therefore, the same read operation will take 9ms to seek and about 250ms to transfer because the system performs the I/O with each disk concurrently.

Let's assess the effect of fragmentation in this 40MB file. If it takes 9ms to seek and 991ms to read the contiguous file, the overhead is 9 ÷ 1000 or less than 1 percent. If this same file has 10 fragments, the overhead is 90 ÷ 1000 or 9 percent. If you apply the same overhead to the stripe-set situation, the degradations are 3.6 percent (9 ÷ 250) and 36 percent (90 ÷ 250), respectively. Fragmentation on standalone disks or RAID sets always introduces disk performance problems.
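The arithmetic above is easy to verify with a few lines of Python. The sketch below uses the round numbers from this example (9ms per seek, a 1000ms transfer on one disk, 250ms per disk on the four-disk stripe set); actual seek and transfer times vary by drive:

```python
SEEK_MS = 9  # time for one seek operation

def overhead_pct(fragments, transfer_ms):
    """Seek time as a percentage of the transfer time,
    given one seek per fragment."""
    return 100.0 * SEEK_MS * fragments / transfer_ms

# Standalone disk: 40MB transfers in roughly 1000ms.
print(overhead_pct(1, 1000))    # 0.9  (contiguous, <1 percent)
print(overhead_pct(10, 1000))   # 9.0  (10 fragments)

# Four-disk stripe set: each disk transfers a quarter of the data, ~250ms.
print(overhead_pct(1, 250))     # 3.6
print(overhead_pct(10, 250))    # 36.0
```

Note how the same 10-fragment file costs four times the relative overhead on the stripe set, because the fixed seek cost is amortized over a much shorter transfer.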

Diskeeper Server 3.0
Contact: Executive Software 800-829-6468
Web: http://www.diskeeper.com

Getting Results
After defragmenting five servers with a total of 16GB of disk space, I made several observations. First, I needed at least 20 percent free disk space to achieve even a minimal reduction in fragmentation. Even with 25 percent free disk space, cleaning up a disk with more than 10,000 fragments took several hours. Cleanup time decreased to about 15 minutes for a 1.34GB disk when it contained fewer than 1000 fragments.

Second, FAT partitions defragmented faster than NTFS partitions because the directory structure is simpler. Third, you might need to defragment the same disk multiple times to improve performance. Once the disk is almost clean, you'll notice a significant increase in speed.

Fourth, the continuous defragmentation option is practical only when you have multiple disks. On a single-disk system, the slowdown is frustrating if you try to work while the defragmentation utility is active. The defragmentation process does not consolidate directories or the MFT because other processes access these structures while Diskeeper is running. However, you can select directory consolidation as a boot-time option.

Fifth, you can reduce the amount of time required to defragment disks manually by deleting the memory.dmp file. Sixth, the summary page Diskeeper presents after defragmenting a drive delivers an average-fragments-per-file ratio. This information isn't helpful except as an overall indicator of fragmentation problems. Reviewing the individual files reported in the lower portion of the text report window is more informative.

Finally, ensuring that the application log is large enough to hold the level of logging you select is important. You may need to clear the application log on a weekly basis if you enable all the logging options on all local drives.

Diskeeper 3.0 for NT Server
To defragment disks on my network systems, I installed the NT Server version of Diskeeper 3.0 on each machine. Once I installed the software, the Diskeeper service started whenever I booted the system. (You can start the utility on any system and wait for the software to discover all the servers in the domain.) When I double-clicked the server name, I initiated a scan of the local drives on that system, as Screen 1 shows.

I found many helpful features in Diskeeper 3.0, including manual defragmentation of one or all disks; scheduled defragmentation of local or network system disks, as Screen 2 shows; and five priority levels for the defragmentation process. The product also has an exclusion list for temporary directories or scratch files, an option for consolidating a directory at boot time, and selective logging of defragmentation activity in the system log. Diskeeper includes a Systems Management Server (SMS) Package Definition File (PDF) as well.

Necessary Additions
Although Diskeeper is a well-equipped defragmentation tool, I would like to see Executive Software add two features to the product. First, Diskeeper needs a resizable text report window that lets you see all the details at once. With such a window, you wouldn't have to scroll up and down the smaller window that contains the fragmented file names. Second, Diskeeper needs an option to save the text analysis as an ASCII file. That way, you can examine the file more carefully with a text editor.

I encountered one quirk in the product. When Diskeeper could not contact a network server, it reported a bogus error that it couldn't connect to the local system. After two error messages stating that the machine might be running an old version of the software or might not be available, the local drives appeared.

Diskeeper and Defragmentation
After experiencing the significant performance improvements of a clean disk, I plan to recommend a defragmentation tool for every new server that I configure. As a preventive measure, Diskeeper 3.0 is second to none. For all you folks who have not examined your disks closely for a year or more, my deepest sympathies. But I bet you can't top my 13,768-fragment total on the disk I ignored for a year!

NT 5.0 will include Diskeeper's utility as a base component. For additional details on the performance improvements that defragmentation provides, see the May Web Exclusives review "Diskeeper 3.0 for Windows NT," by Carlos Bernal on Windows NT Magazine's Web site (http://www.winntmag.com).