December is typically a slow month for my consulting business. On Friday, December 17, I had a light schedule and was hoping to do some last-minute Christmas shopping. However, that morning I received calls from three clients within a 10-minute period, all with network-down emergencies. One of the more serious problems was a Microsoft Exchange Server 2003 crash. According to the client, the server reported an "Inaccessible boot device" error whenever he tried to restart it. The server was an HP ProLiant ML370 G4 with a three-drive RAID 5 array and one hot-spare drive, so I suspected the server had lost its array controller and crashed. When I arrived on site, I could see what had happened: two drives were flashing red. Evidently, two drives in the array had failed almost simultaneously. Even when you designate a drive as a hot spare, the hot-spare drive must have enough time to resynchronize with the array. Because the second drive failed before the hot spare could finish resynchronizing, the array lost its fault tolerance and the server crashed. Fortunately, this client had an identical server with the same drive configuration. I disabled the hot-spare drive on the running server, pulled the bad drives from the down server, and installed the running server's hot-spare drive in the crashed server. I then called HP and ordered replacements for the failed hard drives.
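To see why two near-simultaneous failures were fatal, here's a minimal Python sketch of the fault-tolerance rule. This is an illustration of the reasoning only, not HP's actual controller logic; the function name and parameters are hypothetical.

```python
def raid5_survives(drive_failures, spare_rebuild_complete):
    """RAID 5 tolerates exactly one failed drive. A hot spare restores
    that tolerance only after it has fully resynchronized with the array."""
    effective_failures = drive_failures
    if spare_rebuild_complete and drive_failures >= 1:
        # A fully rebuilt spare effectively replaces one failed drive.
        effective_failures -= 1
    return effective_failures <= 1

print(raid5_survives(1, False))  # single failure: array degraded but alive
print(raid5_survives(2, True))   # second failure after rebuild: survives
print(raid5_survives(2, False))  # second failure before rebuild: data loss
```

The last case is what happened here: the second drive failed before the spare could resynchronize, so the array went down.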

The Wednesday night backup was good, but the Thursday night backup was questionable--the server might have crashed during the Thursday backup. The server was using VERITAS Backup Exec 9.1 with an HP Ultrium 460 tape drive to back up itself and all the other servers on the network. Because this server was a domain controller (DC) as well as the Exchange server, the restore process was tricky. Here's a summary of the steps I performed to get the company's server back up and running.

1. Install Windows Server 2003 and Backup Exec 9.1. For the Windows 2003 installation, leave the server in a workgroup--don't join the domain, and don't make the server a DC. Make sure to give the server the same name as the old server and to partition the hard disks the same way they were partitioned before. For more information about this process, refer to http://seer.support.veritas.com/docs/236240.htm. To be safe, I apply service packs to bring the server to the same level it was at before the crash. Even though you're going to overwrite the OS files with the files from tape, sometimes files are in use and must be installed during the next reboot. This situation can cause a crash during the reboot after the full system restore because the newer drivers might not be compatible with the older OS files. This step is more crucial on a server that's running an earlier OS--such as Windows 2000 Server--which has multiple service pack releases. In this particular case, the step was unnecessary because Windows 2003 has no service pack releases at the time of this writing.
2. Catalog the latest backup tape. Unfortunately, I was unable to catalog the backup tape from Thursday night, so I had to use the Wednesday night tape. This confirmed that the server had crashed during the Thursday night backup.
3. Select the local hard disks and System State and perform a complete restore. Because I was restoring to the same hardware, I selected the overwrite-registry option on the Advanced tab of the restore job. I also gave the restore job the highest priority (medium is the default) to speed up the restore process. Make sure not to restore the Backup Exec database--doing so will cause problems with Backup Exec.
4. Verify that the server works after the restore. Reboot the server and test it. Sometimes the server's network card configuration becomes corrupted after a full system restore. Symptoms of a corrupted network card can include the inability to browse My Network Places, problems with the net view \\ command, and incorrect TCP/IP settings on the network card. If you have network card problems, try uninstalling the network card and reinstalling it with the latest drivers. Verify that all the services have started properly, and check Event Viewer for any relevant error messages. If the server doesn't reboot, try starting it in safe mode or use the Recovery Console to get the server running.
5. Restore Exchange. Verify that all the Exchange services have started properly. After the server restarts following the initial restore, Exchange shouldn't have mounted any private or public information stores. Use Exchange System Manager (ESM) to open the database properties for each information store and mark each store so that it can be overwritten by a restore. Create a restore job to restore all the private mailbox stores. The Backup Exec public-store restore process relies on a Messaging API (MAPI) logon and will fail if the private stores aren't restored before you attempt a public-store restore. After the private stores are successfully restored, create a second restore job to restore any public stores. During the restore-job setup, if this is the last backup to be restored, select the "Commit after restore completes" and "Mount database after restore" check boxes under the Exchange options. After the restore jobs are complete, verify that all stores are mounted correctly and that users can access their mail. For more information, refer to http://seer.support.veritas.com/docs/235756.htm.
6. Test. Verify that users can access their data files, mail, and printers. Review Event Viewer for any relevant error messages and correct problems as necessary.
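The ordering constraint in step 5--private stores before public stores, with commit-and-mount selected only on the final job--can be sketched as follows. The store records and field names here are hypothetical illustrations, not Backup Exec's actual API:

```python
def order_restore_jobs(stores):
    """Order Exchange restore jobs so every private (mailbox) store is
    restored before any public store, mirroring the MAPI-logon dependency
    described in step 5. Flags the last job to commit logs and mount."""
    private = [s for s in stores if s["type"] == "private"]
    public = [s for s in stores if s["type"] == "public"]
    ordered = private + public
    for i, job in enumerate(ordered):
        # Only the final restore job should commit and mount the databases.
        job["commit_and_mount"] = (i == len(ordered) - 1)
    return ordered

jobs = order_restore_jobs([
    {"name": "Public Folder Store", "type": "public"},
    {"name": "Mailbox Store", "type": "private"},
])
print([j["name"] for j in jobs])  # private store restored first
```

Running a public-store restore first would fail the MAPI logon, which is why the private stores must come first in the sequence.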

Restoring a DC and Exchange is always tricky. Fortunately, it's not a job you perform every day. Use these tips to smooth out the recovery process and get your users back up and running.

Tip: Compaq Firmware on RAID Controllers
On late-model HP array controllers, such as the 6400, the controller might incorrectly report a failed battery. To correct this problem, download the latest firmware and driver for the controller at http://h18007.www1.hp.com/support/files/storage/us/locate/69_5618.html.