If you're a network administrator who has recently become a database administrator, too, the task of creating and maintaining database files for your company's developers might seem daunting. Fortunately, this task is much easier with Microsoft SQL Server 2000 and SQL Server 7.0, compared with earlier versions. (If you're using SQL Server 6.5 or earlier, I strongly recommend upgrading to SQL Server 2000 or SQL Server 7.0.) Using SQL Server Enterprise Manager, you can create the necessary database files and set the necessary database properties quickly and easily. Expanding, shrinking, and deleting databases is also easy. However, before I show you how to perform these tasks, you need to know about the types of files a SQL Server database contains and where to place those files.

SQL Server Files


Every SQL Server database has at least one data file and at least one transaction log file. By default, the data file has the extension .mdf and the log file has the extension .ldf. You can add files to permit databases to span physical disks or to split your database into sections for better performance and easier maintenance (more on that later). Additional data files have the extension .ndf; additional log files keep the .ldf extension.

A SQL Server file can't span logical disks. If your database will be large and won't fit on one logical disk, you have several options. You can split the database into several files on different disks. You can put the database on a RAID array, which appears as one logical disk. Or you can use the Windows 2000 and Windows NT tools to create volume sets or, in Win2K, dynamic volumes, which also appear as one logical disk.

The transaction log should be on a separate physical disk. That way, if you lose the disk on which the data is stored, you can at least restore the database from your last backup and the transaction log. In addition, I recommend that you not put the transaction log on the RAID array. The log is written and read sequentially, so it doesn't benefit from the multiple drive heads in the RAID array. Nor is putting the transaction log on the RAID array worth the additional cost of calculating the parity of data that will be read only once or twice before being discarded. Mirroring the drive that contains the transaction log is a better idea and well worth the cost.

Creating a Database with the GUI


Developers who constantly create and delete test databases typically use a script. However, when you need to create only a few production databases, using Enterprise Manager is easier. Open Enterprise Manager and expand the hierarchy so that you can see your server, then Databases. Right-click Databases and select New Database. The first order of business is to provide a name for your database. Select the General tab of the new database's Properties dialog box, and enter a database name in the Name field. Use a name that's easily recognizable and short enough so that it can be typed quickly. You might not be typing it that much, but your developers will. In SQL Server 2000, you have to supply the collation for the database, as Figure 1 shows. Leave the default setting unless you have a good reason to change it (e.g., you're using a third-party database application that requires a specific collation). If you're unfamiliar with collation, see my Web-exclusive article "Installing SQL Server," http://www.winnetmag.com, InstantDoc ID 21742.

In SQL Server 7.0, the General tab doesn't include a collation setting because you set the collation when you installed SQL Server 7.0. After you've set the collation, it's fixed for all databases on that server.

The next order of business is to supply a logical, or internal, name for the primary data file. SQL Server creates and names the physical data file in the default data directory according to the supplied logical filename. Although you can customize the physical filename, letting the logical filename determine the physical filename is easiest. To supply the logical name in SQL Server 2000, select the Data Files tab and enter the logical name for the primary data file in the File Name field, as Figure 2 shows. In SQL Server 7.0, you provide the logical name for the data file on the General tab. For now, don't worry about the filegroup, which Figure 2 shows as PRIMARY.

Notice the Automatically grow file check box at the bottom left of Figure 2. In SQL Server 7.0 and later, the data and log files grow automatically when they've filled their current space. Although this automatic growth feature sounds like it could save you a lot of trouble, a better approach is to allocate as much space as you think the data file will need by specifying that amount in the Initial size (MB) field. By making the file the correct size from the beginning, you avoid the fragmentation that allocating new areas of the disk to the database causes. You also prevent someone else from using that space and leaving your database with no room to grow.

By default, the Automatically grow file check box is selected. You can turn off this feature, or you can use it as a safety valve in case your initial space estimates are off or someone uses the database differently from what you planned. You can set the growth to occur by percent or megabyte. By default, the data file is set to grow in increments of 10 percent. When growth occurs by percent, the data file grows the specified percentage according to its current size. So, you need to be careful with the percent option—the bigger the data file, the more space it grabs with each growth spurt. A better choice is to use the In megabytes option, in which the data file grows the specified amount during each growth spurt.

By default, no limit is set on how large the data file can grow, so it can grow until the disk is full. To prevent the data file from filling the disk, you can set an upper limit on the file's size by selecting the Restrict file growth (MB) option and specifying the maximum file size.

The final order of business is to select the Transaction Log tab and set up the log file. As Figure 3 shows, enter the log file's logical filename in the File Name field and its physical path and filename in the Location field. You can also set the log file to grow automatically. If a log file fills up, everything stops. No more data modifications can occur until you delete the log—and when you delete a log without backing it up first, your database is vulnerable. If a data disk were to fail at this point, you would have no data and no log, so you would lose everything from your last backup onward. But again, don't rely on only the automatic growth feature. Instead, set up alerts to let you know when the log file is filling up, or even better, to let you know and automatically start a log backup. Then, as you did with the data file, you can use the automatic growth feature as a safety valve as long as you set an upper limit on the log file's size.
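For reference, everything the General, Data Files, and Transaction Log tabs collect can also be expressed as one CREATE DATABASE statement in Query Analyzer. This sketch assumes a hypothetical Sales database with the data file and log file on separate disks (E and F); adjust the names, paths, and sizes to match your environment:

```sql
-- Hypothetical example: data file on E, log file on a separate disk (F)
CREATE DATABASE Sales
ON PRIMARY
  ( NAME = Sales_Data,                            -- logical filename
    FILENAME = 'E:\MSSQL\Data\Sales_Data.mdf',    -- physical filename
    SIZE = 500MB,                                 -- Initial size
    MAXSIZE = 2000MB,                             -- Restrict file growth
    FILEGROWTH = 50MB )                           -- grow In megabytes, not by percent
LOG ON
  ( NAME = Sales_Log,
    FILENAME = 'F:\MSSQL\Log\Sales_Log.ldf',
    SIZE = 100MB,
    MAXSIZE = 500MB,
    FILEGROWTH = 25MB )
```

Note that MAXSIZE and a fixed FILEGROWTH increment implement the same safety-valve approach described above: the file can grow if your estimate is off, but it can't take over the disk.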

After you've finished with the Transaction Log tab, click OK, and SQL Server creates your database. If you don't see the database in Enterprise Manager, don't panic—at least not yet. Right-click Databases, select Refresh, and your database should appear. You can also look for the .mdf file in the data directory.

While you're in the data directory, take a moment to check file permissions. The only users who need access to this file (and by implication, the directory in which it resides) are you (i.e., the administrator) and the SQL Server account (i.e., the account that starts the SQL Server services). When users think that they're writing a record in the database, SQL Server is actually writing the record to the data file on their behalf.

Setting the Database Properties


After you create the database, you need to set the database properties. Right-click the database name, then select Properties. As Figure 4 shows, the Properties dialog box contains six tabs. The first three tabs are General, Data Files, and Transaction Log, which show the general database information and the information for the database's data and log files, respectively. You've already set the basic properties for these three tabs. The fourth tab displays information about filegroups. When you create a database, SQL Server automatically creates the Primary filegroup to hold the primary data file. You can add more filegroups, which I discuss later. The fifth tab, Options, shows information about the database options. The options in SQL Server 2000 and SQL Server 7.0 are different, so I cover each version separately. The final tab is Permissions. See my article "Introducing SQL Server Security," October 2000, for a discussion about how to set permissions.

Setting the Database Options in SQL Server 2000


You rarely need to change database options, but knowing about them is worthwhile. As Figure 4 shows, the Options tab contains four sections: Access, Recovery, Settings, and Compatibility.

Access. You use the first section to control access to the database. By default, the access options are turned off. You might limit access to the db_owner, dbcreator, or sysadmin roles, for example, when restoring a database from a backup to prevent the client applications from trying to make changes. You might need to select the Single user option to repair a damaged database. The Members of db_owner, dbcreator, or sysadmin option might seem the same as the Single user option, but it isn't. The Single user option means one connection to the database, not one user. One user can have multiple connections open to the SQL Server database, but not if you select the Single user option.
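In SQL Server 2000, these check boxes map to ALTER DATABASE SET options, which you can also run from Query Analyzer. The database name here is hypothetical:

```sql
-- Put the database in single-user mode (one connection) for repairs;
-- ROLLBACK IMMEDIATE disconnects current users first
ALTER DATABASE Sales SET SINGLE_USER WITH ROLLBACK IMMEDIATE

-- Limit access to members of db_owner, dbcreator, and sysadmin
ALTER DATABASE Sales SET RESTRICTED_USER

-- Return the database to normal access
ALTER DATABASE Sales SET MULTI_USER
```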

By default, the Read-only check box is cleared. Selecting this check box prevents users from making changes in a database. Selecting this check box has another benefit: Queries run faster because SQL Server doesn't have to lock the data rows before reading them and unlock them afterward. Setting the Read-only option and locking the data would be redundant. Before you set the Read-only option, make sure that the database should be read-only. For example, suppose you want to maintain a "read-only" copy of a production database for data analysts to use. If you're replicating data into the duplicate database from the production database, the duplicate database can't be read-only.

Recovery. You have three recovery options from which to choose. The default is the Full recovery option, which keeps track of all changes to the database, even those operations that weren't written to the transaction log in earlier versions of SQL Server, such as bulk data loads. Using this option requires more storage but enables a full recovery from the previous backup and the transaction log. The Simple recovery option is the equivalent of the trunc. log on chkpt. option in SQL Server 6.x and earlier. You can recover only the data in your last full backup because the information needed for a full recovery isn't retained in the log. The Bulk-Logged recovery option is an intermediate option. You can recover all data if you haven't performed any bulk-loading or nonlogged operations. If you have performed any of these operations, you can recover only the data in your last transaction log backup. If the data disk fails, you won't be able to retrieve the portion of the log recorded since the last backup because the Bulk-Logged option logs bulk operations differently from typical transactions. It places only a pointer in the log, and during the backup, it retrieves the data from the actual rows. Obviously, the Bulk-Logged option can't retrieve the data if the data is gone. So, choosing this option adds overhead to the logging operations, with some increase in safety, but not as much overhead or safety as the Full recovery option.
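You can also switch the recovery model from Query Analyzer with ALTER DATABASE, or check it with DATABASEPROPERTYEX. The database name is hypothetical:

```sql
-- Set the recovery model (choose one)
ALTER DATABASE Sales SET RECOVERY FULL
ALTER DATABASE Sales SET RECOVERY BULK_LOGGED
ALTER DATABASE Sales SET RECOVERY SIMPLE

-- Verify the current recovery model
SELECT DATABASEPROPERTYEX('Sales', 'Recovery')
```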

Settings. I recommend that you leave the ANSI NULL default and Use quoted identifiers options at their default values (i.e., cleared), unless the developers in your organization have specifically requested otherwise. These settings don't affect database administration, but they're important to developers. Developers use the ANSI NULL setting and the other ANSI settings stored within the database to change the behavior of SQL Server so that it complies with the ANSI standards or remains backward compatible with earlier versions of SQL Server. The Use quoted identifiers option lets you use spaces in object names (e.g., Order Details), but it has other implications for developers, so let them make the decision.

The Recursive triggers check box is also cleared by default. Don't select this check box unless the developers insist and you get written approval from your boss. A change to the data activates a trigger, and the trigger often changes related data. For example, when a user modifies a record, the trigger might obtain the date, time, and username of the person who changed the data, then update the record with those values.

If you select the Recursive triggers check box, all the triggers in the database become recursive and cause all sorts of problems. The trigger updating the row with the name of the person who changed it will cause the trigger to fire again, which will update the row, which will cause the trigger to fire once more, and so on. I recommend that you leave this check box cleared until Microsoft redesigns it so that you can set individual triggers as recursive.

The Auto close option lets SQL Server close databases when no connections are open and no processes are active. The intent is to free up memory for the other active databases or applications to use. However, some users have reported problems when they try to connect to a closed database. If you think that the Auto close option might be useful, experiment with it. The check box is cleared by default.

Like the Auto close option, the Auto shrink option check box is cleared by default. Selecting this check box means that the database is put on the list of databases to be checked periodically to see whether space can be reclaimed from the database, perhaps after a major archive operation or a cleanup of old or duplicate records. The alternative to using this option is to schedule a task to periodically shrink the database size (more on this later).

One option in which a major difference exists between SQL Server 2000 and SQL Server 7.0 is the Torn page detection option. Torn pages result from SQL Server deleting a record, thus freeing up space, then attempting to write another record in the same place but failing to write it completely. On retrieval, the returned data is a mixture of the old and new data, just like you would see if you tore half a page out of a magazine. In SQL Server 7.0, this check box is cleared by default, but in SQL Server 2000, it's selected by default. SQL Server 2000 must perform extra work to make sure that torn pages aren't occurring. Thus, you might want to read SQL Server Books Online (BOL) and perhaps run some tests to decide whether the extra overhead outweighs the risk of data corruption.

I recommend that you don't change the Auto create statistics and Auto update statistics options, unless you want to annoy your developers by making their database perform slowly. The statistical information that these processes gather lets SQL Server find the optimal way to run queries. Without this information, SQL Server won't run as well as expected, which is why these settings are enabled by default.
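The Settings options correspond to database options that you can also flip with the sp_dboption system stored procedure, which works in both SQL Server 7.0 and SQL Server 2000. This sketch sets the values I recommend above for a hypothetical Sales database:

```sql
-- Recommended defaults: leave auto close and auto shrink off,
-- leave the statistics options on
EXEC sp_dboption 'Sales', 'autoclose', 'false'
EXEC sp_dboption 'Sales', 'autoshrink', 'false'
EXEC sp_dboption 'Sales', 'auto create statistics', 'true'
EXEC sp_dboption 'Sales', 'auto update statistics', 'true'

-- Enable torn page detection (the SQL Server 2000 default)
EXEC sp_dboption 'Sales', 'torn page detection', 'true'

-- Running sp_dboption with only the database name lists the options
-- currently set for that database
EXEC sp_dboption 'Sales'
```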

Compatibility. The compatibility level can be set to 60, 65, 70, or 80. These levels correspond to SQL Server 6.0, SQL Server 6.5, SQL Server 7.0, and SQL Server 2000, respectively. Setting the compatibility level forces SQL Server to interpret certain SQL instructions by using the default setting for the chosen version. You should change the default setting only if you've upgraded databases from earlier versions but haven't had time to change the SQL code. The impact on performance is minimal because SQL Server still uses all the newer performance-enhancing techniques introduced in SQL Server 2000 and SQL Server 7.0.

The default setting for SQL Server 2000 is 80; it's also the setting for SQL Server 7.0 databases upgraded to SQL Server 2000. For SQL Server 7.0, the default setting is 70. For databases upgraded from SQL Server 6.5 or 6.0, the existing compatibility level is retained. Because the older SQL instructions might not be supported in future SQL Server releases, you should encourage your developers to update the SQL code, then set this value to 80.
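From Query Analyzer, the sp_dbcmptlevel system stored procedure reads and sets the same value. The database name here is hypothetical:

```sql
-- Set the compatibility level to SQL Server 2000 behavior
EXEC sp_dbcmptlevel 'Sales', 80

-- Run it without a level to report the current setting
EXEC sp_dbcmptlevel 'Sales'
```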

Setting the Database Options in SQL Server 7.0


SQL Server 7.0's database options and default settings differ from those in SQL Server 2000. Variations even exist between different editions of SQL Server 7.0. Here's how SQL Server 7.0 differs from SQL Server 2000.

Access. In the Access section, the options are single user and dbo use only. The dbo use only option limits database access to the db_owner role only.

Recovery. In the Recovery section, you have two additional options: Select Into/Bulk Copy and Truncate Log on Checkpoint. The Select Into/Bulk Copy option permits nonlogged operations. Selecting this option doesn't turn off the log—you can't turn off the log in a relational database—but rather lets SQL Server 7.0 perform certain operations, such as bulk-loading, without writing a log entry for each inserted row. When SQL Server 7.0 performs a nonlogged operation, you no longer have the option to back up just the log because the log doesn't contain a complete record of changes to the data. You have to back up the whole database.

At frequent intervals, SQL Server 7.0 uses checkpoints to make sure that any changes to data in memory are actually written to the disk. If you change the default and select the Truncate Log on Checkpoint option, you're telling SQL Server 7.0 to remove completed transactions from the transaction log after it writes those completed transactions to the disk. Thus, this option is dangerous for a production database. If you lose the data disk and the information is no longer in the log, you have to roll back to your most recent backup and lose all work done since then. So, leave this option cleared for production databases. The only exception I know to this policy is when you use Microsoft Systems Management Server. SMS is self-healing—even if you lose some inventory entries, within a few days, all will be well again with the inventory.

Settings. In the Settings section, SQL Server 7.0 Desktop Edition has some extra settings turned on by default to make life easier for traveling or disconnected users. (SQL Server 2000 Personal Edition, which replaced the Desktop Edition, also has these extra settings enabled. The SQL Server 7.0 Enterprise and Developer Editions don't.) The Auto close and Auto shrink options are enabled to free up memory and disk space on what might be a less powerful computer (e.g., a laptop). The Select Into/Bulk Copy option is enabled so that the users can possibly bulk-load data without having to know how to enable this option. If the users do so, they would have to perform full backups, not log backups, but that's a moot point because the Truncate Log on Checkpoint option is also enabled by default. Selecting the Truncate Log on Checkpoint option avoids problems with the transaction log either filling up or taking over the hard disk.

Compatibility. SQL Server 7.0's Enterprise Manager doesn't include the Compatibility Level option on the Options tab of the database Properties dialog box.

Expanding the Database


Like the universe, databases are constantly expanding. Even with the Automatically grow file option, sooner or later you'll run out of disk space and have to add more. If the original disk still has room, you can expand the primary data file. If not, you have to add a new file on another disk.

You can use Enterprise Manager to expand an existing file. In the database's Properties dialog box, type the new size for the data or log file, then click OK. As you exit from the dialog box, SQL Server adds the extra space to that file.

If you have to expand onto another disk or if you prefer to add another file on the same disk, you can add a file and make it a member of the Primary filegroup. As Figure 5 shows, you enter the new data file's logical filename, physical path, and initial size and specify PRIMARY as the filegroup. As you exit from the dialog box, SQL Server creates the file. Thereafter, SQL Server will save data to all the available files rather than fill up one file, then move on to the next.
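Both expansion techniques have ALTER DATABASE equivalents. This sketch, with hypothetical file names and paths, first enlarges the existing primary data file and then adds a second file to the PRIMARY filegroup on another disk:

```sql
-- Enlarge the existing primary data file to 1000MB
ALTER DATABASE Sales
MODIFY FILE ( NAME = Sales_Data, SIZE = 1000MB )

-- Add a second data file, on a different disk, to the PRIMARY filegroup
ALTER DATABASE Sales
ADD FILE
  ( NAME = Sales_Data2,
    FILENAME = 'G:\MSSQL\Data\Sales_Data2.ndf',  -- additional data files use .ndf
    SIZE = 500MB,
    FILEGROWTH = 50MB )
```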

Another way you can expand a database is to add data files and assign them to different filegroups. A common practice is to put different filegroups on different physical disks, as Figure 6 shows. Developers can then achieve performance gains by assigning tables and indexes to specific disks, which they do by creating the table or index on a specific filegroup. Before you add a filegroup, consult with your developers. They will know which tables should go into which filegroup and how big the tables will be.
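Adding a filegroup is a two-step operation in T-SQL: create the filegroup, then add a file to it. Developers can then place objects on it with the ON clause. The filegroup, file, and table names below are hypothetical:

```sql
-- Create a new filegroup, then add a file on another physical disk to it
ALTER DATABASE Sales ADD FILEGROUP HistoryFG

ALTER DATABASE Sales
ADD FILE
  ( NAME = Sales_History1,
    FILENAME = 'H:\MSSQL\Data\Sales_History1.ndf',
    SIZE = 200MB )
TO FILEGROUP HistoryFG

-- A developer can then create a table on that filegroup (and disk)
CREATE TABLE dbo.OrderHistory
  ( OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL )
ON HistoryFG
```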

In addition to improving performance, putting different filegroups on different disks is beneficial for backups and restores. You can use a rotating schedule to back up filegroups. Consider this strategy if you have a big database and suspect that the backup might take longer than the time you've allotted. Restoring a database can also be easier: If you lose one of the physical disks, you can restore just the appropriate filegroup rather than the entire database.

Shrinking a Database


Although databases typically increase rather than decrease in size, you might need to shrink a database for several reasons. You might have a 20GB database with 1GB of data in it because the original estimate of the space required for the database was far too large. Or you might want to reclaim the space gained from moving the last 5 years' worth of sales data from your production database to a data warehouse. The steps you take to shrink a database differ between SQL Server 2000 and SQL Server 7.0.

SQL Server 2000. To shrink a database, right-click the database, then select All Tasks, Shrink Database. As Figure 7 shows, you specify the maximum percent of free space in the files after shrinking. Be sure to allow some room for the database to expand again. You can also compact the database (i.e., move the data pages to the beginning of the file space) before reclaiming the free space. Compacting and shrinking takes longer than shrinking alone, so you might consider using the scheduling options to compact and shrink the database during off-hours.

In SQL Server 2000, you can use Enterprise Manager to compact and shrink individual files in a database. Select Files, then choose the target file from the Database file drop-down list, as Figure 8 shows. You can compress and truncate or just truncate a file as you can with a database, but unlike a database, you can specify a final file size. You also have the option of migrating the target file's data to the other files in the filegroup. For example, if your database contains three files but needs only two, you can move one file's data to the other two files so that you can delete the empty file.
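The Shrink Database dialog box is a front end for the DBCC SHRINKDATABASE and DBCC SHRINKFILE commands, which you can also run from Query Analyzer or a scheduled job. The database and file names are hypothetical:

```sql
-- Shrink the whole database, leaving 10 percent free space in its files
DBCC SHRINKDATABASE (Sales, 10)

-- Shrink one file to a target size of 500MB
DBCC SHRINKFILE (Sales_Data2, 500)

-- Migrate a file's data to the other files in its filegroup,
-- leaving the file empty so that you can remove it
DBCC SHRINKFILE (Sales_Data2, EMPTYFILE)
```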

SQL Server 7.0. To shrink a database in SQL Server 7.0, highlight the database in Enterprise Manager. Right-click the database, then select All Tasks, Shrink Database. You'll see two options: The first option compacts the database, and the second option frees up the space at the end of the database file. To free up the maximum amount of space, select both options. However, you can't shrink the database to a size smaller than its initial size. If you made the database too big to start with, you can't fix this problem by shrinking the database. (You can use commands to fix this problem, though. See DBCC SHRINKFILE in SQL Server 7.0 BOL.) You can also schedule these shrinking tasks to run regularly.

Deleting a Database


Deleting a database is perhaps the easiest task of all. Simply highlight the database in Enterprise Manager, press Delete, confirm that you want to delete the database, and it's gone. SQL Server cleans up all the associated .mdf, .ndf, and .ldf files, too.
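The T-SQL equivalent is a one-line DROP DATABASE statement, which likewise removes the database's physical files (the database name is hypothetical):

```sql
-- Drop the database and delete its .mdf, .ndf, and .ldf files
DROP DATABASE Sales
```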

Getting Better All the Time


Now you're ready to create, expand, shrink, and delete databases for your company's or a client's development staff. SQL Server keeps making these tasks easier with each new release.