Q: I'm trying to find the easiest way to poll SQL Server systems and route email to certain people if the server doesn't respond. One approach is to have the Windows NT scheduler run a simple batch file running ISQL and sendmail.exe every few minutes. What do you suggest?
The polling solution is easy to implement and maintain, but many vendors provide utilities that can do this task for you. We've heard a lot of praise for NetIQ's AppManager (http://www.netiq.com), which lets you monitor tuning and maintenance statistics for every BackOffice application on your network, and we know other vendors have products with similar capabilities. Do readers have favorite ways to handle this problem?
Q: Can you use triggers for tasks other than referential integrity checks?
You can use triggers to do almost anything you want. You usually use them for referential integrity, but triggers can be any code that you call in conjunction with data modification operations.
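For instance, a trigger can maintain an audit trail. The sketch below (the audit table and trigger names are our own invention; the titles table is from pubs) logs every price change as it happens:

```sql
-- Hypothetical audit table; names are illustrative.
CREATE TABLE price_audit
(
    title_id   varchar(6) NOT NULL,
    old_price  money      NULL,
    new_price  money      NULL,
    changed_on datetime   NOT NULL
)
go

-- Fires on every UPDATE of titles; reads the special inserted and
-- deleted pseudo-tables to record the before and after values.
CREATE TRIGGER trg_price_audit ON titles
FOR UPDATE
AS
IF UPDATE(price)
    INSERT price_audit (title_id, old_price, new_price, changed_on)
    SELECT d.title_id, d.price, i.price, GETDATE()
    FROM deleted d JOIN inserted i ON d.title_id = i.title_id
go
```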
Q: I need to create a table that has 21 FOREIGN KEY constraints. I can create this table, but when I try to insert a row, I receive an error message (Too many tables involved in this query). How many foreign keys can I define for one table?
Our search of SQL Server 6.5's Books Online (BOL) found this excerpt: "A table can have a maximum of 31 FOREIGN KEY references. This limit is an absolute upper limit, but the maximum may be lower depending on the number of work tables the server has to create to enforce the constraint, and the limit varies by the type of query being executed. FOREIGN KEY constraints are not enforced for temporary tables." This upper limit (31) is tied to the fact that a query can't reference more than 16 tables. You'll be able to reference up to 63 foreign keys in SQL Server 7.0.
Another issue to keep in mind: You don't automatically get indexes with foreign keys the way you do with primary keys. If you want to create an index for your foreign keys column, you need to do it explicitly. Finally, remember that SQL Server doesn't enforce FOREIGN KEY constraints for temporary tables.
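For example, assuming an orders table with a cust_id foreign key column (hypothetical names), you create the supporting index yourself:

```sql
-- SQL Server builds an index automatically for a PRIMARY KEY,
-- but not for a FOREIGN KEY; create one explicitly so joins and
-- constraint checks on the column don't force table scans.
CREATE INDEX idx_orders_cust_id ON orders (cust_id)
```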
Q: I just installed Service Pack 4 (SP4) on a couple of test servers, and the @@microsoftversion value I'm getting is 406271, the same as it was for SP3. I use the @@microsoftversion value to maintain an accurate inventory of the service packs our Windows NT servers are running. The alternative option, the @@version variable, changes format across versions of SQL Server, so it has always caused me problems. Do you have any suggestions?
Microsoft has confirmed that the global variable @@microsoftversion will not return correct information in versions later than SP3. You need to use @@version to programmatically determine which version of SQL Server you're running. The @@microsoftversion variable returns a number similar to 406271, or 0x000632FF in hexadecimal. The high word of this value (0x0006) contains the major version number, and the low word (0x32FF) is divided into the minor version (0x32) and the build number (0xFF). This storage technique means that 255 is the highest build number that @@microsoftversion can represent. Unfortunately, SP4's build number is 281. Microsoft states in Books Online (BOL) that you must always use @@version for custom version tracking because @@microsoftversion is for internal use only. This problem is a good example of why you shouldn't use unsupported features even if they're exposed.
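To see why the build number caps at 255, you can decompose @@microsoftversion yourself (for illustration only; remember that Microsoft says not to rely on this value):

```sql
-- Split @@microsoftversion into its packed parts using integer
-- arithmetic. With a value of 0x000632FF this returns major 6,
-- minor 50 (i.e., version 6.50), and build 255 -- the largest
-- build number a single byte can hold.
SELECT
    @@microsoftversion / 65536         AS major_version,
    (@@microsoftversion % 65536) / 256 AS minor_version,
    @@microsoftversion % 256           AS build_number
```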
Q: How do you hide an object in the Enterprise Manager (EM)?
In this context, hiding an object means that it won't show up in the list when you select the Manage Tables option from the EM. To hide an object, you modify the display of an object's status column. Every user table in a database is represented by a row in sysobjects and will have a value for type = 'U'. The second bit of the sysstat column in sysobjects controls whether an object will be displayed in the table GUI view of EM. You can turn off this bit, which prevents the table from appearing in EM's GUI, by following these steps:
- Open ISQL/w and USE the pubs database.
- In the Query text box, type:
SELECT * FROM sysobjects WHERE type = 'U'
- Change the sysstat value of publishers from 83 to 81. (The sysstat value of 83 represents the bit setting 1010011. The second bit from the right is the display bit, which you must turn off: 1010001. This bit setting corresponds to sysstat value 81.)
- Allow updates to the system tables by entering the following statements in ISQL/w:
sp_configure 'allow updates', 1
RECONFIGURE with override
- Update sysobjects by entering the following statement in ISQL/w:
UPDATE sysobjects
SET sysstat = (sysstat ^ 2)
WHERE name = 'publishers'
This command uses the XOR operator (^) to flip the current setting of the second bit in sysstat. The first time you run this command, it turns off the bit; rerunning the command turns the bit back on.
- Finally, disallow updates to the system tables:
sp_configure 'allow updates', 0
RECONFIGURE with override
Q: Sometimes Open Database Connectivity (ODBC) 3.5 chooses the wrong connection when I'm using connection pooling with similar data source names. What's happening?
Connection pooling is the major new feature in ODBC 3.5. It enables an application to create a connection and save it in a pool; ODBC keeps the pooled connections open and hands one out whenever a request for a new connection comes in. This process is more efficient than repeatedly opening and closing connections.
In certain circumstances, ODBC 3.5 (the version included with the Windows NT 4.0 Option Pack) can choose the wrong connection. When a client application uses SQLConnect, the ODBC Driver Manager looks at only the characters in the first half of the Data Source Name (DSN), USERID, and Password to decide whether it needs to choose a connection in the pool. Similarly, when a client application uses SQLDriverConnect, the ODBC Driver Manager looks at only the characters in the first half of the connection string.
For example, when a client application is using two DSNs (e.g., MyDatabase1 and MyDatabase2), a connection to MyDatabase2 can incorrectly satisfy a request for a connection to MyDatabase1. The problem occurs only when the two strings have the same length and the names and the passwords differ only in their second halves. When MyDatabase1 and MyDatabase2 have different schemas, the application will probably experience SQL errors such as Table xxx does not exist. But when the two databases have the same schemas (as in a row-partitioning situation), selects, updates, and inserts could go to the wrong database. Applications using ActiveX Data Objects (ADO) or Object Linking and Embedding Database (OLE DB) can also encounter this problem because these services call ODBC. To avoid the problem, make sure names have different prefixes instead of different suffixes when you name a set of related data sources.
Q: In your October 1997 column, you proposed a way for desperate database administrators to recover a suspect database. Do you have any other solutions?
When you bring up SQL Server, it needs an exclusive lock over all devices before it can perform a recovery, and it marks as suspect any database it has trouble locking. Databases marked as suspect have a special bit set (bitmask=256) in the status column of master..sysdatabases. In a corporate network, circumstances can prevent SQL Server from accessing devices even though the database is not corrupt. Sometimes a database is marked suspect because the system is backing up the file, or someone accidentally renamed a directory. The data is still in perfect shape, but SQL Server can't open the device file so it marks the database as suspect. A database administrator must remain calm and first investigate the Windows NT event logs for possible clues about locking problems rather than resorting to emergency mode. The following script will let SQL retry recovery:
sp_configure 'allow updates', 1
RECONFIGURE with override
UPDATE master..sysdatabases
SET status = 64 WHERE name = '<suspect database name>'
sp_configure 'allow updates', 0
RECONFIGURE with override
Microsoft provides sp_resetstatus, a useful stored procedure for resetting the database-suspect bit in master..sysdatabases. This stored procedure is not installed by default, so you need to copy it from "Resetting the Suspect Status" in Books Online (BOL). If you decide to run this stored procedure, be sure to read the associated warnings in BOL.
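Once you've copied the procedure from BOL into the master database and read the warnings, usage is simple (the database name here is hypothetical):

```sql
-- Clear the suspect bit for a database named sales, then stop and
-- restart SQL Server so it attempts recovery again.
EXEC sp_resetstatus 'sales'
```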
Q: How much does batching transactions improve SQL Server performance?
A Transact-SQL (T-SQL) batch is a series of T-SQL commands compiled and executed together. A SQL script can contain many batches, which are separated by the go end-of-batch keyword. We conducted a test to measure the performance effect of batching commands. To avoid measuring CPU, memory, or disk utilization of the server, we repeatedly ran the command SELECT 1. This command creates a one-row, one-column result with a value of 1. As Listing 1 shows, our benchmark consisted of two SQL scripts. The first script had 1000 SELECT statements in 1 batch, and the second script contained 1000 batches with 1 SELECT statement in each batch.
Running the first SQL Server script on a high-end Compaq server took about 13ms, and the second script took more than 9 minutes. Ouch! You knew batching commands is important, but we bet you didn't realize it is that important. The overhead comes from all the extra network roundtrips as the client and server conduct what we call "the teaspoon fire brigade"--passing teaspoons of water down a line of people to put out a fire. A similar process goes on when you use tiny batches. (For some pointers about tuning SQL Server at the application level, including how to spot inefficient application design problems, see Brian's article "Tuning Your Database with SQL Trace," page 129.)
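If you don't have Listing 1 handy, the two scripts look roughly like this (a sketch of the test, not the exact listing):

```sql
-- Script 1: 1000 SELECT statements in a single batch --
-- one network roundtrip for the whole script.
SELECT 1
SELECT 1
-- ... repeated 1000 times ...
go

-- Script 2: 1000 batches of one SELECT each --
-- 1000 separate roundtrips, hence the huge slowdown.
SELECT 1
go
SELECT 1
go
-- ... repeated 1000 times ...
```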
Q: My SQL Server 6.5 tempdb database mysteriously fills up almost daily. I have tried scheduling regular dumps every hour and increasing the size of tempdb to 8MB with no success. My SQL Server applications crash trying to allocate space for syslogs because logsegment is full. Manually dumping tempdb corrects the error. The culprit seems to be a document management system we have installed. How can I determine the exact source of this problem?
Try running SQL Trace to see what your applications are doing and how they're using tempdb. Chances are you have selected Create temporary stored procedures for prepared SQL statements in your Open Database Connectivity Data Source Name (ODBC DSN). If the setting is selected, try clearing it.
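While you investigate, you can watch how full tempdb's log is getting from ISQL/w:

```sql
-- Report log-space usage for every database, including tempdb,
-- so you can catch the fill-up as it happens.
DBCC SQLPERF(LOGSPACE)
```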
Q: Currently, each product in our company's online directory includes a description. The description can be 500 characters long, so we break the description into two fields--each a varchar(255)--for storage in SQL Server. We concatenate these two fields when we display the description on the Web. I'm thinking about changing this arrangement to one text datatype field. Would one text field drastically increase the database's memory requirements, and would it adversely affect database performance?
Text is harder to handle in Transact-SQL (T-SQL) if you're writing any SQL batch. In SQL Server 6.5, a 100-byte text column takes up 2KB, which adds up quickly. Also, you can't use the equality operator (=) to compare text fields; you must use LIKE. Text columns can't be indexed, and you can't assign a text column to a local variable in a T-SQL batch, which makes string processing difficult. Our conclusion is that using varchar is better if you have lots of data and performance is a big issue.
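If you stick with the two-column approach, the display query stays simple (the table and column names here are hypothetical):

```sql
-- Concatenate the two halves at query time; because the columns are
-- varchar, you can also index them, compare them with =, and assign
-- them to local variables in a T-SQL batch.
SELECT product_name, description1 + description2 AS description
FROM products
```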
Q: How do you optimize for symmetric multiprocessing (SMP)?
SQL Server 7.0 will improve scalability immensely, but in the meantime, we are optimizing SQL Server's scalability by using SMP systems. The SMP concurrency option available through sp_configure can have a significant effect on both the performance and the stability of a SQL Server configuration. For information about configuring your SMP systems for SQL Server 6.5, read two Microsoft Knowledge Base articles, "INF: SQL Server and Windows NT Thread Scheduling" (http://support.microsoft.com/support/kb/articles/q111/4/05.asp) and "INF: Proper SQL Server Configuration Settings" (http://support.microsoft.com/support/kb/articles/q166/9/67.asp).
SMP concurrency controls the number of threads SQL Server releases to NT for execution, thereby limiting the number of CPUs that SQL Server can use at one time. The default SMP concurrency value (0) tells SQL Server to autoconfigure and use n - 1 processors (where n is the number of processors in the computer). An SMP concurrency value of -1 means use n processors, and any positive value means use that number of processors. To test your understanding, cover the right column in Table 1 and use the data in the first two columns to determine the number of processors SQL Server is using.
As you can see, settings of both -1 and 4 let SQL Server access all four CPUs at the same time, but the settings are different. A -1 SMP concurrency setting causes NT to schedule SQL Server threads at a base priority level of 15; a setting of 4 causes NT to schedule SQL Server threads at a base priority of 7. For more information about these settings, read the Knowledge Base articles previously referenced.
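For reference, you inspect and change the setting with sp_configure. We're assuming here that SMP concurrency is exposed as an advanced option on your build; heed the warnings in the text before trying this on a production system:

```sql
-- SMP concurrency is an advanced option; expose it first.
sp_configure 'show advanced options', 1
RECONFIGURE with override
go
-- Let SQL Server use all processors. The new value takes effect
-- after you stop and restart SQL Server.
sp_configure 'SMP concurrency', -1
RECONFIGURE with override
go
```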
Microsoft recommends that you don't play with the default value because changing the value can make your system unstable. This advice is sound for most shops, but sometimes you can get a noticeable performance boost by letting SQL Server run on all the processors. Depending on your transaction profile, relative performance will probably improve if you're running on a dual SMP machine because a 2-way system uses a lower percentage of total processors than 4-way or 8-way systems.
Microsoft's recommendation to leave SMP concurrency at the 0 default value makes sense in most situations. You're not wasting a processor if SMP concurrency is set to 0 because SQL Server calls NT for all network, I/O, and memory requests, and the operating system (OS) needs CPU time to service these calls. SQL Server might not be running a query connection on all the available processors, but NT could be using a CPU. (We would rather have a stable computer running at less than full capacity than an unstable system running at full CPU capacity.) However, setting SMP concurrency to -1 can be beneficial, depending on the kinds of queries your application usually runs.
Suppose you have a dual-processor SMP computer in a heavy online transaction processing (OLTP) environment, running a large number of simple transactions (such as SELECT or INSERT a row) from many clients at the same time. If you set the value to 0, SQL Server can run only one user query on one CPU at a time. The second processor might not be completely idle, but it won't be running SQL queries.
However, if you set SMP concurrency to -1, SQL Server can run two user queries simultaneously and produce a more level CPU usage pattern. Doubling the number of queries you run simultaneously might increase your throughput dramatically. But remember, this kind of processing can cause a problem if the queries in your transaction profile are CPU intensive and take a while to execute. Setting SMP concurrency to -1 makes sense only in a relatively pure OLTP environment running simple queries with fast execution times.
How can setting SMP concurrency to -1 make SQL Server unstable? A -1 value on a dual SMP box causes two events to occur. First, NT schedules SQL Server threads to run with a higher-than-usual base priority. (Table 2 shows the relationship between priority, SMP concurrency, and boost priority.) Second, SQL Server can release two separate threads (because it is using two processors) running two separate user queries to NT for execution at the same time. Two CPU-bound queries executed simultaneously can hog all the CPU time and starve other processes. At best, this occurrence can cause the NT system to become sluggish; at worst, it can prevent important tasks such as I/O calls and network handlers from doing what they're supposed to do. (Refer to Knowledge Base article Q111405 for a discussion of why the system might become unstable.)
Don't change this parameter from the default and expect a big gain unless you've done a good job cataloging your transaction profile (SQL Trace is a great tool for this job). Reset SMP concurrency to the 0 default if anything weird starts happening in your SQL Server.
New SQL Server Book
Check out Ron Soukup's new book Inside Microsoft SQL Server 6.5 (Microsoft Press). Soukup, one of the original members of the SQL Server team at Microsoft and general manager of the group through the end of 1995, has provided the SQL Server community with a very useful technical resource. Although the book doesn't provide as many secrets as we had hoped, we wager that even the most savvy SQL Server veterans will find a few nuggets that justify the investment. It is an excellent publication.