"SQL Server Savvy" is moving! May marks the final appearance of "SQL Server Savvy" in Windows 2000 Magazine. The column will appear in our sister publication, SQL Server Magazine, starting with the June issue.
I administer a Windows NT 4.0 Service Pack 5 (SP5) system that runs SQL Server 7.0 with SP1. When my developers use an ALTER TABLE command to change the length of a column's VARCHAR value, the column's nullability changes from NOT NULL to NULL. We haven't selected the ANSI NULL default, and according to SQL Server Books Online (BOL), the default is NOT NULL. This problem also occurred when I tested the command on an unmodified database on my home SQL Server system. How can we prevent the switch?
When the DDL that creates a new table doesn't provide an explicit NULL or NOT NULL setting, three settings control column nullability in the table: the ANSI NULL default, which you set by using sp_dboption, and the SET ANSI_NULL_DFLT_ON and SET ANSI_NULL_DFLT_OFF options, which you enable by using the SET command.
SET ANSI_NULL_DFLT_ON and SET ANSI_NULL_DFLT_OFF are mutually exclusive. Only one of these settings can be on at a time, so if you set one to ON, the other automatically toggles to OFF. These settings take precedence over the ANSI NULL default, so the ANSI NULL default controls column nullability only if you set both options to OFF. However, according to BOL, ANSI null defaults are always on for ALTER COLUMN, so when you change the data type of an existing column, the database settings are irrelevant. You must supply an explicit NULL or NOT NULL setting, or the column will automatically be set to allow NULLs.
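In practice, that means repeating the column's nullability every time you alter it. A minimal sketch (the table and column names Orders and CustomerName are our own, for illustration only):

```sql
-- Without the explicit NOT NULL, ALTER COLUMN makes the column
-- nullable regardless of the database's ANSI NULL default setting.
ALTER TABLE Orders
    ALTER COLUMN CustomerName varchar(100) NOT NULL
```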
You might also find the T-SQL function getansinull() helpful. This function returns the nullability for the database for the current session. In addition, always specify NULL or NOT NULL settings in your Data Definition Language (DDL). Don't rely on server defaults because they're easy to change and often differ from server to server. For more information about proposed standards, download John Hindmarsh's white paper "SQL Server 7 Standards" (http://www.sql7dba.com).
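For example, you can check the session's effective nullability default before you run DDL (the database name pubs is just an example):

```sql
-- Returns 1 when new columns in the named database default to
-- allowing NULLs, 0 when they default to NOT NULL.
SELECT GETANSINULL('pubs')
```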
Does SQL Server provide a system stored procedure that I can use to trigger a script whenever someone has altered a table within the past 24 hours?
No documented interface is available for this purpose, but you can get the information directly from the system tables. Start with the SCHEMA_VER column from master..sysobjects. According to SQL Server Books Online (BOL), SQL Server increments this integer column every time a table's schema changes. SCHEMA_VER documentation is sparse, and we don't have information about the rules that determine the new schema version number after a change. However, we need to know only that the version number will be different if the table's schema changes. You'll need to build the working T-SQL code that is appropriate for your situation. You might use the following technique to create a simple solution. First, create a schema management table:
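A minimal sketch of such a control table, seeded with the current state of the user tables (the name SchemaMgt and the exact column list are assumptions, not a documented interface):

```sql
-- Control table holding a snapshot of each user table's
-- name, creation date, and schema version number.
CREATE TABLE SchemaMgt
    (name       sysname  NOT NULL,
     crdate     datetime NOT NULL,
     schema_ver int      NOT NULL)

-- Seed the snapshot from the system tables (type 'U' = user table)
INSERT INTO SchemaMgt (name, crdate, schema_ver)
SELECT name, crdate, schema_ver
FROM sysobjects
WHERE type = 'U'
```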
Second, once every night (or at whatever time interval you need to track schema updates), run a SQL batch that grabs the NAME, CRDATE, and SCHEMA_VER columns FROM sysobjects WHERE type = 'U'. Then, compare the control table you created with the results from the SELECT statement to discover whether the tables' schema_ver numbers are different. In our proposed model, we also captured the CRDATE column, which tells us when someone created the table. This extra measure catches anyone who changes the schema by dropping a table and recreating it with the same name but a different table schema. The schema_ver number might remain the same, but the CRDATE column would quickly reveal that the table is new. If you add a little more code, you can discover the logon of the user who made the changes.
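The nightly comparison might look like the following sketch, assuming a control table named SchemaMgt with name, crdate, and schema_ver columns (that table name is our own):

```sql
-- List user tables whose schema version changed, or whose creation
-- date changed (i.e., the table was dropped and recreated under the
-- same name), since the snapshot was taken.
SELECT o.name, o.crdate, o.schema_ver
FROM sysobjects o
JOIN SchemaMgt s ON s.name = o.name
WHERE o.type = 'U'
  AND (o.schema_ver <> s.schema_ver OR o.crdate <> s.crdate)
```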
How can I implement custom application-locking schemes to control access to crucial SQL Server applications?
Overriding the database engine and changing the way its locking models work is usually a bad idea. Locking is a crucial part of a database engine's code, and creating your own locking schemes isn't practical. However, occasionally providing custom application-level lock-management support makes sense. SQL Server 7.0 and earlier versions let developers build custom lock managers by creating a lock-management table that contains the name of the locked application resource (e.g., a payroll process that one user at a time can run serially). These techniques work, but they're prone to problems—an application can easily lose track of what is locked and what isn't, especially if a connection terminates abnormally.
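Such a homegrown lock manager might be as simple as the following sketch (the table, column, and resource names are our own). The fragility the text describes comes from rows left behind when a connection dies before the DELETE runs:

```sql
-- Lock-management table: one row per locked application resource
CREATE TABLE AppLocks
    (resource_name varchar(64) NOT NULL PRIMARY KEY,
     locked_by     varchar(30) NOT NULL,
     locked_at     datetime    NOT NULL DEFAULT GETDATE())

-- "Acquire" the lock: the INSERT fails on the primary key
-- if another user already holds the resource
INSERT INTO AppLocks (resource_name, locked_by)
VALUES ('payroll', SUSER_SNAME())

-- "Release" the lock; if the connection dies before this runs,
-- the row is orphaned and the resource stays locked
DELETE FROM AppLocks WHERE resource_name = 'payroll'
```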
SQL Server 2000 provides custom application-lock management features that you implement with the new sp_getapplock and sp_releaseapplock procedures. The following example from SQL Server 2000 Books Online (BOL) shows how you can use the new procedures to lock and unlock the resource Form1 in the sample Northwind database:
DECLARE @result int
EXEC @result = sp_getapplock @Resource = 'Form1',
     @LockMode = 'Shared'
EXEC sp_releaseapplock @LockResource = 'Form1'
Sp_getapplock provides five lock modes: Shared, Update, Exclusive, IntentExclusive, and IntentShared. To simplify cleanup, locks on resources are associated with either the current transaction or the current session. Locks that the current transaction owns are released when the transaction commits or rolls back; session locks are released when the session logs off.
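By default, sp_getapplock ties the lock to the current transaction; the @LockOwner parameter lets you choose session ownership instead. A sketch of a session-owned lock:

```sql
-- Session-owned lock: survives COMMIT/ROLLBACK and is held until
-- released explicitly or until the session disconnects.
DECLARE @result int
EXEC @result = sp_getapplock @Resource = 'Form1',
     @LockMode = 'Exclusive', @LockOwner = 'Session'
EXEC sp_releaseapplock @LockResource = 'Form1', @LockOwner = 'Session'
```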
Both dBase and Paradox provide a way to create indexes that use expressions or functions, but I can't get SQL Server 7.0 to perform this task. For example, if I use the LEFT function in a query for the condition
WHERE LEFT(field1, 5) = 'xxxxx'
SQL Server 7.0 won't use an index, even if an index on Field1 exists. Will this function work if I create an index such as
No, you can't create an index that uses functions or expressions with SQL Server 7.0 or earlier. However, SQL Server 2000 lets you directly index computed columns in a table and place indexes on views. Carefully read the SET Options That Affect Results section of SQL Server 2000 Books Online (BOL) before you set up advanced types of indexes. Several restrictions for indexes exist. For example, an index on a computed column or view works only if all operations that reference the index use the same algorithms to determine the key values. This limitation means that any SET options that affect the results that T-SQL statements generate must have the same settings for all operations that reference the index.
Seven SET options affect the results in the computed columns that views return. For all connections that use indexes on computed columns or indexed views, you must set six of those options to ON: ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, and QUOTED_IDENTIFIER. You must set the NUMERIC_ROUNDABORT option to OFF.
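A connection that creates or uses such an index would issue the settings the list above names:

```sql
-- Required settings for any connection that creates or uses an
-- index on a computed column or an indexed view.
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF
```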
SQL Server 2000 BOL gives the following example to illustrate the purpose for these restrictions:
CREATE TABLE Parts
    (PartID int PRIMARY KEY,
     PartName varchar(10),
     PartMaterial varchar(10),
     PartColor varchar(10),
     PartDescription AS PartMaterial + PartColor)
INSERT INTO Parts VALUES (1, 'Table', 'Wood', 'Red')
INSERT INTO Parts VALUES (2, 'Chair', 'Fabric', 'Blue')
INSERT INTO Parts VALUES (3, 'Bolt', 'Steel', NULL)
The value of the PartDescription column for the row in which PartID is 3 depends on the CONCAT_NULL_YIELDS_NULL option. If you set CONCAT_NULL_YIELDS_NULL to ON, the calculated value is NULL. If you set CONCAT_NULL_YIELDS_NULL to OFF, the calculated value is the string Steel. To properly maintain an index on the PartDescription column, all INSERT, UPDATE, and DELETE operations must have the same CONCAT_NULL_YIELDS_NULL setting as the connection that created the index. The optimizer doesn't use the index for any connection whose CONCAT_NULL_YIELDS_NULL setting differs from the connection that created the key values.
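You can see the effect directly in a query window:

```sql
SET CONCAT_NULL_YIELDS_NULL ON
SELECT 'Steel' + NULL    -- returns NULL

SET CONCAT_NULL_YIELDS_NULL OFF
SELECT 'Steel' + NULL    -- returns 'Steel'
```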
PartDescription doesn't really exist in the table; the value materializes as a real value only when an index key stores it. But the value in the index key changes if you change certain SET settings (e.g., CONCAT_NULL_YIELDS_NULL). SQL Server 2000 requires these strict measures to ensure that connections can properly use indexes on computed columns and views.