A. Apologies for the length of this FAQ article, but there is a lot of stuff to cover.
SQL 7.0 generally has a much more stable and better-performing kernel than SQL 6.5 SP5. For every bug you find in the basic kernel of SQL 7.0 there will be 20 known bugs in SQL 6.5 SP5, and an order of magnitude more unknown ones. That said, a small percentage of queries will run slower, or fail, under SQL 7 than under 6.5. As these are reported they will be improved/fixed in future service packs, but the re-writes that make 98% of queries run faster were always going to have a detrimental effect somewhere. Some specific points are noted later.
SQL 7.0 shouldn't be treated as a version 1 release - even though most of the code has been re-written, it has had far more testing (both internal and external to MS) than any previous version of SQL Server - though as always, if in doubt, wait for a service pack or two to come out and check the SQL newsgroups/mail lists to see the problems that early adopters are experiencing (if any). Note that SP1 is released.
A concerted effort was made to test and fix all reported bugs with 6.5 on the 7.0 code-base. In addition an automated testing tool (RAGS) that generated and validated hundreds of complex SQL queries every minute was written and run against SQL 7.0. (See http://research.microsoft.com/users/DSlutz/ for details).
Most of the original bugs/issues were with the new features such as text indexing, DTS gui and the MMC tools - many of these are cleared up in SP1. SP1 itself underwent a significant beta test.
Where you do get errors from the kernel, these could well be due to SQL 7's stricter checking of SQL/ANSI rules. Though many of these are picked up and even simulated (yes, Microsoft deliberately reproduced broken behaviour in some instances) under the 6.5 compatibility mode, you may find extra things "break" when you go to full SQL 7.0 mode (see sp_dbcmptlevel in the docs). You need to be in SQL 7.0 mode for the new syntax and features, so it is recommended that you switch as soon as possible.
For any new release you are advised to apply and test on non-production systems first, and if at all possible do a stress-test so that any problems that only occur under stress are found before rolling out to a production system.
Some known bugs/issues with SQL 7 are discussed below - for the official current list search the Microsoft knowledgebase for sql70 and kbbug.
The rest of this article is entitled "SQL 7.0 - the Good, the Bad and the Ugly". Stuff that isn't in the documentation - or doesn't jump out at you from the docs. Also things to think about and check out before/during a SQL 7 migration. It isn't meant to be all encompassing, but read it in conjunction with the Microsoft supplied docs and Kb articles.
1. Some things to check before a SQL 7 upgrade :-
- All db's are checkdb'd, newalloc'd and checkcatalog'ed
- No db's are in suspect or loading status
- Free space in master for the new 6.5 system procs - 5Mb or so should do
- At least 10Mb tempdb
- At least 6.0 SP3 or 6.5 SP3 is required on the source system
- Syscomments source is available for all sp's, triggers etc.
Good
1. Speed. Speed is overall much better than with 6.5 - there are exceptions, noted in the bad/ugly sections where known. As an example, a large batch-processing suite sped up by a factor of three by going from 6.5 to SQL 7 - a factor of two came just from migrating with no changes, and the rest from optimiser fixes in SP1, plus changing bcp to BULK INSERT, parallel index creates and some re-writes of queries that had been manually optimised for SQL 6.5. The batch run consists of about 30 separate jobs each with 5-10 steps: bcp in of data, index creation, lots of SELECT INTOs on criteria, lots of GROUP BYs/aggregation etc. Time taken has reduced from 10 hours to 3hrs 20mins with no change in hardware/memory etc.
2. The optimiser in SQL 7.0 is far more powerful than in 6.5, utilising many more types of join, multiple indexes etc. It is worth looking at old SQL that may have been split into multiple steps in SQL 6.5 or earlier because the optimiser couldn't handle things like > 4 way joins or multiple aggregations. This SQL could be re-written back into one large query for better performance. The downside to all these optimiser improvements is that it can take significantly longer to work out the best query plan compared with 6.5 and can result in increased cpu usage.
3. Reliability. You should get far fewer AV's with SQL 7.0 than with 6.5. Also fewer database corruptions - to date, 6 months from SQL 7's RTM, there has not been a single instance of database corruption with SQL 7.0 that was not caused by hardware failure (info provided by Microsoft PSS).
N.B. I've found SQL 6.5 SP4 and above to be pretty resilient to corruption. Make sure all 6.5 systems are on at least SP4 - I've seen about a dozen systems needlessly fall victim to a pre-SP4 bug with LRU corruption when buffers get stressed on SMP systems.
4. Create indexes concurrently. Under 6.5 this would cause locks on system tables and the indexes would build serially. Under 7.0 if you kick off creates of non-clustered indices on the same table at the same time, from separate connections, they will all create concurrently. The re-use of the read-ahead buffers from the main table gives near linear performance. Especially useful for data-warehouse/MI type systems with lots of NC indexes on large tables.
5. Max Async IO. Under 6.5 if you allocated max async io "too high" then performance would nose-dive. With 7.0 this doesn't seem to be the case - the law of diminishing returns still applies, but max async io can be set much higher - even to the maximum of 255 - without worrying too much.
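As a sketch, the server-wide setting can be raised via sp_configure (it is an advanced option, so "show advanced options" may need enabling first; 255 is the maximum):

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE
-- allow up to 255 outstanding asynchronous I/O requests per file
EXEC sp_configure 'max async io', 255
RECONFIGURE WITH OVERRIDE
```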
6. DBCC performance. On complex/fragmented databases I have seen DBCC performance for a full checkdb/checkcatalog of 60 times faster than the equivalent checkdb/newalloc/checkcatalog on a 6.5 system. For non-fragmented, simple databases where not so much disk head movement was needed anyway under 6.5 I still see dbcc times that are twice as fast.
7. Query parallelism. Really does work. Seen large queries greatly increase in performance by running on an otherwise identical 4-way box compared to a 2-way.
8. Database size. Due to the new 8K page size and improvements in the storage of text/index data most databases will shrink in size when migrated to SQL 7. This isn't guaranteed and there are lots of factors involved (see NC index stuff in the ugly section), but reports of 50Gb SAP systems shrinking to < 30Gb are common.
9. Under 6.5 the realistic maximum size systems that you could put on SQL Server was around the 300Gb mark due to backups/healthchecks/recovery taking too long. With 7.0 this rises to around the 1Tb mark and quite possibly higher given appropriate hardware. (SQL 7 has been backed up at 600Gb per hour using a stripe of 24 AIT tape drives - and more importantly restored at 500Gb/hour!).
Bad
1. BCP slower. Under many circumstances a bcp in to a table with few indexes will be slower under SQL 7.0 than 6.5 - probably due to the overhead of using OLE-DB internally - however it should mean an end to the problems caused by fast bcp and its bypassing of certain SQL checks. With many NC indexes it is quicker than 6.5, though. Using the new BULK INSERT command makes loads as fast as or faster than bcp due to the removal of context switches. Setting table-lock on really helps because of the extra overhead of row locks - I've seen jobs reduce from 80 minutes to 50 minutes with this change.
2. MMC. The MMC is dog-slow regardless of the power of the machine it is being run on. Service Pack 1 noticeably improves the speed (and reliability) of MMC.
3. If you repeatedly open server-side cursors with small resultsets then you will see significantly slower performance from SQL Server. This is because the query is optimised each time, and optimisation takes time with SQL 7. This is fixed in SP1 (a trace flag was needed for the original hotfix, but the default behaviour is changed in SP1). After applying SP1 the plans generated for cursors are kept, as they are for stored procedures. See Q197800 for more details.
4. Large queries monopolising system. This was an issue with SQL 6.5, but if anything it is "worse" with 7.0 - the queries generally complete a lot quicker with 7.0 though. If you have a very large query running and it grabs all the processors to run it will make the system very slow/unresponsive to other users and queries. SQL dynamically decides how many processors to use for a query when it kicks off, and if there is no other activity at the time it will be given all the processors by default. This is not necessarily what you want. The max number of processors a query can use can be set per query (MAXDOP hint) or via sp_configure for a server wide setting - the default is all cpu's. Once a query has started and has grabbed all the cpu's it can't give them back.
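The two ways of capping parallelism mentioned above can be sketched as follows (the table and values are illustrative):

```sql
-- per-query cap via a hint: this query may use at most 2 processors
SELECT cust_id, SUM(amount)
FROM big_sales
GROUP BY cust_id
OPTION (MAXDOP 2)

-- server-wide cap via sp_configure (0, the default, means use all CPUs)
EXEC sp_configure 'max degree of parallelism', 2
RECONFIGURE WITH OVERRIDE
```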
5. Long stored-procedures that create lots of temporary tables as they go can take a lot longer to run under SQL 7.0 - this is due to SQL 7.0 re-evaluating the sp every time a new temporary table is created. The idea behind this is that it can then choose the best access plan, but re-evaluating is an overhead. Create all temp tables at the start of the stored-proc if possible.
If you have two stored procedures - the first creates a temp table and calls a second sp, and the second sp manipulates that table - this will run fine as long as only one copy of each is running at a time. If more than one process runs concurrently then sp2 will recompile each time. Users have reported 10-20 times performance degradation in this scenario. To work around it, either bring the sp2 code in-line into sp1, or create temporary sp's uniquely named per spid and call those to prevent the recompilation.
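A minimal sketch of the "create all temp tables up front" advice (procedure and table names are hypothetical):

```sql
CREATE PROCEDURE usp_monthly_report AS
BEGIN
    -- create every temp table at the top, before any are used,
    -- so the procedure recompiles once rather than at each CREATE TABLE
    CREATE TABLE #raw     (id int, amount money)
    CREATE TABLE #summary (id int, total money)

    INSERT #raw     SELECT id, amount FROM sales
    INSERT #summary SELECT id, SUM(amount) FROM #raw GROUP BY id
    SELECT * FROM #summary
END
```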
6. From another FAQ entry. Q. If I lose my data file (MDF) in SQL 7 I can't backup the log to recover to a point in time - why not?
A. A small oversight on MS's part - which will be fixed in the next major (not service-pack) release of SQL.
Under 6.5 you would do BACKUP LOG <dbname> TO <device> WITH NO_TRUNCATE.
Under SQL 7 this gets you an error
Server: Msg 3446, Level 16, State 1
Primary file not available for database <xx>
This is because the metadata that tells SQL where the files are for the database are in the primary file (which you've just lost!) - in SQL 6.5 this info was held in system tables in master.
A workaround for this is :-
Use only ONE file in the primary file group (the primary file), and place this file on the same drive (mirrored) as your log files. This gives it the same protection as the log.
Add another file group, with one or more files for data - obviously this goes on different disks to the log. Mark this second file group as the default (Alter database)
Then, when your data file is lost, "backup log with no_truncate" will work because the primary file with the metadata in (but no user data) is still available.
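A sketch of that layout (all names, paths and sizes are hypothetical):

```sql
-- primary file kept small, on the same (mirrored) drive as the log
CREATE DATABASE mydb
ON PRIMARY ( NAME = mydb_prim, FILENAME = 'E:\log\mydb_prim.mdf', SIZE = 10MB )
LOG ON     ( NAME = mydb_log,  FILENAME = 'E:\log\mydb_log.ldf',  SIZE = 100MB )
GO
-- user data lives in a second file group on separate disks
ALTER DATABASE mydb ADD FILEGROUP data_fg
GO
ALTER DATABASE mydb
ADD FILE ( NAME = mydb_data, FILENAME = 'F:\data\mydb_data.ndf', SIZE = 1000MB )
TO FILEGROUP data_fg
GO
-- make the second file group the default so new objects go there
ALTER DATABASE mydb MODIFY FILEGROUP data_fg DEFAULT
```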
7. Nested raiserror's cause an AV. A minimal sketch of the failing pattern is below (the error text is illustrative; the AV only occurs when a global variable such as @@ERROR is used):

create procedure PROC2 as
    raiserror ('error from PROC2', 16, 1)
go
create procedure PROC1 as
    exec PROC2
    if @@ERROR <> 0
        raiserror ('error from PROC1', 16, 1)
go
exec PROC1
8. If you have a trigger on a table with an identity field and the trigger inserts into another table without an identity then the value of @@identity is set to NULL when the trigger finishes. This is different to 6.5. Fixed in SP1.
Ugly (or just different)
1. SQL 7.0 is much more "fussy" about things like invalid dates and times, lengths of data, numeric overflows etc. SQL 6.5 and below would often just convert data to NULL if it couldn't handle it, or silently truncate a string that was too long. You can revert most of this to the SQL 6.5 behaviour with "SET ANSI_WARNINGS OFF". Many of these changes only appear when you switch a database from 65 compatibility mode to 70. I prefer to fix the SQL/code/data concerned, as the SQL 7.0 behaviour is better.
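A small illustration of the difference (the temp table is arbitrary):

```sql
CREATE TABLE #t (c char(3))

SET ANSI_WARNINGS ON            -- strict 7.0-style checking
INSERT #t VALUES ('abcdef')     -- fails: string or binary data would be truncated

SET ANSI_WARNINGS OFF           -- 6.5-style behaviour
INSERT #t VALUES ('abcdef')     -- silently truncated to 'abc'
```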
2. The MMC EM interface has no option to set column level permissions via the gui. This won't be fixed until the Shiloh release, so it's back to TSQL until then.
3. If a user/login doesn't have permissions to a database they can still see it in the drop down ISQL/W box or in SQL EM. This is a difference from 6.5 that may be fixed in SP2. It is due to the overhead of scanning all the NT users/groups in the new integrated role security.
4. 16-bit ODBC applications may have problems with the new potential extended size of char data (even if the new size isn't used) and/or you may get gpf's in vbdb300.dll when accessing SQL Server. There is a separate FAQ entry on this one with a fixed (but unsupported because Microsoft didn't make the changes) sp_datatype_info that resolves these problems. Microsoft should be incorporating these changes in SP2.
5. The new 3.70 ODBC drivers have a few compatibility issues. e.g. I have seen odbc call failed errors with complex Access 97 front ends and MDAC 2.1 drivers (problem fixed in MDAC 2.5 beta and 2.1 SP2). Also if you use PASSTHRU then integer columns (datatype int4) are returned as decimal instead when the ODBC datasource is configured as "use regional settings".
6. SQL 7 is supposed to grab/release memory as needed to work with varying workloads. It does this quite well unless Exchange 5.5 SP2 or below is on the machine in which case they don't get along too well. Presumably this will be fixed in SQL 7.0 SP2 / Exchange 5.5 SP3...
7. The enhanced optimiser, along with improvements in disk throughput, means that SQL 7.0 is more likely to be cpu-bound than 6.5 was - though most systems will still be i/o-bound. The new sp_datatype_info, used extensively for ODBC connections, is much heavier on cpu due to the extra calculations, the increased number of datatypes and the attempt at backwards compatibility. If you constantly break/make connections this can add up - one user reported a system running at 92% cpu, largely because of this. When he amended the proc to return a fixed set of types stored in a permanent table for the purpose, the cpu for the system dropped from 92% to 72%.
8. Some databases/tables may grow in size with SQL 7. One reason is because NC indexes point to the clustering key if available rather than a page number. With long keys this can increase the size of the indexes significantly. Watch out for this if any key sizes are very large.
9. Another reason for tables growing in size is that NULL columns used to take up no space at all, but that is no longer true. So a NULL CHAR(10) now uses 10 bytes rather than 1, a NULL MONEY now uses 8 instead of 1. A NULL varchar will take 0 as before.
10. DECIMAL and NUMERIC data can take up a bit more space. The storage tables for 6.5 and 7.0 respectively:

SQL 6.5:
Precision     Storage bytes
1 - 2         2
3 - 4         3
...           (roughly one extra byte per two digits of precision, up to 17 bytes at precision 38)

SQL 7.0:
Precision     Storage bytes
1 - 9         5
10 - 19       9
20 - 28       13
29 - 38       17
11. Watch out for 16-bit db-lib applications and network packet size (an sp_configure option). The default with SQL 6.0 and below was that the client dictated the network packet size, defaulting to about 1500 bytes. With 6.5 and 7.0 the server setting overrides this if no explicit "dbsetlpacket" is issued. The default server size is 4096, which significantly increases the client memory buffer space needed. With multiple 16-bit db-lib applications, or a db-lib application that makes multiple connections, this can easily overflow the fixed space allowed for all connections in the client buffers and cause out-of-memory errors, gpf's or hangs. It doesn't matter whether it's an NT client - the restriction is in the 16-bit db-libs and their use of "low dos" memory. To resolve it, either change the application to specifically ask for a smaller packet size or reduce the server "network packet size" via sp_configure. (Note this is a SQL Server TDS application packet size and in no way relates to the physical network packet sizes configured per NIC.)
12. When you do an outer join to a table with a bit datatype on SQL Server 6.5 and there is no corresponding record, the bit column yields a 0 instead of a null. On SQL Server 7, the same query yields a null.
13. Constants returned by a case statement are implicitly rtrim'ed. Will be fixed in SP2.
14. On an insert if a trigger fires it thinks all columns have been updated even if some fields have not been explicitly inserted. Fixed in SP1.
15. SQL Server 7 no longer guarantees that inserted records can be retrieved in the same order UNLESS you use an ORDER BY clause or have a clustered index. In SQL 6.5 this would work without them. With 7 if you were using this facility to create a table just to bcp the contents out then you will need to insert some sort of ascending key to keep the rows in order.
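One way to add that ascending key, as a sketch (table names are hypothetical): materialise a row number with SELECT ... IDENTITY() ... INTO, then order the extract by it.

```sql
-- build the staging table with an ascending row number
SELECT IDENTITY(int, 1, 1) AS row_no, *
INTO extract_stage
FROM source_table

-- then bcp out via a view or query that orders by the key, e.g.
-- SELECT * FROM extract_stage ORDER BY row_no
```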
16. Changes in binary handling.
The format for display of binary (hex) data in isql/w from version 6.x is in lower case, but in isql/w from 7.0 is upper case.
The @@microsoftversion variable is documented in 6.x books-online as follows
@@MICROSOFTVERSION - Is a version used internally to track the current version of the server. Note that in 7.0 the major version is now in the most significant byte.
The order of the date and time components in convert( binary(8), getdate() ) has been reversed.
See the code below for examples of this.
SELECT convert( binary(4), @@microsoftversion ), convert( binary(8), getdate() )
<SQL Server 6.x (6.0 in this case)>
<SQL Server 7.0 build 623>
17. With 6.5, you could call stored procedures from ASP without prefixing them with the "exec" command. With SQL 7.0 the exec command must be there.
18. SQL Server 6.5 internally treated nullable chars not as fixed length, but as varchars. SQL 7 does not treat them this way when the compatibility mode of the db is set to 70. If you depend on the 6.5 behaviour then you might get different SQL results.
declare @var1 char(50)
select @var1 = 'option_cmt_des_txt_no'
declare @var2 char(50)
select @var2 = right(@var1, 6)
With 6.5 this would return 'txt_no' (nullable chars were stored internally as varchar, so there were no trailing blanks).
With 7.0 it returns blanks, because the char(50) value is blank-padded to its full length and RIGHT() picks up the trailing spaces.
1. SP1. SP1 is heartily recommended. A few sub-optimal optimiser decisions are fixed, and a lot of bugs and speed problems with the gui tools like MMC and DTS are fixed. Overall a very stable service pack with lots of improvements - there's no reason to run SQL 7.0 without it.
2. IE5 and Win9x. A bug in SQL's use of a common internet control causes crashes on Win9x machines with IE5 installed. SP1 fixes this.
3. SQL Server out of locks. You can still get this message even though SQL Server dynamically allocates locks. What the error really means is that SQL Server is out of memory - the lock manager has been denied its request for more memory. Either give the machine more memory or reduce the load SQL Server is under.
4. DTS. This is a marvellous tool, but had quite a few glitches pre-SP1. One "feature" that is still there is that if you refer to columns in a transform by their column name it can be significantly slower than by their column number - i.e. DTSSource("EmployeeID") rather than DTSSource(13). One DTS job sped up from 2hrs 45mins to 32 mins just by hard-coding column numbers instead of names.
5. Watch out for the default setting of SET CONCAT_NULL_YIELDS_NULL. This is ON, which means that SELECT 'abc' + NULL yields NULL. This has been known to break some applications.
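A quick illustration of the setting:

```sql
SET CONCAT_NULL_YIELDS_NULL ON      -- the SQL 7.0 default
SELECT 'abc' + NULL                 -- yields NULL

SET CONCAT_NULL_YIELDS_NULL OFF     -- 6.5-style concatenation
SELECT 'abc' + NULL                 -- yields 'abc'
```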
6. Remember that when upgrading a 6.5 database it stays in 65 compatibility mode. In this mode things like the TOP command don't work - you need to go into 70 mode for the new keywords to work properly.
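Switching modes is a one-liner with sp_dbcmptlevel (the database name is hypothetical):

```sql
-- check the current compatibility level
EXEC sp_dbcmptlevel 'mydb'
-- move the upgraded database to full 7.0 behaviour
EXEC sp_dbcmptlevel 'mydb', 70
```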
7. Log files. These can get very big if set to autogrow. If you then try to shrink them with dbcc shrinkfile invariably it doesn't work too well. This is because the log file is internally divided into smaller virtual logs and shrinking will only happen when the active part of the log is at the beginning of your log file. You can use the dbcc loginfo(dbname) command to see where the active part of the transaction log is. If it is at the end of the logfile you could write a small while loop that does some inserts in a test table to move the active part to the beginning of the real file. Then the shrinkfile command should work ok - note that shrinkfile works asynchronously. As a last resort you can always checkpoint the database, shutdown SQL Server and delete the physical logfile. When SQL restarts it will create a new 2Mb log device.
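The loop described above might look something like this sketch (table, file name and sizes are hypothetical; run it in the database whose log you are shrinking):

```sql
-- find where the active portion of the log sits (Status = 2 rows)
DBCC LOGINFO (mydb)

-- generate some activity to cycle the active portion back to the file's start
CREATE TABLE shrink_helper (c int)
DECLARE @i int
SELECT @i = 0
WHILE @i < 2000
BEGIN
    INSERT shrink_helper VALUES (1)
    SELECT @i = @i + 1
END
DROP TABLE shrink_helper

-- then shrink the log file to (e.g.) 100Mb - remember this runs asynchronously
DBCC SHRINKFILE (mydb_log, 100)
```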
8. Putting the same column into a group by twice causes an AV. It's pointless and arguably invalid syntax anyway, but SQL 6.5 let it go. Repro script based on Northwind (the select list here is illustrative; the trigger is P.ProductID appearing twice in the GROUP BY):

SELECT Employees.EmployeeID, P.ProductID,
    Discount=SUM(CONVERT(int, Discount * 100)),
    ExtendedPrice=SUM(ROUND(CONVERT(money, Quantity * (1 - Discount) * Od.UnitPrice), 2))
FROM Products P
    inner join [Order Details] Od on (Od.ProductID = P.ProductID)
    inner join Orders on (Od.OrderID = Orders.OrderID)
    inner join Suppliers on (P.SupplierID = Suppliers.SupplierID)
    inner join Customers on (Orders.CustomerID = Customers.CustomerID)
    inner join Shippers on (Orders.ShipVia = Shippers.ShipperID)
    inner join Employees on (Orders.EmployeeID = Employees.EmployeeID)
GROUP BY Employees.EmployeeID, P.ProductID, P.ProductID
ORDER BY Employees.EmployeeID
9. Msg 8623, Internal Query Processor Error: The query processor could not produce a query plan. This happens with some complex queries - especially on single-processor machines. Some queries that fail on 1-processor machines work ok on multi-processor machines.
10. Tempdb performance may be affected by the auto-creation of statistics. You can experiment with turning off this feature just for tempdb and seeing if it helps.
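The experiment can be done with sp_dboption, e.g.:

```sql
-- turn off automatic statistics creation just for tempdb
EXEC sp_dboption 'tempdb', 'auto create statistics', 'false'
-- and optionally the automatic updating too
EXEC sp_dboption 'tempdb', 'auto update statistics', 'false'
```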
11. The sysfiles virtual table does not behave as other system tables do in stored procedures located in master and named beginning sp_. i.e. write your own sp in master called sp_my_name_is_Norm that selects from sysfiles, then do:

use pubs
exec sp_my_name_is_Norm

and sysfiles will still refer to the master database files rather than those of the current database.