A. First, ask yourself why you want to do this. Performance will be degraded and you will be more prone to database corruption from network glitches (which are far more common than SCSI/fibre bus glitches). Pushing I/Os across a network (even a fast, switched one) is typically orders of magnitude slower than SCSI/fibre, and the latency is much higher. If you don't have a dedicated switched connection then you may also see slowdowns caused by contention with other traffic.

SQL Server currently has no concept of sharing a database held on another server. Only one server can access the database file at any one time - the one exception being that multiple SQL Servers can probably open a read-only database on a shared drive. So there is no advantage to having it on a "network drive" - it can only be backed up/accessed from the server running SQL Server anyway.

If the reason for wanting SQL databases on a network drive is to keep all your storage central, then you can't achieve this anyway, because you can't boot NT from a network drive - you would still need disks in local servers for NT, pagefiles etc. And these should be protected via hardware RAID, as the loss of an NT disk will prevent users getting at your databases just as surely as the loss of the disk containing the database itself.

Having said that, it IS possible to store databases on network drives as long as SQL is fooled into thinking they are in fact local drives. Under 6.5 you must map a drive letter to a network share - UNC paths will not work. With SQL 7.0 UNC paths will work as long as you use trace flag 1807.
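To make the 7.0 route concrete, here is a sketch. The share name (\\fileserver\sqldata) and database name (NetDB) are made-up examples - substitute your own - and the flag can alternatively be set at start-up with the -T1807 switch rather than via DBCC TRACEON:

```sql
-- Lift the check that blocks network paths (trace flag 1807, per Q196904).
-- \\fileserver\sqldata and NetDB below are hypothetical names.
DBCC TRACEON (1807)
GO

-- Create a database whose files live on a UNC path (SQL 7.0 only; under
-- 6.5 you would have to map a drive letter to the share instead).
CREATE DATABASE NetDB
ON (NAME = NetDB_dat, FILENAME = '\\fileserver\sqldata\NetDB.mdf')
LOG ON (NAME = NetDB_log, FILENAME = '\\fileserver\sqldata\NetDB.ldf')
GO
```

Bear in mind that a flag set with DBCC TRACEON does not survive a restart of the server, so if you must do this in production the -T1807 start-up parameter is the safer choice.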

There is more information on this in Q196904, which describes the support allowed in SQL 7.0 for use against Network Appliance networked RAID units only. Note that these suffer the same performance penalties as accessing a network share on an NT box, because effectively that is what they are. These boxes run a proprietary operating system on an embedded Alpha chip that speaks the SMB protocol required to handle NT-style network file I/O. They connect to the LAN via a standard Ethernet interface.

If you want centralised storage, a better method is a shared SCSI/fibre disk array - these attach to servers via SCSI or fibre connectors and can achieve distances of up to 20 km using optical extenders. These arrays can support up to 64 or so separate servers and are sold by Digital (Compaq) StorageWorks and EMC amongst others. Although it is a "single RAID unit", each server sees a physically separate set of "disks" - the partitioning logic in the RAID array can let different servers use the same physical disks, but they are logically partitioned, so each server sees its storage as dedicated. There is no sharing of data at the partition/file/database level.

Another method is to use a SAN - a storage area network. These are fibre- or copper-based "networks" of storage and/or backup devices, with the "network" dedicated to data access. Each attached device is usually fibre-channel based, or is SCSI with an appropriate connector. Each device may be partitionable into sets of available resources (disk/tape), but each resource can currently only be allocated to a single server attached to the "network". Servers attach to a SAN via a SAN "NIC" card. As SAN technology matures it may become possible to share resources between multiple servers, but this needs changes to the NT kernel as well as to the SAN/fibre drivers.