Recently, a company that's thinking about deploying a SAN for its Exchange servers contacted me. The company wanted to know whether a SAN made sense for its organization as well as how best to configure and tune a SAN. I'm going to answer these two questions separately, one this week and one next week.

Figuring out whether a SAN makes sense for a given organization can be tricky because the term spans a wide range of technology and complexity. For example, you could claim that the old Dell 650F storage enclosure I owned several years ago was a SAN. It had a Fibre Channel interconnect, and I used it as shared storage for a three-node cluster. It didn't, however, have replication, dynamic load balancing, or much expandability. Since that time, the SAN category has broadened so that it includes two primary classes of devices.

Fibre Channel SANs use optical fiber (or, in rare cases, copper cables) to interconnect SAN devices. Each node on the SAN requires a Fibre Channel host bus adapter (HBA), and most Fibre Channel SANs use a Fibre Channel switch to provide mesh-like connectivity between nodes. Fibre Channel speeds range from 1Gbps to 4Gbps and, with the right implementation, can span distances up to 100 kilometers.
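To put those link speeds in more familiar terms: 1Gbps and 4Gbps Fibre Channel use 8b/10b encoding (10 bits on the wire for every 8 bits of data), which is why vendors quote roughly 100MBps and 400MBps of usable throughput. A quick sketch of the arithmetic (the line rates below are the nominal 1GFC and 4GFC figures, not a measurement of any particular SAN):

```python
# Rough Fibre Channel payload-bandwidth estimate. Nominal line rates only;
# real-world throughput also depends on protocol overhead and the HBA.

def fc_throughput_mbps(line_rate_gbaud: float) -> float:
    """Approximate usable bandwidth in megabytes per second."""
    usable_bits_per_sec = line_rate_gbaud * 1e9 * 8 / 10  # strip 8b/10b overhead
    return usable_bits_per_sec / 8 / 1e6                  # bits -> megabytes

for name, rate in [("1GFC", 1.0625), ("4GFC", 4.25)]:
    print(f"{name}: ~{fc_throughput_mbps(rate):.0f} MBps")
# 1GFC works out to ~106 MBps, 4GFC to ~425 MBps -- hence the
# commonly quoted "100MBps" and "400MBps" figures.
```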

iSCSI SANs are a relatively new, lower-cost way to implement SANs. Instead of using optical fiber or copper, iSCSI SANs use TCP/IP over ordinary network cabling. Their advantages are pretty obvious: lower costs and more flexibility. Instead of spending big bucks on Fibre Channel HBAs and switches, you can deploy lower-cost Gigabit Ethernet HBAs (which are, more or less, ordinary network adapter cards) and switches, and it's much easier to extend the distance between SAN devices without resorting to a backhoe.

In either case, the primary advantages of SANs are their flexibility, performance capabilities, and support for high availability and business continuance. Let's consider each of these advantages separately.

SAN gets its flexibility from the fact that it's a big collection of physical disks that you can assemble in various logical configurations. For example, if you have an enclosure with 21 disks, you can make a single 18-disk RAID-5 array with three hot spares, a pair of 9-disk RAID-5 arrays with three hot spares, or one whopping RAID-1+0 array (although I would be loath to give up those spares). In theory, these configurations let you build the precise mix of logical volumes you need and tailor the spindle count and RAID type of each volume for its intended application. (In practice, sometimes this doesn't happen, as I'll discuss next week.)
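To make those tradeoffs concrete, here's a quick capacity comparison of the three layouts I just described. The 146GB per-disk size is an assumption for illustration; plug in whatever drives your enclosure actually holds:

```python
# Usable-capacity comparison for a hypothetical 21-disk enclosure of 146GB
# drives. RAID-5 loses one disk per array to parity; RAID-1+0 mirrors, so
# half its disks hold copies. Hot spares contribute no usable capacity.

DISK_GB = 146  # assumed per-disk size; substitute your own

def raid5_usable(disks_per_array: int, arrays: int) -> int:
    return (disks_per_array - 1) * arrays * DISK_GB

def raid10_usable(disks: int) -> int:
    return disks // 2 * DISK_GB

print("one 18-disk RAID-5, 3 spares: ", raid5_usable(18, 1), "GB")
print("two 9-disk RAID-5s, 3 spares:", raid5_usable(9, 2), "GB")
print("20-disk RAID-1+0, 1 spare:   ", raid10_usable(20), "GB")
# RAID-1+0 needs an even disk count, so 21 disks leaves one over --
# it yields the least capacity but generally the best write performance.
```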

SAN's performance capabilities are the result of two primary factors: lots of physical disks and a big cache. Which of these is the dominant factor? It depends on the mix of applications you use on the SAN, how many disks you have, and how they're arranged. When you look at a SAN's raw performance potential, remember that the SAN configuration will have a great effect on whether you actually realize that degree of performance.
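As a rough illustration of how those two factors interact, here's a back-of-envelope read-IOPS estimate. The per-spindle figure and cache-hit ratios below are assumptions for the sake of the example, not measurements from any real array:

```python
# Back-of-envelope read-IOPS estimate for a SAN volume.
# Assumptions (not measurements): each 15K spindle delivers ~180 random
# read IOPS, and a cache hit is effectively free compared with a seek.

def estimated_read_iops(spindles: int, per_disk_iops: int,
                        cache_hit_ratio: float) -> float:
    disk_iops = spindles * per_disk_iops
    # Only cache misses reach the disks, so effective IOPS scales up
    # as the hit ratio rises.
    return disk_iops / (1 - cache_hit_ratio)

print(estimated_read_iops(18, 180, 0.0))  # no cache benefit: 3240.0
print(estimated_read_iops(18, 180, 0.5))  # half the reads cached: 6480.0
```

The point of the sketch is the shape of the curve, not the absolute numbers: with a write-heavy or cache-unfriendly workload, spindle count dominates; with a read-heavy, cacheable workload, the cache can double or triple effective throughput.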

When it comes to high availability and business continuance, even if you use your SAN only as a big RAID array, you'll still get the benefit of being able to move data between hosts on the SAN. SANs also make it much easier to take point-in-time copies, either using Microsoft Volume Shadow Copy Service (VSS) or vendor-specific mechanisms. Add replication between SAN enclosures, and you get improved redundancy and resiliency (albeit at a potentially high cost).

SANs are often deployed in conjunction with clusters, but they don't have to be. A SAN shared between multiple unclustered mailbox servers still offers the benefits I describe above--without the complexity of clustering. SANs themselves are fairly complex beasts, which is one common (and sensible) reason why organizations that could use SAN's performance and flexibility sometimes shy away from SAN deployments. If you aren't comfortable setting up, provisioning, and managing a SAN, being dependent on it can actually leave you worse off than you would have been without it.

Cost is also a factor to consider. Obviously, the actual cost of a given solution varies according to its specifics, but all this capability doesn't come cheap. Purchase and maintenance cost is the other big reason why SANs aren't more prevalent; many organizations find that they get more business value from spending their infrastructure dollars in other ways.

Next week, I'll discuss SAN configuration for Exchange and why you need to educate yourself to prevent costly missteps on the part of your SAN vendor. Until then, I'd love to hear how you're using SAN technologies with Exchange--or why you've chosen not to.