Consolidate and manage your storage resources
Storage Area Networks (SANs) are gaining acceptance at an astounding rate as a solution for organizations experiencing explosive storage growth. A SAN consolidates your storage for easier management and better use of storage resources. A SAN scales as you add hosts and storage, and it minimizes the downtime associated with those activities compared with a Direct Attached Storage (DAS) environment. A SAN can also increase backup performance by offloading traffic from your IP network and freeing your host's processor from packaging the backup data for IP transmission. Finally, you can take advantage of a SAN's any-to-any connectivity to implement redundancy and disaster tolerance both locally and across long distances.
Because SANs are similar to other network technologies you've probably worked with, you can draw some comparisons to bring yourself up to speed on SAN components and how they operate together. The various pieces fall into three categories: interconnecting devices, storage devices, and hosts. Interconnecting devices include hubs, switches, bridges, and routers. Storage devices range from Just a Bunch of Disks (JBOD) to highly redundant and intelligent RAID enclosures that scale to meet the needs of the largest enterprises. Hosts are the computers that attach to the SAN through a host bus adapter (HBA), which is akin to a NIC in a typical data network.
Deploying Fibre Channel switches in a SAN offers the same performance and manageability benefits as deploying switches in an Ethernet environment. Fibre Channel bridges and routers perform functions similar to their data-networking counterparts, enabling Fibre Channel connectivity to SCSI, Asynchronous Transfer Mode (ATM), and even Ethernet across long distances.
A wide range of SAN storage choices are available to fit different applications. On the low end, JBODs provide the most raw storage for the least amount of money. You can use native or third-party volume-management tools to implement software RAID on the disks in the JBOD enclosure, but Fibre Channel RAID enclosures offer more robust solutions along with better availability and manageability. Typical RAID enclosures include redundant core components and built-in management facilities.
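To see why parity-based RAID can survive a disk failure, consider the XOR arithmetic it rests on. The following Python sketch is purely illustrative (the block contents and the four-disk stripe are invented for the example, and striping and parity rotation are omitted); it's not any vendor's implementation:

```python
# Illustrative sketch of the XOR parity arithmetic behind RAID 5.
# Block sizes, striping, and parity rotation are simplified away.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# A stripe of three data blocks on three disks...
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
# ...plus a parity block on a fourth disk.
parity = xor_blocks(data_blocks)

# If one disk fails, XORing the surviving blocks with the parity
# block regenerates the missing data.
surviving = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(surviving)
assert rebuilt == data_blocks[1]
```

Software RAID on a JBOD performs this arithmetic on the host's CPU; a hardware RAID enclosure moves it onto dedicated controllers, which is one reason the enclosures cost more.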
Each host uses an HBA to connect to the storage devices in a SAN. Technically, legacy SCSI controllers are HBAs that connect the host to a SCSI device; HBAs on a SAN perform the same service in letting a host attach to Fibre Channel devices. HBAs process block-level I/O without relying on the host's CPU, minimizing the I/O burden on the host's CPU and bus. You can install multiple HBAs in a host to achieve redundant data paths and improve availability. In this Lab feature, I look at some SAN components and implement them at a fictional company, Datahogs, to show how a SAN can benefit your organization.
Datahogs SAN, Phase 1
Datahogs is a midsized, rapidly growing company that has several core Windows 2000 and Windows NT servers and a Sun Microsystems server running Solaris. During the past year, the IT staff has performed at least one storage upgrade on each server, and the Windows systems have nearly consumed the upgraded storage. In addition to the hassle of another storage upgrade, the backup administrator says that the company's current data has outgrown the time window available for backup operations, and another storage upgrade will only make the problem worse. Datahogs decides that investing in SAN technology will alleviate some of the company's headaches and provide a quick Return on Investment (ROI).
The company decides to use QLogic's SAN Connectivity Kit 1000, a relatively inexpensive kit that will let IT build a SAN from scratch. To maintain the company's recent investment in legacy SCSI storage and bolster the effectiveness of its existing SCSI tape library, the company will use the ADIC Gateway; to increase the speed, capacity, and availability of storage for the core servers, it will purchase Dot Hill Systems' SANnet 7124 RAID enclosure.
SAN Connectivity Kit 1000
The SAN Connectivity Kit 1000 includes one SANbox-8 eight-port 1Gbps switch with embedded SANsurfer Tool Kit switch management software, four SANblade 2200 HBAs, four dual SC to dual SC fiber-optic cables, and four Gigabit Interface Converters (GBICs). The kit also includes hard-copy documentation and CD-ROMs containing HBA drivers, management software, high-availability software, and additional documentation.
The process of physically setting up the switch and HBAs was straightforward, and I didn't encounter any problems installing the HBAs into Windows or Solaris servers. I skipped the installation section that referred to creating disks on the Solaris system and executed the package installation directly from the Temp directory because my server didn't have a 3.5" disk drive. I installed the fiber-optic GBICs into the switch and connected four servers to the switch by using the supplied fiber-optic cables. You can configure each port on the SANbox switch as a fabric, segmented loop, translated loop, or trunk port.
After powering up the switch, I attached a cable to the Switch Management port, a Category 5 Ethernet port, on the front of the chassis. I configured it with an IP address on my Ethernet segment so that I could manage the switch by using Telnet, Trivial FTP (TFTP), SNMP, or the switch's embedded management application, SANsurfer. SANsurfer provides a GUI for configuring and managing all aspects of an individual switch or all the SANbox switches in your fabric.
|SAN Connectivity Kit 1000|
| Contact: QLogic * 949-389-6000 or 877-975-6442 |
Pros: Exceptionally priced solution for building your own SAN; good technical support
Cons: Eight ports fill up quickly, especially if you use multiple paths
The 2U (3.5"), rack-mountable ADIC Gateway is equipped with two Low Voltage Differential (LVD) SCSI and two High Voltage Differential (HVD) SCSI channels to support legacy SCSI device connectivity to the SAN. Datahogs intends to connect existing SCSI disk enclosures and its legacy SCSI tape library to the gateway. To add the ADIC Gateway to the SAN fabric, I installed an additional GBIC in the SANbox switch and used another dual SC to dual SC fiber-optic cable to connect the gateway to the switch. The ADIC Gateway also includes a 10Base-T Ethernet port to let you manage the gateway by using a Telnet client or the provided ADIC SAN Director software. Win2K computers detected the ADIC device on the SAN and asked for a device driver, which I downloaded from ADIC's Web site and installed. The rubber started to meet the road as soon as I attached the drive enclosures to the gateway.
Windows systems see each disk drive as locally attached and prompt you to ready them for use by writing a signature on them. Understandably, this approach causes much consternation among UNIX administrators in heterogeneous environments because this process can destroy their live disk volumes. In reality, even careful planning and mapping of volumes doesn't provide a safe enough haven for enterprise data. Several solutions to the problem are available, including using a Network Attached Storage (NAS) front end to the SAN, storage virtualization software, and zoning. For an optional upgrade to the ADIC Gateway, you can purchase licenses to run ADIC's Virtual Private SAN (VPS) and Virtual Private Map (VPM) software to help manage host access to the gateway-attached storage devices. I enabled the VPS software and used the VPS menu option in the ADIC SAN Director software to open the VPS interface. The VPS interface provided an intuitive grid for determining which hosts have access to which LUNs on the gateway. I configured some of the drives to be accessible by Windows hosts and the remaining drives to be accessible by the Sun host. To avoid inappropriate tape drive access, I initially configured VPS to let only the backup server access that device.
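Conceptually, LUN-masking tools such as VPS maintain exactly the kind of grid described above: for each host, a set of LUNs that host is allowed to see. The Python sketch below models that idea only; the host names and LUN numbers are hypothetical, and this is not ADIC's actual interface or data model:

```python
# Conceptual model of LUN masking: an access grid mapping hosts to
# the LUNs they may see. Host names and LUN numbers are hypothetical;
# this illustrates the idea behind tools such as VPS, not their
# actual implementation.

access_grid = {
    "win2k-file01": {0, 1},   # Windows hosts see their own LUNs
    "win2k-sql01":  {2},
    "sun-erp01":    {3, 4},   # the Solaris host sees only its LUNs
    "backup01":     {5},      # only the backup server sees the tape LUN
}

def can_access(host, lun):
    """Return True if the grid grants this host access to this LUN."""
    return lun in access_grid.get(host, set())

assert can_access("backup01", 5)
assert not can_access("win2k-file01", 3)  # Windows can't touch Sun's volumes
```

A grid like this is what keeps a Windows host from writing a disk signature over a live UNIX volume: the masked LUNs simply never appear to the wrong host.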
|ADIC Gateway|
| Contact: ADIC * 425-881-8004 or 800-336-1233 |
Price: $11,335 as tested
Pros: Provides robust management of traffic and connectivity in addition to SAN connectivity for legacy devices
Cons: You must have significant investment in legacy SCSI storage for price point to make sense
The 4U (7"), rack-mountable SANnet 7124 RAID enclosure arrived completely assembled except for the GBICs and the ten 36GB removable hard disks, which came packaged in a foam tray. The front of the chassis contains slots for installing the hot-swappable drive sleds and two preinstalled hot-swappable RAID controllers. The chassis contains a small LCD display and four buttons for status display and configuration operations. After a quick perusal of the users' manual, I installed the drives, plugged the two AC power cords into the back of the chassis, and powered up the system. All the modules are hot-swappable, field-replaceable units. They include three Power Supply Modules, three Cooling Fan Modules, two dual-port Drive I/O Modules, two Event Reporting Modules, two Serial and Modem ports, two High Speed Serial Data Connector (HSSDC) Host Modules, and two GBIC Host Modules. That's a lot of modules and ports at first glance, but the only ports of immediate concern are the HSSDC Host Module ports and one of the serial ports.
You can perform the initial configuration of SANnet enclosures by using the front panel buttons and LCD display or by connecting a terminal or terminal emulator to one of the serial ports. Both methods provide the same functionality, but the terminal approach is easier to navigate, so I chose that method for configuring the disk and controller options on the SANnet. Dot Hill gives you a great deal of control over logical disk creation, maintenance, and presentation to the SAN. In addition to initial setup, you can perform maintenance tasks such as adding disks, expanding a logical disk, and regenerating parity on an array.
I set up a logical disk with several partitions and plugged two HSSDC cables from the SANnet into HSSDC GBICs installed in the SANbox switch. The Device Manager for each of the Win2K hosts immediately listed the partitions as available disk drives. The multiple connections between the storage and the switch alleviate redundancy and bandwidth concerns by providing more than one data path. You can manage the data paths by using the optional SANpath software, which lets you use up to 32 data paths to configure I/O load balancing and data failover. Another optional piece of software, SANscape, facilitates configuration, management, and monitoring of your installed storage. SANscape uses agents to monitor storage devices attached to individual hosts, and the information the agents collect feeds into a central SANscape console. The included license covers one agent and one console. Both SANpath and SANscape run on a variety of platforms.
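The essence of multipath software is simple: spread I/O across the healthy paths and stop using a path the moment it fails. Here's a minimal Python sketch of that policy (the path names are hypothetical, and SANpath's real policies and interfaces are not modeled here):

```python
# Minimal sketch of multipath path selection: round-robin load
# balancing across healthy paths, with failover when a path dies.
# Purely illustrative; not SANpath's actual behavior or API.

class PathSelector:
    def __init__(self, paths):
        self.paths = list(paths)   # all configured data paths
        self.down = set()          # paths marked as failed
        self._next = 0             # round-robin cursor

    def mark_failed(self, path):
        """Record that a path has failed so it's skipped from now on."""
        self.down.add(path)

    def pick(self):
        """Round-robin over paths that are still up; raise if none remain."""
        healthy = [p for p in self.paths if p not in self.down]
        if not healthy:
            raise RuntimeError("all data paths failed")
        path = healthy[self._next % len(healthy)]
        self._next += 1
        return path

sel = PathSelector(["hba0->ctrl-A", "hba1->ctrl-B"])
assert sel.pick() == "hba0->ctrl-A"
assert sel.pick() == "hba1->ctrl-B"   # round-robin alternates
sel.mark_failed("hba0->ctrl-A")
assert sel.pick() == "hba1->ctrl-B"   # failover: only the healthy path is used
```

With two HSSDC connections from the SANnet into the switch, a policy like this is what turns the second cable from idle redundancy into usable bandwidth.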
|SANnet 7124|
| Contact: Dot Hill Systems * 800-872-2783 |
Price: $64,125 as tested, including Gigabit Interface Converters and cables; $995 for SANscape software; $5995 for SANpath software
Pros: Well-instrumented chassis and redundant components make for a reliable storage solution
Cons: Takes a while to become familiar with layout and nomenclature of modules
Zoning on the QLogic SANbox Switch
I now had all the hosts and storage devices connected and communicating through the switch and gateway. Although you can manually maintain the relationships between what storage needs to be available to which hosts, the QLogic SANbox switch offers a better option. I used the embedded SANsurfer software to configure zoning, which lets you divide ports on the switch into functional groups for more efficient and secure communications. The SANbox switch supports Hard Zones, Broadcast Zones, Name Server Zones, and SL_Port Zones. Hard Zones group together user-specified port numbers of the switch; Broadcast Zones can isolate SAN devices from IP broadcast traffic; Name Server Zones specify (either by Port or World-Wide Name—WWN—definition) which devices can receive Name Server information about other devices in the fabric; and SL_Port Zones are for ports designated as a segmented loop.
For the small number of devices attached to my SAN, I chose the simplest, most flexible zoning method and implemented a Name Server Zone by Port definition. I created two zones and selected the member ports of each zone. Within seconds, the rules of connectivity that the zones specified went into effect on the switch and were reflected in changes in the Device Manager's contents on my Win2K systems.
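The semantics of a port-based Name Server Zone are easy to state: a device on one port learns about a device on another port only when both ports belong to a common zone. The Python sketch below models just that rule; the port numbers and zone names are hypothetical, not the actual configuration or QLogic's software:

```python
# Conceptual model of port-based Name Server zoning: two switch ports
# can see each other only if some zone contains both. Zone names and
# port numbers are hypothetical.

zones = {
    "windows_zone": {1, 2, 3, 7},  # Win2K hosts plus the shared storage port
    "unix_zone":    {4, 5, 7},     # the Sun host plus the same storage port
}

def can_see(port_a, port_b):
    """True if at least one zone contains both ports."""
    return any(port_a in members and port_b in members
               for members in zones.values())

assert can_see(1, 7)       # Windows host reaches the shared storage
assert not can_see(1, 4)   # Windows and Sun hosts are isolated from each other
```

Note that a port (here, port 7) can belong to more than one zone, which is how storage can be shared across otherwise isolated groups of hosts.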
Alleviating Backup Congestion
Because the hosts I want to back up are now connected to the Fibre Channel SAN, I can nearly eliminate the backup traffic on the LAN and relieve the hosts of the burden of packaging that I/O for delivery over the LAN—a setup known as LAN-free backup. I used BakBone Software's NetVault 6.5 to facilitate LAN-free backup in my environment. First, I modified the VPS configuration on the ADIC Gateway so that all hosts could see the tape library. I then reconfigured the NetVault software to use the tape library and drives in shared mode. The hosts now see the SAN-attached tape library as a local device and communicate with it directly over the SAN, resulting in an immediate performance improvement of between 20 and 30 percent (depending on server I/O speed) with no other modifications.
Datahogs SAN, Phase 2
When we revisit Datahogs a year after the initial SAN implementation, we learn that through increased staff productivity, system availability, and more efficient usage of server and storage assets, the company has realized a substantial ROI on the initial SAN setup. However, Datahogs just merged with a larger company, resulting in more servers to manage, increased storage capacity requirements, and an emphasis on high availability for crucial servers.
The company that acquired Datahogs had previously purchased but not yet implemented two of Brocade's SilkWorm 3800 Fibre Channel switches, a StorageTek 9176 RAID disk subsystem, a variety of Emulex HBAs, and Quantum's Fibre Channel ATL M1500 tape library. The merging companies need to integrate these components into Datahogs' existing SAN. SAN technology has made great strides in interoperability; in fact, the ability to choose storage solutions from among multiple vendors helps drive down SAN implementation costs. Configuring the QLogic 1Gbps switch to interoperate with the Brocade 2Gbps switches, however, presents a problem.
The SilkWorm 3800 is a 1U (1.75"), 16-port Fibre Channel switch that supports link speeds up to 2Gbps. Each port can configure itself to act as a fabric, fabric loop, or expansion port. For greater port density, SilkWorm 3800 uses small form-factor pluggable media that uses LC connectors instead of the larger GBICs. Hot-swappable, redundant power supplies and fans bolster switch reliability and availability. You can manage the SilkWorm 3800 by using Telnet, Brocade WEB TOOLS, Brocade SCSI Enclosure Services, and standard SNMP management applications. I first used the included serial cable to provide an IP address for the switch, then used Brocade WEB TOOLS for monitoring and subsequent configuration.
After the initial setup, I tried to get the Brocade switches to communicate with the QLogic SANbox but was unsuccessful. I called both Brocade and QLogic for advice about making these devices work together, but in the end, all parties agreed that the differences between Brocade's 2Gbps products and QLogic's 1Gbps product were too great to overcome. Everyone also agreed that QLogic's 2Gbps switch products would communicate with the Brocade switches with minimal effort. Unfortunately, I didn't have enough time to test QLogic's 2Gbps switch in the Lab.
SAN vendors continually test and document the interoperability of hardware from various vendors, so when you make a purchase, I suggest you leverage the homework these vendors have done to ensure you get compatible components. Better yet, build in some time with a systems engineer as part of your purchase to help you design and implement an end-to-end functional solution.
Datahogs loses its investment in its SANbox switch but uses the remaining SAN Connectivity Kit 1000 components with the two SilkWorm 3800 switches. The company also needs to upgrade the GBICs in its Dot Hill storage enclosure from copper to optical to be able to attach the enclosure to the SilkWorm 3800 switches. I evenly distributed the hosts, storage, and gateway from the SANbox switch to the SilkWorm 3800 switches and connected the switches with an LC-to-LC cable. The switches automatically configured the interconnecting ports as E-Ports, and devices began to communicate across the switches seamlessly.
The Brocade WEB TOOLS provide a robust, easy-to-use interface for managing and monitoring the SilkWorm 3800 switches. I can click a graphical representation of a port on the switch to get details about that port, or I can drill down through buttons representing operational categories such as Status, Admin, and Events. Brocade WEB TOOLS also include a wide range of optional performance and administration tools, such as Advanced Zoning, Inter-Switch Link (ISL) Trunking, and Advanced Performance Monitoring. Although each feature offers value, they can make the cost of the switch vary by as much as $8000, so factor in those optional costs when making a purchasing decision.
|SilkWorm 3800|
| Contact: Brocade * 408-487-8000 or 888-283-4273 |
Price: Ranges from $18,000 to $30,000
Pros: Easy setup and configuration; excellent support infrastructure from planning through implementation and ongoing maintenance
Cons: Can be difficult to integrate into existing environments
LightPulse LP8000 and LightPulse LP9002L
Emulex provides common firmware and binary-compatible drivers that work for every HBA in the company's product line. So, regardless of the generation or model number of the HBAs installed throughout your SAN, you can use the same firmware revision and the same driver on same-OS hosts. I installed Emulex's LightPulse LP7000E, LightPulse LP8000, and LightPulse LP9002L adapters in Win2K, NT 4.0, and Solaris hosts with ease. The LP8000 is a 64-bit, 33MHz PCI 1Gbps HBA, and the LP9002L is a 64-bit, 66MHz 2Gbps HBA equipped with a small form factor LC interface. The LP7000E is no longer available; I used it only to verify that Emulex's unified driver worked on a broad sample of the company's products. Emulex provides several DOS-based utilities to let you update firmware and load boot code on the HBAs. I used the utilities to create a 3.5" boot disk, which I used to update all the HBAs to the same firmware revision and to enable BootBIOS on one of the adapters. BootBIOS lets you boot from a Fibre Channel-attached device rather than a local disk.
|LightPulse LP8000 and LightPulse LP9002L|
| Contact: Emulex * 714-662-5600 or 800-854-7112 |
Pros: Easy implementation and management because of single driver and firmware version support across product line; supported by all major SAN vendors
Cons: Can result in a higher per-node cost for implementation than other vendors' products
The StorageTek 9176 RAID disk subsystem provides two controllers, each with redundant connections to two 1Gbps Fibre Channel connectors for host connections and two pairs of redundant Fibre Channel arbitrated loops connected to the StorageTek 9170 disk arrays. I tested a unit that had two disk arrays populated with ten 36GB disks each. The StorageTek 9176 can scale to a capacity of 14.4TB using dual storage processors and 72GB disks. Usually a StorageTek engineer preconfigures the StorageTek 9176 before its installation, but I configured it myself to get a feel for the process. For each of the two processors, I first had to connect through a serial cable and configure the Ethernet interface. After each processor had an IP address, I performed management tasks and status monitoring by using StorageTek's SANtricity Storage Manager 8.0.
I installed SANtricity Storage Manager from the included CD-ROM, launched the program, and performed a device discovery. The discovery identified the StorageTek 9176 as a single storage array that owned both of the IP addresses I had given it. I double-clicked the storage array icon to launch the Array Management window, which provided a simple, intuitive means for managing and mapping the controllers, volumes, and disks within the array. After creating two RAID 5 volumes and assigning them to separate controllers, I attempted to perform LUN mapping but discovered the feature wasn't licensed. When you purchase premium features from StorageTek, you get a feature key that you use to enable the feature from SANtricity Storage Manager. I phoned the company and requested a key for SANshare Storage Partitioning, which enables LUN mapping. I received the key in my Inbox within minutes and enabled the feature. SANshare Storage Partitioning lets you group hosts logically, then designate which storage volumes those hosts have access to. I created Windows and UNIX groups and provided access to one volume for each group.
|StorageTek 9176|
| Contact: StorageTek * 303-673-5151 or 800-786-7835 |
Price: $95,400 as tested; $16,000 for SANtricity Storage Manager; $6500 for SANshare
Pros: Scalable, redundant storage solution; SANtricity Storage Manager's management and configuration interface is easy to use
Cons: Software option costs are multiplied in multiplatform environments
The ATL M1500 is a 4U (7") rack-mountable modular automated tape library available with an optional integrated Prism FC310 Fibre Channel router for direct Fibre Channel connectivity. Each ATL M1500 module can contain two DLT or Linear Tape-Open (LTO) tape drives and 20 DLT or 24 LTO cartridges. The modular approach makes expansion easy: you can stack up to 10 ATL M1500 modules as a single library by using Quantum's StackLink, which moves cartridges between the modules.
I implemented one ATL M1500 equipped with the FC310 router and two Super DLT (SDLT) tape drives alongside Datahogs' legacy tape library to increase throughput and capacity in the SAN environment. I ran the same test backups that I ran earlier, changing only the destination from the legacy gateway-attached tape library to the ATL M1500. Through performance gains from hardware upgrades, the throughput for a single-drive backup job was more than double the old library's throughput.
|ATL M1500|
| Contact: Quantum * 949-856-7800 |
Price: $26,722 as tested
Pros: Good performance and scalability with factory Fibre Channel connectivity
Cons: Single Fibre Channel port can be single point of failure
Choosing the Right Solution
I hope this glimpse of SAN technology has given you an idea of the capabilities and the hardware involved in building SANs. I took a somewhat backward approach by using equipment from such a disparate array of vendors. If possible, enlist the help of an integrator when evaluating, designing, and implementing your SAN. Many companies include integration services with the purchase of their product. Also, keep your SAN hardware as homogeneous as possible to limit the potential for incompatibilities and finger-pointing between vendors. Vendors are working on partnerships and alliances to ensure interoperability among products, so when you need storage from one vendor and connecting devices from another, you can choose a certified solution that vendor partners have thoroughly tested.
If cost is a factor and you have the technical ability to implement your own solution, look at QLogic's offerings, which represent a tremendous value. On the other hand, if data is the foundation of your organization, you don't want to cut corners. When you purchase a more expensive solution such as those from Dot Hill, StorageTek, and Brocade, you're also paying for extensive support, qualification, and training services. The ADIC Gateway makes sense when you have a sizable investment in legacy storage and tape. If that's not the case, you might consider upgrading your tape library to something like the ATL M1500 when you implement a SAN to take advantage of the throughput gains that Fibre Channel and new tape technology offer.