Get a grip on network data storage
Understanding and managing network data storage becomes exponentially more difficult as your network grows in size and complexity. The data that employees need might be organized across dozens of servers, and the use of Network Attached Storage (NAS) to provide inexpensive room for data growth throws another curve at data management. (Imagine what the already complex task of server backups—not to mention the management of the physical devices that make up any given set of user data—begins to look like when crucial data spans multiple servers that aren't even geographically collocated.) Microsoft Dfs lets you create virtual share points that end users can map to and lets you modify the physical shares that make up a virtual share without changing the end users' configuration. This virtualization of the storage architecture creates a logical namespace that can span an entire corporate environment and make your data's physical location irrelevant to end users, but Dfs provides only rudimentary management tools, in the form of the Microsoft Management Console (MMC) Distributed File System snap-in.
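The core idea behind that logical namespace can be sketched in a few lines of code. This is a conceptual illustration only—real Dfs resolution is performed by Windows, not by application code—and the server and share names are hypothetical:

```python
# Conceptual sketch of a Dfs-style logical namespace: clients resolve a
# stable logical path, and administrators can repoint the physical share
# behind it without touching client configuration.

class LogicalNamespace:
    def __init__(self):
        self._links = {}  # logical path -> physical UNC path

    def add_link(self, logical, physical):
        self._links[logical] = physical

    def remap(self, logical, new_physical):
        # After migrating the data, repoint the link; clients keep
        # using the same logical path.
        self._links[logical] = new_physical

    def resolve(self, logical):
        return self._links[logical]

ns = LogicalNamespace()
ns.add_link(r"\\corp\dfs\projects", r"\\server1\projects")
print(ns.resolve(r"\\corp\dfs\projects"))  # \\server1\projects

# Data moves to server2; the end users' mapped path never changes.
ns.remap(r"\\corp\dfs\projects", r"\\server2\projects")
print(ns.resolve(r"\\corp\dfs\projects"))  # \\server2\projects
```

The value of the indirection is exactly what the remap step shows: the physical location changes, the logical path does not.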
Enter NuView StorageX 3.0. StorageX is enterprise storage-management software that runs on Windows 2000 Server, builds on Dfs technology, and significantly extends the logical namespace concept. The product provides a set of tools that give real manageability to shares, storage, and home directories, all of which exist in what NuView calls the global namespace. (Understanding the StorageX global namespace is the key to understanding the product; the product documentation sufficiently explains the concept.)
The World in 5 Parts
If you're thinking, "I already use Dfs and don't have any problems managing the shares in my Dfs namespace," you probably don't have an environment large enough to benefit from the advantages that StorageX provides. But think about the amount of time necessary to keep a large enterprise environment running. When you deal with hundreds or thousands of shares daily, the advantages of a centralized management tool that tackles all the problems related to share management become immediately obvious.
StorageX's features fall into five basic categories: global namespace, business continuity, file-server expansion and consolidation, data movement, and NAS management. Global namespace features deal with what you'd typically think of as Dfs-related activities: managing the logical namespace and deploying and managing Dfs. Business-continuity tools perform disaster-recovery tasks such as providing automatic failover and assuring continuous data availability. File-server expansion and consolidation features tackle data migration and the ability to add servers, Storage Area Network (SAN) devices, and NAS devices. Data-movement features involve what you might think of as data replication (i.e., copying data to multiple locations to help ensure data availability), as well as data migration, application, and publication. NAS-management tools concentrate on StorageX's cross-platform nature and provide the means to monitor Network Appliance filers and manage NetApp SnapMirror software. StorageX carries out all these tasks through a policy-based automation engine that lets you create policies for key management functions, then apply those policies to any data that the application manages.
I don't have a large corporate-enterprise network that I can reconfigure at will, but I set up a small test network that let me thoroughly test StorageX's various features. I created a global namespace, stretched across six servers and 12 clients, for the product to manage. I ended up with a moderately complex Dfs environment comprising almost 140,000 files in 11,000 folders for a total of almost 200GB of data.
Because of my slow network infrastructure (i.e., 100Mb Ethernet), I limited data-replication tests to a maximum of 5GB transfers to reduce the time necessary to complete my testing. (In an actual disaster-recovery scenario, you'd probably perform data replication of an entire data set in advance of configuring the replication, then regularly update the data set to keep it current.)
After installing a late beta version of StorageX 3.0 on a Win2K server—an exceptionally simple process—I accessed the main StorageX console, which contains four default objects: Admin View, Logical View, Physical View, and Reports. My first step was to use the Logical View object's context menu to display an existing Dfs root. The context menu also provides a method to create a Dfs root from scratch, but I chose to use my network's existing Dfs roots.
Next, I needed to add the existing shares that I wanted to be accessible from the Dfs root (I also had the option to create new shares). To do so, I used the Dfs root object's context menu, which offers options to add one link or multiple links. I selected the Add link option to add one share.
At this point, I found a bit of strange behavior: The application opened a blank selection box, and clicking Add brought up a series of screens that I needed to navigate to search the network for the share I wanted to add. The Add multiple links menu option, however, behaved differently. When I tested that option, clicking Add Links immediately opened a network browser so that I could select the shares that I wanted to add. (I mentioned this discrepancy to the vendor, which told me that it would look into the behavior.)
After I added links to the Dfs root object, I could select a linked share object (under the Dfs root object) to display or edit that share's status and the physical resource that the share represented. As Figure 1 shows, StorageX displays these statistics in the interface's right-hand pane.
As I selected shares to add to the Dfs root object, StorageX automatically created corresponding server objects (for the servers on which the selected shares resided) under the Physical View\Namespace Resources folder, which Figure 2, page 39, shows. I continued to add servers and shares to the Dfs root until I had configured four servers and 100 shares.
At this point, I performed a quick test of the product's basic data-migration capability. To do so, I right-clicked a folder from one of the configured servers under the Physical View\Namespace Resources object. I used the context menu's Data Migration option to create a new folder on a second server and move the original folder's contents to the new folder. I then used a client computer to access the data. As far as the client was concerned, the data resided where it always had.
Power Through Policies
Next, I turned my attention to the Admin View object, which holds the product's policy objects. StorageX provides six core management and automation policies that you can customize. I tested the Disaster Recovery, Home Folder, Namespace Availability, and Replication Manager policies. The two remaining policies—SnapMirror and UNIX Namespace—required hardware I didn't have (i.e., a NetApp filer and a UNIX server, respectively).
To test the Disaster Recovery policy, I configured two servers with similar amounts of storage, making one server the primary member of the Dfs root. I configured the second server as a target for the policy, which copied the primary server's contents to the secondary server and made that server the target in the event of a necessary failover. To simplify the testing, I manually copied about 80 percent of the data from the primary server to the secondary server before I started the Disaster Recovery policy, which moves data according to a schedule that you set.
I set the policy to begin moving data, let it update the secondary server with the remaining data, then disconnected the primary server from the network. With no noticeable difference, my network client could still access the shared files that the backup server was now serving.
The Home Folder policy is designed to simplify the creation of users' home directories. I pointed the policy at my network's domain controller (DC), and the policy created home directories for all user accounts that didn't already have them. The process was simple and straightforward and worked as I expected.
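The logic the Home Folder policy applies is simple to illustrate. The sketch below is a rough approximation under stated assumptions—StorageX reads accounts from the domain controller rather than from a hard-coded list, and the base path here is hypothetical:

```python
# Illustrative sketch of a home-folder policy: for each user account,
# create a home directory only if one doesn't already exist.
import os
import tempfile

def ensure_home_folders(users, base_dir):
    created = []
    for user in users:
        home = os.path.join(base_dir, user)
        if not os.path.isdir(home):
            os.makedirs(home)
            created.append(user)
    return created

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "alice"))  # alice already has a home folder
new = ensure_home_folders(["alice", "bob", "carol"], base)
print(new)  # ['bob', 'carol']
```

As in my test, accounts that already have home directories are left untouched; only the missing folders are created.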
The Namespace Availability policy let me synchronize two Dfs roots, providing the redundancy necessary for a fail-safe environment. I used the policy's default settings, which configure the Dfs roots to synchronize every 12 hours. I then intentionally took servers offline and had no problem getting failover to work.
To test the Replication Manager policy, I set up a specific pair of shares by copying 30GB of data from one data drive to a fresh drive on another server. I let the copying process run overnight to establish the test conditions, then added 100MB of data to the primary drive and modified approximately 150 files. After the policy processed the data, I confirmed that the two drives' contents were identical.
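The incremental behavior I was verifying—copy everything once, then move only new or changed files on later passes—can be sketched as a simplified one-way sync. This is illustrative only: it detects changes by size and modification time and doesn't handle deletions, permissions, or open files the way a real replication engine must:

```python
# Rough sketch of the incremental step a replication policy performs:
# after the initial full copy, only files that are new or have changed
# (here, judged by size and modification time) are copied on each pass.
import os
import shutil

def incremental_sync(src, dst):
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            s_stat = os.stat(s)
            # Copy if the target is missing or differs in size or mtime.
            if (not os.path.exists(d)
                    or os.stat(d).st_size != s_stat.st_size
                    or os.stat(d).st_mtime < s_stat.st_mtime):
                shutil.copy2(s, d)  # copy2 preserves timestamps
                copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied
```

Run against an up-to-date target, a pass like this copies nothing; after files are added or modified on the source, only those files move—which is why my 100MB of additions and 150 modified files synchronized quickly despite the 30GB data set.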
Last, I evaluated StorageX's report-generation features. The application can generate several short reports on the fly, and here I discovered another bug. Some of the drives I used for testing connected to my computers through either USB 2.0 or IEEE 1394 connections. Although StorageX worked correctly with those drives' contents, the application couldn't enumerate the drives' capacity properties and thus couldn't report those properties, as Figure 3 shows. The vendor assured me that it would fix this problem; the problem might even be resolved by the time you read this.
I uncovered one other problem when I attempted to generate a report of all the user permissions on all the shares that StorageX was managing. Even for my relatively small test network, this report took nearly 15 minutes to generate. If I'd been working with a large enterprise, I might still be waiting for the report to finish.
A Defining Product
All in all, my tests barely challenged the capabilities of StorageX. This category-defining storage-management product builds on familiar Windows technologies and metaphors to make storage management a one-administrator task, regardless of your environment's size.
NuView StorageX 3.0
Contact: NuView * 281-497-0620
Price: $2,000 per managed server
Pros:
* Incredibly simple to get running, especially for an enterprise-class product
* Functions are clear and well defined after you understand the fundamental metaphor of the global namespace
Cons:
* Requires a commitment to the vendor's concept of storage management
* Can't manage NAS storage devices that don't use NetApp filers or Windows networking features