Executive Summary:
Continuous Data Protection’s (CDP’s) real value is in a product’s ability to go beyond simply providing continuous backup to also support larger objectives. Compare CA XOsoft WANSyncHA, SonicWALL CDP 3.0 on the SonicWALL CDP 4440i appliance, Microsoft System Center Data Protection Manager (DPM) 2007, and TimeSpring Software’s TimeData to determine which CDP product is right for your environment.

As I conducted my research to write this review, I learned that the term continuous data protection (CDP) has a variety of meanings. To some, CDP means an ability to back up changed data at regular intervals. Broader definitions include network and server protection. For the purpose of this article, I’m defining true CDP as the ability to recognize changes to data at an application’s transaction level, as well as the ability to recover the state of the data to the point in time represented by the completion of any transaction. Many organizations don’t need the point-in-time recovery enabled by true CDP products to meet their Recovery Point Objectives (RPOs) for an acceptable level of data loss resulting from an incident. An additional measure of a CDP application’s suitability to an organization includes its ability to meet Recovery Time Objectives (RTOs)—that is, how quickly the application can be back up and running after an incident. In the following reviews, I try to give you a sense of the process and level of effort each product requires during recovery, to help you determine whether the product fits your applications and requirements.

Although many products are marketed as CDP solutions, I was able to consider only a few for this review. I narrowed the field by looking for products that offer support for two of Microsoft’s key applications: SQL Server and Exchange Server. I focused my testing on SQL Server support. I didn’t consider applications that struck me as primarily backup solutions.

During testing, I found that CDP’s real value proposition is in a product’s ability to go beyond simply providing continuous backup—to also use CDP to support larger objectives. A common thread among the four products I ultimately reviewed is their ability to maintain copies of protected data both locally and at a remote location in support of disaster recovery scenarios. Beyond that, the four products have more differences than similarities. CA XOsoft WANSyncHA replicates data to a second, potentially remote, server with support for automatic failover, failback, and point-in-time data rewind for recovery from events that corrupt data. SonicWALL CDP 3.0 on the SonicWALL CDP 4440i appliance, as I discovered only in testing, doesn’t support true transaction-level CDP for SQL Server; instead, it captures SQL Server transaction logs as frequently as every 30 minutes and restores to the state represented by any log file. Like SonicWALL’s CDP solution, Microsoft’s System Center Data Protection Manager (DPM) 2007 offers near-CDP alternatives, providing full backup frequency as often as every 30 minutes—or, when the protected application supports differential or log backup, every 15 minutes. TimeSpring Software’s TimeData used with SQL Server captures transaction-level changes even for simple recovery model databases and can recreate database files representing the point in time you select, but it leaves you with the task of placing the data back into production.

CA XOsoft WANSyncHA
CA XOsoft WANSyncHA (see Web Figure 1) incorporates CDP technology into an application server high availability product. The CDP portion of WANSyncHA is intended to bridge the protection gap between snapshots. By relying on the existing known good application data states provided by traditional backup and snapshot technologies, WANSyncHA minimizes the disk space required for the transaction-oriented rewind data. As a high availability product, WANSyncHA maintains a usable replica of the protected application server and can quickly fail over to the replica when necessary. A key application feature called Assured Recovery simplifies periodic testing of the failover process, including a rollback of test transactions applied to the replica during testing. WANSyncHA works in 32-bit and 64-bit Windows environments, and with AIX, Linux, and Sun Solaris servers.

WANSyncHA has two installable components: the XOsoft engine, which runs on every server participating as a master or a replica; and the management console, WANSync Manager, which you can install on any Windows workstation or server. Installation requirements are pretty basic. In a SQL Server environment, both the master server and the replica server must have the same configuration—both must be running the same service pack and hotfix level, as well as have the same storage configuration for protected databases. The management console, if you plan to use the remote installation facility, must run Microsoft .NET Framework 2.0. The installation procedure is well documented in the Operations Guide for the application you plan to protect. Because I used SQL Server for testing, I used the WANSyncHA SQL Server Operations Guide. I installed WANSync Manager and the XOsoft Remote Installer on the Windows XP system I was using; basic installation was quick and painless. Installing the XOsoft engine on the master and replica SQL Server machines of my choice was similarly easy.

The XOsoft engine, running as a service on each participating server, is WANSyncHA’s interface to each supported application—Microsoft IIS, Exchange, SQL Server, Oracle, and NTFS, including 64-bit versions of supported applications. You can use the WANSync Manager to create WANSync scenarios that control what the engine does. The wizard-driven process is easy to complete. After I selected a SQL Server high availability scenario, the wizard had me select master and replica SQL Server machines—the servers I had previously installed the XOsoft engine on—and then displayed the SQL Server instances and databases it found on the master server. I selected check boxes to indicate the databases I wanted to protect. Next, the wizard displayed the directories holding the database files that WANSyncHA would replicate. A properties page provides options to tailor WANSyncHA’s behavior on the master and replica servers. By default, CDP—the data rewind capability—is turned off, so I enabled it and configured the maximum disk space utilization. The next screen displays default switchover properties, including the method WANSyncHA will use to redirect network traffic to the replica server. The default is to redirect the application’s DNS name to the replica’s IP address. You can configure switchover to occur automatically when WANSyncHA detects failure of the master, or manually. Similarly, you can configure replication back to the master after switchover to start automatically or manually. WANSyncHA caches updates locally when the destination server is unavailable, and sends the data when communication is restored. You can use the Assured Recovery feature to schedule integrity testing to run periodically, or trigger it manually. By default, Assured Recovery performs only basic testing (i.e., connecting to the database). You can script more complex testing to meet your needs.
Near the end of the scenario creation wizard, WANSyncHA runs more than 100 proactive configuration checks to help prevent unexpected problems later. When complete, the wizard offers the option to run the scenario—performing initial replication and beginning data protection. I selected this option, and when initial replication completed, WANSyncHA produced an HTML report detailing the data sets that were replicated. Figure 1 shows the active scenario status, displayed by WANSync Manager.

To test WANSyncHA, I used an application that writes system events to a SQL Server database. I used the application’s status screen to note when it was successfully connected to the database. With automatic switchover configured, I performed several failover and manual switchover tests. In each case, it took both my application and WANSync less than a minute to notice the absence of the active server. WANSync Manager displayed the progress of the switchover as the XOsoft engine on the stand-by server started SQL Server services and altered the DNS name of the primary server to point to the stand-by server. Within a minute or two, my application again reported a successful connection to the database server, now operating on the alternate system. After bringing the original server back online I performed a manual switchback, and it worked much the same—only in reverse. This process could be lengthy in a production environment, because WANSync, not knowing the state of the database on the primary server, must fully replicate the database back to the primary server before the switchback can occur. When testing switchover with both servers staying online, however, you can instruct WANSync to perform reverse replication from the secondary-now-active server back to the primary-now-stand-by server, and avoid the need to fully replicate the database when you switch back.

Next, I tested the data rewind capability. Although the documentation implies that you can simply stop replication to a stand-by server, I had to stop the scenario by clicking the stop scenario icon, stopping database operations on both the active and the stand-by servers. A data rewind operation can use either server as the source, and either replicate the new version of the database to the other server, or leave the other server unchanged. WANSync Manager presented me with a list of available timestamped checkpoints from which to choose. I selected the option to leave one server unchanged, selected a checkpoint, and ran the rewind. I checked both servers and found that the record count of a key table was unchanged on one server and reduced on the other server, as I expected. I was able to perform incremental data rewind operations, checking the state of the altered database after each rewind operation, as you would need to do if you were uncertain which checkpoint file would result in the most current, clean data.
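The incremental rewind procedure described above—stepping back checkpoint by checkpoint until you find clean data—amounts to a simple search. The sketch below is my own illustration, not WANSyncHA code; the checkpoint list and the validation flag stand in for the rewind-and-inspect cycle you’d perform against the real database.

```python
from datetime import datetime

# Hypothetical timestamped checkpoints, newest first, each paired with a
# flag indicating whether the rewound data passed inspection. In practice
# you'd rewind to each checkpoint and check the database state yourself.
checkpoints = [
    (datetime(2008, 3, 1, 12, 30), False),  # corruption already present
    (datetime(2008, 3, 1, 12, 0), False),
    (datetime(2008, 3, 1, 11, 30), True),   # last known-good state
    (datetime(2008, 3, 1, 11, 0), True),
]

def find_latest_clean(checkpoints):
    """Walk the checkpoints from newest to oldest and return the first one
    whose data passes validation -- the most current clean state."""
    for timestamp, is_clean in checkpoints:
        if is_clean:
            return timestamp
    return None  # no clean checkpoint retained

print(find_latest_clean(checkpoints))  # 2008-03-01 11:30:00
```

Starting from the newest checkpoint mirrors the goal of the manual process: losing as little work as possible while still rewinding past the corrupting event.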

I was impressed with WANSyncHA’s operation. Although it doesn’t perform the rapid failover you’d see in a server cluster—which you shouldn’t expect in a product such as this—it performed very well in my tests. Administrators will appreciate the ability to tailor the product to meet their needs, both in configuring switchover and in customizing the Assured Recovery features to fully test the recovery procedures in various environments. The data rewind feature is easy to use and gives you plenty of flexibility (enough, but not too much) to retain rewind data to meet your applications’ needs.

Summary
CA XOsoft WANSyncHA 4.0.72
PROS: Maintains a remote server replica and supports automatic failover; leverages snapshot and traditional backup data to minimize the transaction-oriented storage requirements; Assured Recovery simplifies periodic testing of recovery procedures; data rewind operations work quickly for point-in-time recovery
CONS: Switching back after a failover caused by loss of the primary server requires a full replication of data back to the original primary server
RATING: 4.5
PRICE: $2,000 to $7,200 per server, depending on the server’s OS (Virtual Machine, Standard, Enterprise, Cluster) and applications to be protected (files, SQL Server, Exchange)
RECOMMENDATION: CA XOsoft WANSyncHA combines data rewind with application failover to a remote server for both offsite data protection and rapid application recovery. When you need true CDP with rapid remote application recovery, look at WANSyncHA first.
CONTACT: CA • 800-225-5224 • www.ca.com/us

SonicWALL CDP 4440i
SonicWALL’s CDP appliances are based on technology SonicWALL obtained through its acquisition of Lasso Logic. These appliances provide real-time backup for permanently and intermittently connected Windows systems. The models differ in raw storage capacity: the low-end 1440i has one 160GB IDE disk; the 2440i (see Web Figure 2) has a 250GB IDE disk; the 3440i has two mirrored Serial ATA (SATA) disks of unspecified size; and the 4440i has three SATA disks in a RAID 5 configuration. Product literature reports compressed capacities from 192GB to 1.2TB. I tested the SonicWALL CDP 4440i, the largest of SonicWALL’s four CDP appliance models; its Enterprise Manager reported about 550GB of storage capacity. All the models have a single enabled Ethernet connection. The 3440i and 4440i support Gigabit Ethernet and include a second, currently unusable network interface. The two higher-end appliances also employ an advanced compression algorithm to make more efficient use of available disk space.

All models support open file backup. For users, CDP supports protection of Microsoft Office Outlook (through version 2003) and Outlook Express. All but the 1440i support backup of Active Directory (AD), SQL Server, and Exchange. When I selected the SonicWALL CDP appliance for review, I was under the impression that it supported true CDP for SQL Server machines—that is, a transaction-oriented backup allowing any-point-in-time restore. In fact, CDP supports recovery to the level of an individual transaction log backup, which the SonicWALL CDP device lets administrators schedule as frequently as every 30 minutes.

SonicWALL offers several optional features and services. Site-to-Site Backup lets you replicate backup data to one or more remote CDPs. An Offsite Data Backup Service lets you store backup data sets remotely at SonicWALL-managed sites, using encrypted transmission and an encrypted storage format. Bare Metal Recovery for servers and workstations lets you recover a full system to new hardware; all models include one or more workstation recovery licenses, and the 3440i and 4440i are bundled with one and two server licenses, respectively. Ongoing support services and software (including firmware) updates also require optional contracts, costing $1,679 and $1,359 for one year, respectively.

A Getting Started Guide walked me through the initial installation and configuration. I was able to use the 4440i’s Web interface to configure the appliance for my network. I downloaded the current SonicWALL CDP software package and installed it on the XP system I wanted to use as the management console. The software package installed the CDP Agent, which is the key client-based component. The agent explores the local system for supported applications and monitors selected applications and directories (the ones you configure) for changes to data, sending new and changed data blocks to a CDP device. The software package also installed two user interfaces. The CDP Enterprise Manager is used to configure reporting, alarms, and data recovery, as well as policies for use with agent systems. The CDP Agent Tool provides users on protected systems (those with the agent installed and configured) with an interface they can use to configure and monitor local CDP operations and to recover file versions. The software installation procedure also installed a service and added three programs to the Windows Firewall Exceptions list. The Getting Started Guide indicated that I’d be presented with “Complete” and “Custom” options, but I wasn’t. Apparently, the Enterprise Manager is always installed along with the agent on protected systems. I used the Enterprise Manager to configure basic administrative settings, including the password that protects access to the 4440i’s configuration.

Enterprise Manager also lets you create agent policies in which you can specify a disk quota for the agent, default folders and applications to protect, and backup exclusions at the file extension level. The policy feature doesn’t seem to be fully developed; it presented only Outlook and Outlook Express as the application options and didn’t give me an option to configure SQL Server-related or Exchange-related policies. Nor did it allow browsing either local or remote root (e.g., C$) drive shares to assist the creation of file backup policies. Because policies are the only tool that allows remote management of CDP devices’ backup configuration, a more complete policy feature would be extremely useful. Instead, administrators must use the Agent Tool installed on a computer to configure protection for that system. Remote configuration is possible using a remote desktop application.

I used the Agent Tool to designate a directory on my XP console for protection. The agent immediately backed up the directory to the 4440i. As I altered files in that directory, I was able to display and restore previous versions of a file to any location, including the original. This method for creating secure file version backups for local users is both easy and effective. Note that CDP supports protection of local disks but not remote file shares.

Because I was testing recovery in a SQL Server environment, I installed the agent on a SQL Server 2005 system with several active databases. I found that the application-specific CDP documentation is surprisingly sketchy. Discussions of SQL Server backup and recovery in the CDP 3.0 Administrators Guide are only half a page, and one page, respectively—and they simply tell you the steps to follow. Information related to AD and Exchange backup and recovery is similarly brief, and I found no similar discussion related to Outlook or Outlook Express. I called SonicWALL support to fill in some of the gaps. I learned that the agent uses the standard SQL Server API to perform full, differential, and log backups, so CDP won’t create log backups for databases set to the simple recovery model.

The facilities for protecting and restoring SQL Server databases are easy to use, with few options. After installing the agent and tools to a SQL Server 2005 system, I was able to quickly configure protection for several databases. For each database I was able to specify the backup interval for full, differential, and log backups, which default to monthly, weekly, and every two hours, respectively. CDP retains two full backup files, as well as associated differential and log backups. Recovery of a SQL Server database is similarly simple. As Figure 2 shows, you select Restore Database from the Agent Tool on the protected server; click the full, differential, or log backup file that represents the point in time you want to restore to; and specify a temporary disk that CDP will use for staging the backup. CDP then performs the restore. A second Restore to Disk option is similar but simply copies the necessary .bak backup files to the directory you designate. You can restore these .bak files with standard SQL Server facilities, or maintain copies to meet requirements for long-term storage.
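CDP’s automatic selection of the correct backup files follows standard SQL Server restore-chain logic: the most recent full backup at or before the target time, the latest differential taken after that full (if any), then every log backup after that base. The sketch below is my own illustration of that selection logic, not SonicWALL code, and the backup timestamps are invented.

```python
def restore_chain(fulls, diffs, logs, target):
    """Given backup timestamps (as numbers, e.g., hours), pick the files
    needed to restore to `target`: the latest full at or before target,
    the latest differential after that full, then the subsequent logs."""
    full = max(t for t in fulls if t <= target)
    later_diffs = [t for t in diffs if full < t <= target]
    diff = max(later_diffs) if later_diffs else None
    base = diff if diff is not None else full
    chain_logs = sorted(t for t in logs if base < t <= target)
    return full, diff, chain_logs

# Invented schedule: one full at hour 0, differentials every 24 hours,
# log backups every 2 hours; restore to hour 53.
full, diff, chain = restore_chain(
    fulls=[0], diffs=[24, 48], logs=list(range(2, 72, 2)), target=53)
print(full, diff, chain)  # 0 48 [50, 52]
```

The point of the illustration is that the chain shrinks as differentials accumulate: only the logs taken after the most recent differential need to be applied.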

Overall, SonicWALL’s CDP 3.0 and the 4440i appliance were very easy to implement and use. Although CDP 3.0 doesn’t meet my definition of true CDP in its support for SQL Server, it will meet many organizations’ RPOs. The Agent Tool is very end-user friendly, but the lack of a more complete set of remote management features would hinder deployment in larger enterprises. Given the availability of remote desktop utilities, you can work around this shortcoming. The combination of continuous, automated backup (with options for automatic offsite storage of backup data) with the Bare Metal Recovery facility can be a useful part of a disaster recovery plan for critical servers and workstations; from that perspective, I can easily recommend SonicWALL CDP 3.0 on the appliance model that best meets your needs. Once configured, it does what it does well, and in the right environment (i.e., protecting critical servers and workstations managed by professionals), it can be an easy-to-implement component of an overall recovery strategy.

Summary
SonicWALL CDP 3.0 on the SonicWALL CDP 4440i Appliance
PROS: Easy to implement data protection appliance for SQL Server, Exchange, AD, and NTFS files; optional features support automatic offsite replication of protected data; CDP 3.0 makes recovery to a previous state simple, with automatic selection of the correct set of backup files
CONS: Rather than supporting true transaction-aware CDP for SQL Server, SonicWALL’s CDP 3.0 recovers to the state contained in a log backup, which you can configure the SonicWALL appliance to capture as frequently as every 30 minutes; incomplete support to configure protection for a system remotely—alternatives include local configuration or use of a remote desktop product
RATING: 3.5
PRICE: $7,999 to purchase; support contracts start at $1,679 for one year of 5 × 8 support with software updates
RECOMMENDATION: Take a look at SonicWALL’s CDP line if the 4440i’s storage capacity and the 30-minute recovery point meet your needs, and if you can live with local or remote desktop access to configure protected systems. The appliance is effective and easy to use, especially when paired with one of the optional offsite replication alternatives.
CONTACT: SonicWALL • 888-557-6642 • www.sonicwall.com/us

DPM 2007
DPM 2007 (see Web Figure 3) is the latest enhancement to Microsoft’s near-CDP application and system recovery suite. The product has specific support for a variety of Volume Shadow Copy Service (VSS)-supporting applications, including Windows Server 2008, Windows Server 2003, SQL Server 2005, SQL Server 2000, Exchange Server 2007, Exchange Server 2003, Windows SharePoint Services (WSS) 3.0, Microsoft Office SharePoint Server 2007, and Microsoft Virtual Server 2005. DPM also supports protection of shares on XP and Windows Vista systems (except Home editions). In all cases, DPM supports protection of both 32-bit (x86) and x64 versions, with no support for IA-64 environments. DPM support for non-Microsoft VSS-enabled applications is possible, if the application vendor provides the necessary VSS interface definition.

DPM’s architecture is easy to understand. The product runs on a Server 2003 or Windows Storage Server 2003-based system, where it maintains replicas of protected data. A DPM agent runs on all protected systems and copies protected data to the DPM server at user-specified times or intervals in two ways: using what DPM calls an Express Full Backup, to create a full recovery point, and using what DPM calls Synchronization, to create an incremental backup. Express Full Backup uses a data block-oriented copy to create a replica of the protected object (e.g., a database, Exchange storage group, or Virtual Hard Disk—VHD) on one of DPM’s storage disks. The agent tracks new and changed data blocks within protected objects on the volume and sends only those blocks to the DPM server when subsequent Express Full Backups are run. Synchronization, the creation of incremental recovery points, is available only when the protected application supports incremental backup. In the case of SQL Server, DPM lets you specify a Synchronization frequency only when the protected database maintains a transaction log file. For simple recovery model databases, only Express Full Backups create a recovery point. The DPM agent makes use of the VSS writer’s ability to quiesce application activity and produce a snapshot of the data object in a stable, usable state. Once the data is preserved at the DPM server, the agent deletes the snapshot, freeing its storage. Similarly, when recreating an Express Full Backup replica after an outage, the agent compares the blocks of the existing replica on the DPM server to a snapshot of the current data object on the protected system and sends only the differences. You can schedule Express Full Backups to occur as frequently as every 30 minutes by selecting the days of the week and the times of day DPM will perform the backup. You schedule Synchronization by selecting an interval of as little as 15 minutes.
Although the incremental backup provided by Synchronization lets you recover changes that occur after the prior Express Full Backup, recovery is often faster using a recent Express Full Backup, with fewer incremental recovery points (i.e., log backups) to apply.
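The changed-block approach that makes repeated Express Full Backups efficient can be sketched in a few lines. This is my own simplified illustration, not DPM code: a real agent tracks changes at the volume level as they happen rather than comparing two complete copies, but the effect—resending only the blocks that differ—is the same.

```python
def changed_blocks(previous, current, block_size=4):
    """Compare two byte strings block by block and return the indexes of
    blocks that differ -- the only blocks an agent would need to resend."""
    offsets = range(0, max(len(previous), len(current)), block_size)
    return [
        i // block_size
        for i in offsets
        if previous[i:i + block_size] != current[i:i + block_size]
    ]

old = b"AAAABBBBCCCCDDDD"
new = b"AAAABxBBCCCCDDDZ"  # blocks 1 and 3 modified
print(changed_blocks(old, new))  # [1, 3]
```

With a 16-byte object and 4-byte blocks, two small edits mean only half the data crosses the network; for a multi-gigabyte database with scattered changes, the savings are far larger.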

The licensing model is, for once, simple. You buy a license for each DPM server, a Standard Server license for each system requiring only file system and system state protection, and an Enterprise Server license for each protected system requiring application or bare-metal restore protection. Despite its name, the Standard Server license is applicable to workstations and other supported clients that need only file system and system state protection.

DPM has several prerequisites. Currently, all systems require VSS-related hotfixes. The DPM server requires the Windows PowerShell scripting environment. DPM also requires an instance of SQL Server, which it installs on the local system by default. Once the prerequisites were met on my system, DPM installed quickly.

Microsoft provides two user interfaces: the DPM Administrator Console, which is DPM’s primary management GUI; and the Management Shell, a command-line interface that supports scripted operations. The Administrator Console, which Figure 3 shows, is well designed. I found it very easy to navigate and simple to use.

DPM stores protected data within its storage pool, which consists of one or more physical disks dedicated to DPM. After installing DPM, adding at least one physical disk to the storage pool is the first configuration task. For each protected data object, DPM allocates volumes on storage pool disks, where it stores replicas and Synchronization recovery points, extending the volumes when necessary.

The next implementation steps require installing the DPM agent on systems you want to protect, then defining protection groups. The Administrator Console makes both steps easy. Selecting Install from the Administrator Console’s Management tab starts the agent installation wizard, which lets you enter the necessary administrative credentials and select target systems, then installs the agent and optionally restarts each system. Agent installation on my test SQL Server system took only a few minutes. Similarly, selecting Create Protection Group from the Protection tab started a wizard to guide me through the few necessary steps. The wizard displayed all systems running the agent in an Explorer-like view, letting me expand a system to display shares, volumes and—in my case—SQL Server machines, as Figure 4 shows. Simply click to select an item, and it displays in the Selected Members pane. Subsequent wizard screens let you select preset times to perform Express Full Backups for group members, as well as select how often to use Synchronization for incremental backups. When the DPM server has access to a tape device, you can choose to write protected data directly to tape, and to define long-term data retention policies. Because creating an initial data replica of large data objects over a network can take much longer than creating it locally, DPM gives you both options when you create a protection group.

I discovered one inconvenient feature. I chose to modify a protection group, adding two file directories comprising about 20MB of data. Even though my storage pool had more than 13GB of available space, the Administrator Console reported insufficient space for the protection group. After I freed more space by deleting a protection group, DPM allocated 15GB of disk space for the 20MB of file data. I suspect that in circumstances like this—when your data structures’ growth and change behavior doesn’t match DPM’s built-in assumptions—administrators would prefer to use DPM’s support for custom storage volumes that let you, rather than DPM, manage the space allocations. DPM lets you specify a preallocated, custom storage volume only when you add a member to a protection group.

Data recovery is also a simple, wizard-driven process. Clicking the Recovery tab displays a tree structure that includes all protected data objects, letting you select the recoverable object (e.g., a database, file directory, or file) and the recovery point. DPM allows you to direct the recovered data to its original location or another location. In the case of a SQL Server database, the alternatives include disk, tape, or another SQL Server system running the DPM Protection Agent. I tested all but the tape alternative to recover a SQL Server database, and the results were mostly as I expected. I recovered a selected Synchronization recovery point to the database’s original location, and the data in the resulting database was consistent with my expectations. Similarly, recovery to an alternative location while specifying a new database name resulted in an operational database. In the latter case, DPM renamed the database but didn’t rename the MDF and LDF files to be consistent with the new database name. DPM warned me and allowed me to select a new directory location for these files, so they wouldn’t overwrite the production files if I wanted to create the copy on the original database server. Restoring to a network folder resulted in an MDF and LDF file in the target folder. When selecting a recovery point, DPM lets you specify “latest” to select and restore data as of the most recent recovery point. When recovering protected Exchange data, DPM permits granular recovery down to the mailbox level.

The Monitoring tab lets you view the status of all DPM jobs, which implement protection tasks and recovery operations. Exceptional conditions are also reported on the Monitoring tab’s Alerts pane.

The Reporting tab lets you schedule and generate six types of reports in Web, PDF, and Microsoft Excel formats: Disk Utilization, Protection, Recovery, Status, Tape Management, and Tape Utilization. Disk Utilization, as the name implies, details disk usage and free space availability. The Protection report details available recovery points, and Recovery reports on the performance of recovery jobs during an interval. The Status report shows the activities related to each recovery point during a specified time period. The Tape Management report supports tape rotation, whereas the Tape Utilization report is organized to show where volumes are currently allocated.

By adding another DPM server, you can easily configure offsite storage of protected data. To the offsite DPM server, the primary DPM server is just another resource to protect, so you can include in the offsite server’s protection groups only the recoverable data objects that you want stored offsite.

Overall, I found DPM to be very easy to use. Its integration of disk and tape technology lets you easily implement DPM as an extension or evolution of existing backup and recovery procedures. The licensing structure is easy to understand, and the pricing seems reasonable. Administrators will most appreciate the recovery features’ simplicity and flexibility. Microsoft has a winner here. When you want to simplify managing your backup data, simplify recovery, or create a working copy of application data for another use, DPM is an easy choice.

Summary
System Center Data Protection Manager 2007
PROS: Broad support for Microsoft applications, and for third-party VSS-enabled applications with vendor support; designed for ease of use—all tasks are wizard driven and easy to complete; clean integration of both disk and tape into short- and long-term protection policies lets you easily implement data retention and recovery for the full life cycle of your data
CONS: DPM’s disk space allocation for storing protected data might not efficiently match your data’s growth and change patterns; however, there’s an option that lets you manage storage allocations yourself
RATING: 4
PRICE: $573 per DPM server; $426 per Enterprise Server management license; $155 per Standard Server management license
RECOMMENDATION: Small organizations will find DPM easy to incorporate into their protection strategy, and large organizations will appreciate the clean support for Microsoft applications. DPM 2007 is a well-designed near-CDP product that I recommend to all users of the key Microsoft applications that it supports.
CONTACT: Microsoft • 800-642-7676 • www.microsoft.com

TimeData
TimeSpring’s TimeData, which Web Figure 4 shows, is a CDP solution for protecting files, Exchange data, and SQL Server databases. TimeData is a true CDP solution, allowing recovery to any point in time.

TimeData comprises several components. The key system is TimeData’s repository server, where TimeData stores protected data. TimeSpring recommends that this server have at least 2GB of system memory when protecting one to three data servers, and at least 4GB when protecting four to eight data servers. TimeData stores protected data in its Event Log file and installs an instance of SQL Server 2005 (provided with a limited-use license) that TimeData uses to index the data in the Event Log file. For best performance, the Event Log file and the TimeData database should be on separate disks, with the system and paging files on other disks. The Event Log disk should be six to eight times the size of the data you’re protecting. TimeSpring recommends a separate, dedicated network for the transfer of data from protected servers to the repository server.

You install the TimeData Agent on servers whose data you want to protect—which TimeSpring calls Data Servers. TimeData supports protection of NTFS files (except TimeData and Windows system files), SQL Server 2005 (both x64 and x86 versions), SQL Server 2000, Windows SharePoint Services (WSS) 2.0, Exchange Server 2003, and Exchange 2000 Server, with limited support for Exchange Server 2007. Although the repository server must run an x86 version of Windows Server 2003 or Windows 2000 Server, the agent is supported on both x86 and x64 versions of those OSs. The SharePoint agent is an exception, requiring an x86 OS. Protected data must reside on locally attached, SAN, or iSCSI storage.

I installed TimeData Data Server 2.7.1.344, following the procedure in the TimeData Planning and Installation Guide. Installation of the repository server took the better part of an hour, with much of that time spent installing SQL Server 2005. Installing the agent on a SQL Server 2005 system took about 20 minutes. The agent queues data on its way to the repository in an Event Cache, which for performance reasons should be on its own disk on each data server.

You use the TimeSpring Management Console on the repository server to configure and manage TimeData. Remote management is possible only with the use of a remote desktop application.

Although the following discussion focuses on my testing with SQL Server, working with NTFS files and Exchange is similar. After installing the software, the next step is to create a content group—a named collection of files and data structures on a single data server that you want TimeData to protect. To give you some control over the granularity of potential restore points with SQL Server, TimeData lets you configure when it will create a new version, which is what TimeData calls a potential restore point. The alternatives are every time an application commits a transaction to the database; only when the commit for a named transaction or a database checkpoint commit occurs; or only when a database checkpoint occurs. I started with the default, named transactions only, when I created a content group for two databases on the single data server I had configured. Although a content group protects data from one server only, you can configure many content groups for any server. After I created the content group, TimeData began the initial backup of the databases I had configured.

For additional flexibility, including offsite backup, TimeData lets you configure a data server with more than one repository. Using the second repository server, you can import existing content groups from the data server, or create new content groups. Allowing different content groups from a single server to connect to different repositories lets you be selective about the data that occupies a WAN link, and lets you distribute the protection of large, active data servers between several repositories.

TimeData provides a lot of flexibility for recovering data. You start by using the TimeSpring Management Console, which Web Figure 5 shows, to display and select a version of the database to work with. The console displays as many as 1,000 timestamped versions at a time, and it lets you filter by time range to help you locate the desired version. After selecting a version to work with, you create a fixed time retrieval view of the content group, which TimeData adds to the console tree, as Figure 5 shows. Within the console, you can select a database from the view and have TimeData write it to another location on disk. TimeData also presents the files on the TimeData drive, a virtual disk drive on the repository server mapped by TimeData to a drive letter you specify at installation time. TimeData creates the virtual drive with a network share, letting you access the files across the network. I copied the database to a second SQL Server machine, attached it, and verified that the data it held was consistent with the recovery time I had selected.
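Attaching a recovered copy like this uses only standard SQL Server tools. As a sketch—the database name and file paths are hypothetical, and TimeData dictates neither—the T-SQL on the second server might look like:

```sql
-- Hypothetical database name and paths; substitute the files you
-- copied from the TimeData drive share.
CREATE DATABASE SalesDB_Recovered
ON (FILENAME = 'D:\Recovered\SalesDB.mdf'),
   (FILENAME = 'D:\Recovered\SalesDB_log.ldf')
FOR ATTACH;
GO

-- Verify that the attached copy is physically and logically consistent.
DBCC CHECKDB ('SalesDB_Recovered') WITH NO_INFOMSGS;
GO
```

If the retrieval view supplies only the data file, the FOR ATTACH_REBUILD_LOG clause can build a fresh log file during the attach instead.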

One of TimeData’s benefits is its ability to protect databases configured for the simple recovery model. Because the software can store data at each commit and checkpoint, it provides very granular recovery without the need to retain SQL Server transaction log data.
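To illustrate why this matters: under the simple recovery model, SQL Server truncates the transaction log at each checkpoint, so BACKUP LOG isn’t available and native point-in-time restore is lost—the gap TimeData’s transaction-level capture fills. A minimal T-SQL sketch (the database name is hypothetical):

```sql
-- Hypothetical database name. Switching to the simple recovery model
-- disables log backups and native point-in-time restore.
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;
GO

-- Confirm the current recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'SalesDB';
```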

Working with Exchange data is similar. A fixed time retrieval view of a content group that protects an Exchange installation provides you with access to a point-in-time version of an Exchange EDB file. When you license TimeData for Exchange, TimeSpring provides a license to use Ontrack PowerControls and its ability to perform message-level restore operations from the offline EDB file.

Overall, I found TimeData easy to install, configure, and use to retrieve point-in-time versions of data files. The fixed time retrieval view and the TimeData drive provide rapid access to point-in-time data across a network share. I did discover a limitation of the TimeData drive share: when the path to a SQL Server MDF file contained too many characters for remote access, I had to create a new share at the folder that contained the files I wanted to copy. TimeData’s ease of use ends with rapid access to the file. The product lacks features to recover production databases to a point in time, leaving you to use standard SQL Server database tools to work with the point-in-time database. On the plus side, its architecture seems well suited to a highly scalable implementation.

Summary
TimeData 2.7.1
PROS: Easy to implement and manage—the Management Console allows effective centralized configuration and management of all protected systems; architecture is very flexible and seems highly scalable; transaction-level recovery even for simple recovery model databases; includes tools for message-level restore when licensed for Exchange
CONS: Relatively heavy system memory and storage resource requirements; lacks integrated tools for recovery of protected applications—TimeData provides the point-in-time data, and you employ standard tools to use the data
RATING: 3.5
PRICE: Per server pricing: NTFS $1,295; SQL Server $3,995; Exchange starting at $3,995
RECOMMENDATION: This traditional CDP product focuses on providing easy access to point-in-time data. When TimeData is implemented as part of an application or disaster recovery plan, you must create your own procedures to apply the recovered data to your application or recovery environment. Use TimeData if its custom recovery methods fit your environment.
CONTACT: TimeSpring Software • 888-375-7634 • www.timespring.com

The Bottom Line
Each of these four products will find its niche. CA XOsoft WANSyncHA is an easy-to-implement high-availability product with a fast, effective data rewind feature. The SonicWALL CDP series of data protection appliances is easy to implement and simplifies restoring SQL Server databases to a point in time by automating the selection and use of the appropriate set of backup files. DPM offers the most complete support for Microsoft applications, near-CDP recovery points, and—except perhaps when it comes to managing disk utilization for protected data—ease of implementation and use. TimeData’s ability to quickly construct and make available on a network share a point-in-time view of a protected file or database, even for simple recovery model databases, will be attractive to many administrators.

In the end, I selected my Editor’s Choice by looking at how well each product fulfilled the promise of its features. My Editor’s Choice goes to CA XOsoft WANSyncHA for its ease of use, for its effective failover and failback feature set, and for the balance it strikes between effective data protection and system resource requirements.