The core file system capabilities in Windows Server have not changed radically since the earliest version of Windows NT. Windows 2000 introduced dynamic disks. There have been improvements to NTFS reliability and performance. File services have had incremental changes, as has the file-sharing Server Message Block (SMB) protocol, but nothing groundbreaking. File Server Resource Manager (FSRM) added capabilities around screening, quotas, reporting, and classification but didn't change the core capabilities of the file system and how file services are used.
When I think of file services, I think of a server on which to store your Microsoft Word and PowerPoint documents. If this server needs to be highly available, it can be clustered, with one server at a time offering the file share. Volumes with fault tolerance can be created on dynamic disks by using Windows mirroring (RAID 1) or striping with parity (RAID 5) capabilities. However, IT administrators must manually perform both the selection of disks and any repair actions. Furthermore, more advanced features, such as thin provisioning of storage and the easy addition of more storage to a pool in which volumes can be created, just aren't possible without a separate SAN or NAS. But this changes in the next version of Windows Server, Windows Server 2012 (formerly code-named Windows Server 8).
Using Storage Spaces
As an IT administrator or even an end user, you often need storage that might sometimes require fault tolerance; other times, you just need to store information that is protected in another way. You can open the Disk Management Microsoft Management Console (MMC) snap-in, examine the physical disks, convert them to dynamic disks if necessary, and then create a volume that meets your requirements. If the volume needs to grow, you might be able to extend it (depending on the physical disks), but you can't add additional disks to an existing volume to provide easy scalability. For small and midsized organizations, or even large organizations with smaller remote locations (i.e., locations with just a couple of servers, for which neither a SAN nor a NAS is economical), providing a good storage solution for services is a huge problem. At the other end of the scale, power users on desktops also struggle to organize their data across internal drives and USB-connected disks.
Storage Spaces is a new feature in Server 2012 (and the Windows 8 client). This feature enables a completely new way to think about and administer storage. With Storage Spaces, the physical disks that provide underlying data storage are completely abstracted from the process of requesting new volumes, now known as spaces. The Storage Spaces technology automatically performs any necessary actions to restore data redundancy if a disk fails, provided that sufficient physical disks are available.
The first step is to create a storage pool, which is a selection of one or more physical disks that are then pooled together and can be used by the Storage Spaces technology. Storage pools support USB, Serial ATA (SATA), and Serial Attached SCSI (SAS) connected disks in a Just a Bunch of Disks (JBOD) scenario. With no hardware-based high-availability support such as RAID happening behind the scenes, Storage Spaces takes care of fault tolerance. The use of USB-connected drives is great on the desktop; servers focus on SATA- and SAS-connected drives. In addition, Storage Spaces fully supports shared SAS. You can connect a disk enclosure to several hosts in a cluster, and the Storage Space on those shared SAS drives will be available to all nodes in the cluster and can be used as part of Cluster Shared Volumes (CSV). If you use an external disk enclosure, then Storage Spaces supports the SCSI Enclosure Services (SES) protocol, which enables failure indications on the external storage. For example, you could enable a bad disk alert if Storage Spaces detects a problem with a physical disk.
Other technologies, such as Microsoft BitLocker Drive Encryption, can also be used with Storage Spaces. When a new storage pool is created, the disks that are added to the pool disappear from the Disk Management MMC; the disks are now virtualized and used exclusively by the Storage Spaces technology. You can see the disk state in the storage pools view in File and Storage Services in Server Manager (on a Windows Server 2012 server) or by using the Storage Spaces Control Panel applet (on a Windows 8 client). This article focuses on using Storage Spaces on the server with Server Manager and Windows PowerShell, but all the features that I write about here are also available on the client. (The only difference is that on clients, the Storage Spaces applet, instead of Server Manager, is used for management.)
Start Server Manager. Make sure that the server you want to manage has been added to the list of servers on your Server Manager instance (see the sidebar "Windows Server 2012 Management"), then open File and Storage Services. Select the target server from the Servers tab, and then select the Storage Pools tab, which shows information about existing storage pools and disks that can be used in a storage pool. (These disks are system disks that aren't hosting any volumes.) These unused disks are shown in a Primordial Storage Space and are the building blocks from which storage pools and Storage Spaces can be created, as Figure 1 shows.
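The same view is available from PowerShell. As a quick sketch (using the Storage module cmdlets in Server 2012), you can list the primordial pool and the disks that are eligible for pooling:

```powershell
# Show the primordial pool, which holds disks not yet assigned to a storage pool
Get-StoragePool -IsPrimordial $true

# Show only the physical disks that can be added to a pool
Get-PhysicalDisk -CanPool $true | Select-Object FriendlyName, BusType, Size
```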
To create a storage pool, follow these steps:
- From the Tasks menu, select New Storage Pool to launch the New Storage Pool Wizard.
- Enter a name and an optional description for the new storage pool, and then click Next.
- On the next screen, select the physical disks that are available to add to the new pool. Also select the disks' allocation (Data Store, by default). You can allocate the disks as part of the virtual disks that you will create later or reserve them as hot spares, as Figure 2 shows. Click Next.
- Read the displayed confirmation. Click Create to complete the storage pool creation.
A storage pool is now available. The next step is to create virtual disks within the storage pool. You can then create volumes on those disks so that the OS can use them.
Storage Spaces introduces a feature that was previously available only when using external storage solutions such as SANs and NAS devices: the ability to thin-provision storage. During the creation of a virtual disk, you have two options. The first is to create the disk as fixed, meaning that all the space for the size of the virtual disk is allocated during its creation. The second is to create the disk as thin, meaning that space is taken from the pool only as needed. Using a thin-provisioned disk, you can create a virtual disk that is much larger than your actual available storage. Now, this capability doesn't mean that you can store more data in the thinly provisioned disk than is actually allocated to the pool. But volumes typically fill up over time. I might create a 10TB thin disk that initially has only 1TB of associated physical storage; as the amount of data increases and approaches 1TB, I can add another 1TB of physical storage to the pool simply by adding more disks. As the data approaches 2TB, I can add another 1TB of storage by adding still more disks, and so on. As long as I add physical disks before the virtual disk fills, there's no issue. Alerts can be generated to notify me that a storage pool is reaching its threshold, giving me time to add the required storage.
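As a hedged sketch of how to keep an eye on that threshold yourself (the pool name "Stuff" matches the example used later in this article), the Storage cmdlets expose both a pool's total capacity and how much of it is already allocated:

```powershell
# Compare allocated space against total pool capacity
Get-StoragePool -FriendlyName "Stuff" |
    Select-Object FriendlyName,
        @{Name="SizeGB"; Expression={$_.Size / 1GB}},
        @{Name="AllocatedGB"; Expression={$_.AllocatedSize / 1GB}}
```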
When you create a virtual disk, all you need to know is in which storage pool to create the disk. No knowledge of physical disks is required or even openly available. The point of Storage Spaces is to create virtual disks as needed. To create a virtual disk, follow these steps:
- Select a storage pool in which to create a new virtual disk. In the Virtual Disks section, select the New Virtual Disk task.
- Confirm that the correct server and storage pool are selected in the storage pool selection page of the wizard, and then click Next.
- Give a name and optional description for the new virtual disk, and then click Next.
- Select the storage layout, which can be simple (i.e., no data redundancy and data striped over many disks), mirrored (i.e., data duplicated to additional disks), or parity (i.e., spread data over multiple disks but add parity data to help protect against data loss during a disk failure). Prior to Storage Spaces, these layouts would have been referred to as RAID 0, RAID 1, and RAID 5, respectively. That nomenclature isn't used with Storage Spaces layouts because of differences in implementation. Make your selection, and then click Next.
- Select the provisioning type (i.e., Thin or Fixed) and then click Next.
- Specify a disk size. If you choose Thin as the provisioning type, you can select a larger size than the available physical free space. Click Next.
- A confirmation is displayed; click Create.
After the virtual disk is created, it is available in Server Manager and the Disk Management MMC, where you can create volumes and format the disk with a file system. You can see the actual amount of space that the virtual disk uses in a storage pool in Server Manager (or in the Storage Spaces Control Panel applet on a client). See the accompanying video for a walk-through of this process.
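If you'd rather stay in PowerShell than switch to the Disk Management MMC, a minimal sketch of bringing a new virtual disk into service might look like the following (the friendly name "Data2" matches the earlier example; the drive letter and label are assumptions):

```powershell
# Initialize the new virtual disk, create a partition, and format it as NTFS
Get-VirtualDisk -FriendlyName "Data2" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter J -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data2"
```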
You can also use PowerShell to manage Storage Spaces. For example, to create a new storage pool that uses three physical disks, I can use the following commands:
$phyDisks = Get-PhysicalDisk -CanPool $true
$storSub = Get-StorageSubSystem
New-StoragePool -FriendlyName "Stuff" -PhysicalDisks $phyDisks[0], `
$phyDisks[1], $phyDisks[2] -StorageSubSystemFriendlyName $storSub.FriendlyName
To create a virtual disk in the pool, I can use this command:
New-VirtualDisk -StoragePoolFriendlyName "Stuff" -ResiliencySettingName Parity -Size 10TB -ProvisioningType Thin -FriendlyName "Data2"
I can output the results of the Get-VirtualDisk cmdlet as a list to get details about a virtual disk. For example, I can use the following command to get information about the number of data copies for a mirror and its operational status:
Get-VirtualDisk | Format-List
Figure 3 shows a small part of the resulting output.
Server Message Block: Better than Ever
In a future article, I'll go into detail about the new storage protocols in Server 2012. But to explain why I think of Server 2012 as a great file services platform, I need to talk at least briefly about how those great new Storage Spaces-enabled volumes can be used.
Windows has long used SMB as its protocol of choice for remote file access. Server 2012 introduces SMB 2.2. Although seemingly only a minor version increase from the SMB 2.1 in Server 2008 R2, SMB 2.2 actually brings major changes to both the performance and the capabilities of the SMB protocol.
The overall performance of SMB has been greatly improved, making access to data via SMB essentially equivalent to direct access to the storage. This is enabled by several changes, including SMB Multichannel, which allows a single SMB session to establish multiple TCP connections over multiple NICs, if available. This change enables bandwidth aggregation because multiple NICs and CPUs can be used for network processing when Receive-Side Scaling (RSS) and multiple NICs are leveraged. This also works with Server 2012 native NIC teaming. (Yes, Server 2012 has an inbox NIC teaming solution!) High availability for file shares with failover clustering has also dramatically improved, with a new Active-Active mode that enables a CSV that's accessible to all nodes in a cluster. Multiple hosts in the cluster can simultaneously share the CSV for key workloads such as Hyper-V virtual machines (VMs) stored on a file share or even Microsoft SQL Server databases. This Active-Active file sharing allows for zero downtime and no loss of handles in the event of a failover.
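A quick way to check this behavior on a Server 2012 box is with the SMB cmdlets (shown here as a sketch):

```powershell
# Confirm that Multichannel is enabled on the server
Get-SmbServerConfiguration | Select-Object EnableMultiChannel

# Show which interfaces each active SMB session is using
Get-SmbMultichannelConnection
```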
Beyond SMB, Server 2012 has iSCSI as a role service, as part of File Services in File and Storage Services. After it’s installed, this service allows a Server 2012 server to act as an iSCSI target, enabling access to its storage from both a file level (using SMB) and a block level (using iSCSI). iSCSI targets on a server are typically virtual hard disks (VHDs), and full configuration of access and authentication services is possible.
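As a rough sketch of that configuration (the target name, VHD path, and initiator IQN below are placeholders, not values from this article), the in-box iSCSI Target cmdlets can be used like this:

```powershell
# Install the iSCSI Target Server role service
Add-WindowsFeature FS-iSCSITarget-Server

# Create a VHD-backed iSCSI virtual disk and a target, then map them together
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\LUN1.vhd" -SizeBytes 50GB
New-IscsiServerTarget -TargetName "AppTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:appserver1.contoso.com"
Add-IscsiVirtualDiskTargetMapping -TargetName "AppTarget" `
    -Path "C:\iSCSIVirtualDisks\LUN1.vhd"
```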
The Tao of Chkdsk
There is still one concern related to the use of Windows Server file services, specifically NTFS on very large volumes with many files. When something goes wrong, you might need to run the chkdsk utility to repair the problem. Chkdsk is very good at its job, but it's a long, laborious one: the utility must go through all the disk content looking for problems and then perform the repair, which -- due to the nature of disks and their speed -- can take a very long time (possibly days for large volumes with many files). The result is a period of days during which the volume is offline while the repair operation is performed. Along with considerations of performing a data restore after a disaster, this is why NTFS volumes are often kept below a certain size: to ensure that chkdsk can be run in a reasonable period (i.e., a few hours) if the worst happens.
The new Resilient File System (ReFS), which will become more prevalent in future versions of Windows, aims to reduce the chances of corruption. NTFS itself has become more resilient, with self-healing capabilities, but chkdsk is still needed at times. Server 2012 has solved once and for all the concerns about running chkdsk on even the largest volume.
Chkdsk is slow. As I already mentioned, the tool must go through the entire disk and all its content looking for problems, which takes time. As it finds problems -- which will affect only a minuscule number of actual files -- chkdsk fixes them. These fix operations take almost no time (i.e., seconds). The problem is that chkdsk takes the volume offline, making the content unavailable while it performs the health checking and fixing.
Server 2012 breaks the chkdsk process into two parts. The first part scans the disk and its data, looking for problems. If a problem is found, it is marked and noted as requiring a fix. The big difference is that the volume stays online while this long search-and-check process runs, because no fix is actually being performed. Once the scan completes, if problems need to be fixed, chkdsk is run again in spotfix mode, which takes the volume offline only while it fixes the problems that were identified during the scan. The volume is now offline for seconds instead of hours or days, because the scan process has been separated from the actual repair process. Using chkdsk, the two commands are as follows. The first will take a long time because it's performing the scan, but there is no impact to volume availability. The second takes the volume offline (or triggers the fix at the next reboot).
Chkdsk /scan J:
Chkdsk /spotfix J:
The accompanying video provides a full example.
If you're using PowerShell, use the following commands:
Repair-Volume -Scan D
Repair-Volume -SpotFix D
If CSVs are used, there is actually zero downtime when running the spotfix action. The reason is that CSV adds another level of indirection between the disk and how it's accessed. Also, CSV can actually pause I/O operations for about 20 seconds. This means that when the spotfix action is run, CSV just pauses I/O to the volume while it's taken offline and fixed. So, as far as users of the CSV volume are concerned, there was just a slight delay in access and no actual offline action or loss of file handles.
A Powerful Storage Solution
If you asked me to describe Storage Spaces in five words or fewer, my answer would be "a poor man's SAN." But as I've tried to demonstrate, the Storage Spaces functionality, combined with the in-box iSCSI and SMB 2.2 capabilities, actually provides a very powerful storage solution for servers and desktops alike. This isn't a poor man's anything but rather a very powerful storage platform. When other technologies, such as the new data deduplication feature, are also considered, the use cases for the Windows file services platform explode. Going forward, many organizations can start treating Windows-based file servers as first-class storage citizens.
Sidebar: Windows Server 2012 Management
The next version of Windows Server, Windows Server 2012, embraces the management philosophy "the power of many, the simplicity of one." Even with virtualization, most server management is still performed either by connecting to the OS via Remote Desktop or by remotely connecting to one server at a time via Server Manager. With Server 2012, all management can and should be performed remotely. After installing the Remote Server Administration Tools for Server 2012 on a Windows 8 client machine, start Server Manager and create groups of servers, which you can then manage as an entire group. Dashboard views (such as the one in Figure A) make it easy to see problems on any server in the group and to perform actions on multiple servers simultaneously. Typical management actions, such as adding or removing roles and features, are possible, as are configuration tasks such as Storage Spaces actions. This doesn't mean that you can't run Server Manager locally on servers. But as the trend to optimize management continues, managing a single server at a time makes less sense. As more servers run Server Core instead of a full installation, management will need to be performed remotely unless you want to use Windows PowerShell for all administration.