Last week, about 35 journalists (including my colleagues Paul Thurrott, Jeff James, and Mike Otey) trooped into a conference room in Building 43 on the Microsoft campus, signed NDAs, and were reminded forcefully of the consequences of any leaks. We then spent the next two days being stunned by the new capabilities of Windows Server 8 as architect after general manager after program manager showed off a mere fraction of the more than 300 new features that are forthcoming in this new OS version.


And the new features they showed us in this pre-beta version aren't just small features, either. Windows Server 8 architect Jeffrey Snover introduced PowerShell version 3, which has exploded from about 300 cmdlets to more than 2,300 and is one of the core management engines of the OS. Jeff Woolsey, Principal Program Manager Lead for Windows Virtualization, spent two full hours having his team demo, at breakneck speed, just the most significant new Hyper-V capabilities. Sandeep Singhal, general manager of the Windows Networking group, had his team members demonstrate new feature after new feature until we were feature-numb. There was literally a line of product managers, all wearing sky-blue Reviewer's Workshop polos, waiting on deck at the back of the room; each would come up to the podium, demo, then exit through a side door by the podium.


There are several prevalent themes to Windows Server 8. The most significant theme, in my opinion, is that it's a "cloud-enabled OS." What Microsoft means by this is that the OS needs to scale up, out, and in (that is, consolidation) to an unprecedented degree. The OS has to "just work," by supporting standards and having some degree of self-repair. (A great quote from Jeffrey: "You're the computer; YOU figure it out.") Continuous availability is critical, leading to advances in fault tolerance so that very few issues must be attacked immediately. Management of a cloud-enabled OS must be able to view and correct multiple technologies across multiple machines from a single dashboard, and Server Manager has been considerably enhanced to handle this task. Another result is a great increase in the abstraction of physical resources. Everything that still had a direct association with the real world, disk and network in particular, can now be abstracted away from users and applications within the OS itself. For example, the new Storage Spaces feature takes commodity SATA and SAS drives in JBOD enclosures and puts them into resilient storage pools, which can then be allocated as thin-provisioned disks (ones that don't consume actual blocks until needed) to the rest of the OS.
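To give a feel for the Storage Spaces workflow just described, here's a rough PowerShell sketch. This is an illustration based on the cmdlet names shown at the workshop, not official guidance; the pool name, disk name, and exact parameters are my own placeholders, and syntax in this pre-beta could easily change before release.

```powershell
# Illustrative sketch only; names and parameters are assumptions.
# Gather commodity disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Combine them into a resilient storage pool
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve out a mirrored, thin-provisioned space: it advertises 10TB
# but consumes physical blocks only as data is actually written
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" `
    -ResiliencySettingName Mirror -ProvisioningType Thin -Size 10TB
```

The thin-provisioning flag is what lets administrators promise more capacity than the pool physically holds, then grow the pool by adding drives as consumption approaches the limit.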


I think we were all most impressed with the advances made for Hyper-V, particularly for VM mobility. Ben Armstrong demonstrated SNO (Shared Nothing) Live Migration of a VM between two unclustered Hyper-V hosts with only direct-attached storage, using only a crossover cable; no expensive shared storage required. Live Migration can now migrate as many VMs simultaneously as your hardware can support, with no artificial limits. Hyper-V now has Live Storage Migration to move VHDs with no interruption in service. It has Hyper-V Replica, a simple yet powerful replication of VMs between local or remote Hyper-V hosts, which Woolsey described as "disaster recovery for everyone." How about Fibre Channel support for VMs? Done. How about removal of essentially all practical Hyper-V maximums, with Windows Server 8 Hyper-V hosts supporting up to 2TB of RAM and up to 160 logical processors, and VMs supporting up to 32 virtual processors and 512GB of memory? Done. BitLocker-encrypted clusters. Offloaded Data Transfer (ODX) to reduce VM migration between cluster nodes to seconds. Woolsey ran out of time before he ran out of features.
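For the shared-nothing scenario Armstrong demonstrated, the one-line PowerShell equivalent would look something like the sketch below. Treat it as a hedged illustration: the VM name, host name, and destination path are invented for the example, and this pre-beta cmdlet syntax may shift before release.

```powershell
# Illustrative sketch only; "SQL01", "HV-Host2", and the path are placeholders.
# Move a running VM, including its VHDs, from this unclustered host
# to another unclustered host over the network, with no shared storage
Move-VM -Name "SQL01" -DestinationHost "HV-Host2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\SQL01"
```

The interesting part is what's absent: no cluster, no SAN, no export/import step. The storage streams to the destination while the VM keeps running, then the final memory state cuts over.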


Some demonstrations were of welcome improvements to existing features. The DirectAccess demo, for example, was most impressive because of its simplicity. DirectAccess is a network feature that allows users to access domain resources as if they were on the corporate intranet, from anywhere they can gain Internet access, without a VPN. However, in Windows Server 2008 and R2, the infrastructure requirements to implement DirectAccess are quite steep, and despite the attractiveness of the technology I suspect it hasn't been adopted as widely as Microsoft would have wished. In Windows Server 8, implementing DirectAccess on an ordinary IPv4 network consists of choosing one of three configurations in the installation wizard and pressing Install. That's it. I predict Microsoft will track a dramatic rise in DirectAccess adoption with Windows Server 8.


We ended the workshop with a half-day lab session at Microsoft's Enterprise Engineering Center (EEC), an amazing facility full of cutting-edge computing equipment dedicated to helping customers evaluate and test Microsoft software and vendor hardware. This place is nirvana for hardware geeks. Among many other devices, it features a 10TB SAN built entirely out of RAM, and one of the world's largest high-performance computing clusters. (Look for an upcoming article, with photos, on our visit.) We spent the morning working through 16 different preconfigured labs that allowed us to test drive many of the OS's new features, with helpful program managers answering our questions and collecting our feedback and impressions as they continue to refine the product.


I recently watched Brainstorm, a favorite sci-fi movie of mine from the early '80s. Cliff Robertson plays the CEO of a technology company, with Christopher Walken (in a rare role as a non-scary guy) as a research scientist. When Walken demonstrates a breakthrough in his virtual-reality research, the impressed CEO tells him, "You knocked my socks off!" It's safe to say that if this Windows Server version keeps the features I saw in this workshop (and let me point out again that everything demonstrated was functional in this pre-beta release), Microsoft has knocked my socks off. Windows Server 8 is as significant an OS update as Windows 2000 was in its day.


What has changed at the company to enable it to do all this? Instead of the previous practice of beta-testing and then withdrawing a new feature, this time Microsoft under-promised and over-delivered. I asked Jeffrey Snover and Mike Neil (GM, Windows Server Planning and Management) this question. They attribute their success to a new approach of scenario-based engineering, championed by their boss Bill Laing, Corporate Vice President of the Server and Cloud Division. In this approach, initial planning for the OS was much more thorough than usual, collecting information from 200 Customer Focused Design sessions and more than 6,000 customer statements. This data was boiled down to a relatively small number of scenarios that represented the vast majority of these customers' needs, and the product was designed to meet those scenarios. Development teams with interdependencies (such as virtualization, storage, and networking) actually talked to each other and worked in an integrated manner. Common communication, especially between engineering types, is often an uncommon occurrence in large corporations. We could see the benefits of this communication whenever a new feature was described, because the feature often took advantage of another new feature (for example, BranchCache using the new data deduplication technology built into the OS). The fact that these integrated features work, this early in the product cycle, is all the more impressive.


Microsoft has shown us a really impressive product; even in its pre-beta form, it demonstrates capabilities well beyond Windows Server 2008 R2. It's safe to say it exceeded the expectations of all the reviewers, and I think you'll reach the same conclusion.