Desktop virtualization is the first step
Virtual Desktop Infrastructure (VDI) is a hot topic these days. VDI is a form of desktop virtualization in which the user’s entire desktop experience is hosted in the data center. The user remotely connects to the desktop from some type of client device; none of the user’s desktop, applications, or data actually resides on the local device.
I have several upcoming articles in which I discuss what VDI is, where it fits best, the technologies involved in VDI, creating the right infrastructure, and how Citrix’s XenDesktop enhances Microsoft’s approach to VDI. In this first article, however, I focus on some technologies that at first glance seem to have nothing to do with VDI. These desktop virtualization solutions aren’t actually part of VDI but are critical components of a successful VDI architecture.
The most important aspect of a user’s computer experience is the applications, which perform functions and manipulate both local data and data on servers. The OS is the primary tool for running applications and managing data. Users often customize their OS environment with special backgrounds, screen savers, shortcuts, and favorites. Although these changes might seem disposable to an administrator, users might spend hours trying to find a lost application or data, or recreate their shortcuts. Thus, maintaining the user’s environment is important.
The desktop environment comprises three main areas: user data and settings, applications, and the OS. The underlying hardware on which the OS is installed is also important.
In most desktop environments, all these components are rigidly intertwined with one another. The OS is locally installed on the user’s desktop computer; the applications are installed directly onto the OS, making changes to the file system, registry, and other OS components; and the user data and settings are stored on the local file system. These layers are often depicted as Figure 1 shows because we typically install the applications onto the OS, the user customizes the OS and applications, and the user has data—but in reality, the applications, user settings, and data are all bound to the OS layer. This tight, localized coupling introduces several problems:
- Storing data only on the local client machine introduces the risk of data loss through hardware loss, hardware failure, volume corruption, or accidental deletion. The data is also inaccessible when the user works from different hardware. The same problems apply to user settings and configuration.
- A failure of the OS or desktop hardware means complex procedures to recover information and settings. Installed application listings must be obtained before the hardware can be replaced or the OS reinstalled; then all applications must also be reinstalled.
- An application failure results in complex troubleshooting and uninstallation processes because of changes that must be made in multiple places on the OS.
- Deploying applications and application updates can be very complex and time consuming.
The solution to all these problems is to virtualize each of the layers, making them independent and abstracted from one another. The result is a flexible environment that’s easy to deploy, easy to maintain, and able to deliver a consistent “anywhere access” experience.
The desire to separate elements of a system to make it more flexible is common: Many products advertise an easy-to-switch modular design. We’ve all bought those all-in-one TV/DVD/video combinations only to curse when the video part breaks and we’re left without TV or DVD while the video component is repaired. But with separate TV, DVD, and video components, it’s easy to swap out one component without losing functionality of the others. That’s what we want for the user’s desktop experience—for the OS, applications, user settings, and data to be separate blocks that are pulled together and assembled depending on the logon environment. This environment could be a local desktop, a VDI OS, or a Terminal Services/Remote Desktop Services session. Because the components are separate blocks, there’s no delay waiting for installation or configuration. The components are just layered on top of each other to provide the full desktop experience.
In this article I discuss the technologies that let us separate these typically heavily intertwined layers to allow on-the-fly assembly that produces a complete desktop environment no matter where a user logs on. Figure 2 shows a sample environment that includes various solutions.
User Data and Settings
The goal is to be able to extract a user’s settings and data from the desktop environment so that the settings and data are protected and available no matter where the user connects—for example, the user’s own desktop, someone else’s desktop, or a remote desktop session to a corporate presentation virtualization solution such as Terminal Services (Remote Desktop Services) or Citrix’s XenApp, or a VDI-hosted remote client OS. Windows has long had technologies to abstract user settings and data, but Windows 7 tunes and enhances them to the point where they no longer degrade the user experience and instead improve the availability of settings and data.
Let’s consider user settings first. Each user has a profile on his or her machine, under the C:\Users folder for Windows Vista and later. This profile consists of several files and folders, including the ntuser.dat file that contains all the user-specific registry information. Although this file is small, it constitutes the bulk of a user’s customization. The profile also contains Internet Favorites, documents, searches, and other types of information. I cover data items later in the article.
To achieve a consistent user experience, a user’s profile must be available no matter where the user connects. Thus the profile must be stored on the network. This capability, called roaming profiles, has been possible for a long time in Windows.
In the past, organizations were reluctant to use roaming profiles because of how they worked. A user’s profile was simply uploaded to the network during logoff, which caused a long delay in logging off. Windows 7 introduced background synchronization of roaming profiles. This feature is disabled by default, but you can use Group Policy to enable it. Background synchronization syncs the user’s profile at specific times or at a certain periodic time interval. Periodically synchronizing data means there’s less data to synchronize when it’s time to log off, which results in a far smoother end user experience.
Another reason that roaming profiles can be problematic is that before Windows 7, user data was left as part of the profile, which resulted in a huge profile. In reality, roaming profiles were never designed to handle the replication of user data. Even with the advancements in profiles in Windows 7, the user data must still be stripped from profiles—which leads us to the topic of folder redirection.
Several data storage locations are available to users, such as the Documents and Pictures folders, which by default are subfolders of the user’s profile. If roaming profiles are used, these folders and the data they contain are replicated as part of the profile replication process, which as we already established isn’t optimal. Folder redirection is an alternative technology that lets us configure the standard folders to point to a specific location on the network. For example, when a user accesses the Documents folder, he or she is actually accessing a network location—although this fact is entirely transparent to the user. You use Group Policy to configure folder redirection.
One of the biggest changes from Windows XP to Vista is the restructuring of the user profile namespace to allow greater separation of the various types of data. In addition, the number of distinct profile folders that can be redirected increased from 5 distinct storage areas in XP to 13 in Vista, which includes separation of the Documents, Pictures, Videos, and Music folders. In XP, Pictures, Music, and Videos are all subfolders under Documents and therefore follow that folder’s redirection configuration. In Vista, you can opt not to redirect the Music and Videos folders. Folder redirection now lets you redirect all the user data you want, which makes the user profile very small and lets roaming profiles easily handle the remaining data synchronization (i.e., ntuser.dat and some other minor data files).
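Conceptually, folder redirection is just a per-user lookup from a known folder name to a network location, with unredirected folders falling back to the local profile. The sketch below models that lookup in Python; the UNC paths and user name are hypothetical, and real Windows clients resolve known folders through the shell and Group Policy, not application code.

```python
# Illustrative model of folder-redirection lookup (not how Windows
# implements it). All paths and the user name are hypothetical examples.

LOCAL_PROFILE = r"C:\Users\alice"

# Per-user redirection policy: known folder -> network location.
REDIRECTED = {
    "Documents": r"\\fileserver\redirect$\alice\Documents",
    "Pictures":  r"\\fileserver\redirect$\alice\Pictures",
}

def resolve_known_folder(name: str) -> str:
    """Return the redirected path if policy defines one, else the local default."""
    return REDIRECTED.get(name, LOCAL_PROFILE + "\\" + name)

print(resolve_known_folder("Documents"))  # resolves to the network share
print(resolve_known_folder("Music"))      # stays in the local profile
```

The transparency described above comes from the fact that the application asks for “Documents” by name and never sees which branch of the lookup answered.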
The Offline Files feature (also known as Client Side Caching) lets you store a local copy of the user’s data from the network on certain configured machines that still require access to data even when disconnected from the network (e.g., laptops). Offline Files synchronizes the data changes at a delta level when network connectivity is restored. This delta-based replication means that only changes to files are replicated, instead of the entire file.
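The delta-level synchronization idea can be sketched as follows: divide a file into fixed-size blocks, hash each block, and re-send only the blocks whose hashes differ. This is an illustrative model only, not the actual Offline Files protocol, and the block size here is artificially small.

```python
import hashlib

# Sketch of delta-level sync: hash fixed-size blocks and identify which
# blocks changed, so only those need to cross the network.
BLOCK_SIZE = 4  # artificially small for illustration

def block_hashes(data: bytes) -> list:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Return the indices of blocks that must be re-sent."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

old = b"The quick brown fox."
new = b"The quack brown fox."   # only one block differs
print(changed_blocks(old, new))  # -> [1]
```

A one-character edit in a large file thus costs one block of transfer rather than the whole file, which is why delta replication matters so much on slow or intermittent links.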
Applications
When we build a machine, the first thing we do after installing the OS and its updates is install the applications, which can include Microsoft Office, line of business (LOB) applications, security services, and other types of software. Installing software typically takes a significant amount of time because of the setup routines and configurations required to update file systems, add registry values, and register resources. Typically, only organization-wide applications are installed during OS installation. Applications that are user or department specific might be installed at first logon, which adds further delays.
In addition to time delays, the following installation problems can occur:
- Application-to-application compatibility—Because of how applications modify the OS, they often cause incompatibilities with other applications or even with different versions of the same application. Thus significant regression testing is necessary before putting a new or updated application into production, and certain application combinations might be prohibited.
- OS bloat—As each piece of software is installed, extra services are added that use resources but that don’t always provide value. In addition, the registry increases in size, which uses memory and slows down the system. Even when applications are later uninstalled, pieces are often left behind on a system.
- Application updates—The processes for updating applications can vary widely, which requires significant planning and infrastructure for deployment.
These problems are all related to the fact that applications are installed on the OS. Imagine users logging on to different computers, remote sessions, and VDI environments—and all needing different applications. Installing every application that every user might ever need on every OS environment simply isn’t practical and would result in a hugely bloated OS that would be a nightmare to maintain because every application update would have to be applied to every OS instance. Using traditional software installation methods as users log on to different environments isn’t feasible because of the time expense, not to mention the additional problem of uninstalling applications on logoff.
One solution is traditional presentation virtualization, in which applications are installed on terminal servers, then executed on the terminal server while the application’s window displays on the user’s local desktop. This solution requires a significant server infrastructure to host the application execution and prohibits offline application execution. In addition, we still have the problem of all the applications needing to be installed on several terminal servers. However, this solution might work for certain applications—for example, an application that requires access to large amounts of data that’s housed in the data center. When such an application is run in the data center via presentation virtualization, the network traffic associated with the data access is restricted to the data center network, which is typically very fast. Running the same application locally on a user’s desktop sends all the data over the network, which uses a lot of bandwidth and slows the application execution.
The other major application solution is application virtualization. The big difference between application virtualization and presentation virtualization is that in application virtualization, the applications actually execute on the local OS instance. And even more importantly, the applications execute without needing to be installed on the local OS thanks to a per-application virtual environment that lets applications execute without changing the local OS.
Remember that when an application is installed, changes are made to the file system (e.g., the application’s executables and DLLs are placed in C:\Program Files), to the registry, and to services. Application virtualization works by capturing all these system changes during a process known as sequencing (Microsoft App-V’s term), which converts an application’s installation routine into a binary stream that can be used with application virtualization. The captured changes are saved in virtual layers, such as the file system, registry, services, fonts, OLE, and configuration layers, which are then loaded into the virtual environment when the application launches. The application thinks its files, registry, and services are all on the local OS, but in reality the application sees only the virtual layers that the application virtualization technology provides. Nothing is actually written to the underlying OS.
Figure 3 shows the application virtualization layers. Note that although virtualized applications can’t write to OS resources beyond such items as user configuration and data, they can read information from the host OS.
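The read-through, write-shadow behavior just described can be modeled as a copy-on-write overlay: reads fall through to the host OS unless the application has written its own value, and writes land only in the per-application virtual layer. The sketch below is a conceptual model, not App-V’s implementation, and the registry key is a hypothetical example.

```python
# Conceptual model of a per-application virtual layer: the app reads
# through to host resources but its writes never touch the host.

class VirtualLayer:
    def __init__(self, host: dict):
        self.host = host    # read-only view of host resources (files, registry)
        self.layer = {}     # per-application virtual layer; all writes go here

    def read(self, key: str):
        # The app sees its own changes first, then the host's values.
        if key in self.layer:
            return self.layer[key]
        return self.host.get(key)

    def write(self, key: str, value):
        self.layer[key] = value   # the host dict is never modified

host = {r"HKLM\Software\Vendor\Version": "1.0"}
app = VirtualLayer(host)
app.write(r"HKLM\Software\Vendor\Version", "2.0")
print(app.read(r"HKLM\Software\Vendor\Version"))   # the app sees "2.0"
print(host[r"HKLM\Software\Vendor\Version"])       # the host still has "1.0"
```

Because each application gets its own overlay, two applications can “install” conflicting versions of the same key or DLL without ever seeing each other.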
App-V is Microsoft’s application virtualization solution. By default, App-V works as a streaming technology. The first time a virtualized application is launched on an OS, the App-V client communicates with an App-V streaming server that sends to the client the part of the application’s binary stream that’s necessary to initially launch the application. This portion of the binary stream is known as Feature Block 1 and is typically about 10 to 20 percent of the total binary stream. It’s sent to the client very quickly—in Office 2010, about a 3-second delay occurs between the user clicking an application icon and the application window launching.
The App-V sequencing process determines what needs to be placed in Feature Block 1 by actually launching the application during the sequencing and monitoring the parts of the stream needed. The necessary items go to Feature Block 1 and the rest goes to Feature Block 2. After the application launches, Feature Block 2 is sent to the client in the background. This binary stream is cached on the local client OS, so if the application is launched again the stream doesn’t have to be sent over the network again.
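The Feature Block split amounts to a simple partition: whatever the monitored launch touched goes into Feature Block 1 and is streamed first; the remainder becomes Feature Block 2 and arrives in the background. The sketch below illustrates the partition with hypothetical content names; real sequencing operates on the packaged binary stream, not named items.

```python
# Sketch of the Feature Block partition: content touched during the
# monitored launch streams first (FB1); the rest follows later (FB2).
# The content names below are hypothetical.

def split_feature_blocks(all_items, used_at_launch):
    fb1 = [item for item in all_items if item in used_at_launch]
    fb2 = [item for item in all_items if item not in used_at_launch]
    return fb1, fb2

items = ["exe_header", "main_dll", "help_files", "templates", "spellcheck"]
launch_needed = {"exe_header", "main_dll"}   # observed during sequencing

fb1, fb2 = split_feature_blocks(items, launch_needed)
print(fb1)  # streamed immediately so the application can launch
print(fb2)  # streamed in the background after launch
```

This is why the initial download is only the 10 to 20 percent the article mentions: the launch path is a small fraction of most applications.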
Although I said application virtualization makes no changes to the local OS, it obviously caches this stream. However, the stream is cached into one file in the All Users profile and this single cache file is shared by all users and applications. Applications don’t write anything anywhere else in the file system or make any configuration or registry changes. This cache also means that virtualized applications are available even when the machine is offline. In a VDI environment we can actually configure this cache to be placed on a file share that’s common to all the VDI client virtual machines. This approach eliminates the need for each client to have its own App-V cache, which saves disk space and expedites the initial application launch.
App-V’s default distribution method is streaming. However, App-V also supports the creation of an MSI file that contains the complete stream for other deployment technologies, such as Group Policy and Microsoft System Center Configuration Manager (SCCM) 2007 R2. We can even leverage traditional file shares and Microsoft IIS for App-V stream distribution. A lot of options are available to suit different organizational needs and infrastructures.
Note that basic cut and paste and object linking and embedding still work between virtualized applications, but for deeper integration we can create dependencies between virtualized applications. This dynamic suite composition lets separate virtualized applications see each other in a controlled manner.
Application virtualization cures several deployment problems. For example:
- Applications can be rapidly deployed on a per-user, as-needed basis, effectively in real time, which lets administrators slot applications into the desktop environment as necessary.
- Application-to-application compatibility issues are solved because separate virtualized applications no longer see one another. Applications have their own unique virtual file systems, registries, etc. This isolation also removes most of the regression testing needed when introducing new or updated applications.
- Rolling out updates is a simple process. The sequenced application is updated only once, and App-V rolls out the changes to the stream to all clients, without any user action.
- The OS doesn’t suffer from bloat because applications aren’t actually installed on the OS.
OS Virtualization
Organizations that are moving to Windows 7 might face application compatibility problems. If no newer version of an application is available that’s compatible with Windows 7, and no similar product exists, OS virtualization is an option. Several ways exist to virtualize the client OS.
Windows 7 introduced Windows XP Mode, which allows applications that won’t run on Windows 7 to execute on a local XP virtual machine (VM). The application window displays seamlessly on the user’s main Windows 7 desktop, transparently to the user. In such a case, we’re still running a separate legacy OS on the user’s Windows 7 desktop that needs to be managed. Microsoft Enterprise Desktop Virtualization (MED-V), which is part of the Microsoft Desktop Optimization Pack (MDOP), simplifies this process by providing a centralized method to not only distribute and update the XP VM but also manage shortcuts on the desktop and URL redirections to Internet Explorer (IE) 6.0. This approach provides the added benefit of solving Windows 7’s IE compatibility problem.
Another approach to client virtualization is to virtualize the user’s entire desktop OS in the data center, giving the user a local client to enable remote connectivity to the data center–hosted client OS. This local client could be a thin device, a legacy PC running Windows Fundamentals for Legacy PCs, or any other type of device or OS that supports the RDP protocol (in the case of a pure Microsoft solution). A benefit of this approach is that because the user’s entire desktop is housed in the data center, sensitive data never actually leaves the data center. In addition, the desktop is available no matter where the user connects, including from a personal machine at home. This solution is great for disaster planning.
Putting It All Together
When a user logs on to a new OS instance for the first time, whether it’s a local fat desktop, a session on a terminal server, or a VDI-hosted client OS, the following steps occur:
- The user logs on to the OS using his or her Active Directory (AD) account.
- After user authentication occurs, the parts of the profile that weren’t abstracted through folder redirection are pulled down; this minimal amount of information downloads very quickly. All the customizations are now present in the user’s session, in addition to the folder redirection settings; thus all the user’s data, favorites, etc. are present.
- The App-V client communicates with the App-V management server to determine the applications that apply to the logged on user and subsequently populates shortcuts on the desktop and Start menu, in addition to configuring the relevant file type associations.
- The user now has a fully functional desktop and can launch applications and access data with no delays.
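The logon steps above can be sketched as a simple assembly pipeline: each layer is fetched independently and composed, with nothing installed at logon time. Every name, share path, and application list below is hypothetical; the point is the layering, not any real API.

```python
# Sketch of on-the-fly desktop assembly at logon. All server names,
# share paths, and helper functions here are hypothetical examples.

def load_roaming_profile(user):
    # Roaming profiles pull down the slimmed profile (ntuser.dat etc.).
    return {"ntuser.dat": f"\\\\profiles\\{user}"}

def apply_folder_redirection(user):
    # Folder redirection points known folders at network locations.
    return {"Documents": f"\\\\fileserver\\redirect$\\{user}\\Documents"}

def query_appv_server(user):
    # The App-V client asks the management server which apps apply.
    return ["Word", "LOB App"]

def assemble_desktop(user: str) -> dict:
    session = {"user": user}
    session["profile"] = load_roaming_profile(user)
    session["folders"] = apply_folder_redirection(user)
    session["apps"] = query_appv_server(user)
    return session

desktop = assemble_desktop("alice")
print(sorted(desktop))  # -> ['apps', 'folders', 'profile', 'user']
```

Because each step is a lookup rather than an installation, the same assembly works identically on a fat desktop, a terminal server session, or a VDI client.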
Desktop virtualization and VDI aren’t the same. VDI leverages desktop virtualization technologies to provide a data center–hosted client OS. Although VDI might be the best solution for certain users, every desktop environment can benefit from some form of desktop virtualization—whether it’s application virtualization, user state virtualization, or OS virtualization. In planning your desktop architecture, especially as part of a Windows 7 rollout, make sure desktop virtualization is considered as part of that architecture. The additional upfront work yields huge long-term benefits. Desktop virtualization not only provides you with a more agile environment but also saves infrastructure and management costs.