Are XenServer’s and Hyper-V’s Hypervisor Approach actually a Superior Solution for Client-Side Virtualization? PART 2

Read Part 1 of this article here.  Now, continuing on with my story…

Now, in the beginning, this was a gutsy play by these two vendors.  For the “pass through” to work, servers needed special processor instructions (remember our old friends, Intel VT and AMD-V?).  For a long time, those weren’t available everywhere.  And in Hyper-V and XenServer’s early days, that requirement was a source of much derision.

But, today, those processor extensions are essentially everywhere.  And they’re even starting to leak into client and consumer devices, like laptops.  This is a good thing.
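If you’re curious whether a given machine exposes those extensions, here’s a minimal sketch in Python. It assumes a Linux system (it parses /proc/cpuinfo), and the function name `has_hvm_extensions` is my own invention, not any vendor tool:

```python
# Linux-only sketch: check whether the CPU advertises hardware
# virtualization extensions. "vmx" flags Intel VT-x, "svm" flags AMD-V.
def has_hvm_extensions(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return None  # not Linux, or /proc unavailable
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu vme ... vmx ..." -> collect the flag tokens
            flags.update(line.split(":", 1)[1].split())
    return bool({"vmx", "svm"} & flags)

print(has_hvm_extensions())
```

On Windows, the equivalent check lives in tools like Microsoft’s coreinfo or the firmware setup screen rather than /proc.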

This movement into the client-side world neatly illustrates my point about this whole new “virtualization-at-the-desktop,” a.k.a. client-side virtualization, angle.  Remember that with ESX’s approach, the drivers must be encoded into the hypervisor.  Again, that’s an excellent idea for the comparatively limited range of server hardware in use today.

But it’s not so great when you want to expand virtualization to every client-side device on the market.  The sum of all the drivers needed by today’s laptops is multiple orders of magnitude larger than the set of drivers needed by today’s servers.  And encoding those drivers into the hypervisor would, for VMware, be a project on the scale of developing Windows itself.  That’s expensive.

Not to mention the fact that adding that many drivers into a hypervisor would make that hypervisor…well…as big as Windows itself (sort of).

So, back to Hyper-V and XenServer.  Remember again that these two hypervisors simply inherit whatever drivers are already in their parent partition.  Live Migration is possible because the “pass through” process abstracts (again, soft word, no flaming!) those physical drivers as they’re passed through.  To me, this means that – let’s consider Hyper-V alone for a minute – every single driver that’s already in the Windows OS can automatically be part of that vendor’s client-side hypervisor.
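To make that architectural difference concrete, here’s a conceptual Python sketch. The class names, method names, and driver sets are all hypothetical stand-ins, not any vendor’s actual API; the point is only to contrast a monolithic hypervisor that must ship its own driver table with a parent-partition design that delegates driver support to the OS in its parent partition:

```python
# Conceptual sketch only -- names and driver sets are hypothetical,
# not a real hypervisor API.

class MonolithicHypervisor:
    """ESX-style: drivers must be built into the hypervisor itself."""
    def __init__(self, builtin_drivers):
        self.drivers = set(builtin_drivers)

    def supports(self, device):
        return device in self.drivers  # no built-in driver, no device


class ParentPartitionHypervisor:
    """Hyper-V/XenServer-style: device access is passed through to the
    parent partition, so that OS's entire driver set comes along free."""
    def __init__(self, parent_os_drivers):
        self.parent_drivers = set(parent_os_drivers)

    def supports(self, device):
        return device in self.parent_drivers


windows_drivers = {"nic_a", "gpu_b", "touchpad_c", "webcam_d"}  # stand-in set
esx_style = MonolithicHypervisor({"nic_a", "gpu_b"})            # server-focused subset
hyperv_style = ParentPartitionHypervisor(windows_drivers)

print(esx_style.supports("touchpad_c"))     # laptop device, no built-in driver
print(hyperv_style.supports("touchpad_c"))  # inherited from the parent OS
```

The design trade-off the sketch captures: the monolithic model keeps the hypervisor small and vetted, while the parent-partition model’s supported-hardware list grows automatically with the parent OS.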

That big wow is the source of my excitement today.  If I’m right, then Citrix and Microsoft are fundamentally well-positioned to succeed in this new use of virtualization.

Now, obviously, we’re not completely there yet.  Citrix’s XenClient is a great new technology, but the key word there is “new”.  And Hyper-V only works atop Windows Server at this point; it hasn’t yet been extended to the client OS.  But the necessary hardware is indeed catching up, with today’s laptops already shipping with the required processor extensions.  And VMware is assuredly seeing the light when it comes to the hybrid hypervisor model: they’ve already added a set of paravirtualized drivers to their architecture.

What remains to be seen is whether the industry will embrace this new technology at the client side.  Most of us have experienced enormous challenges doing this in the past (VMware ACE, the first edition of XP Mode, others).  As a result, most of us are a little gun-shy about going down this road, and that’s not necessarily a bad thing right now.  But one important fact is that many of those early attempts were Type 2 hypervisors, running “atop” an existing OS rather than “alongside” it, and as a result they often suffered from performance problems.

In my worldview, I see some fantastic promise for client-side hypervisor technologies over the next 24 months or so.  What’s critically important is that there be a business model for their implementation, one that is much more compelling than the model for the Type 2 approach.  And I see some fantastic promise for the two underdog virtualization providers in bringing this technology everywhere.

What about you?  What are your thoughts on this entirely new concept of client-side virtualization?  Is this something you’d implement in your environment today?  Comment below and let me know what you’re thinking.

Discuss this Blog Entry (1 comment)

Waethorn (not verified)
on May 18, 2010
What you fail to realize is that the parent partition is only designed for managing VMs. Putting workloads on the parent partition causes performance problems for the hypervisor's scheduler and defeats the purpose of using virtualization for multiple workloads. Type 2 hypervisors are a better option for the client because the parent OS manages the VM process as a workload in tandem with its own workload.

If you're talking about abstracting the hardware for a single workload for deployment options, there isn't much point to that, given that Windows Vista introduced HAL independence.

On both VM platforms, there is still far too much hardware that's emulated, simply because it can't be properly virtualized (you'd eat up your video card's physical frame buffer in no time - and then there's Direct3D-to-OpenGL translation for multi-platform setups...). Video and audio are emulated, meaning it just isn't a good option for any amount of media work, and new software is taking advantage of hardware acceleration, which is absent with emulated hardware.

System virtualization on the client is getting eaten up by application and presentation virtualization anyway. That's where the future of client-side virtualization lies.
