Live Migrating a VM on a Hyper-V Failover Cluster fails – Processor-specific features not supported

 

I have been working on setting up a small cluster of Hyper-V hosts (running as VMs), nested under a set of physical VMware ESXi 5.0 hosts. Bear in mind that I am quite new to Hyper-V; I have only ever really played with single-host Hyper-V setups in the past. Having just finished creating a Hyper-V failover cluster in this nested environment and configuring CSV (Cluster Shared Volume) storage for the Hyper-V hosts, I created a single VM to test the “live migrate” feature of Hyper-V. Upon telling the VM to live migrate from host “A” to host “B”, I got the following error message.

“There was an error checking for virtual machine compatibility on the target node.” The description reads: “The virtual machine is using processor-specific features not supported on physical computer ‘DEVHYP02E’.”

 

So my first thought was: perhaps there is a way to mask processor features, similar to the way VMware’s EVC (Enhanced vMotion Compatibility) masks host CPU features to maintain compatibility? If you read the rest of the error message, it does seem to indicate that there is a way of modifying the VM to limit the processor features used.

 

So the solution in this case is to:

  • First of all, power down your VM
  • Using Hyper-V Manager, right-click the VM and select “Settings”
  • Go to the “Processor” section and, under “Processor compatibility”, tick the “Migrate to a physical computer with a different processor version” option
  • Apply the settings
  • Power the VM up again
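For reference, the same change can be made with PowerShell if the Hyper-V module is available on your hosts (Windows Server 2012 onwards). This is just a minimal sketch, assuming a VM named “TestVM01” – adjust for your own environment:

    # Assumed VM name for illustration - the VM must be powered off before changing the setting
    $vmName = "TestVM01"

    # Shut the VM down cleanly
    Stop-VM -Name $vmName

    # Enable processor compatibility for migration (the same tick box as in Hyper-V Manager)
    Set-VMProcessor -VMName $vmName -CompatibilityForMigrationEnabled $true

    # Power the VM back up
    Start-VM -Name $vmName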

 

Processor compatibility settings - greyed out here as I took the screenshot after powering the VM up again.

 

So now, if you try to live migrate the VM to another compatible Hyper-V host, the migration should work.
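As an aside, the live migration itself can also be kicked off from PowerShell rather than Failover Cluster Manager, using the FailoverClusters module. A rough sketch, assuming a clustered VM role named “TestVM01” and a destination node named “DEVHYP02E”:

    # Load the failover clustering cmdlets
    Import-Module FailoverClusters

    # Live migrate the clustered VM role to the destination node
    # "TestVM01" and "DEVHYP02E" are assumed names for illustration
    Move-ClusterVirtualMachineRole -Name "TestVM01" -Node "DEVHYP02E" -MigrationType Live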

 

nVidia introduces the world’s “first virtualized GPU”

 

I usually only ever follow nVidia and AMD with regard to their GPU offerings for gamers, this being one of my pastimes; however, this press release from the green team the other day caught my attention.

 

To summarise, nVidia are unveiling their “VGX” platform, which will allow IT to deliver virtualized desktops with graphics or GPU computing power as close to the real deal as possible, to users on any connected device (not just PCs or thin clients, for example). The VGX platform will consist of a few things, one of which will be add-on cards for servers that are passively cooled and as energy efficient as possible (interesting when you consider how much power desktop gaming-grade GPUs generally consume!).

 

Some of the features nVidia are touting for their VGX platform thus far, according to their press release, are:

 

  • GPU Accelerated desktops (of course)
  • Ultra-low latency remote display capability
  • Energy-efficient, passively cooled hardware. Each board will have:
    • 4 x GPUs (each with 192 CUDA-architecture cores and a 4GB frame buffer)
    • 16GB of memory
    • An industry-standard PCI Express interface
  • VGX GPU Hypervisor
    • This is a software layer that should integrate with commercial hypervisors (VMware, anyone?), enabling virtualization of the GPU
  • High user densities – shared graphics processing power for multiple users
    • (Up to 100 users to be served from a single server powered by one VGX board, apparently)

 

Here are a few videos from the press release:

 

https://youtube.com/watch?v=kTnB_9ZgEvg

https://youtube.com/watch?v=e6BI2OTM-Hk

 

The article mentions Citrix technology support, but what about VMware View? I am sure this type of integration should be available – I wonder how PCoIP would work to deliver virtual desktops accelerated by the VGX platform. If the latency-reduction claims and acceleration benefits are anything to go by, then we should be in for an even better VDI experience!

 

Using Project “Onyx” to find the equivalent PowerCLI script to achieve tasks done in the vSphere Client

 

A few days ago someone dropped a comment on one of my blog posts asking how they could put an ESXi host into maintenance mode without automatically migrating VMs off it, using PowerCLI. The -Evacuate switch for the cmdlet in question (Set-VMHost) didn’t seem to be working when assigned the value $false, and hence they were unable to put hosts into maintenance mode without evacuating VMs first with PowerCLI.
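For context, the command being attempted looked roughly like the sketch below (the host name is an assumption for illustration) – the idea being that passing $false to -Evacuate would leave the VMs where they are:

    # PowerCLI - enter maintenance mode without evacuating VMs (host name assumed)
    Get-VMHost -Name "esxi01.lab.local" |
        Set-VMHost -State "Maintenance" -Evacuate:$false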

Perhaps the cmdlet was being used incorrectly, or there is a better way of doing this, but that is not the point of this post. The point of this post is to show you the power and usefulness of Project “Onyx”. Project “Onyx” is an application released (quite some time ago!) by VMware that is essentially a “script recorder”. It connects to your vCenter instance and, in turn, you connect to it as a proxy using your vSphere Client. Communications are not secured when doing this, so everything you do in your vSphere Client can be recorded. You essentially end up with a “recording” of API calls that can be used in PowerCLI. This comes in handy whenever you cannot achieve something with PowerCLI’s already huge library of cmdlets.

In this case, the -Evacuate switch of Set-VMHost was not working the way I expected it to, so to avoid wasting time trying to figure out what I needed to do, I just fired up Project Onyx, connected to it via the vSphere Client, then told an ESXi host to enter maintenance mode (unticking the migrate powered-off/suspended VMs option, of course) whilst the Project Onyx application was set to “record” mode.

The console then collected the necessary script, and I simply modified it to create a small script that performed the exact same task, but this time in PowerCLI.
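To give you an idea of the end result, the recorded API call can be reproduced in PowerCLI using Get-View. The sketch below is along the lines of what the finished script looked like; the host name is an assumption, and the second argument to EnterMaintenanceMode_Task ($false) is what tells vCenter not to migrate the powered-off/suspended VMs:

    # Get the vSphere API view object for the ESXi host (assumed name for illustration)
    $hostView = Get-View -ViewType HostSystem -Filter @{ "Name" = "esxi01.lab.local" }

    # Enter maintenance mode via the API: timeout of 0, do not evacuate powered-off/suspended VMs
    # (older PowerCLI builds may not accept the third maintenanceSpec argument - drop the $null if so)
    $hostView.EnterMaintenanceMode_Task(0, $false, $null)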

 

To use Project “Onyx”, simply download it from this page, then run the executable once you have extracted the .zip file. Tell Onyx to connect to your vCenter Server, then use your vSphere Client to connect to the machine (IP address) that Onyx is running on. Make sure you specify the correct listening port in the vSphere Client connection too – it will be the port listed in the window title bar of the Project “Onyx” application when it is running. Click the record button in the application and then perform the required tasks using the vSphere Client.

 

Project Onyx application window with some recorded script. Note the port 1545 in the Window Title Bar.

 

Connecting to Onyx as a proxy

 

 

vExpert for 2012 (Thanks!)

 

 

After being put through what some would only describe as torture this morning (interval training with my wife at the gym), I arrived home to relax and check my e-mail. My mailbox was filled with Twitter notifications, and upon closer inspection it became apparent that I had been awarded the title of vExpert 2012! This is an absolutely huge honour for me and, I must say, it caught me completely off guard.

 

I just wanted to send out a huge congratulations to all the new and returning vExpert awardees for 2012! There are so many talented individuals out there putting out an immense amount of great content, discussion, and effort when it comes to all things VMware. I must say, it has been a great year – I have learnt so much from the community, and thoroughly enjoyed being a part of it.

 

A special thanks goes out to two people in particular who spring to mind when it comes to the VMware community, namely Alex Maier and John Troyer. Thanks to you guys for managing, and being the driving force behind, the whole community! I would also like to send a special shout-out to, and congratulate, three of my work colleagues at Xtravirt who were also awarded the vExpert 2012 title today – Gregg Robertson, Darren Woollard and Paul Wood. It is Paul’s and my first year being awarded vExpert status, and Darren and Gregg’s second. Well done all!

 

To finish off, here is the official list of vExperts for 2012, as well as VMware’s definition of the vExpert title/award:

 

  • vExperts 2012
  • The VMware vExpert Award is given to individuals who have significantly contributed to the community of VMware users over the past year. vExperts are book authors, bloggers, VMUG leaders, tool builders, and other IT professionals who share their knowledge and passion with others. These vExperts have gone above and beyond their day jobs to share their technical expertise and communicate the value of VMware and virtualization to their colleagues and community.

 

So here’s to another fantastic year ahead for the community and many more to come!

 

Quick look at the VMware View 5 Android Client (HP Touchpad w/ Cyanogenmod Android 4.0)

Just a very quick blog post written over lunch today to share some screenshots of the VMware View client for Android running on my HP Touchpad (with Android 4 / Cyanogenmod running).

 

I have set up a simple VMware View 5 environment in my home lab and wanted to test out the Android client. I had recently installed Cyanogenmod on the Touchpad, so it can now dual-boot WebOS or Android 4. As there is no View client for WebOS, I simply grabbed a copy of the View client for Android and ran a quick internal test against my View 5 lab.

 

The interface is nice and clean / well designed (as you would expect from a VMware application). After connecting, you are presented with your entitled desktops and can then connect to one. A gesture summary info screen appears to show you how to perform different functions, such as right-clicking, dragging the mouse cursor, bringing up the keyboard, etc. The gesture controls really do work well, especially when you compare them to other tablet-based remote control apps.

 

Below are a few screenshots of the actual View client running on my HP Touchpad and connected to a Windows 7 Desktop.

 

Login screen

 

Clean interface

 

Gesture types / info

 

Connected to a Windows 7 Desktop in the View client

 

Conclusion: The View client for Android runs just fine on a modded (Cyanogenmod) HP Touchpad device – as expected!