Enhanced vMotion / X-vMotion / shared nothing vMotion live demo [video]

I was looking for a live video demonstration of the new and improved vMotion in vSphere 5.1 the other day but could not find one at the time. I therefore decided to set it up in my lab and record a demo of the new vMotion in action.

This improved version of vMotion doesn’t really have a new name, but some people are calling it Enhanced vMotion, x-vMotion, or “shared nothing” vMotion, amongst other names. I am happy to just call it vMotion for now, with the knowledge that it can now live migrate powered-on VMs from non-shared local storage on one host to another host with shared or local storage – no shared storage between the two hosts is required.

You can initiate the migration using the vSphere Web Client. Here is a live demo I recorded using my home lab system with two 1Gbit vMotion interfaces. The VM is small for demo purposes – just 512MB RAM and a very small virtual disk. It was powered up for this demo.
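
If you prefer scripting to clicking, the same kind of migration can also be driven from PowerCLI with the Move-VM cmdlet. The sketch below uses made-up VM, host and datastore names, and note that early PowerCLI 5.1 builds may not allow a combined host-and-datastore move of a powered-on VM in a single Move-VM call, in which case the Web Client (or the underlying RelocateVM API) is the way to do it:

# Assumes an existing Connect-VIServer session; all names below are example values
$vm = Get-VM -Name "TestVM01"                        # the small powered-on test VM
$destHost = Get-VMHost -Name "esxi02.lab.local"      # destination host
$destDatastore = Get-Datastore -Name "esxi02-local"  # local datastore on the destination host

# Move both the running VM and its virtual disks in one operation
Move-VM -VM $vm -Destination $destHost -Datastore $destDatastore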

 

https://www.youtube.com/watch?v=95_TWepFMgA

 

The background music for this demo is licensed as per the following link:

PowerCLI 5.1 – new cmdlets and changes between the beta and final releases

I was wondering what new cmdlets had been added in PowerCLI 5.1 as opposed to PowerCLI version 5.0.1. I also wanted to see if there were any changes between the beta release of vSphere 5.1 and the final release which was made public yesterday. The answer is yes, there are indeed changes between all three versions! Here are the cmdlet counts for each version:

 

[table tablesorter=”1″ file=”http://www.shogan.co.uk/wp-content/uploads/powercli-version-cmdlet-counts.csv”][/table]

 

To see what the differences were, I ran the following on each version of PowerCLI (5.0.1, 5.1 beta, and 5.1 final).

 

First of all, to get the number of cmdlets and see at a quick glance whether anything had changed, I ran a simple count against the Get-VICommand cmdlet:

(Get-VICommand).Count

Having seen that the counts differed between versions, I then decided to export a full list of cmdlets for each version and run a diff against these.

Get-VICommand | Export-CSV C:\cmdletsforversionX.csv
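
As an aside, the diff can also be done without leaving PowerShell by using Compare-Object on the two exported lists. A quick sketch, with made-up file names for the two exports:

# Pull just the cmdlet names out of each exported CSV
$oldList = Import-Csv C:\cmdletsfor501.csv | Select-Object -ExpandProperty Name
$newList = Import-Csv C:\cmdletsfor51.csv | Select-Object -ExpandProperty Name

# "=>" marks cmdlets only in the newer release, "<=" those only in the older one
Compare-Object -ReferenceObject $oldList -DifferenceObject $newList | Sort-Object SideIndicator, InputObject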

 

I then opened each CSV file, grabbed the full list of cmdlets from the “Name” column, and ran these against each other using an online difference-checking site. Here are the results:

 

vSphere PowerCLI 5.1 beta added 4 cmdlets compared with PowerCLI 5.0.1, with 1 cmdlet removed.

 

PowerCLI 5.1 beta changes

Removed: Get-EsxSoftwareChannel
New: Get-DeployOption, Get-EsxSoftwareDepot, Remove-EsxImageProfile, Set-DeployOption

 

vSphere PowerCLI 5.1 (final/public release) added 47 cmdlets over the PowerCLI 5.1 beta, with none removed. These mostly seem to be related to the vCloud Suite as far as I can tell.

 PowerCLI 5.1 beta to 5.1 public release changes

[table tablesorter=”1″ file=”http://www.shogan.co.uk/wp-content/uploads/powercli-5-1-public-cmdlet-additions.csv”][/table]

 

It is worth noting that in each case I had a full installation of PowerCLI – i.e. I had selected to install both the standard PowerCLI and the Cloud cmdlets during installation.

So it looks like I’ll need to spend some time getting acquainted with the new cmdlets. If you are curious as to what each one does, don’t forget the built-in help – use “Get-Help <CmdletName>” along with the -Examples switch.
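
For example, to see the usage examples for one of the new cmdlets mentioned above:

Get-Help Get-DeployOption -Examples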

 

Distributed Virtual Switch 5.1 Health Check for VLAN configuration issues

With the announcement of vSphere 5.1, one of the new features introduced was the network health check capability, now available for Distributed Virtual Switches (version 5.1 of the switch). This area has already been covered in detail by two bloggers I know of, namely Chris Wahl at Wahlnetwork, and Rickard Nobel.

However, this is one feature I was really looking forward to testing out myself, and had been preparing for by getting some physical Microserver Hosts up and running in my home lab with multiple NICs and VLAN support. The other day I had a chance to play around with the Network health check functionality with one of my hosts uplinked to a DVS I had created in vCenter.

This evening I was reminded of how useful this feature actually is. I had plugged one uplink from my Dell PowerConnect 5324 switch into the dual-port NIC in the host and left the other NIC disconnected, as I was short one cable. Tonight (a day later) I connected it up and was immediately notified of an issue on the uplink via the VLAN health status! I had, of course, forgotten to set up the port trunking on the Dell switch (VLANs 8 and 10) for this second port, having configured it yesterday for just the one port that was connected.

 

Here is a breakdown of what I saw using the vSphere Web Client after selecting my DVS and then choosing the Health tab under “Monitor”. (vCenter also configures alarms when you enable the feature, which trigger to alert you of the issue.)

 

 

A quick change on my switch to set the VLANs up on this particular uplink port meant I was soon up and running again.

 

As you can see, the Health Check feature is really useful, providing vSphere admins with an easy way to verify network port configurations on the physical networking hardware without having to log in to another interface and check it themselves, or rely on another team to do this for them. For more detail, or instructions on how to set this up, I recommend checking out the two blog posts I linked to above by Chris Wahl and Rickard Nobel.
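
If you would rather enable the health check from PowerCLI instead of the Web Client, something along the lines of the sketch below should work against the vSphere 5.1 API. Treat it as a rough sketch only: the DVS name is made up, and the UpdateDVSHealthCheckConfig method and health check config object names should be verified against the vSphere API reference for your build.

# Assumes an existing Connect-VIServer session; "dvSwitch-Lab" is an example DVS name
$dvsView = Get-View -ViewType VmwareDistributedVirtualSwitch -Filter @{ "Name" = "dvSwitch-Lab" }

# VLAN and MTU health check (checks trunking/MTU against the physical switch)
$vlanMtu = New-Object VMware.Vim.VMwareDVSVlanMtuHealthCheckConfig
$vlanMtu.Enable = $true
$vlanMtu.Interval = 1      # minutes between checks

# Teaming and failover health check
$teaming = New-Object VMware.Vim.VMwareDVSTeamingHealthCheckConfig
$teaming.Enable = $true
$teaming.Interval = 1

$dvsView.UpdateDVSHealthCheckConfig(@($vlanMtu, $teaming))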

 

My VMware vSphere Home lab configuration

I have always enjoyed running my own home lab for testing and playing around with the latest software and operating systems / hypervisors. Up until recently, it was all hosted on VMware Workstation 8.0 on my home gaming PC, which has an AMD Phenom II x6 (hex core) CPU and 16GB of DDR3 RAM. This has been great, and I still use it, but there are some bits and pieces I still want to be able to play with that are traditionally difficult to do on a single physical machine, such as working with VLANs and taking advantage of hardware feature sets.

 

To that end, I have been slowly building up a physical home lab environment. Here is what I currently have:

Hosts

  • 2 x HP ProLiant N40L MicroServers (AMD Turion dual-core processors @ 1.5GHz)
  • 8GB DDR3 1333MHz RAM (2 x 4GB modules)
  • Onboard Gbit NIC
  • PCI-Express x4 HP NC360T dual-port Gbit NIC as an add-on card (modified with a low-profile bracket)
  • 250GB local SATA HDD (just used to host the ESXi installations)

Networking

  • As mentioned above, I am using HP NC360T PCI-Express NICs to give me a total of 3 x vmnics per ESXi host.
  • Dell PowerConnect 5324 switch (24 port Gbit managed switch)
  • 1Gbit Powerline Ethernet home plugs to uplink the Dell PowerConnect switch to the home broadband connection. This allows me to keep the lab in a remote location in the house, which keeps the noise away from the living area.

Storage

  • This is a work in progress at the moment (I am finding that the low-end 2-bay home NAS devices do not offer sufficient performance, and the more capable models are too expensive to justify).
  • Repurposed custom-built Micro-ATX PC running FreeNAS 8.2, housed in a Silverstone SG05 micro-ATX chassis (original build and pics of the chassis here)
  • Intel Core 2 Duo 2.4 GHz processor
  • 4GB DDR2-800 RAM
  • 1 Gbit NIC
  • 1 x 1TB 7200 RPM SATA II drive
  • 1 x 128GB OCZ Vertex 2E SSD (SATA II)
  • As this is temporary, each drive provides 1 x datastore to the ESXi hosts. I therefore have one large datastore for general VMs, and one fast SSD-based datastore for high-priority VMs or VM disks. I am limited by the fact that the Micro-ATX board only has 2 x onboard SATA ports, so I may consider purchasing an add-on card to expand these.
  • Storage is presented to the hosts as NFS. I am currently testing ZFS vs UFS, and the use of the SSD as a ZFS ZIL log and/or cache (L2ARC) device. To make this more reliable, I will need the above-mentioned add-on card to build redundancy into the system, as I would not like to lose a drive at this time!
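
For reference, mounting these NFS exports on a host is a one-liner per datastore in PowerCLI. A quick sketch, using made-up host, NAS IP and export paths:

# Assumes an existing Connect-VIServer session; names and paths below are example values
$esx = Get-VMHost -Name "esxi01.lab.local"

New-Datastore -VMHost $esx -Nfs -Name "NAS-SATA-1TB" -NfsHost "192.168.0.50" -Path "/mnt/sata1tb"
New-Datastore -VMHost $esx -Nfs -Name "NAS-SSD-VERTEX" -NfsHost "192.168.0.50" -Path "/mnt/vertexssd"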

Platform / ghetto rack

  • IKEA Lack rack (black) – cheap and expandable : )

 

To do

Currently, one host only has 4GB of RAM; I have an 8GB kit waiting to be added to bring both hosts up to 8GB. I also need to add the HP NC360T dual-port NIC to this host, as it is a recent addition to the home lab.

On the storage side of things, I just managed to take delivery of 2 x OCZ Vertex 2 128GB SSD drives, which I got at a bargain price the other day (£45 each). Once I have expanded SATA connectivity in my Micro-ATX FreeNAS box, I will look into adding these drives for some super-fast SSD storage expansion.

 

The 2 x 120GB OCZ SSDs to be used for Shared Host Storage
HP NC360T PCI-Express NIC and 8GB RAM kit for the new Microserver

 

Lastly, the Dell PowerConnect 5324 switch I am using still has the original firmware loaded (from 2005). This needs to be updated to the latest version so that I can enable Link Layer Discovery Protocol (LLDP), which is newly supported on Distributed Virtual Switches as of the VMware vSphere 5.0 release. LLDP can help with the configuration and management of network components in an infrastructure, and will mainly serve to allow me to play with this feature in my home lab. I seem to have lost my USB-to-serial adapter though, so this firmware upgrade will need to wait until I can source a new one off eBay.

 

Getting up and running with the vSphere 5.1 Web Client

Getting up and running with the vSphere 5.1 Web Client and vCenter 5.1 is now easier than before. The steps to follow are listed below, along with the steps you should use if you also have vCenter 5.0 instances to manage with the 5.1 Web Client.

 

  • If you have a vCenter 5.1 Server instance, you’ll just need to install the Web Client using the standard installer from the vCenter autorun.
  • Don’t forget to install the latest Adobe Flash too.
  • With vSphere 5.1 you now have integration with the vCenter Single Sign On (SSO) service. If your vCenter server uses the same vCenter Single Sign On server as the Web Client does, then you do not need to manually register vCenter 5.1 instances with the Web Client server. Instead, just install the Web Client server as normal, and then sign in to it from the local machine at https://localhost:9443/vsphere-client or remotely from another management machine at https://remotemachine:9443/vsphere-client. The vSphere Web Client can now locate these vCenter Server 5.1 systems by using the VMware Lookup Service.
  • If you run into any errors when you try to access the Web Client via the URL (local or remote), give it a few more minutes if you have just finished the installation. I found that it took my system up to 3 minutes before I could log in. This is most likely due to the automatic registration with the Lookup Service taking place in the background.

This definitely makes life a bit easier when setting up vCenter 5.1 and the Web Client, and makes complete sense, as VMware have announced that the standard vSphere Client 5.1 (Windows application) is their final release of the vSphere Client software. From then on, everything will be managed via the Web Client!

Also remember that when you are setting up the vSphere Web Client, you are asked for the IP or FQDN of your vCenter server. If it uses IPv6 and you want to enter the IP address instead of the FQDN, you must enter it in IPv6 format, i.e. enclose the address in square brackets (for example, [2001:db8::10]).

 

If you are still using vCenter 5.0 or have vCenter 5.0 instances to manage, you are still required to use the machine that the Web Client was installed on: browse to https://localhost:9443/admin-app and register these vCenter 5.0 instances as per the screenshots below. You do of course also have a couple of options depending on which vCenter Server 5.0 type you are using (Windows or the Appliance).

 

For vSphere vCenter 5.0 Windows instances, you’ll still need to register these with the Web Client. Log in to the Web Client admin app on the machine it was installed on using the localhost address, then:

Register your vCenter Server 5.0 instance by using the IP or FQDN and correct credentials.

Accept and install the security certificate if applicable.

 

If you are using the vCenter 5.0 appliance, then you’ll need to register these instances using the command line on the appliance. Use the following command to register your vCenter instance:

/usr/lib/vmware-vsphere-client/scripts/admin-cmd.sh register https://[IP or FQDN of the Web Client]:[HTTPS Port Number]/vsphere-client [VC IP Address] [VC Admin username] [VC Admin password]

If you have any special characters in your password, don’t forget to enclose it in single quote marks ( ‘ ).