PowerCLI – creating port groups and specifications to modify port groups

I wanted to quickly create some standard VM port groups on a particular vSwitch for all hosts in my lab / testing environment at work. Since I was using Standard vSwitches and not a dvSwitch, I didn’t feel like using the GUI to create these on every individual ESXi host. In addition to creating the port group on each vSwitch, I also wanted to set the security policy for Promiscuous mode on each to “Accept”. This port group is going to be used to run nested (virtual) ESXi hosts, and promiscuous mode is required to allow the nested VMs to communicate on the network.

 

So the obvious solution here for me was to create a quick PowerCLI script to create these port groups on all hosts and set the security option for each too. Here is the script:

 

$vSwitch = "vSwitch0"
$portgrpname = "vInception Portgroup"
$AllConnectedHosts = Get-VMHost | Where {$_.ConnectionState -eq "Connected"}

foreach ($esxihost in $AllConnectedHosts) {
	$currentvSwitch = $esxihost | Get-VirtualSwitch | Where {$_.Name -eq $vSwitch}
	New-VirtualPortGroup -Name $portgrpname -VirtualSwitch $currentvSwitch -Confirm:$false
	$currentesxihost = Get-VMHost $esxihost | Get-View
	$netsys = Get-View $currentesxihost.configmanager.networksystem
	$portgroupspec = New-Object VMWare.Vim.HostPortGroupSpec
	$portgroupspec.vswitchname = $vSwitch
	$portgroupspec.Name = $portgrpname
	$portgroupspec.policy = New-object vmware.vim.HostNetworkPolicy
	$portgroupspec.policy.Security = New-object vmware.vim.HostNetworkSecurityPolicy
	$portgroupspec.policy.Security.AllowPromiscuous = $true
	$netsys.UpdatePortGroup($portgrpname,$PortGroupSpec)
	
}
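
The script assumes you already have a PowerCLI connection open to your vCenter Server (or directly to the hosts). If not, connect first – the server name here is just an example:

Connect-VIServer vcenter.lab.local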

 
Keep in mind that this script will create the port group on “vSwitch0” – change the $vSwitch variable if the vSwitch that will host this port group is named differently on your hosts. The script obviously relies on this vSwitch already existing on each host. You can also change $portgrpname to a name of your own choice, of course.

Lastly, you can easily modify this script to change other security options for the new port group, as the port group specification has already been created. Just set additional properties on the $portgroupspec.Policy.Security object before the UpdatePortGroup call, as shown below.
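
For example, to also allow MAC address changes and forged transmits on the new port group (MacChanges and ForgedTransmits are the other two properties of the same HostNetworkSecurityPolicy object), you could add these two lines before the UpdatePortGroup call:

$portgroupspec.Policy.Security.MacChanges = $true
$portgroupspec.Policy.Security.ForgedTransmits = $true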

Enjoy!
 

Enhanced vMotion / X-vMotion / shared nothing vMotion live demo [video]

I was looking for a live video demonstration of the new and improved vMotion in vSphere 5.1 the other day but could not come across one at the time. I therefore decided to get it set up in my lab and record a demo of the new vMotion in action.

This improved version of vMotion doesn’t really have an official new name, but some people are calling it Enhanced vMotion, X-vMotion, or “shared nothing” vMotion, amongst other names. I am happy to just call it vMotion for now, with the knowledge that it can now live migrate powered-on VMs from local (non-shared) storage on one host to other hosts with shared or local storage.

You can initiate the migration using the vSphere Web Client. Here is a live demo I recorded using my home lab system with two 1Gbit vMotion interfaces. The VM is small for demo purposes – just 512MB RAM and a very small virtual disk. It was powered up for this demo.

 

https://www.youtube.com/watch?v=95_TWepFMgA

 

The background music for this demo is licensed as per the following link:

PowerCLI 5.1 – new cmdlets and changes between the beta and final releases

I was wondering what new cmdlets had been added in PowerCLI 5.1 as opposed to PowerCLI version 5.0.1. I also wanted to see if there were any changes between the beta release of vSphere 5.1 and the final release which was made public yesterday. The answer is yes, there are indeed changes between all three versions! Here are the cmdlet counts for each version:

 

[table tablesorter="1" file="http://www.shogan.co.uk/wp-content/uploads/powercli-version-cmdlet-counts.csv"][/table]

 

To see what the differences were, I ran the following on each version of PowerCLI (5.0.1, 5.1 beta, and 5.1 final).

 

First of all, to get the number of cmdlets and see at a quick glance whether there were any changes, I ran a simple count against the Get-VICommand cmdlet:

(Get-VICommand).Count

Since the counts differed between versions, I then decided to get a full list of cmdlets for each version and run a diff against these.

Get-VICommand | Export-CSV C:\cmdletsforversionX.csv
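
Incidentally, the comparison itself could also be done directly in PowerShell with Compare-Object rather than an external diff tool. A minimal sketch, assuming two of the CSV exports from above (the file names are just examples):

$old = Import-Csv C:\cmdletsforversion501.csv | Select-Object -ExpandProperty Name
$new = Import-Csv C:\cmdletsforversion51.csv | Select-Object -ExpandProperty Name

# "=>" marks cmdlets present only in the newer version, "<=" those only in the older one
Compare-Object -ReferenceObject $old -DifferenceObject $new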

 

I then opened each CSV file, grabbed the full list of cmdlets from the “Name” column, and ran the lists against each other using an online difference-checking site. Here are the results:

 

vSphere PowerCLI 5.1 beta had an additional 4 cmdlets over PowerCLI 5.0.1, with 1 having been removed.

 

PowerCLI 5.1 beta changes

Removed:
  • Get-EsxSoftwareChannel

New:
  • Get-DeployOption
  • Get-EsxSoftwareDepot
  • Remove-EsxImageProfile
  • Set-DeployOption

 

vSphere PowerCLI 5.1 (final/public release) had an additional 47 cmdlets over PowerCLI 5.1 beta, with none having been removed. These mostly seem to be related to the vCloud Suite as far as I can tell.

 PowerCLI 5.1 beta to 5.1 public release changes

[table tablesorter="1" file="http://www.shogan.co.uk/wp-content/uploads/powercli-5-1-public-cmdlet-additions.csv"][/table]

 

It is worth noting that in each case I had a full installation of PowerCLI – i.e. I had selected to install both the standard PowerCLI and Cloud cmdlets during installation.

So it looks like I’ll need to spend some time getting acquainted with the new cmdlets. If you are curious as to what each one does, don’t forget the built-in help – “Get-Help Cmdlet-Name”, optionally with the -Examples switch.
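
For example, to read up on one of the new cmdlets mentioned above:

Get-Help Get-DeployOption -Full
Get-Help Get-DeployOption -Examples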

 

Distributed Virtual Switch 5.1 Health Check for VLAN configuration issues

With the announcement of vSphere 5.1, one of the new features announced was the network health check feature now available for Distributed Virtual Switches (version 5.1 of the switch). This area has already been covered in detail by two bloggers I know of, namely Chris Wahl at Wahlnetwork, and Rickard Nobel.

However, this is one feature I was really looking forward to testing out myself, and had been preparing for by getting some physical Microserver Hosts up and running in my home lab with multiple NICs and VLAN support. The other day I had a chance to play around with the Network health check functionality with one of my hosts uplinked to a DVS I had created in vCenter.

This evening I was reminded of how useful this feature actually is. I had plugged one uplink from my Dell PowerConnect 5324 switch into the dual port NIC in the host and left the other NIC disconnected, as I was short one cable. Tonight (a day later) I connected this up and was immediately notified of an issue with the VLAN health status on the uplink! I had, of course, forgotten to set up port trunking (VLANs 8 and 10) on the Dell switch for this second port – the day before I had only configured it for the one port that was connected.

 

Here is a breakdown of what I saw in the vSphere Web Client after selecting my DVS and then choosing the Health tab under “Monitor”. (When the feature is enabled, vCenter also has alarms set up to alert you of the issue.)

A quick change on my switch to set the VLANs up on this particular uplink port meant I was soon up and running again.

 

As you can see, the Health Check feature is really useful, providing vSphere admins with an easy way to verify network port configurations on the physical networking hardware without having to log in to another interface and check themselves, or rely on another team to do this for them. For more detail, or instructions on how to set this up, I recommend checking out the two blog posts I linked to above by Chris Wahl and Rickard Nobel.
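
If you would rather switch the health checks on from PowerCLI than the Web Client, the vSphere 5.1 API exposes this on the distributed switch object. This is a rough, untested sketch – the DVS name “dvSwitch-Lab” and the 10 minute check interval are just examples:

# Enable the VLAN/MTU and teaming health checks on a version 5.1 DVS
$dvsView = Get-View -ViewType VmwareDistributedVirtualSwitch -Filter @{"Name" = "dvSwitch-Lab"}

$vlanMtuCheck = New-Object VMware.Vim.VMwareDVSVlanMtuHealthCheckConfig
$vlanMtuCheck.Enable = $true
$vlanMtuCheck.Interval = 10

$teamingCheck = New-Object VMware.Vim.VMwareDVSTeamingHealthCheckConfig
$teamingCheck.Enable = $true
$teamingCheck.Interval = 10

$dvsView.UpdateDVSHealthCheckConfig(@($vlanMtuCheck, $teamingCheck))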

 

My VMware vSphere Home lab configuration

I have always enjoyed running my own home lab for testing and playing around with the latest software and operating systems / hypervisors. Up until recently, it was all hosted on VMware Workstation 8.0 on my home gaming PC, which has an AMD Phenom II X6 (six-core) CPU and 16GB of DDR3 RAM. This has been great, and I still use it, but there are some bits and pieces I still want to be able to play with that are traditionally difficult to do on a single physical machine, such as working with VLANs and taking advantage of hardware feature sets.

 

To that end, I have been slowly building up a physical home lab environment. Here is what I currently have:

Hosts

  • 2 x HP ProLiant N40L Microservers (AMD Turion dual-core processors @ 1.5GHz)
  • 8GB DDR3 1333MHz RAM (2 x 4GB modules)
  • Onboard Gbit NIC
  • PCI-Express 4x HP NC360T Dual Port Gbit NIC as an add-on card (modified to a low-profile bracket)
  • 250GB local SATA HDD (just used to host the ESXi installations)

Networking

  • As mentioned above, I am using HP NC360T PCI-Express NICs to give me a total of 3 x vmnics per ESXi host.
  • Dell PowerConnect 5324 switch (24 port Gbit managed switch)
  • 1Gbit Powerline Ethernet home plugs to uplink the Dell PowerConnect switch to the home broadband connection. This allows me to keep the lab in a remote location in the house, which keeps the noise away from the living area.

Storage

  • This is a work in progress at the moment (I am finding that the low-end 2-bay home NAS devices are not sufficient for performance, and the higher-end models are too expensive to justify).
  • Repurposed Micro-ATX custom built PC, housed in a Silverstone SG05 micro-ATX chassis running FreeNAS 8.2 (Original build and pics of the chassis here)
  • Intel Core 2 Duo 2.4 GHz processor
  • 4GB DDR2-800 RAM
  • 1 Gbit NIC
  • 1 x 1TB 7200 RPM SATA II drive
  • 1 x 128GB OCZ Vertex 2E SSD (SATA II)
  • As this is temporary, each drive provides one datastore to the ESXi hosts. I therefore have one large datastore for general VMs, and one fast SSD-based datastore for high-priority VMs or VM disks. I am limited by the fact that the Micro-ATX board only has 2 x onboard SATA ports, so I may consider purchasing an add-on card to expand these.
  • Storage is presented as NFS. I am currently testing ZFS vs UFS, and the use of the SSD as a ZFS ZIL (log) and/or cache device. To make this more reliable, I will need the above-mentioned add-on card to build redundancy into the system, as I would not like to lose a drive at this time!

Platform / ghetto rack

  • IKEA Lack rack (black) – cheap and expandable : )

 

To do

Currently, one host only has 4GB RAM; I have an 8GB kit waiting to be added to bring both hosts up to 8GB. I also need to add the HP NC360T dual port NIC to this host, as it is a recent addition to the home lab.

On the storage side of things, I just managed to take delivery of 2 x OCZ Vertex 2 128GB SSD drives which I got at bargain prices the other day (£45 each). Once I have expanded SATA connectivity in my Micro-ATX FreeNAS box I will look into adding these drives for some super fast SSD storage expansion.

 

The 2 x 120GB OCZ SSDs to be used for Shared Host Storage
HP NC360T PCI-Express NIC and 8GB RAM kit for the new Microserver

 

Lastly, the Dell PowerConnect 5324 switch I am using still has the original firmware loaded (from 2005). This needs to be updated to the latest version so that I can enable Link Layer Discovery Protocol (LLDP), which is newly supported on Distributed Virtual Switches as of the VMware vSphere 5.0 release. LLDP helps with the configuration and management of network components in an infrastructure, and will mainly serve to let me play with this feature in my home lab. I seem to have lost my USB-to-Serial adapter though, so this firmware upgrade will need to wait until I can source a new one off eBay.
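
Once the switch side is sorted, enabling LLDP on the DVS itself can also be done through the API from PowerCLI. This is a rough sketch rather than something I have run – “dvSwitch-Lab” is just an example name, and “both” means the DVS will both listen for and advertise LLDP information:

# Enable LLDP (listen and advertise) on a distributed switch
$dvsView = Get-View -ViewType VmwareDistributedVirtualSwitch -Filter @{"Name" = "dvSwitch-Lab"}

$spec = New-Object VMware.Vim.VMwareDVSConfigSpec
$spec.ConfigVersion = $dvsView.Config.ConfigVersion   # required for any reconfigure operation
$spec.LinkDiscoveryProtocolConfig = New-Object VMware.Vim.LinkDiscoveryProtocolConfig
$spec.LinkDiscoveryProtocolConfig.Protocol = "lldp"
$spec.LinkDiscoveryProtocolConfig.Operation = "both"

$dvsView.ReconfigureDvs($spec)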