My VMware vSphere Home lab configuration

I have always enjoyed running my own home lab for testing and playing around with the latest software, operating systems, and hypervisors. Up until recently, it was all hosted on VMware Workstation 8.0 on my home gaming PC, which has an AMD Phenom II X6 (hex-core) CPU and 16GB of DDR3 RAM. This has been great, and I still use it, but there are some things I want to play with that are traditionally difficult to do on a single physical machine, such as working with VLANs and taking advantage of hardware feature sets.

 

To that end, I have been slowly building up a physical home lab environment. Here is what I currently have:

Hosts

  • 2 x HP ProLiant N40L Microservers (AMD Turion dual-core processors @ 1.5GHz)
  • 8GB DDR3 1333MHz RAM (2 x 4GB modules)
  • Onboard Gbit NIC
  • PCI-Express 4x HP NC360T dual-port Gbit NIC as an add-on card (modified with a low-profile bracket)
  • 250GB local SATA HDD (used only to host the ESXi installations)

Networking

  • As mentioned above, I am using HP NC360T PCI-Express NICs to give me a total of 3 x vmnics per ESXi host.
  • Dell PowerConnect 5324 switch (24 port Gbit managed switch)
  • 1Gbit Powerline Ethernet home plugs to uplink the Dell PowerConnect switch to the home broadband connection. This lets me keep the lab in a remote part of the house, away from the living area and its noise.

Storage

  • This is a work in progress at the moment. I am finding that the low-end 2-bay home NAS devices don't offer sufficient performance, and the higher-end models are too expensive to justify.
  • Repurposed Micro-ATX custom built PC, housed in a Silverstone SG05 micro-ATX chassis running FreeNAS 8.2 (Original build and pics of the chassis here)
  • Intel Core 2 Duo 2.4 GHz processor
  • 4GB DDR2-800 RAM
  • 1 Gbit NIC
  • 1 x 1TB 7200 RPM SATA II drive
  • 1 x 128GB OCZ Vertex 2E SSD (SATA II)
  • As this is temporary, each drive provides 1 x Datastore to the ESXi hosts. I therefore have one large datastore for general VMs, and one fast SSD based datastore for high priority VMs, or VM disks. I am limited by the fact that the Micro-ATX board only has 2 x onboard SATA ports, so I may consider purchasing an addon card to expand these.
  • Storage is presented as NFS. I am currently testing ZFS vs UFS, and the use of the SSD as a ZFS ZIL log and/or cache device. To make this more reliable, I will need the above-mentioned add-on card to build redundancy into the system, as I would not like to lose a drive at this time!

Platform / ghetto rack

  • IKEA Lack rack (black) – cheap and expandable : )

 

To do

Currently, one host has only 4GB of RAM; I have an 8GB kit waiting to be added to bring both hosts up to 8GB. I also need to add the HP NC360T dual-port NIC to this host, as it is a recent addition to the home lab.

On the storage side of things, I have just taken delivery of 2 x OCZ Vertex 2 128GB SSDs, which I picked up at a bargain price the other day (£45 each). Once I have expanded the SATA connectivity in my Micro-ATX FreeNAS box, I will look into adding these drives for some super-fast SSD storage expansion.

 

The 2 x 128GB OCZ SSDs to be used for shared host storage
HP NC360T PCI-Express NIC and 8GB RAM kit for the new Microserver

 

Lastly, the Dell PowerConnect 5324 switch I am using still has its original firmware loaded (from 2005). This needs to be updated to the latest version so that I can enable Link Layer Discovery Protocol (LLDP), which is newly supported on Distributed Virtual Switches with the VMware vSphere 5.0 release. LLDP can help with the configuration and management of network components in an infrastructure, and will mainly serve to let me play with this feature in my home lab. I seem to have lost my USB-to-serial adapter though, so this firmware upgrade will need to wait until I can source a new one on eBay.

 

Live Migrating a VM on a Hyper-V Failover Cluster fails – Processor-specific features not supported

 

I have been working on setting up a small cluster of Hyper-V hosts (running as VMs), nested under a bunch of physical VMware ESXi 5.0 hosts. Bear in mind that I am quite new to Hyper-V; I have only ever really played with single-host Hyper-V setups in the past. Having just finished creating a Hyper-V failover cluster in this nested environment, and configuring CSV (Cluster Shared Volume) storage for the Hyper-V hosts, I created a single VM to test the “live migrate” feature of Hyper-V. Upon telling the VM to live migrate from host “A” to host “B”, I got the following error message.

“There was an error checking for virtual machine compatibility on the target node.” The description reads: “The virtual machine is using processor-specific features not supported on physical computer ‘DEVHYP02E’.”

 

So my first thought was: perhaps there is a way to mask processor features, similar to the way VMware’s EVC masks host CPU features for compatibility? If you read the rest of the error message, it does seem to indicate that there is a way of modifying the VM to limit the processor features it uses.

 

So the solution in this case is to:

  • First of all, power down your VM
  • Using Hyper-V Manager, right-click the VM and select “Settings”
  • Go to the “Processor” section and tick the option “Migrate to a physical computer with a different processor version” under “Processor compatibility”
  • Apply the settings
  • Power up the VM again (a scripted equivalent is sketched below)
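
If you have a number of VMs to change, the same setting can also be applied from PowerShell. Here is a minimal sketch using the Hyper-V PowerShell module (shipped with Windows Server 2012 and later; older hosts would need the GUI method above or a third-party module). The VM name is hypothetical:

# Hypothetical VM name - substitute your own.
$vmName = "TESTVM01"

# The VM must be powered off before the setting can be changed.
Stop-VM -Name $vmName

# Equivalent of ticking "Migrate to a physical computer with a different
# processor version" under "Processor compatibility".
Set-VMProcessor -VMName $vmName -CompatibilityForMigrationEnabled $true

# Power the VM back up.
Start-VM -Name $vmName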

 

Processor compatibility settings - greyed out here as I took the screenshot after powering the VM up again.

 

Now, if you try to live migrate the VM to another compatible Hyper-V host, the migration should succeed.
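
As an aside, once the setting is enabled you can also kick off the live migration itself from PowerShell with the FailoverClusters module, rather than using Failover Cluster Manager. A minimal sketch, with hypothetical VM role and node names:

# Requires the FailoverClusters module (Windows Server 2008 R2 and later).
Import-Module FailoverClusters

# Live migrate the clustered VM role to the named node.
# "TESTVM01" and "DEVHYP02E" are hypothetical - substitute your own.
Move-ClusterVirtualMachineRole -Name "TESTVM01" -Node "DEVHYP02E"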

 

Finding Vendor-specific VIBs on ESXi hosts with PowerCLI

 

Example showing VIBs loaded on a host with a search of the vendor name "VMware"

 

The other day I was trying to find a list of custom VIBs (vSphere Installation Bundles) installed on an ESXi host. The reason was that I just wanted to verify whether a particular VIB had actually installed correctly. I threw the query out on Twitter, and of course @alanrenouf had a solution in next to no time.

So Alan’s solution is to use the Get-EsxCli cmdlet, specifying the host name with the VMHost parameter. After that, he simply uses the “software” property of the resulting object to gain access to the list of VIBs on the host. E.g.

$ESXCLI = Get-EsxCli -VMHost esxi-01.noobs.local
$ESXCLI.software.vib.list()

I have used esxcli on its own before, but didn’t realise that PowerCLI had this cmdlet built in to interface with hosts in the same way that esxcli would. This is a great solution, and it means you can fetch a great deal more host-level information from within PowerCLI.

To filter things down a bit more and find the exact match for the Dell OMSA VIB I was looking for, I used a where clause looking for a match for “dell” on the Vendor property:

$ESXCLI.software.vib.list() | Select AcceptanceLevel,ID,InstallDate,Name,ReleaseDate,Status,Vendor,Version | Where {$_.Vendor -match "dell"}
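
To take this a step further, you could run the same check across every connected host in one go. Here is a quick sketch along those lines (the “dell” vendor match is just my example – adjust it for whichever VIB you are after):

# Report matching VIBs for every host in the inventory.
foreach ($VMHost in (Get-VMHost | Sort Name)) {
    $esxcli = Get-EsxCli -VMHost $VMHost
    $esxcli.software.vib.list() |
        Where {$_.Vendor -match "dell"} |
        Select @{N="VMHost";E={$VMHost.Name}},Name,Version,InstallDate
}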

Thanks again to Alan Renouf for pointing out the use of Get-EsxCli and for providing an example!

 

How to use PoSH or PowerCLI to SSH into Devices & retrieve information (Gathering SHA1 Fingerprints)

 

I was listening to GetScripting podcast #29 the other day. The guest was Pete Rossi (PoSH Pete), who talked about data centre automation. Part of the automation he has set up involves wrapping SSH with PowerShell; by doing so he is able to automate various functions on devices that accept SSH logins. This got me thinking about potential use cases, and soon enough I had a couple of scenarios of my own that could do with automating via SSH and PowerCLI. Pete mentioned that he mainly uses an SSH component by a company called “WeOnlyDo Software”, but Alan Renouf also mentioned having heard of “SharpSSH”. I decided I wanted to try both out, so I set about getting each working with PowerShell and PowerCLI. In this post (Part 1) I will cover using the SharpSSH DLL; in Part 2 I will go into the (easier, in my opinion) wodSSH component (also paid-for).

 

SharpSSH (based on Tamir Gal’s .NET library)

 

I believe Tamir Gal originally created this library; however, it now seems to be maintained by others.

 

First of all, for SharpSSH to work with PowerShell or PowerCLI, you’ll need the relevant DLL, which will be loaded by your script. I found a version of SharpSSH being actively worked on and improved by Matt Wagner on Bitbucket. I downloaded this version (called SharpSSH.a7de40d119c7.dll) to get started. To load the functions that we’ll be using to SSH into devices, I used the following PowerShell function. Just be sure to reference the correct path to the SharpSSH DLL you downloaded in this function. Download the function below:

 

[download id=”13″]
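
If you are curious, the core of what that function does boils down to loading the SharpSSH assembly so that its types are available in your session – something along these lines (a sketch only; the DLL path is hypothetical, and the New-SshSession / Invoke-SSH style functions used later in this post come from the download above):

# Load the SharpSSH assembly - adjust the path to wherever you saved the DLL.
Add-Type -Path "C:\Scripts\SharpSSH.a7de40d119c7.dll"

# The Tamir.SharpSsh types should then be available to work with, e.g.:
$exec = New-Object Tamir.SharpSsh.SshExec("esxi-01.noobs.local", "root")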

 

Then as long as the functions are loaded in your PoSH session, you should be able to run the example below.

 

How to SSH into ESXi hosts and retrieve SHA1 Fingerprints using PowerCLI and SharpSSH

 

Example output after running the script detailed below against multiple ESX hosts

 

Now, first off, I’ll say that this isn’t necessarily the best way of retrieving SSL fingerprints from your ESXi hosts in terms of security – you’d ideally want to do this from the DCUI of each ESXi host to confirm its identity is as you expect. (See this blog post and the comments over at Scott Lowe’s blog for more detail on the security considerations.) With that said, here is my implementation of SharpSSH, used to SSH into each ESXi host (from a Get-VMHost call) and retrieve the SHA1 fingerprints. The script will create and output a table report, listing each ESX/ESXi host along with its SHA1 fingerprint signature.

 

Background for the Script

 

I believe this is actually quite an easy bit of info to collect using PowerCLI and the ExtensionData.Config properties on newer hosts / vSphere 5, but in the environment I was working with, none of my ESX 4.0 Update 4 hosts exposed this fingerprint info in their ExtensionData sections when queried with PowerCLI. I therefore automated the process over SSH, using the command “openssl x509 -sha1 -in /etc/vmware/ssl/rui.crt -noout -fingerprint” to generate the fingerprint remotely on each host. Note that the script will prompt for root credentials on each host it connects to – this could probably be changed fairly easily in the function downloaded above. So here is the final script, which will list all ESXi hosts and their SHA1 fingerprints:

 

$Report = @()
$VMHosts = Get-VMHost | Sort Name

foreach ($vmhost in $VMHosts) {
	# Reset the login flag for each host before opening the SSH session
	# (New-SshSession will prompt for the root password).
	$rootlogin = $false
	New-SshSession root $vmhost
	if (Receive-SSH '#')
	{
		Write-Host "Logged in as root." -ForegroundColor Green
		# Generate the SHA1 fingerprint from the host's SSL certificate.
		$a = Invoke-SSH "openssl x509 -sha1 -in /etc/vmware/ssl/rui.crt -noout -fingerprint" 'SHA1'
		$temp = $a | Select-String -Pattern "SHA1 Fingerprint="
		$row = New-Object -TypeName PSObject -Property @{
			SHA1 = $temp
			HostName = $vmhost
		}
		$Report += $row
		$rootlogin = $true
		Write-Host "Output complete." -ForegroundColor Green
	}
	if ($rootlogin -eq $true)
	{
		Write-Host "Exiting SSH session."
		Send-SSH exit
	}
	Write-Host "Terminating Session."
	Remove-SshSession
}

$Report
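
As an aside, if your hosts are new enough to expose the fingerprint through the API (as mentioned in the background section above), you may not need SSH at all. Something like the following one-liner may do the trick on vSphere 5 hosts (a hedged sketch – the SslThumbprint property was not populated on my older ESX 4.0 hosts):

Get-VMHost | Sort Name | Select Name,@{N="SHA1";E={$_.ExtensionData.Summary.Config.SslThumbprint}}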

 

Well, I hope this helps you out with a way to automate SSH access to devices to retrieve information or change settings. This could easily be adapted to send SSH commands to any other kind of device that accepts SSH logins – switches, routers, Linux servers, you name it! In my next blog post I will show you how to use the wodSSH library (from “We Only Do Software”) to do SSH in PowerShell or PowerCLI – I have found that method a bit easier to use than SharpSSH, so look out for my next post coming soon!

Upgrading to vSphere 5.0 from vSphere 4.x – Part 3 – Using VUM to upgrade ESX(i) hosts

 

Introduction

 

In Part 2 of this series, I covered the steps needed to manually upgrade your ESX(i) 4.x hosts to ESXi 5.0.0. That approach is useful for smaller deployments or lab setups where you don’t have many hosts to upgrade. With larger deployments, however, you’re going to want a way of automating the process. VMware Update Manager (VUM) is one such way, and in this part I’ll explain how to use it to upgrade your hosts to ESXi 5.0.0 using baselines.

 

Using VMware Update Manager to upgrade ESX(i) 4.x hosts to 5.0

 

There is a little bit of groundwork required to set up your ESXi 5.0.0 ISO image and create a VMware Update Manager baseline, but once this is done, it is quite easy to attach your new baseline to your older hosts and remediate them against it (effectively upgrading them to the new baseline). To go through this process, you will of course need a VUM server. Update Manager comes with all editions of vSphere 4 and 5, so if you don’t already have it up and running, you should seriously consider deploying it, as it will save you a lot of time on otherwise routine, time-consuming tasks when upgrading/updating hosts, VMs, and applications.

 

Start off by downloading the ESXi 5.0.0 ISO image to your vSphere Client machine if you don’t already have it. Once it has downloaded, go to the vSphere Client home screen and choose VMware Update Manager.

 

This will load the VUM interface and provide you with everything you need to work with Update Manager. Select the “ESXi Images” tab along the top, then click the link called “Import ESXi Image…” In the wizard that appears, browse for your recently downloaded ESXi 5.0.0 ISO image and select it. Follow the wizard through to upload the ISO to your VUM server. You may receive an SSL security warning, which you will need to accept to continue. All going well, you should reach an “Upload Successful” point and see the details of your ISO, similar to the screenshot below.

 

Moving to the next step, we’ll now create a baseline from this image. Tick the box for “Create a baseline using the ESXi image” and give it a meaningful name and an optional description. Finish the wizard once you have named the baseline, and you’ll have a shiny new baseline to remediate your hosts against.

 

Move along to the “Baselines and Groups” tab in the main VUM area, and verify that your new baseline is showing and that the details look correct. It should show up as a “Host Upgrade” under the Component column.

 

For the next step, we’ll be looking to attach this baseline to the older ESX(i) hosts that need to be upgraded. To attach to all your hosts in the same cluster all at once, go to your Hosts & Clusters Inventory view,  select your Cluster name, then go to the “Update Manager” tab near the end. From here, click on the “Attach” link as seen below:

 

Now you’ll want to attach the ESXi 5 upgrade baseline we created in the previous steps to the selected cluster. Simply tick the box next to your “Host Upgrade” baseline name, then click “Attach”.

 

Now that your baseline has been attached, it will apply to all the ESX(i) hosts in the cluster. From the same Update Manager screen, click the “Scan” link to initiate a compliance scan. In other words, we’ll be looking to find out which hosts are not in compliance with our new ESXi 5.0.0 image, from which point we can move on to remediating (upgrading). Select the tickbox for “Upgrades“, then click “Scan” to continue. Once this is done, you should notice that all the hosts which have your baseline attached will show as Non-Compliant (unless you have manually upgraded any of these).
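
Incidentally, if you have the Update Manager PowerCLI plugin installed, the attach and scan steps can also be scripted. Here is a rough sketch, with hypothetical baseline and cluster names:

# Requires the VMware Update Manager PowerCLI plugin.
$baseline = Get-Baseline -Name "ESXi 5.0.0 Upgrade"
$cluster = Get-Cluster -Name "Lab-Cluster"

# Attach the upgrade baseline to the cluster and run a compliance scan.
Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory -Entity $cluster -UpdateType HostUpgrade

# Check which hosts are not yet compliant with the baseline.
Get-Compliance -Entity $cluster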

 

If you are ready to begin the upgrade process for your hosts, click on the yellow “Remediate” button on this screen.

 

You’ll now be taken to the Remediation Wizard / Selection screen. Tick (Select) which hosts you would like to remediate (upgrade) and ensure the correct Host Upgrade Baseline is selected, then continue on to the next step.

 

There are a few options in this wizard that you will need to configure based on your environment. Preferences such as host and cluster remediation options can be set here, controlling how your cluster and hosts should handle the remediation tasks – for example, the power state to put your VMs into, or which cluster features to keep enabled or disabled during remediation. You can also define a schedule for the remediation in this wizard if you wish. Here is a quick rundown of the steps I went through, along with a couple of screenshots of the options I chose for my lab upgrade.

 

1. Selection screen

2. Read and Accept the EULA

3. Choose whether or not to remove installed third-party software that is incompatible with the upgrade.

4. Optionally, set up task name and description for the upgrade remediation task. Select Immediately, or specify a time to begin the task.

5. Choose how you would like VMs to be handled during the remediation. Note that these options also apply to hosts in clusters. I left mine at “Do not change VM power state”, as I have DRS set to fully automated; VMs running on the host will therefore be vMotioned off when it enters maintenance mode, and I won’t need to worry about moving them myself.

6. Select how the cluster should be handled during remediation. For example you can disable DPM during remediation. It will be re-enabled after the task is complete.

7. Check the summary and finish the wizard. (A PowerCLI equivalent for kicking off the remediation is sketched below.)
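
Following on from the PowerCLI sketch earlier, the remediation itself can also be started from the command line – again a rough sketch, using the same hypothetical names:

# Remediate (upgrade) all hosts in the cluster against the attached baseline.
# Hosts will be placed into maintenance mode in turn, as with the GUI method.
Remediate-Inventory -Entity $cluster -Baseline $baseline -Confirm:$false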

 

Host remediation options

 

Cluster remediation options

 

Checking through the summary and finishing off the wizard

 

Once you finish the wizard, your selected hosts will begin the upgrade (remediation) process. Depending on the options you chose, your VMs should automatically be vMotioned off the host currently being upgraded as it enters maintenance mode. If you’d like to follow along with one of the hosts, have a look at its server console to watch the progress. Working with a cluster of hosts, you should be able to use this method to upgrade hosts systematically, in an automated fashion. The obvious benefit of this method is the time saved, but done correctly it also means zero downtime: as each host is upgraded to ESXi 5.0.0, VMs can vMotion back to the upgraded hosts, letting you shift the VM workload around hosts that need to go down for maintenance. Lastly, don’t forget to make sure your licensing is sorted out after the upgrades are complete.

 

In the next part, we’ll take a look at other tasks involved with an upgrade to vSphere 5.0.

 

More in this series:

 

Part 1 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 1 – vCenter Server

Part 2 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 2 – Manually upgrading ESX(i) hosts