Free Virtualization Icon set

For a recent personal project I have been working on (vMetrics for WordPress), I needed some virtualization-related icons. I had a quick look around but couldn't find many with no strings attached, so I decided to create my own set. These are all original and I have created them myself. You will of course recognise some of the designs from the vSphere Client; I used those as inspiration and re-created them from scratch.

Feel free to use these in your own projects, charts, or presentations. All that I ask is that you drop me a comment below to let me know if they were useful or not 🙂

 

[download id="18"]

 

Get Virtual Machine Inventory from a Hyper-V Failover Cluster using PowerShell

A colleague was asking around the other day for a PowerShell script that would fetch some inventory data for VMs on a Hyper-V cluster. Not knowing too much about Hyper-V, and having only ever briefly looked at what was out there in terms of PowerShell cmdlets for managing it, I decided to dive in tonight after I got home.

 

Here is a function that will fetch inventory data for all VMs in a specified Failover Cluster. This is what it fetches:

  • VM Name
  • VM CPU Count
  • VM CPU Socket Count
  • VM Memory configuration
  • VM State (Up or Down)
  • Cluster Name the VM resides on
  • Hyper-V Host name the VM resides on
  • Network Virtual Switch Name
  • NIC Mac Address
  • Total VHD file size in MB
  • Total VHD Count

 

Being a function, you can pipe in the name of the cluster you want, for example Get-Cluster | Get-HyperVInventory, or you could run Get-HyperVInventory -ClusterName "ExampleClusterName". You could also send the output to an HTML report by piping it to ConvertTo-Html | Out-File example.html.

Download here, or copy it out from the script block below:
[download id="15"]
 

# Requires: Imported HyperV PowerShell module (http://pshyperv.codeplex.com/releases/view/62842)
# Requires: Import-Module FailoverClusters
# Requires: Running PowerShell as Administrator in order to properly import the above modules

function Get-HyperVInventory {
<#
.SYNOPSIS
Fetches Hyper-V VM inventory from a specified Hyper-V Failover Cluster

.DESCRIPTION
Fetches Hyper-V VM inventory from a specified Hyper-V Failover Cluster

.PARAMETER ClusterName
The name of the Hyper-V Failover Cluster to inspect

.EXAMPLE
PS F:\> Get-HyperVInventory -ClusterName "dev-cluster1"

.EXAMPLE
PS F:\> Get-Cluster | Get-HyperVInventory

.LINK
http://www.shogan.co.uk

.NOTES
Created by: Sean Duffy
Date: 09/07/2012
#>

    [CmdletBinding()]
    param(
        [Parameter(Position=0,Mandatory=$true,HelpMessage="Name of the Cluster to fetch inventory from",
        ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
        [System.String]
        $ClusterName
    )

    process {

        $Report = @()

        $Cluster = Get-Cluster -Name $ClusterName
        $HVHosts = $Cluster | Get-ClusterNode

        foreach ($HVHost in $HVHosts) {
            $VMs = Get-VM -Server $HVHost.Name
            foreach ($VM in $VMs) {
                [long]$TotalVHDSize = 0
                $VHDCount = 0
                $VMName = $VM.VMElementName
                $VMMemory = $VM | Get-VMMemory
                $CPUCount = $VM | Get-VMCPUCount
                # A single Get-VMNIC call provides both the switch name and the MAC address
                $VMNIC = $VM | Get-VMNIC
                # VM disk info - only count hard disk images, not DVD/floppy drive images
                $VHDDisks = $VM | Get-VMDisk | Where-Object { $_.DiskName -like "Hard Disk Image" }
                foreach ($disk in $VHDDisks) {
                    $VHDInfo = Get-VHDInfo -VHDPaths $disk.DiskImage
                    $TotalVHDSize = $TotalVHDSize + $VHDInfo.FileSize
                    $VHDCount += 1
                }
                # Convert the running total from bytes to MB
                $TotalVHDSize = $TotalVHDSize/1024/1024
                $row = New-Object -Type PSObject -Property @{
                    Cluster = $Cluster.Name
                    VMName = $VMName
                    VMMemory = $VMMemory.VirtualQuantity
                    CPUCount = $CPUCount.VirtualQuantity
                    CPUSocketCount = $CPUCount.SocketCount
                    NetSwitch = $VMNIC.SwitchName
                    NetMACAdd = $VMNIC.Address
                    HostName = $HVHost.Name
                    VMState = $HVHost.State # Up/Down state of the owning cluster node
                    TotalVMDiskSizeMB = $TotalVHDSize
                    TotalVMDiskCount = $VHDCount
                } ## end New-Object
                $Report += $row
            }
        }
        return $Report
    }
}

 

Example use cases – load the function into your PowerShell session, or place it in your $profile for easy access in future, and run the following:

# Example 1
Get-HyperVInventory -ClusterName "mycluster1"
# Example 2
Get-Cluster | Get-HyperVInventory
# Example 3
Get-HyperVInventory -ClusterName "mycluster1" | ConvertTo-HTML | Out-File C:\Report.html
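
Along the same lines, the output pipes cleanly to CSV if you would rather work with the data in Excel. This example is not part of the original script, but uses only standard cmdlets:

# Example 4 (additional): export the inventory to a CSV file
Get-HyperVInventory -ClusterName "mycluster1" | Export-Csv C:\Report.csv -NoTypeInformation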

 

The function includes help text and examples, so you can also issue the normal "Get-Help Get-HyperVInventory" or "Get-Help Get-HyperVInventory -Examples". It is by no means perfect and could do with some improvements; for example, if there is more than one virtual switch network associated with a VM, these are all listed together in the row for that VM. Feel free to suggest any improvements or changes in the comments.
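
One way to tidy that up would be to join the NIC values into single strings before building the report row. Here is a quick, untested sketch against the same PSHyperV cmdlets used in the function above:

# Collect all NICs for the VM, then join their values into comma-separated strings
$VMNICs = @($VM | Get-VMNIC)
$SwitchNames = ($VMNICs | ForEach-Object { $_.SwitchName }) -join ", "
$MacAddresses = ($VMNICs | ForEach-Object { $_.Address }) -join ", "

# These combined values would then replace the per-NIC properties in the report row:
# NetSwitch = $SwitchNames
# NetMACAdd = $MacAddresses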

 

vSphere Home Lab / whitebox builds – 16GB RAM in a HP N40L Microserver

 

I recently purchased an HP N40L Microserver for my home vSphere lab, with the intention of buying a second unit to create a small vSphere cluster for lab work. This would take me away from having to use nested virtual ESXi hosts. You can currently get great deals on this hardware, with HP offering £100 cashback on the purchase cost. I ended up paying around £260 for my HP Microserver; after the £100 cashback it only cost about £160.

 

Great deal - £100 cashback on the HP N40L Microserver

 

For this price, the Microserver makes great hardware for a home lab cluster build. The one thing that has always been a downer, however, is that all spec sheets and official documentation from HP list the maximum supported RAM for the Microserver as 8GB. This doesn't leave much room for VMs to run per host.

 

Today I received an interesting e-mail in my inbox from Serversplus.com. They claim to have tested running 16GB of Crucial ECC DDR3 unbuffered RAM (2 x 8GB modules) in the HP N40L Microserver! If true (and I am sure it is, as they are now selling bundles with 16GB RAM), this is great news for those of us looking to build home labs on the cheap. Sure, 8GB modules are much more expensive than 4GB modules at the moment, but we now know that the N40L Microserver is not limited to 8GB; it will take 16GB. As soon as I can afford two 8GB modules, I'll be upgrading my current Microserver to 16GB. If this works, I'll definitely be purchasing a second unit.
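
Once the new RAM is in, it is worth confirming that ESXi actually sees the full 16GB. A quick check from PowerCLI would look something like this (the hostname below is hypothetical):

# Confirm total host memory as seen by ESXi (value reported in MB)
Connect-VIServer -Server esxi01.lab.local
Get-VMHost | Select-Object Name, MemoryTotalMB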

 

Here is a screencap of the e-mail I got from Serversplus.com:

If you are UK based, you can grab the full bundle from Serversplus.com.

 

Live Migrating a VM on a Hyper-V Failover Cluster fails – Processor-specific features not supported

 

I have been working on setting up a small cluster of Hyper-V hosts (running as VMs), nested under a bunch of physical VMware ESXi 5.0 hosts. Bear in mind that I am quite new to Hyper-V; I have only ever really played with single-host setups in the past. Having just finished creating a Hyper-V failover cluster in this nested environment, and configuring CSV (Cluster Shared Volume) storage for the Hyper-V hosts, I created a single VM to test the "live migrate" feature of Hyper-V. Upon telling the VM to live migrate from host "A" to host "B", I got the following error message.

"There was an error checking for virtual machine compatibility on the target node." The description reads: "The virtual machine is using processor-specific features not supported on physical computer 'DEVHYP02E'."

 

So my first thought was: perhaps there is a way to mask processor features, similar to the way VMware's EVC handles host CPU compatibility? Reading the rest of the error message, it does seem to indicate that there is a way of modifying the VM to limit the processor features it uses.

 

So the solution in this case is to:

  • First of all power down your VM
  • Using Hyper-V Manager, right-click the VM and select “Settings”
  • Go to the "Processor" section and tick the "Migrate to a physical computer with a different processor version" option under "Processor compatibility"
  • Apply settings
  • Power up the VM again

 

Processor compatibility settings - greyed out here as I took the screenshot after powering the VM up again.

 

So now if you try and live migrate to another compatible Hyper-V host, the migration should work.
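
If you would rather script the change, the same setting appears to be exposed in WMI as the LimitProcessorFeatures property on the VM's Msvm_ProcessorSettingData instance (root\virtualization namespace on Server 2008 R2 era hosts). The following is a rough, untested sketch rather than a tested procedure; the VM name is hypothetical, and as with the GUI method the VM should be powered off first:

# Sketch: enable processor compatibility mode via WMI
$vmName = "TestVM" # hypothetical VM name, substitute your own

# Locate the VM and its associated processor setting data
$vm = Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem |
    Where-Object { $_.ElementName -eq $vmName }
$procSettings = Get-WmiObject -Namespace root\virtualization -Class Msvm_ProcessorSettingData |
    Where-Object { $_.InstanceID -like "*$($vm.Name)*" }

# Flip the processor compatibility flag and apply it via the management service
$procSettings.LimitProcessorFeatures = $true
$vsms = Get-WmiObject -Namespace root\virtualization -Class Msvm_VirtualSystemManagementService
$vsms.ModifyVirtualSystemResources($vm.__PATH, @($procSettings.GetText(1)))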

 

HP P2000 G3 FC MSA – troubleshooting a faulty Controller (blinking Fault/Service Required LED)

Setting up a new HP P2000 G3 FC MSA with dual controllers over the last couple of days for a small staging environment, I ran into issues from the word go. The device in question was loaded with 24 SFF disks and two controllers (Controller A and Controller B).

 

On the very first boot we noticed a fault (amber) LED on the front panel. Inspecting the back of the unit, I noticed that Controllers A and B were both still flashing their green "FRU OK" LEDs (which, according to the manual, means the controllers were still booting up), even after waiting a number of hours. On Controller A, I could see a blinking amber "Fault/Service Required" LED. Following the troubleshooting steps in the manuals led nowhere, as the end synopsis was simply to check the event logs. Even the web interface was acting up: I could not see the controllers listed, could not see any disks, and the event logs were completely empty. Obviously there was a larger issue at hand preventing the MSA, and even the web interface, from functioning properly. To further confuse matters, after shutting down and restarting the device, Controller B started blinking the amber LED instead of A this time, with both controllers still stuck in their "booting up" state. Refer to the linked LED diagram below and you'll see that the LED flashing green is labelled as 6, and the amber blinking LED is the one labelled as 7, on the top controller in the diagram.

LED Diagram

HP Official documentation

After powering the unit down completely and then powering back up again, the MSA was still stuck in the same state. Powering down the unit once more, then removing and reseating both controllers, did not help either. Lastly, I powered it all off again, removed Controller A completely, then powered up the device with just Controller B installed. Surprisingly, the MSA booted up perfectly, and LED number 6 (FRU OK) went a nice solid green after a minute or so. No amber LEDs were to be seen. Good news then! Hot-plugging Controller A back in at this stage, with the device powered on, resulted in both controllers reporting a healthy status and all the disks and hardware being detected. A final test was done by powering everything off and then back up again from a cold start, as it would normally run. Everything worked this time.

 

Here is a photo of the rear of the device once all was resolved, showing the solid green FRU OK LEDs on both controllers.

Bit of an odd one, but it would seem that the two controllers were preventing each other from starting up when booted together. Removing one, booting up, and then hot-adding the second seemed to solve the problem, and at the end of the day all the hardware was indeed healthy. After this, the 24 disks were assigned and carved up into some vdisks to be presented to our ESXi hosts!