Task/Event Storage vMotion (svMotion) differences between Datastores and Datastore Clusters

 

An interesting thread appeared on the VMware Community forums today under the PowerCLI section, where someone asked how to find all the svMotions for the past month. LucD answered the post with a solution to get the events listed out for svMotions. I tried this out in my own environment (first of all, I datastore migrated (svMotioned) a couple of VMs to make sure there would be some events to find). However, I didn't get any results back. At first I thought this might be because the event names had changed between vSphere 4 and vSphere 5.

 

After looking into the last 5 events that had run by using "Get-VIEvent -MaxSamples 5" and inspecting the .Info section for the svMotions, I found that their Info.DescriptionId values were showing as "StorageResourceManager.applyRecommendation" rather than "VirtualMachine.relocate" as LucD's script had indicated they should be. After digging some more this evening, I realised that the difference was probably down to the fact that I was svMotioning VMs between Datastore Clusters, rather than between normal Datastores (not part of a DS Cluster). I tried svMotioning a VM to a normal Datastore, and lo and behold, the event's Info.DescriptionId now appeared as "VirtualMachine.relocate"!
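If you want to poke at the recent events yourself to see which DescriptionId they carry, a quick sketch along the lines of what I ran (hedged, as your event types and property layout may differ slightly by version) is:

# List the last 5 events with their object type and DescriptionId (where one exists)
Get-VIEvent -MaxSamples 5 | Select-Object CreatedTime, @{N="EventType";E={$_.GetType().Name}}, @{N="DescriptionId";E={$_.Info.DescriptionId}}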

 

So the difference is loosely related to the vSphere version (in that you only get SDRS / Datastore Clusters in vSphere 5). More specifically, the task/event difference comes down to whether you use Storage vMotion to move a VM between two Datastores or between two Datastore Clusters.

 

  • Migrating VMs between two Datastore Clusters, you see the event "Apply Storage DRS recommendations".
  • Migrating VMs between two normal Datastores, or from a Datastore Cluster to a normal Datastore (or vice-versa), you see the event "Relocate virtual machine".

 

Here are some screenshots to illustrate what I mean:

 

Recent Tasks - differences between the two svMotion scenarios

 

PowerCLI - listing both svMotion scenario event types

 

Here is the PowerCLI script used to list both types of events if you are using both (or just one of) the Datastore types – i.e. Datastore Clusters or normal Datastores. It is based on LucD's script in the first post of the thread on VMware Communities, but modified to also pick up the "between Datastore Clusters" type of svMotion too.

 

Get-VIEvent -MaxSamples ([int]::MaxValue) -Start (Get-Date).AddMonths(-1) | Where { $_.GetType().Name -eq "TaskEvent" -and ($_.Info.DescriptionId -eq "VirtualMachine.relocate" -or $_.Info.DescriptionId -eq "StorageResourceManager.applyRecommendation") } | Select CreatedTime, UserName, FullFormattedMessage | Out-GridView

 

Can’t find/create new VMFS datastore after removing old one using Add Storage Wizard

 

Yesterday I was doing some out-of-hours work for a client who had a VM on a VMFS 3 datastore that only had a 1MB block size, but needed a data disk larger than 256GB (the maximum file size with a 1MB block size). Naturally, the datastore needed to be recreated with a larger block size, as it is a vSphere 4 environment running VMFS 3. I did a Storage vMotion to move the data off the datastore, then removed the datastore using one of the ESX hosts. I checked the other ESX hosts and noticed the datastore was still showing on them too; refreshing the storage on each solved this and the datastore disappeared as expected.
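As an aside, the svMotion step itself can also be scripted. A minimal PowerCLI sketch, assuming hypothetical datastore names (substitute your own), would look something like this:

# Storage vMotion all VMs off the old datastore onto the replacement datastore
# ("OldDatastore01" and "NewDatastore01" are hypothetical names)
Get-VM -Datastore "OldDatastore01" | Move-VM -Datastore (Get-Datastore "NewDatastore01")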

 

I then went to re-create the datastore, happy in the knowledge that our SAN would still be presenting the LUN to our ESX hosts. When I arrived at the Add New Disk/LUN wizard, I saw that the LUN was, in fact, not showing. Here is the troubleshooting process I used to sort this issue out. It turned out to be two old ESX hosts, decommissioned a short while back, that were still "hanging on" to this datastore.

 

  • Checked each live ESX host for the Datastore and made sure it was removed
  • Rescanned each HBA on the ESX hosts for new storage etc…
  • Tried adding datastore again (still couldn’t find the storage)
  • Logged onto FC SAN and ensured the LUN was still being presented to all ESX hosts in the cluster
  • Used vCenter “Maps” feature to select “VM to Datastore” relationship to ensure no VMs were mapping to this datastore or trying to refer to it anymore (like CDs mounted etc).
  • Under Maps, I noticed the datastore in question was still visible at the very bottom, but nothing related to it – i.e. there were no branches to anything.
  • Went to the Datastores view in vCenter, and saw the datastore still listed there, but with "(inactive)" next to its name.
  • Highlighted the datastore and looked at its summary (see screenshot below for an example) and saw 2 x ESX hosts still showing as connected to it. I then realised that these would be the old, decommissioned ESX hosts still in the inventory, "holding on" to the datastore.
  • I removed these old hosts from the vCenter inventory, and the datastore was then gone. I was able to re-create it, and the presented LUN appeared in the Add Storage wizard as you would normally expect.
 
datastore showing as (inactive) even after removing it from all connected ESX hosts

 

2 x ESX/ESXi hosts were still connected to the old, removed storage, preventing it from being re-created (the example screenshot here shows a datastore with 5 hosts connected, for reference)

 

So if you remove a datastore and, when you arrive at the Add Storage wizard, the LUN is not presented back to you as an option to create a new datastore with, try the above troubleshooting steps to ensure nothing is still referencing the old, deleted datastore. Once that is sorted out, you should get the option back again.
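If you would rather check from PowerCLI which hosts still have the phantom datastore in their view (instead of clicking through each host's summary), a rough sketch like the following should work – "OldDatastore01" is a hypothetical name, substitute your own:

# List the ESX/ESXi hosts that a given (possibly inactive) datastore is still known to
$ds = Get-Datastore "OldDatastore01"
$ds.ExtensionData.Host | ForEach-Object { (Get-View -Id $_.Key).Name }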

 

vSphere 5 & HA Heartbeat Datastores

 

I was busy updating my vSphere lab from 4.1 to 5 and ran into a warning on the first ESXi host I updated to ESXi 5.0. It read: “The number of vSphere HA heartbeat datastores for this host is 1, which is less than required: 2”. The message itself is fairly self-explanatory, but prompted me to find out more about this as I immediately knew it must be related to new functionality.

 

The Configuration Issue message

 

Pre-vSphere 5.0, if a host failed, or was just isolated on its Management Network, HA would restart the VMs that were running on that host and bring them up elsewhere (I have actually seen this happen in our ESX 4.0 environment!). With vSphere 5.0, HA has been overhauled, and I believe this new Datastore Heartbeat feature is part of making HA more intelligent and able to make better decisions when the Master HA host is isolated or split off from the other hosts. The Datastore Heartbeat feature should help significantly with HA-initiated restarts, allowing HA to more accurately tell the difference between a host that has failed and one that has simply been split off from the others, for example.

 

vCenter will automatically choose two Datastores to use for the Datastore Heartbeat functionality. You can see which have been selected by clicking on your cluster in the vSphere Client and choosing "Cluster Status", then selecting the "Heartbeat Datastores" tab.

 

Cluster Status - viewing the elected HA Heartbeat Datastores
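If you prefer PowerCLI to the client for this, the cluster's managed object should expose the same information via the RetrieveDasAdvancedRuntimeInfo() method. This is a hedged sketch – the cluster name is mine, and the property names are as I understand them from the vSphere 5 API:

# Query a cluster for its currently elected HA heartbeat datastores
$clusterView = Get-Cluster "Cluster 1 - M" | Get-View
$dasInfo = $clusterView.RetrieveDasAdvancedRuntimeInfo()
$dasInfo.HeartbeatDatastoreInfo | ForEach-Object { (Get-View -Id $_.Datastore).Name }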

 

Without going into too much detail, this mechanism works with file locks on the datastores elected for this purpose. HA is able to determine whether a host has failed, or is just isolated or split off on the network, by looking at whether these files are still being updated. After my lab upgrade I noticed a new folder on some of my datastores and wondered at first what these new files were doing there! If you take a look at the contents of the datastores shown on your Heartbeat Datastores tab, you should see the files that HA keeps a lock on for this functionality to work.

 

Files created on HA Heartbeat Datastores for the new functionality

 

So, if you notice this configuration issue message, chances are your ESXi 5 host simply doesn't have enough Datastores – this is likely to be quite common in lab environments, as traditionally we don't tend to add many (well, at least I don't!). In my case this was a test host used for the 4.1 to 5 update, and I only had one shared datastore added. After adding my other two datastores from my FreeNAS box and an HP iSCSI VSA, then selecting "Reconfigure for HA" on my ESXi host, the message disappeared as expected.

I believe there are also some advanced settings you could add to change the number of datastores required for this feature, but I have not looked into these in detail yet. Generally, it is best to stick with VMware defaults (or so I say), as they will have been thought out carefully by the engineers, and changing advanced settings is often not supported by VMware either. However, if you are short on Datastores to add and just want to get rid of the warning in a lab environment, then changing this shouldn't be a problem.
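For reference, the advanced options I have seen mentioned for this are das.ignoreInsufficientHbDatastore (suppresses the warning) and das.heartbeatDsPerHost (changes how many heartbeat datastores HA uses). I have not tested these myself, so treat the following as a hedged sketch only, and double-check the documentation for your version before using it outside a lab:

# Hedged sketch: suppress the "number of heartbeat datastores" warning on a lab cluster
# (you may need to run "Reconfigure for HA" on the hosts afterwards for it to be picked up)
New-AdvancedSetting -Entity (Get-Cluster "Cluster 1 - M") -Type ClusterHA -Name "das.ignoreInsufficientHbDatastore" -Value "true" -Confirm:$false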

PowerCLI – Automate adding NFS Datastores to a cluster of ESX or ESXi hosts

 

The other day I needed to add three NFS datastores to a bunch of vSphere ESX hosts in a specific cluster. Rather than go through each host in vCenter individually, adding the datastores using the Add Storage wizard, I thought I would script the process in PowerCLI and get it done in a more automated fashion. I had about 7 ESX hosts to add the Datastores to, so doing this manually would have taken twice the time it took me to whip up this script and run it. Plus, the script can be reused in the future for other Datastores or other clusters by simply modifying and re-running it.

 

Here is the script:

 

# PowerCLI script to add NFS datastores to ESX/ESXi hosts in a specified Cluster
# Only does this to hosts marked as "Connected"
$hostsincluster = Get-Cluster "Cluster 1 - M" | Get-VMHost -State "Connected"
ForEach ($vmhost in $hostsincluster)
{
    ""
    "Adding NFS Datastores to ESX host: $vmhost"
    "-----------------"
    "1st - MER001 - NAS-SATA-RAID6 (Veeam Backups)"
    New-Datastore -VMHost $vmhost -Name "MER001 - NAS-SATA-RAID6 (Veeam Backups)" -Nfs -NfsHost 10.1.35.1 -Path /share/VeeamBackup01
    "2nd - MER002 - NAS-SATA-RAID6 (ISOs)"
    New-Datastore -VMHost $vmhost -Name "MER002 - NAS-SATA-RAID6 (ISOs)" -Nfs -NfsHost 10.1.35.1 -Path /share/Images01
    "3rd - MER003 - NAS-SATA-RAID6 (XenStore01)"
    New-Datastore -VMHost $vmhost -Name "MER003 - NAS-SATA-RAID6 (XenStore01)" -Nfs -NfsHost 10.1.35.1 -Path /share/XenStore01
}
"All Done. Check to ensure no errors were reported above."

 

So the script above looks for ESX or ESXi hosts in the specified cluster that are in a "Connected" state – i.e. they are not disconnected in vCenter (we wouldn't want to try to add Datastores to hosts that don't exist!). We use the Get-Cluster cmdlet to say we are only concerned with hosts in this particular cluster (specified by the "Cluster 1 – M" name in my case – obviously change this to the name of the cluster you will be working with). We then use Get-VMHost -State "Connected" to list all of the hosts in this cluster that are in a connected state. In my example I had 2 x ESX hosts in a disconnected state that I didn't want to include, so this part worked nicely. This list of hosts is then assigned to the $hostsincluster variable. We then use the ForEach loop to iterate over this list of hosts and run the part between the curly brackets for each one.

 

In my case you may notice that I am adding Datastores from the same NFS (NAS) server. They are just being mounted from different paths on the server and given different names. I had three Datastores to add, so I use the New-Datastore cmdlet three times for each host. You will need to adjust this to your needs – maybe you just need to add one datastore to each host, in which case remove the two extra New-Datastore lines. Also remember to adjust the -NfsHost and -Path values to suit your own environment.

 

We could improve on the above script by making it more customisable for future use, or for others to use. Let's give that a quick go and use variables to define everything at the top of the script. This means the variables can be changed in one place without having to read through the whole script to find what to change. We'll also add a Connect-VIServer cmdlet in there, in case you have not already connected to your vCenter server and authenticated in the PowerCLI session that is running the script.

 

# PowerCLI script to add NFS datastores to ESX/ESXi hosts in a specified Cluster
# Only does this to hosts marked as "Connected"

# Define our settings
$vcserver = "vcenter01"
$clustername = "Cluster 1 - M"
$nfshost = "10.1.35.1"
$nfspath1 = "/share/VeeamBackup01"
$nfspath2 = "/share/Images01"
$nfspath3 = "/share/XenStore01"

# Connect to vCenter server
Connect-VIServer $vcserver

# Do the work
$hostsincluster = Get-Cluster $clustername | Get-VMHost -State "Connected"
ForEach ($vmhost in $hostsincluster)
{
    ""
    "Adding NFS Datastores to ESX host: $vmhost"
    "-----------------"
    "1st - MER001 - NAS-SATA-RAID6 (Veeam Backups)"
    New-Datastore -VMHost $vmhost -Name "MER001 - NAS-SATA-RAID6 (Veeam Backups)" -Nfs -NfsHost $nfshost -Path $nfspath1
    "2nd - MER002 - NAS-SATA-RAID6 (ISOs)"
    New-Datastore -VMHost $vmhost -Name "MER002 - NAS-SATA-RAID6 (ISOs)" -Nfs -NfsHost $nfshost -Path $nfspath2
    "3rd - MER003 - NAS-SATA-RAID6 (XenStore01)"
    New-Datastore -VMHost $vmhost -Name "MER003 - NAS-SATA-RAID6 (XenStore01)" -Nfs -NfsHost $nfshost -Path $nfspath3
}
"All Done. Check to ensure no errors were reported above."

So as you can see, we have now defined the name of the cluster, our NAS/NFS server and three paths to different NFS shares at the top of the script, then just referenced these variables later on. This means we can easily adjust the variables at the top in the future to work with different clusters, NAS/NFS servers or paths.

The output of the final script when run should also give you a nice view of what has happened: it will report that it is adding the NFS Datastores to each host it iterates through, and if it comes across any errors those will be shown in red as PowerShell / PowerCLI normally would, allowing you to amend or update any details as necessary. PS: don't forget to change the name of each Datastore in the script to something of your own choice (it is the part after the -Name parameter in each New-Datastore line).
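One small improvement worth considering (a hedged sketch, not part of my original script): if a host already has one of these datastores mounted, the New-Datastore call for it will throw an error. Wrapping each call in a try/catch keeps the output tidy and makes it obvious which hosts were skipped, for example:

# Hedged sketch: add a datastore, but report a friendly message instead of a red error
# if the host already has it (or the call fails for any other reason)
try {
    New-Datastore -VMHost $vmhost -Name "MER001 - NAS-SATA-RAID6 (Veeam Backups)" -Nfs -NfsHost $nfshost -Path $nfspath1 -ErrorAction Stop | Out-Null
    "  Added MER001 on $vmhost"
}
catch {
    "  Skipped MER001 on $vmhost - $($_.Exception.Message)"
}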

 
