vCenter Server Appliance VM fails to boot with fsck failed message

November 5th, 2015


I had a storage outage to deal with recently, and after the datastores on this storage were taken down, a vCenter Server Appliance VM living on that storage ended up with some corrupted files and would not boot. On startup, I was greeted with the following error message:

fsck failed.  Please repair manually and reboot.  The root
file system is currently mounted read-only.  To remount it
read-write do

After trying the mount command from the maintenance mode bash shell, I restarted and found that the appliance still did not boot properly. A thread on the VMware community forums described the same issue, where running e2fsck fixed the disk errors. I tried this and it repaired a whole heap of errors on /dev/sda3, but on restart I noticed more issues on /dev/sdb2, so I ran e2fsck again against that device and was finally able to reboot the appliance successfully. The commands I ran to resolve the issue were essentially:

  • mount -n -o remount,rw /
  • e2fsck -y /dev/sda3
  • e2fsck -y /dev/sdb2
  • CTRL-D to reboot after fixing the errors using e2fsck



VMware T10 compliant VAAI integration and HP P2000 G3 FC storage

November 5th, 2015


I’ve recently been updating firmware on some development and testing storage and found that HP P2000 G3 FC array firmware versions TS251R004 and above enable T10 compliance for the hardware.


To quote VMware’s documentation on their VAAI implementation specific to T10 compliance:

The second required component can be referred to as a VAAI plug-in specific to the VAAI filter. It implements vendor-specific VAAI functions such as ATS, XCOPY and WRITE_SAME. There were different implementations of the VAAI block primitives in vSphere 4.1, but all of the primitives in vSphere 5.0 have been ratified by T10, so any array that is T10 compliant should be able to use VAAI.


This means that you no longer need to be running the HP P2000 VAAI plugin software directly on ESXi hosts. In fact, HP recommend you uninstall and remove the plugin software before you upgrade the firmware on these arrays, otherwise you could suffer from performance degradation and possible loss of access to datastores.

My process was first of all to log in to each host and check for the presence of the VAAI plugin.

  • SSH into the host as root and run find / -name hp_vaaip_p2000
  • Ensure that nothing comes up with the find command. If it does (you see something like this output: /usr/lib/vmware/vmkmod/hp_vaaip_p2000), then you should use the relevant HP document to ensure it is removed correctly – this involves some setting changes and removing claim rules, as well as removal of the HP P2000 VAAI VIB itself.
  • After verifying nothing came up, check the other hosts, and once you are happy that all hosts are clear of the plugin, upgrade the firmware on the P2000 system.
  • Ideally reboot the ESXi hosts after the firmware update and ensure access to the datastores is still there. Check the hardware acceleration status of the datastores – they should show up as “Supported” (see the PowerCLI sketch below).
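
If you would rather run these checks from PowerCLI than from an SSH session on each host, something along the following lines should work. This is only a sketch: the VIB name filter (*vaaip*), and using the vStorageSupport value from each host's VMFS mount info to mirror the client's Hardware Acceleration column, are my assumptions, so adjust for your environment and PowerCLI version (older versions use the V1 Get-EsxCli syntax rather than -V2 and Invoke()).

# Sketch: check each host for the HP VAAI VIB and report VMFS hardware acceleration status.
# Assumes an existing Connect-VIServer session; the "*vaaip*" name filter is an assumption.
foreach ($esx in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $esx -V2
    $hpVib = $esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -like "*vaaip*" }
    if ($hpVib) {
        Write-Host "$($esx.Name): HP VAAI VIB still installed - remove it before the firmware upgrade" -ForegroundColor Red
    }

    # Hardware acceleration status per VMFS datastore mount on this host
    $esx.ExtensionData.Config.FileSystemVolume.MountInfo |
        Where-Object { $_.Volume.Type -eq "VMFS" } |
        Select-Object @{N = "Host"; E = { $esx.Name } },
                      @{N = "Datastore"; E = { $_.Volume.Name } },
                      @{N = "HardwareAcceleration"; E = { $_.VStorageSupport } }
}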

VMware vSphere community (free) health check options

October 14th, 2015

I recently did a presentation at the South West VMUG on VMware vSphere community (free) health check options. In my presentation I covered some of the options available out there at the moment such as:

  • vCheck (vSphere plugins)
  • vGhetto health check script
  • Miscellaneous PowerCLI / PowerShell options
[Image: Starting off the session in style – Intro in mspaint, running under Windows 3.1]


In the second half of my presentation I dived into a live PowerCLI and PowerShell demo, showing how to pull just about any kind of information out of your vSphere environment using the core cmdlets. I demonstrated the core PowerCLI cmdlets for retrieving VM, host and datastore information, how to use the pipeline in PowerShell, and how to inspect all the properties on any PowerShell object by piping it to the Get-Member cmdlet.
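
To give a flavour of the sort of thing I demonstrated, here are a few representative one-liners. These aren't taken from the live demo itself, and the vCenter address and VM name are made-up examples.

# Connect to vCenter first (prompts for credentials); the server name is an example
Connect-VIServer -Server vcenter.lab.local

# Core retrieval cmdlets
Get-VM                      # all virtual machines
Get-VMHost                  # all ESXi hosts
Get-Datastore               # all datastores

# Using the pipeline: powered-off VMs, sorted by name
Get-VM | Where-Object { $_.PowerState -eq "PoweredOff" } | Sort-Object Name

# Datastores with less than 20% free space
Get-Datastore | Where-Object { ($_.FreeSpaceGB / $_.CapacityGB) -lt 0.2 } |
    Select-Object Name, CapacityGB, FreeSpaceGB

# Discover every property and method available on an object
Get-VM "WEB01" | Get-Member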

After covering these basics, I took a blank vCheck plugin template, and showed how easy it is to create your own custom plugins for vCheck should you find that the existing plugins don’t cover everything you need.
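
For reference, a vCheck plugin is just a PowerShell script dropped into the Plugins folder: it sets a handful of descriptive variables and then outputs the objects to be rendered in the report. The skeleton below follows the blank template's variable names as I remember them, and the example check (snapshots older than a week) is purely illustrative, so treat it as a sketch rather than a finished plugin.

# Example vCheck plugin sketch: VMs with snapshots older than 7 days
$Title = "Old Snapshots"
$Header = "VMs with snapshots older than 7 days"
$Comments = "Old snapshots grow over time and can impact performance and datastore space"
$Display = "Table"
$Author = "Your Name"
$PluginVersion = 1.0
$PluginCategory = "vSphere"

# Start of Settings
# End of Settings

# Whatever the plugin outputs is what vCheck renders in the report
Get-VM | Get-Snapshot | Where-Object { $_.Created -lt (Get-Date).AddDays(-7) } |
    Select-Object VM, Name, Created, SizeGB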

I’ve got a link to download the slides for the presentation below, and hopefully I’ll be able to find a recording of the PowerCLI / PowerShell live demo I did to attach to this post as a follow up.



If you’re based in the South West of the UK, be sure to check out and attend the next SW VMUG meeting!


vSphere 6.0 performance metric limitations in the database (config.vpxd.stats.maxQueryMetrics)

April 15th, 2015

A change I noticed right away between vSphere 5.5 and vSphere 6.0 is the introduction of a default limiter when it comes to performing database queries for performance metrics.

When querying vCenter 6.0 for performance data, there is a system in place by default that limits the number of entities that are included in a database query. As performance charts in the vSphere Web and C# client depend on this performance data, you may sometimes see an error when attempting to view overview or advanced charts because of this change.

In my case I was using some custom code to query performance metrics via the vSphere APIs, and I noticed the issue right away as I was trying to gather a large amount of data.
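
To illustrate the kind of query that runs into the limit, here is a hedged PowerCLI example that pulls a single metric for every VM in one call; with a large enough inventory, the underlying database query covers more entities than the default limit allows. The metric name and time range are just examples.

# Pull average CPU usage for the last 24 hours across every VM in the inventory.
# With a large inventory this single call queries many entities at once.
$allVMs = Get-VM
Get-Stat -Entity $allVMs -Stat "cpu.usage.average" -Start (Get-Date).AddDays(-1) -IntervalMins 5 |
    Group-Object Entity |
    Select-Object Name, @{N = "AvgCpuPercent"; E = { [math]::Round(($_.Group | Measure-Object Value -Average).Average, 1) } }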

VMware state that the reason for the change is to protect the vCenter database from receiving intensive or large queries.

If you wish to work around this, or remove the limit, you’ll need to add a new key/value pair in the advanced settings area for your vCenter Server instance. The key should be named “config.vpxd.stats.maxQueryMetrics” (without the quotes) and given a value of -1 to disable the limit. It could also be set to a value such as 100 to limit the number of entities included in a database query to 100.
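
If you prefer to add the key with PowerCLI rather than through the client, something like the following should do it. I set mine through the client, so treat the exact invocation as an assumption and verify the result afterwards.

# Add the vCenter advanced setting that controls the query entity limit.
# Assumes an existing Connect-VIServer session to the vCenter server.
$vc = $global:DefaultVIServer
New-AdvancedSetting -Entity $vc -Name "config.vpxd.stats.maxQueryMetrics" -Value "-1" -Confirm:$false

# Verify it took effect
Get-AdvancedSetting -Entity $vc -Name "config.vpxd.stats.maxQueryMetrics"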

A further edit should also be made to the web.xml file for the web client; however, in my case I was not concerned with the limit affecting the client, as I was using the API, and making the first change alone did the trick for me.

You can read more about this setting in the official VMware KB article.

Octopus Deploy Endpoint auto configuration on Azure VM deployment

January 28th, 2015

I’ve been working on a very cool project that involves the use of Microsoft Azure, TeamCity and Octopus Deploy.

I have created an Azure PowerShell script that deploys VMs into an Azure Subscription (Web machines that run IIS) as a part of a single Azure Cloud Service with load balancing enabled. As such, the endpoint ports that I create for Octopus tentacle communication need to differ for each machine on the public interface.

I wanted to fully automate things end-to-end, so I wrote a very small console application that uses the Octopus Client library NuGet package to communicate with the Octopus Deploy server via its HTTP API.

Octopus Endpoint Configurator on GitHub

The OctopusConfigurator console application should be run inside your Azure VM once it is deployed, with four parameters specified at run time.

It will then establish communication with your Octopus Deploy server and register a new Tentacle endpoint using the details you pass it. The standard port number that gets assigned (10933) is then replaced, if necessary, with the correct endpoint port number for that particular VM instance in the cloud service. For example, I usually start the first VM in the cloud service on 10933 and increment the port number by one for each extra VM. As the deployments happen, the console application registers each new machine’s Tentacle with the Octopus master server using the incremented port number (see the sketch below).
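
To make the port handling concrete, here is a minimal sketch of how the public endpoint port could be derived and added for each VM during deployment. The variable names ($vmIndex, $vm, $serviceName) are hypothetical stand-ins for whatever your deployment script already tracks, and the cmdlets are the classic (Service Management) Azure PowerShell ones I was using at the time.

# Hypothetical sketch: give each VM in the cloud service its own public Tentacle port.
# $vmIndex is the zero-based index of the VM being deployed; $vm is the object returned by Get-AzureVM.
$baseTentaclePort = 10933
$OctopusExternalPort = $baseTentaclePort + $vmIndex

# Map the unique public port to the Tentacle's standard local port (10933) on this VM
$vm | Add-AzureEndpoint -Name "OctopusTentacle" -Protocol tcp -LocalPort 10933 -PublicPort $OctopusExternalPort |
    Update-AzureVM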

Once the Azure VM deployment is complete, I tell the VMs in the cloud service to restart with a bit of Azure PowerShell, and once this is done the Octopus environment page should show all of the newly deployed Tentacles as online for your environment. Below is an example of an Invoke-Command scriptblock that I execute remotely on my Azure VMs as soon as they have completed their initial deployment. The VM deployment script waits for Windows to boot; once the VM is ready, its WinRM details are fetched using the Get-AzureWinRMUri cmdlet, which allows me to use Invoke-Command to run the script below inside the guest VM.


Invoke-Command -ConnectionUri $connectionString -Credential $creds -ArgumentList $vmname,$externalDNSName,$creds,$InstallTentacleFunction,$OctopusExternalPort,$OctopusEnvironmentName -ScriptBlock {
    $webServerName = $args[0]
    $DNSPassthrough = $args[1]
    $passedCredentials = $args[2]
    $scriptFunction = $args[3]
    $OctoPort = $args[4]
    $OctopusEnvironmentName = $args[5]

    # Download a file to the given destination folder, creating the folder if it doesn't exist
    function DownloadFileUrl($url, $destinationPath, $fileNameToSave) {
        $fullPath = "$destinationPath\$fileNameToSave"

        if (Test-Path -Path $destinationPath) {
            Invoke-WebRequest $url -OutFile $fullPath
        }
        else {
            mkdir $destinationPath | Out-Null
            Invoke-WebRequest $url -OutFile $fullPath
        }

        Write-Host "Full path is: $fullPath"
        return [string]$fullPath
    }

    # Download the Octopus Endpoint Configurator to C:\Temp
    [string]$ConfiguratorPath = DownloadFileUrl "" "C:\Temp" ""

    # Unzip the configurator package using the Shell.Application COM object
    Write-Host "Unzipping" -ForegroundColor Green
    cd C:\Temp
    $shell_app = New-Object -ComObject shell.application
    $filename = ""
    $zip_file = $shell_app.NameSpace((Get-Location).Path + "\$filename")
    $destination = $shell_app.NameSpace((Get-Location).Path)
    $destination.CopyHere($zip_file.Items())

    # Re-register this machine's Tentacle with the Octopus server on the correct public port
    if (Test-Path -Path .\OctopusConfigurator.exe) {
        & .\OctopusConfigurator.exe API-E6WNK91SIJFJU58OGO6IYWWCLG $webServerName $OctoPort
        Write-Host "Reconfigured Octopus Machine URI to correct port number" -ForegroundColor Green
    }
    else {
        Write-Host "OctopusConfigurator not found!" -ForegroundColor Red
    }
}

On VMware Fusion updating

October 27th, 2014

I’ve not had a lot of luck in the past with updating my VMware Fusion installs. From version 5.x up through 6.x, annoying bugs or issues have always cropped up when updating Fusion on my work machine. Whether they are changes to functionality I’m used to, or bugs that interfere with my use of the software, something has always gone wrong when I update.

So here is some general advice for when a new update for Fusion appears (this is what I now do before updating to the latest and so-called greatest version!):

  • Wait. Don’t update as soon as a new release is out. I now wait about a month, which is generally enough time for the Fusion team to correct initial problems with a release and submit a follow-up patch.
  • Keep tabs on the VMware communities forums for Fusion – users will often post issues with new releases there, and judging by how much activity appears after a new release, you can generally tell whether it has been a bad release or not.
  • Read the release notes in detail. Does the new release really give you any benefit? Have they patched vulnerabilities, or simply added more features? What existing issues have been fixed? Make your decision to update based on the release notes. Sometimes a new release adds no value for the way you use the product, yet piles on 10 new features, and you can almost be sure that at least one of those new features will introduce some sort of bug. (This is the nature of new features being added to any software in general, not just Fusion!)



Categories: Virtualization, VMware