How to see if a Device/Datastore supports VAAI on ESXi 5 using ESXTOP or ESXCLI

Previously, using my home lab environment, I had wondered how to check whether my shared storage was using VAAI (vStorage APIs for Array Integration) – this is a technology that provides hardware acceleration functionality for your storage in vSphere, and it was first introduced with vSphere ESX(i) 4.1. Essentially, it allows your hosts to offload certain virtual machine and storage management operations to storage hardware that supports VAAI.

 

The very first way I found to see if VAAI was active on a datastore was by using ESXTOP. It's a bit silly, as it is quite a long-winded way of determining this, but it gave me an idea of whether it was in use or not. Anyway, to get to this ESXTOP view, go to your devices screen (press u), remove a few columns that are taking up room (press F to add/remove columns and toggle off all columns except the DEVICE name column), then add the VAAI counters we are interested in (press O for VAAISTATS). Like so:

 

 

Now you can see the VAAI statistics for your datastores/devices in ESXTOP – notice how my HP/LHN P4000 VSA lab iSCSI datastores are showing VAAI stats as expected, whereas my shared NFS Datastores from a FreeNAS appliance are blanked out / not supported in the screenshot below:

 

 

 

Here is a useful list of ESXTOP counters, explained in detail, if you are wondering what each of these shows exactly: http://communities.vmware.com/docs/DOC-11812 – Section 4.2.9 covers the ZERO statistics specifically; block zeroing is one of the operations VAAI can help hosts offload to hardware.

 

So now, for a slightly easier / clearer way of viewing whether VAAI is active or not – we can use esxcli!

 

Connect to an ESXi 5 host using SSH or the DCUI and issue the following command, specifying the device identifier after -d. Remember to find the device identifier applicable to the device you want to check the VAAI status of. One way of doing this is to identify the datastore using your vSphere Client: Hosts & Clusters view -> select ESXi host -> Configuration -> Storage -> Devices view tab -> look for the device identifier in the Name column. Once you have your device ID, issue the command below:

 

esxcli storage core device list -d naa.6000eb3afd81276200000000000000bd

 

The command will print out the device details along with various settings and feature statuses. We are specifically looking for “VAAI Status” – if it is enabled, we will see “VAAI Status: enabled”, as in the example below.
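If you have a number of hosts or devices to check, PowerCLI can save some of the clicking. Here is a minimal sketch (assuming an active Connect-VIServer session – the datastore and host names are just examples) that maps a VMFS datastore to its backing device ID(s) and then reads the VAAI support flag the vSphere API reports for each disk device:

# Minimal PowerCLI sketch - assumes Connect-VIServer has already been run.
# The datastore and host names below are examples only.
$ds = Get-Datastore "iSCSI-Datastore1"
# For a VMFS datastore, each extent's DiskName is the NAA device identifier:
$deviceIds = $ds.ExtensionData.Info.Vmfs.Extent | ForEach-Object { $_.DiskName }

# List the matching disk devices on a host with the API's VAAI support flag:
Get-VMHost "esxi-04.noobs.local" | Get-ScsiLun -LunType disk |
	Where-Object { $deviceIds -contains $_.CanonicalName } |
	Select-Object CanonicalName,
		@{Name="VAAISupport"; Expression={ $_.ExtensionData.VStorageSupport }}

The VStorageSupport property should come back as something like vStorageSupported, vStorageUnsupported or vStorageUnknown for each device.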

 

 

Brian Ragazzi has also shown an even easier method (thanks Brian!) to get a list of devices along with their VAAI feature support, using:

 

esxcli storage core device vaai status get

 

using “esxcli storage core device vaai status get” to get VAAI status for devices
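If you would rather run that same esxcli namespace from PowerCLI, the Get-EsxCli cmdlet exposes it as an object hierarchy. A hedged equivalent sketch (the host name is an example, and this assumes the ESXi 5-era EsxCli object):

# Hedged Get-EsxCli equivalent of "esxcli storage core device vaai status get".
# Assumes a Connect-VIServer session; the host name is an example.
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi-04.noobs.local")
# Pass $null for the optional device parameter to return all devices:
$esxcli.storage.core.device.vaai.status.get($null)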

 

Interestingly, whilst researching this, I came across a blog post by Jason Langer who also explains the first method I showed above for checking VAAI status. In his case he was using NetApp storage, and he found some other interesting caveats along the way in terms of getting VAAI enabled on his hosts with his particular device. Well, I hope this post helps, and as always, if you spot anything out of place, feel free to chime in in the comments section!

 

PowerCLI Advanced Configuration Settings – Setting Host Syslog options

 

This is a PowerCLI script that automates setting up your ESXi host syslogging to two places: a shared datastore (that all ESXi hosts have visibility of) and a remote Syslog host on the network. Alternatively, you could set it to change the Syslog directory to a different local path on each ESXi host (rather than the default). The two advanced configuration settings we’ll be working with are:

 

  • Syslog.global.logDir
  • Syslog.global.logHost
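
Before changing anything, it is worth recording the current values of these two settings. A quick hedged check, using the same cmdlet the script relies on (assumes an active Connect-VIServer session):

# Print the current syslog settings for each connected host before changing them.
foreach ($VMHost in Get-VMHost) {
	Get-VMHostAdvancedConfiguration -VMHost $VMHost -Name "Syslog.global.logDir"
	Get-VMHostAdvancedConfiguration -VMHost $VMHost -Name "Syslog.global.logHost"
}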

 

Why did I do this? Well, tonight I got a little sidetracked after looking into a small warning on a new scripted ESXi host I had deployed – it had not automatically set up logging during installation. My guess was that its local datastore had failed to initialise during installation – I couldn’t see a local datastore after deploying the host via a script, and I had received a warning during installation, but went ahead anyway just to see what happened. This was a new test host in my home lab, so afterwards I decided I would set up a remote Syslog location for it to clear the warning off. Naturally, I wanted to do this in PowerCLI.

 

The following script takes a few options you specify, such as the shared datastore name, the Syslogging folder name, and a remote Syslog host, and configures all the ESXi hosts in your specified Datacenter object to use them. If you have only one Datacenter object, the script will detect this and use it by default; otherwise it will ask you for the name of the specific Datacenter you want to work with. It will then create an “ESXi-Syslogging” folder on the shared datastore you specified (if it doesn’t already exist) and, under this, create a sub-folder for each ESXi host. It will then configure each ESXi host to send its Syslog info to this datastore. As far as I can tell, certain Syslog files are symlinked from /scratch/log on each ESXi host to the datastore specified. For example:

 

usb.log becomes:

usb.log -> /vmfs/volumes/1bec1487-1189988a/ESXi-Syslogging/esxi-04.noobs.local/usb.log

 

Naturally, this wouldn’t be a good idea if the ESXi host could potentially lose access to this datastore, so the script also specifies a remote Syslog host to log to. If you don’t have one, check out the free “Kiwi Syslog Server”. Here is the resulting script (just copy it out of the block below):


 

# Remote Syslogging setup for ESXi Hosts in a particular datacenter object
# By Sean Duffy (http://www.shogan.co.uk)
# Version 1.0
# Date: 15/12/2011
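# Assumes the vSphere PowerCLI snap-in is loaded and a Connect-VIServer session is active.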

#region Set variables up
$SyslogDatastore = "SharedDatastore2" # The name of the Shared Datastore to use for logging folder location - must be accessible to all ESXi hosts
$SyslogFolderName = "ESXi-Syslogging" # The name of the folder you want to use for logging - must NOT already exist
$RemoteSyslogHost = "udp://192.168.0.85:514" #specify your own Remote Syslog host here in this format
$DataCenters = Get-Datacenter
$FoundMatch = $false
$SyslogDatastoreLogDir = "[" + $SyslogDatastore + "]"
#endregion

#region Determine DataCenter to work with
if (($DataCenters).Count -gt 1) {
	Write-Host "Found more than one DataCenter in your environment, please specify the exact name of the DC you want to work with..."
	$DataCenter = Read-Host
	foreach ($DC in $DataCenters) {
		if ($DC.Name -eq $DataCenter) {
			Write-Host "Found Match ($DataCenter). Proceeding..."
			$FoundMatch = $true
			break
			}
		else {
			$FoundMatch = $false
			}
	}
	if ($FoundMatch -eq $false) {
		Write-Host "That input did not match any valid DataCenters found in your environment. Please run script again, and try again."
		exit		
		}
	}
else {
	$DataCenter = ($DataCenters).Name
	Write-Host "Only one DC found, so using this one. ($DataCenter)"
	}
#endregion

#Enough of the error checking, lets get to the script!

$VMHosts = Get-Datacenter $DataCenter | Get-VMHost | Where {$_.ConnectionState -eq "Connected"}

cd vmstore:
cd ./$DataCenter
cd ./$SyslogDatastore
if (Test-Path $SyslogFolderName) {
	Write-Host "Folder already exists. Configure script with another folder SyslogFolderName & try again."
	exit
}
else {
	mkdir $SyslogFolderName
	cd ./$SyslogFolderName
}

# Crux of the script in here:
foreach ($VMHost in $VMHosts) {
	if (Test-Path $VMHost.Name) {
		Write-Host "Path for ESXi host log directory already exists. Aborting script."
		exit
	}
	else {
		mkdir $VMHost.Name
		Write-Host "Created folder: $VMHost.Name"
		$FullLogPath = $SyslogDatastoreLogDir + "/" + $SyslogFolderName + "/" + $VMHost.Name
		Set-VMHostAdvancedConfiguration -VMHost $VMHost -Name "Syslog.global.logHost" -Value $RemoteSyslogHost
		Set-VMHostAdvancedConfiguration -VMHost $VMHost -Name "Syslog.global.logDir" -Value $FullLogPath
		Write-Host "Paths set: HostLogs = $RemoteSyslogHost LogDir = $FullLogPath"
	}
}

 

Some results of running the above in my lab:

 

Script output after running against 3 x ESXi hosts

 

Advanced settings are showing as expected
Tasks completed by the PowerCLI script

 

Here is the structure of the logging folder in my datastore after the script had run.

 

 

Lastly, I have found that these values won’t update if they are already populated with anything other than the defaults (especially the LogDir advanced setting). You’ll get an error along the lines of:

 

Set-VMHostAdvancedConfiguration : 2011/12/15 11:37:01 PM    Set-VMHostAdvancedConfiguration    A general system error occurred: Internal error

 

If this happens, you could either set the value to an empty string, i.e. “”, or you could use Set-VMHostAdvancedConfiguration [[-NameValue] <Hashtable>] (passing a hashtable of the advanced config setting as NameValue) to set things back to the way they were originally. The default value for Syslog.global.logDir, for example, is [] /scratch/log. It might be a good idea to grab this value and pipe it out to a file somewhere safe before you run the script, in case you need to revert back to the default local Syslog location – you would then know what the value should be. You could use something like this to grab the value and put it into a text file:

 

Get-VMHostAdvancedConfiguration -VMHost $VMHost -Name "Syslog.global.logDir" | Out-File C:\default-sysloglocation.txt
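
And if you do need to revert, here is a hedged sketch of resetting logDir back to the ESXi default via the NameValue hashtable form mentioned above:

# Hedged sketch: reset Syslog.global.logDir to the ESXi default ("[] /scratch/log")
# using the -NameValue hashtable form of the cmdlet.
Set-VMHostAdvancedConfiguration -VMHost $VMHost -NameValue @{"Syslog.global.logDir" = "[] /scratch/log"}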

 

Also, remember to think this out carefully before changing it in your own environment – it may not suit you at all to set your Syslog location anywhere other than the local disk of each host. However, you may find it useful to automatically set the remote Syslog host advanced setting for each of your ESXi hosts – in that case, simply modify the script above to apply only that one setting instead of both. I hope this helps, and as always, if you have any suggestions, improvements, or notice any bugs, please feel free to add your comments.

 

 

Upgrading to vSphere 5.0 from vSphere 4.x – Part 3 – Using VUM to upgrade ESX(i) hosts

 

Introduction

 

In Part 2 of this series, I covered the steps needed to manually upgrade your ESX(i) 4.x hosts to ESXi 5.0.0. This is useful for smaller deployments or lab set-ups where you don’t have that many hosts to upgrade. However, with larger deployments, you’re going to want to look at a way of automating this process. VMware Update Manager (VUM) is one such way, and in this part, I’ll explain how to use it to upgrade your hosts to ESXi 5.0.0 by using baselines.

 

Using VMware Update Manager to upgrade ESX(i) 4.x hosts to 5.0

 

There is a little bit of groundwork required to set up your ESXi 5.0.0 ISO image and create a VMware Update Manager baseline, but once this is done, it is quite easy to attach your new baseline to your older hosts and remediate them against it (effectively upgrading them / bringing them up to date against the new baseline). To go through this process, you will of course need a VUM server. Update Manager comes with all editions of vSphere 4 and 5, so if you don’t already have it up and running, you should seriously think about deploying it, as it will save you a lot of time on otherwise routine, time-consuming tasks when it comes to upgrading/updating hosts, VMs, and applications.

 

Start off by downloading the ESXi 5.0.0 ISO image to your vSphere Client machine if you don’t already have this. Once this is downloaded, you’ll then want to go to your vSphere client home screen and choose VMware Update Manager from the home screen.

 

 

This will load the VUM interface and provide you with everything you need to work with Update Manager. Select the “ESXi Images” tab along the top, then click on the link called “Import ESXi Image…” In the wizard that appears, browse for your recently downloaded ESXi 5.0.0 ISO image and select it. Follow through the wizard to upload the ISO to your VUM server. You may receive a security warning (SSL) which you will need to ignore/accept to continue. All going well, you should reach an “Upload Successful” point and see the details of your ISO, similar to the screenshot below.

 

 

Moving to the next step, we’ll now create a baseline from this image. Tick the box for “Create a baseline using the ESXi image” and give it a meaningful name and an optional description. Finish the wizard once you have named the baseline, and you’ll have a shiny new baseline which you can use to remediate your hosts against.

 

 

Move along to the “Baselines and Groups” tab in the main VUM area, and verify that your new baseline is showing and that the details look correct. It should show up as a “Host Upgrade” under the Component column.

 

 

For the next step, we’ll attach this baseline to the older ESX(i) hosts that need to be upgraded. To attach it to all the hosts in the same cluster at once, go to your Hosts & Clusters inventory view, select your cluster name, then go to the “Update Manager” tab near the end. From here, click on the “Attach” link as seen below:

 

 

Now, you’ll want to attach the ESXi 5 upgrade baseline we created in the previous steps to the selected cluster. Simply tick the box next to your “Host Upgrade” baseline name, then click “Attach”.

 

 

Now that your baseline has been attached, it will apply to all the ESX(i) hosts in the cluster. From the same Update Manager screen, click the “Scan” link to initiate a compliance scan. In other words, we want to find out which hosts are not in compliance with our new ESXi 5.0.0 image, at which point we can move on to remediating (upgrading) them. Select the tickbox for “Upgrades“, then click “Scan” to continue. Once this is done, you should notice that all the hosts which have your baseline attached show as Non-Compliant (unless you have manually upgraded any of them).
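As an aside, if you have the Update Manager PowerCLI plug-in installed, the attach-and-scan steps can be scripted too. A hedged sketch – the baseline and cluster names below are examples only, and this assumes the plug-in’s cmdlets are available in your session:

# Hedged sketch using the Update Manager PowerCLI plug-in.
# The baseline and cluster names are examples only.
$baseline = Get-Baseline -Name "ESXi 5.0.0 Upgrade"
$cluster  = Get-Cluster "Lab-Cluster"
Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory -Entity $cluster -UpdateType HostUpgrade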

 

 

If you are ready to begin the upgrade process for your hosts, click on the yellow “Remediate” button on this screen.

 

 

 

You’ll now be taken to the Remediation Wizard / Selection screen. Tick (Select) which hosts you would like to remediate (upgrade) and ensure the correct Host Upgrade Baseline is selected, then continue on to the next step.

 

 

 

 

There are a few options you will need to configure in this wizard based on your environment. Preferences such as host and cluster remediation options can be set up, which control how your cluster and hosts should handle the remediation tasks – for example, the power state to put your VMs into, or cluster features to keep enabled or disable. You can also define a schedule for remediation in this wizard if you wish. Here is a quick rundown of the steps in this wizard that I went through, along with a couple of screenshots of the options I went with in my lab upgrade.

 

1. Selection screen

2. Read and Accept the EULA

3. Choose whether or not to remove third-party software that is incompatible with the upgrade.

4. Optionally, set up task name and description for the upgrade remediation task. Select Immediately, or specify a time to begin the task.

5. Choose how you would like VMs to be handled for the remediation. Note that these options also apply to hosts in clusters. I left mine at “Do not change VM Powerstate”, as I have DRS set to fully automated; therefore VMs running on the host will be vMotioned off when the host enters maintenance mode, and I won’t need to worry about moving them myself.

6. Select how the cluster should be handled during remediation. For example you can disable DPM during remediation. It will be re-enabled after the task is complete.

7. Check summary and finish the wizard.

 

Host remediation options

 

 

Cluster remediation options

 

 

Checking through the summary and finishing off the wizard

 

 

Once you finish the wizard, your selected hosts will begin the upgrade (remediation) process. Depending on the options you chose, your VMs should automatically be vMotioned off the host currently being upgraded as it enters maintenance mode. If you’d like to follow along with one of the hosts, have a look at the server console to watch its progress. Working with a cluster of hosts, you should be able to use this method to upgrade hosts systematically in an automated fashion. One obvious benefit of this method is the time saved, but another is that you can achieve zero downtime if it is done correctly: as each host is upgraded to ESXi 5.0.0, VMs can vMotion back to upgraded hosts, allowing you to shift the VM workload around hosts that need to go down for maintenance. Lastly, don’t forget to ensure your licensing is sorted out after the upgrades are complete.
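If you scripted the attach and scan earlier with the Update Manager PowerCLI plug-in, remediation can be kicked off the same way. A hedged sketch, carrying over the variables from the earlier example:

# Hedged sketch: remediate the cluster against the attached upgrade baseline
# (Update Manager PowerCLI plug-in; $cluster and $baseline from the earlier sketch).
Remediate-Inventory -Entity $cluster -Baseline $baseline -Confirm:$false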

 

In the next part, we’ll take a look at other tasks involved with an upgrade to vSphere 5.0.

 

More in this series:

 

Part 1 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 1 – vCenter Server

Part 2 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 2 – Manually upgrading ESX(i) hosts

 

Upgrading to vSphere 5.0 from vSphere 4.x – Part 2 – Manually upgrading ESX(i) hosts

 

Introduction

 

In Part 1 of this series, I covered the steps needed to move your vCenter Server over to vCenter 5.0, and outlined some points to consider or be aware of. For this post, I’ll be covering two different methods you can use to migrate your ESX(i) hosts over to ESXi 5.0. Which one you use really depends on how big your deployment is: for someone running a small deployment of just a few hosts, it may make more sense to manually upgrade each using an ISO; for others with larger environments, it would be prudent to use an automated method such as VMware Update Manager (VUM). I’ll be covering both methods over the next two posts (today and tomorrow), and will outline the process involved in each.

 

Manually upgrade ESX(i) 4.x to 5.0

 

This is a fairly straightforward process: it simply involves restarting each of the hosts in your cluster, one at a time, and booting them off an ESXi 5.0 installer image. To start, prepare each ESXi host as you get to it by entering it into maintenance mode. This will migrate any VMs off and allow you to restart it safely with no VM downtime. Ensure all your VMs have vMotioned off before you begin. In my case, my ESXi hosts are virtual, so I simply mounted the VMware ESXi 5.0 installer ISO on each VM. For physical hosts, you’ll want to burn the installation ISO to CD/DVD and pop it into each host, of course.
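If you prefer to do the maintenance-mode step from PowerCLI rather than the client, a one-liner like this works (the host name is an example; in a fully automated DRS cluster the VMs will be evacuated for you):

# Hedged one-liner: enter maintenance mode so running VMs are migrated off first.
Set-VMHost -VMHost (Get-VMHost "esxi-02.noobs.local") -State Maintenance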

 

Boot off the ISO and select to start the standard installer

 

Once the installer has loaded, choose the disk you would like to perform the upgrade on. All disks on the host that contain VMFS should be marked with an asterisk so you can easily identify them.

 

Choose the disk to upgrade

 

 

Press F1 to get the details of the disk you are going to upgrade. The installer should detect your current version of ESX(i) and provide a bit of extra information, in case you need to confirm you are upgrading the correct disk, etc.

 

Viewing the Disk details

 

In the next step, you should get a few options relating to the action you want to take on the existing installation of ESX(i). For this type of upgrade, I’ll be choosing to Force Migrate my ESXi 4.1 host to ESXi 5 and preserve the existing local VMFS datastore.

 

 

After continuing, you may get a warning if there are any custom VIBs (vSphere Installation Bundles) in your current ESX(i) installation that do not have substitutes on the install ISO. If there are, you should look into whether or not these will still be needed, or check that you have a contingency plan to re-install or reinstate the particular bit of software should you need it after the upgrade. I got a warning about some OEM VMware drivers, but I knew these were just related to the ESXi installation running in a VM, so I was happy to continue the upgrade process anyway.

 

Next up, you’ll just need to confirm that you wish to Force Migrate to ESXi 5.0.0 on the disk you chose earlier. The installer will also inform you as to whether you will be able to roll back the install or not. Press F11 to continue at this point if you are happy.

 

 

 

Once the migration / install is complete you’ll get the below message. Press Enter to reboot the host.

 

 

 

At this stage, once the host has booted up again, you’ll want to log into it using the vSphere Client, or check on it through your vCenter connection in the vSphere Client. Make sure everything is configured as you would expect and that it is running smoothly on ESXi 5.0.0. That is all there is to it really – provided all went well, you should have a newly upgraded host ready to run VMs again. The upgrade process is quite quick and straightforward. Test it out by vMotioning a test VM onto the new host and checking that there are no issues. Tomorrow, we’ll look at performing a more automated upgrade of your hosts to ESXi 5.0.0 using VUM (VMware Update Manager).
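A quick hedged way to run that vMotion test from PowerCLI (the VM and host names are examples only):

# Hedged test: vMotion a test VM onto the newly upgraded host.
Move-VM -VM (Get-VM "TestVM01") -Destination (Get-VMHost "esxi-02.noobs.local")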

 

More in this series:

 

Part 1 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 1 – vCenter Server
Part 3 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 3 – Using VUM to upgrade ESX(i) hosts

 

Upgrading to vSphere 5.0 from vSphere 4.x – Part 1 – vCenter Server

 

Introduction

 

vSphere 5.0 is a great step up in terms of new functionality and features from vSphere 4.1. VMware have introduced some awesome new features that are definitely worth making use of by getting your environment upgraded. A few that caught my eye are sDRS (Storage DRS), Storage vMotion of VMs with snapshots, and of course the ability to run those “monster” VMs (much larger VMs are now supported).

 

Before you take the plunge and upgrade, I would recommend doing a good amount of reading of the best practices documentation, and planning. I have linked to two very useful documents from VMware that will help with your planning: an upgrade checklist which you can work through systematically, as well as a best practices whitepaper which explains the process in good detail with some great screenshots. In this post I will be going through the process I followed to upgrade my lab environment from vSphere 4.1 to 5.0. As it is a lab environment, I didn’t do too much in terms of planning, but for production environments this would be a good idea, as it never hurts to be prepared.

 

As part of the process, I also set up a brand new VMware Update Manager server in a VM, once my vCenter Server was upgraded to 5.0, to help upgrade my ESXi hosts from 4.1 to 5.0. I only have three virtualised ESXi hosts running in my lab cluster, so it would have been quicker to use the ISO and do them manually, but I wanted to go through the process myself, as I don’t spend enough time on Update Manager in my work environment.

 

Documentation to read

vSphere 5.0 upgrade checklist

VMware vSphere 5.0 Upgrade Best Practices Technical Whitepaper

 

So without further ado, let’s begin.

 

Update your vCenter Server

 

  • First of all, make sure you are running vCenter Server 4.0 or above. If you are, then chances are you’ll be running a 64-bit OS. Just be sure to check, though, as it is possible to run vCenter Server 4.0 on a 32-bit OS. If you have vCenter Server running on a 64-bit OS, then you are good to go; otherwise you’ll need to take a slightly more complicated route, which involves creating a new vCenter Server on a 64-bit OS, installing vCenter 5.0, and then migrating the existing database over to the new server. For specifics, there is documentation on this process in the vSphere 5.0 Upgrade Guide from VMware.
  • Next up, check that the other minimum requirements are met; especially in terms of CPU and RAM.
  • Back up your existing vCenter database – this is clearly a very important step. Make sure you back up everything. Before I upgraded my lab, I took a backup of my vCenter SQL 2005 Express database as well as my SSL certificates. The vCenter 5.0 upgrade wizard will remind you about this too. Keep your backups safe. For my lab vCenter database, I simply installed Microsoft SQL Server Management Studio Express (free) and used the backup option in there to back up the database, as well as all the other SQL system databases, just to be safe.

 

You can use SQL Server Management Studio Express to backup your SQL Express Database.

 

  • Also take a backup of your vpxd.cfg file (see the example after this list). On a 2003 server this is located by default under %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\ while on a 2008 server it is by default under C:\ProgramData\VMware\VMware VirtualCenter\
  • Now you should look at your Database options and upgrade path. Review the Prerequisites documentation in the Upgrade Guide for databases and ensure yours is supported. If it is not, then be sure to follow the correct procedure to migrate to a supported one.
  • Ensure your vCenter Server has a 64bit DSN – it will already have one if you are running vCenter Server 4.1 (as this is 64bit only) but could be different if you are still on vCenter 4.0.
  • If you are running a Microsoft SQL Database (as I am in my lab), ensure your System DSN under ODBC connections is using the SQL Native Client driver.
  • Ensure you have Microsoft .NET 3.5 SP1 and Windows Installer 4.5 or greater installed on your vCenter server.
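
For the vpxd.cfg backup mentioned in the list above, a hedged one-liner will do – this uses the 2008-style path from the list, and the destination is just an example:

# Hedged example: copy vpxd.cfg somewhere safe before the upgrade.
Copy-Item "C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg" "D:\Backups\vpxd.cfg"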

 

Ensure your System DSN under ODBC connections is using the SQL Native Client driver.

 

If you already have a running vCenter 4.x deployment then you shouldn’t have too many more things to sort out, but a few other fairly obvious ones to check (taken from the Upgrade Checklist document I linked above) are:

 

    • Note down all your database login credentials and ensure the vCenter database login has db_owner permissions.
    • Make sure your current installation path of vCenter does not have any commas or periods in it. (I assume this can cause trouble for the upgrade!)
    • Ensure your vCenter Server name is not longer than 15 characters and is registered correctly with your AD Domain’s DNS.
    • Ensure all required ports in/out of the vCenter server are open.
    • Make sure any additional plug-ins you use for vCenter Server are compatible with 5.0 – also make sure you re-enable or reconfigure these post-upgrade.
    • Make sure you know the rest of your hardware is compatible with vSphere 5.0! i.e. check your ESX / ESXi hosts are compatible and there will be no issues there. You can use the vCenter Host Agent pre-upgrade checker utility included with the vCenter 5.0 installation media for this.

 

At this point, if you are happy to proceed and have your backups safe, load up the installation media for vCenter 5.0 and begin the installer. The installation wizard for vCenter 5.0 does a great job of guiding you through the upgrade process and asking what configuration options you would like. They are fairly straightforward – choose as applicable to your installation. I have captured screenshots of the installer process I followed for my lab – most options for me were fine to leave at their defaults. See the following series of screenshots:

 

The main vCenter 5.0 Installation menu

 

Using Windows Authentication for SQL server.

 

We are upgrading, so this is the obvious choice.

 

Automatic upgrade of vCenter agent on hosts.

 

Configure the vCenter server service account.

 

Configure your ports - in most cases these *should* be the defaults

 

More ports to configure...

 

Web Services JVM Memory Configuration - based on how many hosts you have

 

Increase ephemeral ports available if you have this VM Power on requirement!

 

After this wizard is completed, the upgrade will begin. If applicable, your vCenter database schema will also be upgraded, and soon you’ll be up and running with vCenter 5.0. Note that this upgrade does require your vCenter service to be down for the duration of the installer upgrade process – how long this takes really depends on how big your vCenter DB is. My lab vCenter upgrade took about 25 minutes to run through, but its inventory is very small. For most production environments with up to 100 VMs or so, I wouldn’t see this taking longer than 45-60 minutes in most cases, but remember it’s dependent on various other factors.

Once the installer is finished, it would be a good idea to also update the vSphere Client you use to access your vCenter Server – install the version that comes with your vCenter 5.0 installation media and log in again. You should see your inventory as per usual, and hopefully all will be well. One thing I noticed immediately following my upgrade was that I had an alert for my datastores – I blogged about this over here. This is only really applicable if you are running a lab environment (or small production) with only one or two shared datastores. Other than this, the rest of my VMs were all running happily on their respective ESXi hosts and everything else was just fine.

 

So, coming up in Part 2, I will cover the next step in upgrading your environment to vSphere 5.0 – the ESX(i) hosts – along with a couple of different methods of doing this. If there is anything I have missed, or you have any tips or additional info, please feel free to add them in the comments section.

 

More in this series:

 

Part 2 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 2 – Manually upgrading ESX(i) hosts
Part 3 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 3 – Using VUM to upgrade ESX(i) hosts