PowerCLI – script to deploy multiple random VMs

There are probably tons of scripts out there to deploy VMs in a vSphere environment, but I was in the mood for scripting this evening and decided to create my own PowerCLI script to automatically deploy multiple random VMs to my home lab. The point was to create random “content” for something exciting I have been working on 🙂

 

 


 

$i = 1
[int]$NumberToDeploy = 10 # Number of VMs to deploy
[string]$NamingConvention = "homelab-VM-" # Prefix for VM names
[string]$folderLoc = "Discovered virtual machine" # Name of VM folder to deploy into

while ($i -le $NumberToDeploy) {
	$NumCPUs = Get-Random -Minimum 1 -Maximum 3 # Random vCPU count of either 1 or 2
	$MemoryMB = (16,32,64) | Get-Random # Get a random VM memory size from this list (MB)
	$DiskSize = (256,512,768) | Get-Random # Get a random disk size from this list (MB)
	$targetDatastore = Get-Datastore | Where {($_.ExtensionData.Summary.MultipleHostAccess -eq $true) -and ($_.FreeSpaceMB -gt 10240)} | Get-Random # Random shared datastore with more than 10GB free
	$targetVMhost = Get-VMHost | Where { $_.ConnectionState -eq "Connected" } | Get-Random # Selects a random host which is in a "Connected" state
	$targetNetwork = ("VM Network","VM Distributed Portgroup") | Get-Random # Random VM network from this list
	$GuestType = (	"darwinGuest","dosGuest","freebsd64Guest","freebsdGuest","mandrake64Guest",
					"mandrakeGuest","other24xLinux64Guest","other24xLinuxGuest","other26xLinux64Guest",
					"other26xLinuxGuest","otherGuest","otherGuest64","otherLinux64Guest","otherLinuxGuest",
					"rhel5Guest","suse64Guest","suseGuest","ubuntu64Guest","ubuntuGuest",
					"win2000AdvServGuest","win2000ProGuest","win2000ServGuest","win31Guest","win95Guest",
					"win98Guest","winLonghorn64Guest","winLonghornGuest","winNetBusinessGuest",
					"winNetDatacenter64Guest","winNetDatacenterGuest","winNetEnterprise64Guest",
					"winNetEnterpriseGuest","winNetStandard64Guest","winNetStandardGuest","winNetWebGuest",
					"winNTGuest","winVista64Guest","winVistaGuest","winXPPro64Guest",
					"winXPProGuest") | Get-Random
	$VMName = $NamingConvention + $i

	if ((Get-VM $VMName -ErrorAction SilentlyContinue).Name -eq $VMName) {
		Write-Host "$VMName already exists, skipping creation of this VM!" -ForegroundColor Yellow
	}
	else {
		Write-Host "Deploying $VMName to $folderLoc ..." -ForegroundColor Green
		# Create our VM
		New-VM -Name $VMName -ResourcePool $targetVMhost -Datastore $targetDatastore -NumCPU $NumCPUs -MemoryMB $MemoryMB -DiskMB $DiskSize `
		-NetworkName $targetNetwork -Floppy -CD -DiskStorageFormat Thin -GuestID $GuestType -Location $folderLoc | Out-Null
	}
	$i++
}

 

This script will automatically deploy a variable number of Virtual Machines, based on a certain naming convention you specify. It’ll also randomly choose VM settings. Here is what it does:

 

  • Deploys any number of VMs, based on the total number of VMs to deploy (specify in the script as $NumberToDeploy)
  • Deploys each VM to a random ESXi host which is in a connected state
  • Deploys each VM to a random shared datastore (i.e. a datastore with multiple hosts connected)
  • Sets a random vCPU count to each VM (specify options in the script as $NumCPUs)
  • Sets a random Memory size for each VM (specify options in the script as $MemoryMB)
  • Creates a thin-provisioned virtual disk on each VM with a random disk size (specify disk size options in the script as $DiskSize)
  • Adds each VM to a random VM network (specify VM network options in the script as $targetNetwork)
  • Specifies a random GuestOS type for each VM deployed
  • Creates each VM in a specified VM & Templates folder (specify in script as $folderLoc)

 

All options are already set to defaults in the script as it stands, but don’t forget to change the important options unique to your environment, such as $folderLoc (the folder to deploy VMs into) and the list of VM networks ($targetNetwork). Other items are determined automatically (the datastore and host to deploy to, for example).

Also note that some of the built-in guestOS types might not be supported on some ESXi hosts. In these cases, the script simply skips creating that VM and moves on to the next, and you may see a red error message for the failed VM. For a full list of GuestID types, check out this page.
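If you would rather suppress those red errors and log a friendlier message instead, you could wrap the New-VM call from the script in a try/catch. Here is a quick sketch, using the same variable names as the script above – note that -ErrorAction Stop is needed to turn the failure into a catchable exception:

try {
	New-VM -Name $VMName -ResourcePool $targetVMhost -Datastore $targetDatastore -NumCPU $NumCPUs -MemoryMB $MemoryMB -DiskMB $DiskSize `
	-NetworkName $targetNetwork -Floppy -CD -DiskStorageFormat Thin -GuestID $GuestType -Location $folderLoc -ErrorAction Stop | Out-Null
}
catch {
	# Most likely an unsupported GuestID on this particular host - report it and carry on with the next VM
	Write-Host "Failed to create $VMName (GuestID $GuestType): $($_.Exception.Message)" -ForegroundColor Red
}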

My VMware vSphere Home lab configuration

I have always enjoyed running my own home lab for testing and playing around with the latest software and operating systems / hypervisors. Up until recently, it was all hosted on VMware Workstation 8.0 on my home gaming PC, which has an AMD Phenom II x6 (hex core) CPU and 16GB of DDR3 RAM. This has been great, and I still use it, but there are some bits and pieces I still want to be able to play with that are traditionally difficult to do on a single physical machine, such as working with VLANs and taking advantage of hardware feature sets.

 

To that end, I have been slowly building up a physical home lab environment. Here is what I currently have:

Hosts

  • 2 x HP Proliant N40L Microservers (AMD Turion Dual Core processors @ 1.5GHz)
  • 8GB DDR3 1333MHz RAM (2 x 4GB modules)
  • Onboard Gbit NIC
  • PCI-Express x4 HP NC360T Dual Port Gbit NIC as an add-on card (modified to fit a low-profile bracket)
  • 250GB local SATA HDD (just used to host the ESXi installations)

Networking

  • As mentioned above, I am using HP NC360T PCI-Express NICs to give me a total of 3 x vmnics per ESXi host.
  • Dell PowerConnect 5324 switch (24 port Gbit managed switch)
  • 1Gbit Powerline Ethernet home plugs to uplink the Dell PowerConnect switch to the home broadband connection. This allows me to keep the lab in a remote location in the house, which keeps the noise away from the living area.

Storage

  • This is a work in progress at the moment (I am finding that the low-end 2-bay home NAS devices are not sufficient for performance, and the higher-end models are too expensive to justify).
  • Repurposed Micro-ATX custom built PC, housed in a Silverstone SG05 micro-ATX chassis running FreeNAS 8.2 (Original build and pics of the chassis here)
  • Intel Core 2 Duo 2.4 GHz processor
  • 4GB DDR2-800 RAM
  • 1 Gbit NIC
  • 1 x 1TB 7200 RPM SATA II drive
  • 1 x 128GB OCZ Vertex 2E SSD (SATA II)
  • As this is temporary, each drive provides 1 x Datastore to the ESXi hosts. I therefore have one large datastore for general VMs, and one fast SSD based datastore for high priority VMs, or VM disks. I am limited by the fact that the Micro-ATX board only has 2 x onboard SATA ports, so I may consider purchasing an addon card to expand these.
  • Storage is presented as NFS. I am currently testing ZFS vs UFS, and the use of the SSD drive as a ZFS ZIL log and/or cache device. To make this more reliable, I will need the above-mentioned add-on card to build redundancy into the system, as I would not like to lose a drive at this time! (A quick PowerCLI sketch for mounting the NFS exports on the hosts follows this list.)
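For reference, mounting the NFS exports on the hosts as datastores is a one-liner per export in PowerCLI. This is just a minimal sketch – the datastore name, FreeNAS IP and export path below are placeholders for my real values:

# Mount an NFS export from the FreeNAS box on every host (name, IP and path are examples only)
Get-VMHost | ForEach-Object {
	New-Datastore -VMHost $_ -Nfs -Name "FreeNAS-SATA-01" -NfsHost "192.168.0.50" -Path "/mnt/sata1tb" | Out-Null
}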

Platform / ghetto rack

  • IKEA Lack rack (black) – cheap and expandable : )

 

To do

Currently, one host only has 4GB of RAM; I have an 8GB kit waiting to be added to bring both hosts up to 8GB. I also need to add the HP NC360T dual port NIC to this host, as it is a recent addition to the home lab.

On the storage side of things, I just managed to take delivery of 2 x OCZ Vertex 2 128GB SSD drives which I got at bargain prices the other day (£45 each). Once I have expanded SATA connectivity in my Micro-ATX FreeNAS box I will look into adding these drives for some super fast SSD storage expansion.

 

The 2 x 120GB OCZ SSDs to be used for Shared Host Storage
HP NC360T PCI-Express NIC and 8GB RAM kit for the new Microserver

 

Lastly, the Dell PowerConnect 5324 switch I am using still has the original firmware loaded (from 2005). This needs to be updated to the latest version so that I can enable Link Layer Discovery Protocol (LLDP), which is newly supported on Distributed Virtual Switches with the VMware vSphere 5.0 release. LLDP can help with the configuration and management of network components in an infrastructure, and will mainly serve to let me play with this feature in my home lab. I seem to have lost my USB-to-Serial adapter though, so this firmware upgrade will need to wait until I can source a new one off eBay.
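Once the switch firmware is updated and LLDP is enabled on the relevant ports, turning it on for the lab distributed switch should just be a case of something like the below from PowerCLI. This is only a sketch – the switch name is an example, and the Set-VDSwitch link discovery parameters assume the distributed switch cmdlets that shipped with later PowerCLI releases, so verify them against your own PowerCLI version:

# Enable LLDP (advertise and listen) on the lab distributed switch (switch name is an example)
Get-VDSwitch -Name "dvSwitch-Lab" | Set-VDSwitch -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation Both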

 

How to see if a Device/Datastore supports VAAI on ESXi 5 using ESXTOP or ESXCLI

Previously, using my home lab environment I had wondered how to check whether my shared storage was using VAAI (VMware vStorage APIs for Array Integration). This is a technology that provides hardware acceleration functionality for your storage in vSphere, and it was first introduced with vSphere ESX(i) 4.1. Essentially, it allows your hosts to offload certain virtual machine and storage management operations to hardware that supports VAAI.

 

The very first way I found to see whether VAAI was active on a datastore was by using ESXTOP. It is quite a long-winded way of determining this, but it gave me an idea of whether it was in use or not. Anyway, to get to this ESXTOP view, go to your devices screen (press u), remove a few columns that are taking up room (press F to add/remove columns), toggle off all columns except the DEVICE name column, then add the counters we are interested in for VAAI (press O for VAAISTATS). Like so:

 

 

Now you can see the VAAI statistics for your datastores/devices in ESXTOP – notice how my HP/LHN P4000 VSA lab iSCSI datastores are showing VAAI stats as expected, whereas my shared NFS Datastores from a FreeNAS appliance are blanked out / not supported in the screenshot below:

 

 

 

Here is a useful list of ESXTOP counters that are explained in detail if you are wondering what each of these shows exactly: http://communities.vmware.com/docs/DOC-11812 – Section 4.2.9 shows ZERO statistics specifically, which is one of the features VAAI can help hosts offload to hardware.

 

So now, for a slightly easier / clearer way of viewing whether VAAI is active or not – we can use esxcli!

 

Connect up to an ESXi 5 host using SSH (or the local ESXi Shell) and issue the following command, where the device identifier is specified after -d. Remember to find the device identifier applicable to the device you want to check the VAAI status of. One way of doing this is to identify the datastore using your vSphere Client: Hosts & Clusters view -> select your ESXi host -> Configuration -> Storage -> Devices view tab -> look for the device identifier in the Name column (or grab it with the quick PowerCLI snippet that follows). Once you have your device ID, issue the esxcli command below:
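As a side note, you can also pull the backing device identifier for a (VMFS) datastore straight out of PowerCLI with something like this rough one-liner – the datastore name is only an example:

# Return the backing device identifier(s) for a VMFS datastore (datastore name is an example)
(Get-Datastore -Name "iSCSI-DS01").ExtensionData.Info.Vmfs.Extent | Select-Object -ExpandProperty DiskName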

 

esxcli storage core device list -d naa.6000eb3afd81276200000000000000bd

 

You will get a listing of the device’s details along with various settings and feature statuses. We are specifically looking for “VAAI Status” – if it is enabled, we will see “VAAI Status: enabled”, as in the example below.

 

 

Brian Ragazzi has also shown an even easier method (thanks Brian!) to get a list of devices along with their VAAI feature support, using:

 

esxcli storage core device vaai status get

 

using “esxcli storage core device vaai status get” to get VAAI status for devices
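If you prefer to stay in PowerCLI for this too, the Get-EsxCli cmdlet exposes the same esxcli namespaces from your scripting session. This is just a sketch – the host name is an example, and the exact layout of the returned objects can differ between PowerCLI and ESXi versions:

$esxcli = Get-EsxCli -VMHost "esxi-01.homelab.local"
# Equivalent of running "esxcli storage core device vaai status get" on the host itself
$esxcli.storage.core.device.vaai.status.get()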

 

Interestingly, whilst researching this, I came across a blog post by Jason Langer who also explains the first method I showed above of checking VAAI status. In his case he was using NetApp storage, and he found some other interesting caveats along the way in terms of getting VAAI enabled on his hosts with his particular device. Well, I hope this post helps, and as always, if you spot anything out of place, feel free to chime in on the comments section!

 

Upgrading to vSphere 5.0 from vSphere 4.x – Part 3 – Using VUM to upgrade ESX(i) hosts

 

Introduction

 

In Part 2 of this series, I covered the steps needed in order to manually upgrade your ESX(i) 4.x hosts to ESXi 5.0.0. This is useful for smaller deployments or lab set ups where you don’t have that many hosts to upgrade. However, with larger deployments, you’re going to want to look at a way of automating this process. VMware Update Manager (VUM) is one of these ways, and in this part, I’ll explain how to use it to upgrade your hosts to ESXi 5.0.0 by using baselines.

 

Using VMware Update Manager to upgrade ESX(i) 4.x hosts to 5.0

 

There is a little bit of groundwork required to set up your ESXi 5.0.0 ISO image and create a VMware Update Manager baseline in this process, but once this is done, it is quite easy to attach your new baseline to your older hosts and remediate them against it (effectively upgrading them / bringing them up to date against this new baseline). To go through this process, you will of course need a VUM server. Update Manager comes with all editions of vSphere 4 and 5, so if you don’t already have it up and running, you should seriously think about deploying it, as it will save you a lot of time on otherwise routine, time-consuming tasks when it comes to upgrading/updating hosts, VMs, and applications.

 

Start off by downloading the ESXi 5.0.0 ISO image to your vSphere Client machine if you don’t already have this. Once this is downloaded, you’ll then want to go to your vSphere client home screen and choose VMware Update Manager from the home screen.

 

 

This will load the VUM interface and provide you with everything you need to work with Update Manager. Select the “ESXi Images” tab along the top, then click on the link called “Import ESXi Image…”. In the wizard that appears, browse for your recently downloaded ESXi 5.0.0 ISO image and select it. Follow through the wizard to upload the ISO to your VUM server. You may receive a security (SSL) warning, which you will need to accept to continue. All going well, you should reach an “Upload Successful” point and see the details of your ISO, similar to the screenshot below.

 

 

Move on to the next step, and we’ll now create a baseline out of this image. Tick the box for “Create a baseline using the ESXi image” and give it a meaningful name and an optional description. Finish the wizard once you have named the baseline, and you’ll now have a shiny new baseline which you can use to remediate your hosts against.

 

 

Move along to the “Baselines and Groups” tab in the main VUM area, and verify that your new baseline is showing and that the details look correct. It should show up as a “Host Upgrade” under the Component column.

 

 

For the next step, we’ll be looking to attach this baseline to the older ESX(i) hosts that need to be upgraded. To attach to all your hosts in the same cluster all at once, go to your Hosts & Clusters Inventory view,  select your Cluster name, then go to the “Update Manager” tab near the end. From here, click on the “Attach” link as seen below:

 

 

Now you’ll want to attach the ESXi 5 upgrade baseline we created in the previous steps to the selected Cluster. Simply tick the box next to your “Host Upgrade” baseline name, then click “Attach”.

 

 

Now that your baseline has been attached, it will apply to all the ESX(i) hosts in the cluster. From the same Update Manager screen, click the “Scan” link to initiate a compliance scan. In other words, we’ll be looking to find out which hosts are not in compliance with our new ESXi 5.0.0 image, from which point we can move on to remediating (upgrading). Select the tickbox for “Upgrades“, then click “Scan” to continue. Once this is done, you should notice that all the hosts which have your baseline attached will show as Non-Compliant (unless you have manually upgraded any of these).
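As an aside, if you have the Update Manager PowerCLI plug-in installed, the attach/scan/compliance steps above can also be scripted. Here is a rough sketch – the cluster and baseline names are placeholders for whatever you used in the wizard:

$cluster  = Get-Cluster -Name "Lab-Cluster"
$baseline = Get-Baseline -Name "ESXi 5.0.0 Upgrade"

# Attach the upgrade baseline to the cluster, scan it, then report compliance per host
Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory -Entity $cluster
Get-Compliance -Entity $cluster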

 

 

If you are ready to begin the upgrade process for your hosts, click on the yellow “Remediate” button on this screen.

 

 

 

You’ll now be taken to the Remediation Wizard / Selection screen. Tick (Select) which hosts you would like to remediate (upgrade) and ensure the correct Host Upgrade Baseline is selected, then continue on to the next step.

 

 

 

 

There are a few options that you will need to configure in this wizard based on your environment. Preferences such as host and cluster remediation options can be set up, which control how your cluster and hosts should handle the remediation tasks – for example, the power state to put your VMs into, or which cluster features to keep enabled or disable. You can also define a schedule for remediation in this wizard if you wish. Here is a quick rundown of the steps in this wizard that I went through, along with a couple of screenshots of the options I chose in my lab upgrade.

 

1. Selection screen

2. Read and Accept the EULA

3. Choose whether or not to remove installed third-party software that is incompatible with the upgrade.

4. Optionally, set up a task name and description for the upgrade remediation task. Select Immediately, or specify a time to begin the task.

5. Choose how you would like to handle VMs for the remediation. Note that these options also apply to hosts in clusters. I left mine at “Do not change VM power state”, as I have DRS set to fully automated; therefore VMs running on a host will be vMotioned off when the host enters maintenance mode, and I won’t need to worry about moving them myself (you can confirm the cluster’s DRS automation level with the quick snippet after this list).

6. Select how the cluster should be handled during remediation. For example you can disable DPM during remediation. It will be re-enabled after the task is complete.

7. Check summary and finish the wizard.
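As mentioned in step 5, here is a quick way to double-check that DRS is enabled and fully automated on the cluster before kicking off the remediation (the cluster name is an example):

Get-Cluster -Name "Lab-Cluster" | Select-Object Name, DrsEnabled, DrsAutomationLevel, HAEnabled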

 

Host remediation options

 

 

Cluster remediation options

 

 

Checking through the summary and finishing off the wizard

 

 

Once you finish the wizard, your selected hosts will begin the upgrade (remediation) process. Depending on the options you chose, your VMs should automatically be vMotioned off the current host being upgraded as it enters maintenance mode. If you’d like to follow along with one of the hosts, have a look at the server console to see how progress is made. Working with a cluster of hosts, you should be able to use this method to upgrade hosts systematically in an automated fashion. One benefit of this method is obviously the time saved, but another is the fact that you can achieve zero downtime if it is done correctly. As each host is upgraded to ESXi 5.0.0, VMs can vMotion back onto the upgraded hosts, allowing you to shift the VM workload around the hosts that need to go down for maintenance. Lastly, don’t forget to ensure your licensing is sorted out after the upgrades are complete.
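Once remediation has completed, a quick PowerCLI sanity check along these lines will confirm that each host is reporting the expected ESXi 5.0.0 version and build:

# List each host with its connection state, version and build number after the upgrade
Get-VMHost | Sort-Object Name | Select-Object Name, ConnectionState, Version, Build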

 

In the next part, we’ll take a look at other tasks involved with an upgrade to vSphere 5.0.

 

More in this series:

 

Part 1 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 1 – vCenter Server

Part 2 – Upgrading to vSphere 5.0 from vSphere 4.x – Part 2 – Manually upgrading ESX(i) hosts

 

How to move your VMware vCenter Server database (SQL)

 

My vSphere 5 lab at home had been feeling a little sluggish when it came to running the vSphere Client and working with vCenter – I could see that the drive where my vCenter DB was running (SATA) was taking a bit of a hit when I accessed performance graphs and the like from vCenter. I had a little bit of free SSD storage on another drive (an OCZ Vertex 2 SATA drive) that would be perfect for my vCenter DB size, so in search of better performance in my lab, and as a bit of a practice run at moving the vCenter SQL database, I set about moving just the SQL databases on the VM to a new, dedicated SSD-based VMDK. This also meant that I didn’t take up too much SSD space, as I wasn’t moving the entire VM across to SSD storage – this kind of storage comes at a premium!

 

This post will cover the process I followed for SQL Server 2005 Express. Most lab environments are likely to be running this edition, especially if the lab was upgraded from vSphere 4.1. The steps are quite similar for a SQL 2008 database, but there are differences, so just make sure you follow the correct KBs if you are on a different edition of SQL Server. Here is the high-level overview of what was involved in the whole process.

 

High level overview: vCenter 5 steps on SQL Server 2005 Express

 

  • Shut down the lab and made a full clone of the vCenter VM, then powered it back up again afterwards – always good to have a rollback plan!
  • Added a new disk to the VM, located on my SSD-based storage.
  • Backed up all my SQL databases on the vCenter VM, along with the system databases.
  • Noted down all credentials that vCenter uses to connect to the SQL database, and checked I was familiar with all my ODBC settings just in case any of these needed changing or updating.
  • Stopped the vCenter and VUM services (see the quick sketch after this list).
  • Performed the database move steps carefully, verifying everything each step of the way.
  • Started the vCenter and VUM services back up and checked all was working as expected.
  • Note that there are some additional considerations if you are planning on moving a vSphere 4.x database. Refer to the VMware KB linked below for more info if you are on vSphere 4.x
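For reference, stopping (and later restarting) the relevant services can be done from an elevated PowerShell prompt along these lines. This is only a sketch – the service display names can differ between vCenter/VUM versions, so confirm yours in services.msc first:

# Stop the vCenter Server and Update Manager services before touching the database
Stop-Service -DisplayName "VMware VirtualCenter Server" -Force
Stop-Service -DisplayName "VMware vSphere Update Manager Service" -ErrorAction SilentlyContinue

# ...once the database move is complete and verified, start them back up again
Start-Service -DisplayName "VMware VirtualCenter Server"
Start-Service -DisplayName "VMware vSphere Update Manager Service" -ErrorAction SilentlyContinue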

 

VMware have a fairly high-level KB on moving your vCenter Server SQL database. You can take a look at it over here to see if you need anything else.

 

The Process

 

  • After making a clone of my vCenter VM, backing up all my SQL databases on the vCenter server, and stopping all my VMware specific services, I started with the Microsoft specific steps for moving a SQL 2005 database.

 

  • First off, you need to detach the VIM_VCDB database. Execute the following SQL query in SQL Server Management Studio:

 

use master
go
sp_detach_db 'VIM_VCDB'
go

 

  • After this query completes successfully, move your VIM_VCDB.mdf and VIM_VCDB.ldf files to the new location (where you are moving the database to) – a PowerShell sketch for the file move follows below. Once moved, go back to Management Studio and execute the next query, which will reattach the database. Of course, you will need to specify the path your database is now going to be located in – the attach query below references the path I used.
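If you prefer to do the file move from PowerShell rather than Windows Explorer, something along these lines works. The source path here is just the default SQL Server 2005 data directory and the destination matches the attach query below – both are examples, so adjust them to your own layout:

# Move the detached database files to the new SSD-backed volume (paths are examples only)
Move-Item "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\VIM_VCDB.mdf" "D:\VCDB_SQLDATA\"
Move-Item "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\VIM_VCDB.ldf" "D:\VCDB_SQLDATA\"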

 

use master
go
sp_attach_db 'VIM_VCDB','D:\VCDB_SQLDATA\VIM_VCDB.mdf','D:\VCDB_SQLDATA\VIM_VCDB.ldf'
go

 

  • After this query is successful, you can run the next stored procedure, which should return the new location of your database, provided it has been moved and reattached correctly.

 

use VIM_VCDB
go
sp_helpfile
go

 

  • Now that the Database has been moved, you should be good to start your vCenter services back up again and do some testing to ensure everything is working as expected.

 

Extra steps

 

If you would also like to move your system databases, things get a little more complex. First off, you will need to make sure Management Studio is set to open only a SQL query window on startup (Tools -> Options -> Environment -> General -> At Startup -> Open new query window). This is so that when we put the SQL Server service into single-user mode (part of moving the system DBs), we don’t get errors trying to run our scripts. By default, SQL Server Management Studio makes a SQL connection for its Object Explorer as well as for your query when you execute it, which means you’ll get an error message because the Object Explorer will already be holding the only available connection before you can execute your queries to move the system DBs. The Microsoft KB does not explain this, and it took me a little while to realise that this was the problem, so don’t forget to change this before starting! The rest of the steps can be followed through in this MSDN article on moving SQL Server system databases – just make sure the correct edition of SQL Server is highlighted at the top of the page before you begin.

 

The System Databases include:

 

  • Model
  • MSDB
  • Master
  • Resource
  • TempDB

 

Another tip when moving the system databases, which is not mentioned in the Microsoft KB article and will cause you to get stuck unless it is already configured, is to set the correct permissions on the new folder your SQL system databases are going to sit in. There is a local security group on your SQL server that needs to be assigned “Full Control” on the new system database location. If it doesn’t have this permission, you will get errors when you try to start the services up again. See the screenshot below for the group name (the name of this security group depends on the name of your server and SQL instance):
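Granting that permission can be done on the folder’s Security tab, or from an elevated PowerShell prompt with something like the below. Again, only a sketch – the folder path, server name and instance name are placeholders, and the group naming pattern shown (SQLServer2005MSSQLUser$&lt;ServerName&gt;$&lt;InstanceName&gt;) applies to SQL Server 2005, so check the actual group name on your own server first:

# Give the SQL Server local group Full Control on the new system database folder (names are examples)
icacls "D:\SQL_SYSTEMDBS" /grant 'SQLServer2005MSSQLUser$VCENTER01$SQLEXP_VIM:(OI)(CI)F'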

 

 

That should cover the whole process, as well as provide a few tips on the areas I initially got stuck on (that were not explained in the official Microsoft KBs). If you have a different edition of SQL Server, just switch the MSDN article to the relevant edition and take it from there. Good luck!