VM provisioning from e-mail using Python and the VMware Perl SDK

This is a bit of a fun project I put together as part of my presentation on the vPi project. It doesn’t necessarily achieve anything useful (at least not on the surface), but it does demonstrate some techniques that could be put to far greater use.

 


 

In summary, this integration turns e-mails from people into Virtual Machines in a vSphere environment. It consists of the following components:

  • Raspberry Pi running the vPi image
  • Python script
  • VMware Perl script (vmcreate.pl) + a bit of XML used for the VM template.
  • VMware Perl script (vmcontrol.pl)

The way it works is this: a Gmail mailbox is set up to capture e-mails sent to a specific address. The Raspberry Pi runs a Python script that logs into Gmail and looks for any new e-mail that has arrived. If one is found, the script takes the FROM address and splits it into components to determine the sender’s first and last names.
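As a rough sketch of what that part of the script does (a minimal example using Python 3’s standard imaplib and email modules – the mailbox credentials and the firstname.lastname@ address format are my assumptions here, not fixed requirements):

import imaplib
import email
from email.utils import parseaddr

# Log into the Gmail mailbox over IMAP and select the inbox
mail = imaplib.IMAP4_SSL("imap.gmail.com")
mail.login("vm-provision@gmail.com", "MAILBOXPASSWORD")  # hypothetical credentials
mail.select("inbox")

# Look for any new (unread) e-mails
result, data = mail.search(None, "UNSEEN")
for num in data[0].split():
    result, msg_data = mail.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])

    # Take the FROM address and split it into components to get the
    # sender's first and last names (assumes a first.last@domain format)
    display_name, address = parseaddr(msg["From"])
    first_name, last_name = address.split("@")[0].split(".", 1)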

The script then opens the XML template file that the vmcreate.pl script uses as the basis for creating VMs, searches it for a bit of bespoke placeholder text we put there called “TEMPLATE_NAME”, and replaces that placeholder with the sender’s name.
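The replacement itself is just a simple string substitution over the template file. A minimal sketch (the file names and the random-suffix scheme, explained in the next step, are my own illustration):

import uuid

# Build the VM name from the sender's name, appending a short random
# suffix so repeat e-mails from the same person don't clash (see below)
vm_name = "%s_%s_%s" % (first_name, last_name, uuid.uuid4().hex[:6])

# Swap the TEMPLATE_NAME placeholder for the generated VM name and
# write the result out as a per-request XML file for vmcreate.pl
with open("vmtemplate.xml") as f:
    template = f.read()
with open("/tmp/vmrequest.xml", "w") as f:
    f.write(template.replace("TEMPLATE_NAME", vm_name))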

We then move on to the next step, which involves invoking the vmcreate.pl script from the Python script, passing in the required parameters (such as the server to connect to, credentials, and the all-important XML template). This runs against the vSphere environment in question and creates a VM named after the e-mail sender, with a random string of characters appended to the end to ensure that multiple e-mails from the same person do not cause duplicate VM names.
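Invoking the Perl script from Python is a straightforward subprocess call. Something along these lines – note the vmcreate.pl flag names and path here are as I recall from the vSphere SDK for Perl samples, so verify them against your copy’s --help output:

import subprocess

# Hypothetical vSphere connection details
server, user, password = "vcenter01", "root", "VCPASSWORD"

# Hand the filled-in XML template to vmcreate.pl, which talks to vSphere
subprocess.check_call([
    "/usr/lib/vmware-viperl/apps/vm/vmcreate.pl",  # path as on the vPi image - adjust if yours differs
    "--server", server,
    "--username", user,
    "--password", password,
    "--filename", "/tmp/vmrequest.xml",
])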

Once the VM is provisioned, the Python script invokes the vmcontrol.pl script with the name of the VM we just created to power it up. Lastly, the Python script sends an e-mail back to the sender stating that their VM has been created and powered on. After that, voila! You have a new VM created and deployed in your datacenter, all triggered by a simple e-mail.
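Powering the VM on and replying to the sender follows the same pattern – again a sketch, with the vmcontrol.pl flags from memory (check --help) and the standard-library smtplib handling the confirmation e-mail:

import subprocess
import smtplib
from email.mime.text import MIMEText

# Power the newly created VM on via vmcontrol.pl
subprocess.check_call([
    "/usr/lib/vmware-viperl/apps/vm/vmcontrol.pl",  # path as on the vPi image - adjust if yours differs
    "--server", server,
    "--username", user,
    "--password", password,
    "--vmname", vm_name,
    "--operation", "poweron",
])

# E-mail the sender to say their VM is ready
reply = MIMEText("Your VM %s has been created and powered on." % vm_name)
reply["Subject"] = "Your VM is ready"
reply["From"] = "vm-provision@gmail.com"
reply["To"] = address

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
    smtp.login("vm-provision@gmail.com", "MAILBOXPASSWORD")  # hypothetical credentials
    smtp.send_message(reply)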


The script files required, plus the XML and XML schema files, are available for download below. The main Python script is fairly lengthy, so I won’t include its content directly in this post – just download the file to grab everything.

Notes to get the script up and running:

  • I found the vmcreate.xsd (XML schema file) for the VMware vmcreate.pl script did not work, so I had to modify it, changing some of the property names to match those the vmcreate.pl script was expecting. My updated version is included in the download below in case you get any errors from vmcreate.pl. Its default location is /usr/lib/vmware-viperl/apps/schema.
  • You will need to find and edit some variables in the main Python script – your mailbox name and password, plus the IP, username and password for the vmcreate.pl and vmcontrol.pl Perl script calls.
  • In the vmtemplate.xml file you should define the characteristics of the VMs that are created – guest OS, disk size, etc. Of particular importance are the name of the host to deploy to, the Datacenter name, the Datastore name to deploy to, and the default VM network to use. These are all of course unique to your own environment.

 

[download id="26" template=""]

[download id="27" template=""]

 

Once you start to think of other ways of using this, you can begin to imagine some really great (and even crazy) solutions. As a start, it would be quite easy to extend this so that e-mails undergo some sort of validation first – for example, does the sender’s domain exist in a whitelist of domains allowed to provision VMs, or does the body of the e-mail contain a required “password”?
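As a rough illustration of the whitelist idea, a couple of extra lines in the mail-processing loop from earlier would do it (the domain list is of course hypothetical):

# Hypothetical whitelist of domains allowed to provision VMs
ALLOWED_DOMAINS = {"example.com", "mycompany.local"}

domain = address.split("@")[1].lower()
if domain not in ALLOWED_DOMAINS:
    continue  # ignore senders outside the whitelist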

How about having a standard e-mail template, where the sender can specify more details, such as vCPUs, RAM, disk sizes, or the OS to install? You could then provision from VM templates with customization specs attached, instead of creating new VMs from scratch. Once the VM is provisioned and powered up, a script within it could take the parameters the VM was created with and use them to send the requester an e-mail saying “Hey! I’m now ready for you to connect, and here is the IP you can use…”.

 

Of course, this is not limited to vPi and the Raspberry Pi – that was just the platform I demonstrated this on. Since these are standard SDKs and scripting languages, you could use the above solution anywhere.

 

Using plink to modify ESXi host configuration files via SSH from a PowerCLI script

I am a big advocate of automation and of saving time with a good script. Whenever I find a task that is fairly lengthy and likely to be repeated in future, I consider scripting it. There are many ways to configure an ESXi host when it comes to writing build or automation scripts – in fact, I often feel we are quite spoilt for choice. Here are just some of the tools we have available to use:

  • PowerCLI
  • esxcli
  • vMA
  • vCLI

I was working on a build configuration script the other day using PowerCLI and found the need to edit some configuration files on the hosts. I wanted to edit the configuration file /etc/vmware/config during the execution of a single PowerCLI script without needing to stop the script or have an additional step to do myself. The following is what I came up with to achieve this:

  • Configure the host as normal using PowerCLI
  • Use PowerCLI to start the SSH service on the host
  • Execute plink to connect to the host, run commands via SSH, then disconnect
  • Use PowerCLI to stop the SSH service on the host
  • Continue with the rest of the PowerCLI script

 

Plink is a command-line connection tool – essentially a command-line version of PuTTY. You can call it from a command prompt and issue it a single command (or a list of commands) to run once connected to a specified host. You can download Plink over here.

 

So without further ado, let’s take a look at the script as I described above.

# At start of our script we ask for the host's IP or name (this could be automated if you like)
$hostIP = Read-Host "Enter ESX host IP/DNS name"
$vmhost = Get-VMHost $hostIP

# Start the SSH service
$sshService = Get-VMHostService -VMHost $vmhost | Where { $_.Key -eq "TSM-SSH" }
Start-VMHostService -HostService $sshService -Confirm:$false

# Use SSH / plink to configure host with some additional script
cmd /c "plink.exe -ssh -pw HOSTROOTPASSWORD -noagent -m commands.txt root@$hostIP"

# Stop SSH service
Stop-VMHostService -HostService $sshService -Confirm:$false

 

As you can see, we start off by asking for the host IP or name – this is the only bit of manual input, and even this could be automated. The script then finds the SSH service on the host and starts it. After this, the script calls plink.exe via cmd /c and connects over SSH as the root user to the host IP we entered at the beginning. Plink is pointed at a commands.txt file (previously placed in the script execution folder), which contains the actual shell commands to be executed on the ESXi host via SSH.

Here is the content of the commands.txt file that I point plink.exe at. As an example, this bit of script enables copy/paste operations in the guest OS console for all VMs running on the host, as per VMware KB 1026437 – but it could contain any other commands you wish to execute on the ESXi host over SSH.

echo 'isolation.tools.copy.disable="FALSE"' >> /etc/vmware/config
echo 'isolation.tools.paste.disable="FALSE"' >> /etc/vmware/config

 

* Note: two very useful techniques are shown by Alan in the comments section below – how to automatically download plink.exe if it is not available when the script is run, and how to accept the SSH fingerprint key prompt by piping Y to plink.exe from the script. Check out Alan’s blog post here for more detail.

Veeam Backup stats report for all your VM Backup jobs in PowerShell

 

The other day I was asked to collect some statistics on our Veeam Backup & Replication server from as many VM backup jobs as possible. The environment has roughly 70 scheduled jobs that run either daily or weekly. After searching around a bit, I could not find any existing solution or built-in method to retrieve the info I needed in a quick or automated way. My first ideas were either to grab the info via SQL queries against the Veeam database, or to take a sampling of 10-20 different types of jobs and their backup sessions over one normal incremental run day and one normal full backup day (manually collecting this data from e-mail reports would have been quite a slow process).

 

After browsing around the Veeam Community Forums I suddenly remembered that there is a PowerShell snap-in that Veeam includes with B&R. I read the basic documentation and got acquainted with a few simple cmdlets. I wanted to build a report that would loop through every single Veeam B&R job we have and grab data from the last 7 backup sessions of each (we run daily backups), giving me a good picture of performance and times taken for both full and incremental backup runs.

My first attempt at a script got me almost all the way there (written during spare time in my evenings!). I was, however, having trouble matching backup session data with the right day’s backup file stats – sometimes the ordering was out, and I would get metrics back for a backup file that was not from the correct day. Before I was able to resolve this myself, help arrived from “ThomasMc” over at the Veeam Community Forums (thanks, Thomas!). We got a script together that matches up sessions correctly. I then added a few more features, some nice HTML formatting, and the ability to grab statistics for all jobs instead of just one sample job. The resulting script gets the following info for you:

 

  • Index (1 = the last backup session, 2 = the day before that, etc.)
  • Job Name
  • Start time of job
  • Stop time of job
  • File Name (Allows you to determine if the job was a full or incremental run)
  • Creation Time
  • Average Speed MB – average processing speed of the job
  • Duration – time the job took to complete
  • Result – Success/Warning/Failed (Failed is highlighted in red)

 

Here is an example of the report run on my Veeam Backup & Replication lab environment at home (thanks to Veeam for the NFR licenses they gave out to VCPs earlier this year!).

 
[download id="1"]
[download id="8"]

 

So, to run the above script, launch a PowerShell session from within Veeam B&R (Tools -> PowerShell). This makes sure your PowerShell session launches with the Veeam automation/PowerShell snap-in loaded. Execute the script and you’ll get an HTML file output to the root of your C:\ drive. By default, all jobs you have in Veeam will be detailed. If you wish to sample a specific job, or jobs with a certain word/phrase in their names, adjust the -match parameter on the Get-VBRJob cmdlet line near the top of the script. The default setting is an empty string – i.e. “”. To change how many sessions the script fetches for each backup job, just change the “$sessionstofetch” variable defined at the top of the script.
I have added comments throughout the script for those interested in how it works. Lastly, you could also quite easily modify this script to e-mail you the report, or even run it as a scheduled task. Let me know if you need help doing this and I’ll gladly modify it as required.

 

PowerCLI – Automate adding NFS Datastores to a cluster of ESX or ESXi hosts

 

The other day I needed to add three NFS datastores to a bunch of vSphere ESX hosts in a specific cluster. Rather than go through each host in vCenter individually, adding the datastore with the Add Storage wizard, I thought I would script the process in PowerCLI and get it done in a more automated fashion. I had about 7 ESX hosts to add the datastores to, so doing this manually would have taken twice the time it took to whip up this script and run it. Plus, the script can be reused in future for other datastores or other clusters by simply modifying it and re-running it.

 

Here is the script:

 

# PowerCLI script to add NFS datastores to ESX/ESXi hosts in a specified Cluster
# Only does this to hosts marked as "Connected"
$hostsincluster = Get-Cluster "Cluster 1 - M" | Get-VMHost -State "Connected"
ForEach ($vmhost in $hostsincluster)
{
    ""
    "Adding NFS Datastores to ESX host: $vmhost"
    "-----------------"
    "1st - MER001 - NAS-SATA-RAID6 (Veeam Backups)"
    New-Datastore -VMHost $vmhost -Name "MER001 - NAS-SATA-RAID6 (Veeam Backups)" -Nfs -NfsHost 10.1.35.1 -Path /share/VeeamBackup01
    "2nd - MER002 - NAS-SATA-RAID6 (ISOs)"
    New-Datastore -VMHost $vmhost -Name "MER002 - NAS-SATA-RAID6 (ISOs)" -Nfs -NfsHost 10.1.35.1 -Path /share/Images01
    "3rd - MER003 - NAS-SATA-RAID6 (XenStore01)"
    New-Datastore -VMHost $vmhost -Name "MER003 - NAS-SATA-RAID6 (XenStore01)" -Nfs -NfsHost 10.1.35.1 -Path /share/XenStore01
}
"All Done. Check to ensure no errors were reported above."

 

So the script above looks for ESX or ESXi hosts in the specified cluster that are in a “Connected” state – i.e. not disconnected in vCenter (we wouldn’t want to try to add datastores to hosts that aren’t there!). We use the Get-Cluster cmdlet to say we are only concerned with hosts in this particular cluster (specified by the “Cluster 1 - M” name in my case – obviously change this to the name of the cluster you are working with), then pipe to Get-VMHost -State “Connected” to list all of the hosts in the cluster that are in a connected state. In my example I had 2 x ESX hosts in a disconnected state that I didn’t want to include, so this part worked nicely. The list of hosts is then assigned to the $hostsincluster variable, and we use a ForEach loop to iterate through each host in the list, running the bit between the curly braces for each one.

 

You may notice that in my case I am adding datastores from the same NFS (NAS) server; they are just mounted from different paths on the server and given different names. I had three datastores to add, so I use the New-Datastore cmdlet three times for each host. You will need to adjust this to your needs – maybe you only need to add one datastore to each host, in which case remove the two extra New-Datastore sections. Also remember to adjust the -NfsHost and -Path values to suit your own environment.

 

We could improve the above script by making it more customisable for future use by yourself or others. Let’s give that a quick go, using variables to define everything at the top of the script. This means the values can be changed in one place, without reading through the whole script to check for things to change. We’ll also add a Connect-VIServer cmdlet, in case you have not already connected and authenticated to your vCenter server in the PowerCLI session running the script.

 

# PowerCLI script to add NFS datastores to ESX/ESXi hosts in a specified Cluster
# Only does this to hosts marked as "Connected"

# Define our settings
$vcserver = "vcenter01"
$clustername = "Cluster 1 - M"
$nfshost = "10.1.35.1"
$nfspath1 = "/share/VeeamBackup01"
$nfspath2 = "/share/Images01"
$nfspath3 = "/share/XenStore01"

# Connect to vCenter server
Connect-VIServer $vcserver

# Do the work
$hostsincluster = Get-Cluster $clustername | Get-VMHost -State "Connected"
ForEach ($vmhost in $hostsincluster)
{
    ""
    "Adding NFS Datastores to ESX host: $vmhost"
    "-----------------"
    "1st - MER001 - NAS-SATA-RAID6 (Veeam Backups)"
    New-Datastore -VMHost $vmhost -Name "MER001 - NAS-SATA-RAID6 (Veeam Backups)" -Nfs -NfsHost $nfshost -Path $nfspath1
    "2nd - MER002 - NAS-SATA-RAID6 (ISOs)"
    New-Datastore -VMHost $vmhost -Name "MER002 - NAS-SATA-RAID6 (ISOs)" -Nfs -NfsHost $nfshost -Path $nfspath2
    "3rd - MER003 - NAS-SATA-RAID6 (XenStore01)"
    New-Datastore -VMHost $vmhost -Name "MER003 - NAS-SATA-RAID6 (XenStore01)" -Nfs -NfsHost $nfshost -Path $nfspath3
}
"All Done. Check to ensure no errors were reported above."

So as you can see, we have now defined the name of the cluster, our NAS/NFS server, and three paths to different NFS shares at the top of the script, then simply referenced these variables later on. This means we can easily adjust the variables at the top of the script in future to work with different clusters, NAS/NFS servers or paths. The output of the final script when run should also give you a nice view of what has happened: it reports that it is adding NFS datastores to each host it iterates through, and if it comes across any errors those will be marked in red as PowerShell / PowerCLI normally does, allowing you to amend or update any details as necessary. PS: don’t forget to change the name of each datastore in the script to something of your own choice (it is the part after the -Name parameter on each New-Datastore line).

 

Here is the download for the full script (with latest improvements):

[download id="5"]