ESXi Host Backup & Restore GUI Utility (PowerCLI based) updated to 1.2

A quick post today just to mention that I have updated my ESXi 5.0 / 5.1 Host Backup & Restore GUI utility to version 1.2.

 

There are a couple of improvements in 1.2, based on feedback received in the comments about the utility. The main improvement introduces a function in the script backing the GUI which checks that ESX hosts are valid before attempting to back up or restore them. You can check the utility out over on its page here.

 

Updates (29-12-2012) – version 1.2:

  • Added ESX/ESXi host validation into utility – will now test that the host is valid and either connected or in maintenance mode before attempting backup or restore (See the script’s new “Check-VMHost” function for those interested)
  • Minor UI improvements
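For those interested in what such a validation might look like, here is a minimal PowerCLI sketch of the idea – note this is illustrative only, not the utility's actual "Check-VMHost" code:

```powershell
# Illustrative sketch only - not the utility's actual Check-VMHost code.
# Validates that a host exists and is either connected or in
# maintenance mode before a backup/restore is attempted.
function Check-VMHost {
    param([string]$VMHostName)

    $vmHost = Get-VMHost -Name $VMHostName -ErrorAction SilentlyContinue
    if (-not $vmHost) {
        Write-Warning "Host '$VMHostName' was not found."
        return $false
    }

    if ($vmHost.ConnectionState -eq 'Connected' -or
        $vmHost.ConnectionState -eq 'Maintenance') {
        return $true
    }

    Write-Warning "Host '$VMHostName' is in state '$($vmHost.ConnectionState)'."
    return $false
}
```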

 

Review: PHD Virtual Backup & Replication 5.4

 

Introduction

 

PHD Virtual Backup & Replication is, as the name suggests, a complete, all-in-one backup and replication package. It is available in both VMware and Citrix XenServer flavours. I have long been a user of other virtualization backup solutions and, until recently, never had the chance to play with PHD’s offering. A couple of weeks ago, PHD Virtual asked me to take a look at their backup offering and put down my thoughts in the form of a sponsored review. So I got the appliance installed in my lab environment and set about recording my thoughts and observations about the product whilst using it for various backup, recovery, and replication tasks in my lab over the last two weeks.

 

Thoughts and Observations

 

Getting PHD Virtual Backup up and running in my virtual lab environment was an absolute pleasure. Let’s just say the product definitely does what it says on the tin – installation was as simple as deploying the downloaded OVF file with the vSphere client (File -> Deploy OVF Template), powering up the “Virtual Backup Appliance” and setting up some basic network settings. I would say the longest part of the installation for me was finding the line in the installation steps that said “Press CTRL + N to enter the network settings in the console” (which wasn’t long at all)! After entering my network settings, I had the choice of either browsing to the IP address of my appliance, or running the PHDVB_Install.exe file to get the Virtual Appliance “Management” console installed. I simply ran the installer and within 8 minutes or so (from start to finish) I had PHD Virtual Backup & Replication up and running in my vSphere lab.

 

 

The product supports VMware and Citrix (XenServer) in terms of hypervisor platforms. As stated above, in this review I will be working with a VMware vSphere 5.0 environment, and have therefore put the VMware edition to the test.

 

What I liked this far into my experience was that I didn’t have to choose whether to run my backup solution on a physical or virtual machine – it’s simple – the product is a Virtual Appliance. You deploy the initial appliance and, if needed, scale by deploying more virtual appliances. This means you don’t need to worry about managing separate physical servers for your backup solution. This is just one of the reasons why PHD Virtual Backup is so easy to deploy.

The Virtual Appliance is pre-configured with the following specifications:

  • 1 vCPU
  • 1GB RAM
  • 8GB disk

In terms of actual backup storage, you of course have a few options:

  • Add a Virtual Disk to the Appliance itself (VMDK)
  • Configure network storage, which could be:
    • a CIFS target
    • an NFS target

 

I chose to use a separate NFS mount on a Virtual Appliance I use for general purpose storage and backup in my lab, so I simply opened the appliance management console (right click in vSphere Client -> PHD Virtual Backup -> Console) and went to “Backup Storage” under “Configuration” to configure my NFS datastore as a backup target. You can also set up a couple of thresholds for warning / stop levels in terms of free disk space on your target, as well as enable/disable backup compression at this stage.

Access to the management console is simple via right-click in the vSphere Client

 

Configuring Backup Storage for the VBA

 

 

Backing up VMs

 

As the virtual appliance integrates with the vSphere client, handling configuration tasks and actually setting up backups for your VMs is simple. There is no need to remote to another server or open up a console to your backup appliance VM. For my testing I configured a couple of different backup jobs – one to back up my vCenter, Update Manager and other VI VMs, and one to back up a couple of general-purpose VMs in my lab.

Backup speeds were good and on par with what I would expect from a product that utilises the VMware vStorage APIs for Data Protection (VADP). The first run of my first job took a little while to do the initial (full) backup, but subsequent runs of the job correctly used CBT (Changed Block Tracking) to pick up only changed blocks and copy those, significantly reducing backup times for my VMs. VMware HotAdd is also utilised to help speed up VM backups. Each job that runs gives you detailed information on statistics such as:
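As an aside, you can confirm whether CBT is enabled on your VMs independently of the product. A quick PowerCLI snippet (assuming an existing Connect-VIServer session to your vCenter) might look like this:

```powershell
# List each VM and whether Changed Block Tracking is enabled on it
# (assumes you are already connected to vCenter with Connect-VIServer)
Get-VM | Select-Object Name,
    @{Name = 'CBTEnabled'; Expression = { $_.ExtensionData.Config.ChangeTrackingEnabled }}
```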

 

  • Dedupe Ratios (Per VM and Per individual VM Disk)
  • Job average speed
  • Dedupe Ratios (Per Job)
  • Total amount of Data Written (useful for tracking how well CBT is working for example)
  • CBT Enabled/Not
  • Scheduling / Time details

 

Job details view

 

A nice feature I found at this stage was the ability to look at a detailed job log right from the console. Let’s say you have a job or VM in a job that gave a warning or error message for some reason, and you wished to find out the cause. All you need to do is right-click the job name and select “View Log”. This pops up a window with a detailed, timestamped job log, allowing you to dig in to each step of the backup process and see what happened at each stage of the particular backup job.

 

Detailed job log view

 

File Level Restore

 

Restoring files is also a simple task. From the main console, there is a “FLR” (File Level Recovery) section which handles this process. I tested restoring files from within two different VMs using this console. Both were Windows Guests (one Server 2003 Standard and one Server 2008 R2 Standard VM). The process went as follows:

  • Under “Backup Catalog” where your previous backup jobs are listed, select the VM / VM Disk you would like to restore from.
  • Click the “FLR” button.
  • Go through the “Backup to Share” wizard and tick the option to “Add target to iSCSI Initiator on this computer”.
  • Finish the wizard, and the VM disks are mounted on the local machine and are now accessible.

Select VM Disk to initiate FLR from under Backup Catalog.

 

Following the Wizard through to mount the VM Disk/s on local machine for File Level Restore

 

 

 

 

The disks for two different VMs are now mounted and ready to be accessed.

 

If you take a look at the Microsoft iSCSI Initiator tool you can see the two targets that have been mounted…
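The FLR wizard automates the iSCSI side for you, but for reference, the rough manual equivalent using the Windows iSCSI cmdlets (available on Windows 8 / Server 2012 and later) is shown below – the portal address is purely an example:

```powershell
# Rough manual equivalent of what the FLR wizard automates: connecting
# to an iSCSI target exposed by the backup appliance. The portal
# address is an example only - substitute your VBA's IP address.
New-IscsiTargetPortal -TargetPortalAddress '192.168.1.50'

# List the targets the portal advertises and connect to each one;
# the mounted VM disks should then appear under Disk Management
Get-IscsiTarget | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $_.NodeAddress
}
```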

 

 

Incidentally, file-level restores from Linux/Unix-based VMs can also be done with PHD VB. You just need to supplement the restore process with a third-party tool such as “Ext2explore”. You follow the same process to mount the VM disks using the FLR wizard, but then use Ext2explore to browse the mounted disk(s) instead of Windows Explorer.

 

Restoring full VMs

 

I must say that I really like the features available in PHD Virtual Backup & Replication when it comes to doing full/partial restores of VMs. The wizard you use is nicely laid out and functional. You also get some great restore options such as: appending a “_restored” tag to the end of your restored VM name, auto-generation of a new MAC address for the restored VM, and changing of the default VM network (portgroup).

These are all great features when it comes to restoring VMs, especially if you are restoring back into a production environment alongside the original VM and would like to ensure that there are no network conflicts, for example. I have a dedicated, isolated VM network for testing (no vSwitch uplinks to physical adapters), so the option to change the default network on the restored VM was perfect for my testing.

 

Selecting VMs to restore by Latest or by backup date/time order

 

The excellent array of restore options available when doing full/partial VM restores

 

VM Replication

 

PHD Virtual Backup also has replication functionality. Ideally you will want more than one VBA (Virtual Backup Appliance) running – for example, one in your DR site and one in your production site. The appliance in your DR site essentially connects to the backup storage at your production site and hooks into the backup jobs done there to find the latest VM changes to replicate. So when you set up a particular replication job, you should schedule it to start a short while after the relevant backup job completes; this ensures you replicate the latest changes. The replication job fetches only the changes since its last run.

To enable replication, you just need to complete a once-off configuration task using the PHD VB console – adding a Replication Datastore. All this involves is pointing the appliance at an existing PHD VB backup storage area, which can be a CIFS, NFS or VMDK disk store you are currently using for backups. As with VM restores, you also get some useful options when replicating, such as changing VM networks (portgroups) or auto-generating new MAC addresses for replicated VMs. I should also mention that you are able to do replication even with just one VBA.

From the PHD console, you are able to test your replicated VMs. This is quite a handy feature: after putting a replicated VM into “TESTING” mode, you can use the vSphere client to power it up and perform any testing and validation you might require. A snapshot is added to the VM to ensure that its pre-testing state is preserved. Once testing is complete, you simply click “Stop Test” in the console; the VM is powered down and changes are rolled back to the pre-testing state.

 

Testing replicated VMs with the console

 

Summary

 

Pros

 

  • “All in one” backup solution (everything you need in one Virtual Backup Appliance).
  • Simple and quick to deploy (or scale by adding more VBAs).
  • Good feature set (VM Backup, File Level Restore, Full VM restore, and Replication).
  • Easy to work with – simple/logical User Interface.
  • Integrates with the vSphere client for quick and easy access to Configuration, Backup, Restore and Replication options.
  • Great File-level restore – quick and easy access to files within VM backups (Windows or Linux/Unix).
  • Nice features available to change networking settings on restored VMs for testing or running alongside existing VMs.
  • Configurable VM Backup retention settings
  • Processing of multiple VMs at once in a backup job – allows VMs to be backed up in multiple streams instead of a “serial” fashion.

 

Cons

 

  • No network “fine tuning” options – for example, tuning deduplication when backing up over a WAN or LAN as opposed to direct disk storage. This would essentially allow you to have quicker (albeit larger) backups for local storage jobs, or longer backups with smaller sizes to transmit over WAN links.
  • A couple of small caveats when using replication (for example, configuration changes made to the original source VM are not replicated to the replica VM).
  • No automation options – these would be nice to have for backup, restore, replication or reporting automation (a PowerShell module would be welcome).

 

Conclusion

 

At the end of the day, PHD Virtual Backup is a great integrated backup and recovery product, with a little bit of room for improvement to add some extra “nice to have” features. The VBA (Virtual Backup Appliance) is dead easy to deploy and manage, and so are your backup, restore and replication processes. I think these are the best parts of the appliance. Whilst using it I found that each of the various backup and DR processes I needed was easy to use, thanks to a well laid-out UI that “just works”. Access to files in VM backups via the file-level restore wizard was a highlight for me – it didn’t take long at all to get at historic files and restore them using the “FLR” wizard.

The appliance offers a good selection of options, but these could be bettered by offering some form of automation (perhaps PowerShell access) and some more advanced settings for power-users. My thought was that some more advanced backup job options could be made available for power users to fine tune compression or deduplication ratios.

A free trial of the product is available and I would definitely encourage you to take a look – as mentioned above, it is so easy to deploy and manage that it won’t be long before you are up and running. This Backup & Replication product offers everything you need to handle DR for your VMware virtual environment.

 

Useful resources:

 

Installing PHD Virtual Backup & Replication for VMware vSphere

https://www.youtube.com/watch?v=g717ZG0rxjc

 

PHD Virtual Backup & Replication 5.4 Trial

 

ESXi 5.0 / ESXi 5.1 Host Backup & Restore GUI Utility (PowerCLI based)

 

This is a little bit of a fun project I have been working on in bits of spare time I find. It is all written in PowerCLI 5.0 / 5.1 / PowerShell, and the GUI was laid out using PrimalForms Community Edition.

 

Updates (17-02-2013) – version 1.3:

  • Hosts are retrieved using a new method (for both backup and restore options)

Updates (29-12-2012) – version 1.2:

  • Added ESX/ESXi host validation into utility – will now test that the host is valid and either connected or in maintenance mode before attempting backup or restore (See the script’s new “Check-VMHost” function for those interested)
  • Minor UI improvements

Updates (29-10-2012) – version 1.1:

  • This utility has been tested on ESXi 5.1 Hosts and confirmed to be able to successfully create a backup archive file of these.
  • Be sure your DNS is working correctly on the system you run the backup utility on – it relies on DNS to find your ESXi hosts as they are named in the dropdown when you select the host to back up. (See comments for troubleshooting.)
  • Restore to new hardware (force restore) option added
  • Fixed labels and connection box label description

 

Essentially, it is a GUI that allows you to back up ESXi 5.0 or ESXi 5.1 hosts to a destination of your choice on a local drive. It also allows you to restore ESXi host configuration bundle backups to other hosts. I had plans to integrate cloud storage (Amazon S3) as a backup target, but decided this doesn’t really add much value and to keep things relatively simple for now – so I have disabled that functionality. The PowerCLI script is fairly long – and I know there are plenty of improvements I could make to shorten and better the code, but for now it does the job. I have added various catches for exceptions/errors, so you should get visual feedback if you have entered an invalid path or username, for example.

 

Anyway, hopefully this proves useful to some! As always, take care using the backup/restore functionality of this utility. The restore functionality works by first putting the ESXi host into maintenance mode (if it isn’t already) and then applying a backup bundle to the host (restore). The host will reboot immediately following this. Backup and restore are implemented using the Get-VMHostFirmware and Set-VMHostFirmware cmdlets, so you can read their help descriptions for an idea of exactly how they work. Host backup bundles are stored in a path of your choice, and when restoring from a path, the cmdlet looks for a bundle filename that matches the name of the ESXi host selected to restore to.

Note that the file browser module I have implemented here (used to select paths) is a modification of the work done by Ravikanth Chaganti on his PowerShell Help Browser GUI. I simply adapted his code to list the contents of local drives in a tree view instead of PowerShell Help cmdlets. The only limitation is that I have not implemented code to browse further than one level into the root of each drive. If you can provide this modification yourself, please feel free to contribute in the comments – I have just not spent the extra time to do this myself yet.
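For reference, the core cmdlet calls the utility wraps look something like the following – the host name, credentials and paths here are placeholders only:

```powershell
# Example only - host name, credentials and paths are placeholders.
Connect-VIServer -Server 'esxi01.lab.local' -User 'root' -Password 'examplepassword'

# Backup: writes a configBundle-<hostname>.tgz to the destination path
Get-VMHostFirmware -VMHost 'esxi01.lab.local' -BackupConfiguration -DestinationPath 'C:\Backups'

# Restore: the host must be in maintenance mode first, and it will
# reboot immediately once the bundle is applied
Set-VMHost -VMHost (Get-VMHost -Name 'esxi01.lab.local') -State Maintenance
Set-VMHostFirmware -VMHost (Get-VMHost -Name 'esxi01.lab.local') -Restore `
    -SourcePath 'C:\Backups\configBundle-esxi01.lab.local.tgz' `
    -HostUser 'root' -HostPassword 'examplepassword'
```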

 

Here are a couple of quick screenshots of the Utility / GUI:

 

 

ESXi 5.0 Host Backup & Restore GUI Utility & Path Browser

 

Current version (1.3):

(Version 1.3) [download id=”25″]

 

Previous versions:

(Version 1.2) [download id=”21″]

(Version 1.1) [download id=”20″]

(Version 1.0) [download id=”9″]

 

Enjoy!

 

Backup Exec 2010 – Restore window hangs on devices tab

Just a quick post today. This is not necessarily a software issue, but I did come across it the other day whilst trying out the latest version of Backup Exec. Admittedly, it was my fault the restore window was freezing.

What I was doing was testing various restores of backup jobs that had been done on two different types of devices – one a standard B2D (Backup to Disk) device and one the newer deduplication type device. I had previously removed the deduplication device and deleted the deduplication storage folder on the Backup Exec media server, so every time I chose to restore from a backup set that had been done on the deduplication storage, the restore window would freeze and I would have to kill the BE GUI.

After this happened a few times, I realised that I was trying to reference a job done on deduplication storage which I had previously deleted. Duh! So, just in case you come across a hanging Devices tab when trying to restore, check that it is not trying to reference a device that has previously been removed. Ideally, the software would handle this problem and display an error to the effect of “Device not found” instead of freezing the entire GUI and leaving the CPU stuck at 100% utilisation each time it happened.