London VMUG Meet up – 26 January 2012

 

Welcome session attendance - Photo credit - Chris Kranz (@ckranz)

 

Today's meetup was the first London VMUG that I have attended. In the past they have unfortunately fallen on days where work commitments took precedence. Running a few minutes late due to a long walk from Bank Underground Station to the venue, I arrived (to my luck) to find that the introduction had also kicked off a few minutes late, pushing most of the schedule back by fifteen minutes. I snuck in through a door near the back to listen to the welcome session.

 

First Sessions of the morning

 

Attendance was good from what I saw today – all the sessions were quite full and well attended. Symantec did an interesting presentation on ApplicationHA – a talk followed by a live demo showcasing Application High Availability. The demo entailed bringing down the SQL Server instance on a VM and letting ApplicationHA restart the service to sort it out again. This was followed by another demo – deleting the entire database and allowing ApplicationHA to pick up the problem and repair SQL Server by leveraging Backup Exec to restore the database. Symantec were also kind enough to offer up some NFR licenses for lab/testing use at the end of their presentation. It's a shame I didn't get a chance to visit their stand during the break, as I was keen on taking a closer look at this in my own home lab environment.

 

Next up, Chris Kranz and Alex Smith did an informative set of sessions entitled "Would you like fries with your VM?" and "DevOps & Service Management" respectively. Both talks prompted some interesting discussion around the traditional "IT Admin" role compared with the "Virtual Admin" and "Cloud Admin". Summing up – IT professionals should stay on top of their game and adapt to survive in this ever-evolving industry! Alex also shared some of his own experiences and chatted about DevOps and Service Management along with a few other acronyms – determined to drill these into everyone's head!

 

During the break I was able to meet up with Gregg Robertson and Jonathan Medd – there was some interesting chat in the short break, after which the next set of sessions began.

 

Midday Session

 

This was a set of sessions that conflicted for me – I was really keen on both. I have had a brief look at Auto Deploy before (whilst studying for the VCP5), but I also really wanted to see the VMware View session (End User Computing: Today & Tomorrow – Simon Richardson). I ended up attending Alan Renouf and Max Daneri's "How to build 1000 hosts in 10 minutes with Auto Deploy" session – there were quite a few slides to get through, but it gave a good overview of the PowerCLI cmdlets used for setting up Image Profiles (working with VIBs), Rule Sets and Auto Deploy in general. Max then handled a great demo showcasing Auto Deploy at work.
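For anyone who hasn't looked at Auto Deploy yet, the rough shape of the PowerCLI workflow the session covered (Image Builder profiles plus Auto Deploy rules) is sketched below. This is just an illustrative outline from memory – the depot path, profile name, cluster name and IP range are hypothetical placeholders, so check the official documentation before trying it:

# Sketch only - the names and paths below are placeholders, not from the session itself
# Add an ESXi offline bundle as a software depot (Image Builder)
Add-EsxSoftwareDepot "C:\Depot\VMware-ESXi-5.0.0-offline-bundle.zip"

# Clone a stock image profile so it can be customised (add/remove VIBs)
$profile = New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "ESXi50-Lab" -Vendor "Lab"

# Optionally add an extra VIB/driver to the cloned profile
# Add-EsxSoftwarePackage -ImageProfile $profile -SoftwarePackage "example-driver-vib"

# Create an Auto Deploy rule mapping hosts in an IP range to the image profile and a cluster
New-DeployRule -Name "LabHosts" -Item $profile, "LabCluster" -Pattern "ipv4=192.168.1.100-192.168.1.150"

# Activate the rule so new hosts that PXE boot pick it up
Add-DeployRule -DeployRule "LabHosts"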

 

Post Lunch Sessions

 

I went to the "Stop the Virtualization Blame Game" session by Xangati (Ben Vaux) next. This was of interest to me, as a couple of weeks ago I deployed the free "one host" Xangati VI monitoring appliance in my lab at home. There were unfortunately a few issues with the projector in our room, but we still got a good talk about how the product works, and some interesting questions were answered by the team. Xangati also had a demo set up in the main vendor / lunch area for live demos throughout the day. The product aims to give sysadmins a "single pane of glass" view of the entire VI / VDI environment, from which everything can be monitored and looked after. It monitors stats in real time and also offers a handy "record" feature which allows events / issues in an environment to be captured and replayed later to see what went wrong. Interesting stuff, and I'll definitely be playing with this product further in my home lab.

 

The next session I attended was the "Private vCloud Architecture Deep Dive" with Dave Hill and Aidan Dalgleish. This was an interesting and fairly in-depth session discussing the whole VMware ecosystem: vCloud Director 1.5, vShield, Chargeback and so on. A reference architecture was presented and discussed, along with the three network pool methods and their various pros and cons (VLAN-backed, port group-backed and vCloud Network Isolation (VCNI)-backed). I also wanted to attend Michael Poore's session on Orchestration, however these two sessions conflicted and I unfortunately had to decide at the last minute which one to view!

 

The final session had me attending the Embotics lab – I had a quick try of their V-Commander product to see what benefits it offered. I really wanted to see the Cisco UCS presentation as well, but unfortunately had to miss it; I will definitely be catching up on it with the slides that will hopefully be made available soon. Gregg Robertson also did his VCP 5 Tips and Tricks presentation, which I hear went down well – I skipped this one as I was lucky enough to fit in an exam and get my VCP 5 done earlier this month. Whilst on the topic of VCPs, Jonathan Medd surprised everyone as he casually snuck off during lunch to Global Knowledge to sit his VCP 5 exam… and passed!

 

Ending off with vBeers just down the road, I managed to catch up and have some great conversation with a few other guys, including Gregg Robertson, Jonathan Medd, Darren Woollard, Jeremy Bowman, Michael Poore et al. (sorry to those whose names I omitted, i.e. forgot!). All in all, a great day was had with some interesting content!

 

Edit – the slides are now up from the VMUG – they can be accessed here.

VMware Labs – iSCSI Shared Storage how-to using the HP P4000 LeftHand VSA [Part 2/2]

 

In the last post [Part 1/2], we prepared our VSA, created a management group and cluster in the CMC, and then initialized our disks. Next up, we'll be creating a Volume which will be presented to our ESXi hosts as an iSCSI LUN. Before we do this though, we need to make sure our hosts will be able to see the LUN, so we'll start by making entries for each of our ESXi hosts using their iSCSI initiator names (IQNs).

 

Preparing your iSCSI Adapter

 

If your ESXi hosts don't already have a dedicated iSCSI adapter, you'll need to use the VMware software iSCSI adapter. By default this is not enabled in ESXi 5.0, but it is simple to fix – we just need to get it added. Select your first ESXi host in the Hosts & Clusters view of the vSphere Client, then click Configuration -> Storage Adapters -> Add -> "Add Software iSCSI Adapter". Click OK to confirm.
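If you prefer PowerCLI, enabling the software iSCSI adapter can also be scripted. Here is a minimal sketch, assuming you are already connected to vCenter with Connect-VIServer – the host name is just a placeholder for your own:

# Hypothetical host name - replace with your own ESXi host
$vmhost = Get-VMHost -Name "esxi01.lab.local"
# Enable the software iSCSI initiator on the host
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true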

 

 

Now we need to find the IQN of the iSCSI adapter. In the vSphere Client, highlight the iSCSI adapter you are using under Storage Adapters and click Properties. This will bring up the iSCSI Initiator Properties. Click the Configure button and copy the iSCSI Name (IQN) to your clipboard.

 

Getting the iSCSI adapter IQN using the vSphere Client

 

Quick tip: you can also fetch your iSCSI adapter information (including the IQN) using esxcli. Log in to your host (via the vMA appliance, the DCUI, or SSH, for example) and issue the following command, where "vmhba33" is the name of the adapter you want to fetch info on:

 

esxcli iscsi adapter get -A vmhba33

 

Getting the iSCSI adapter IQN using ESXCLI
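The same information is also available via PowerCLI if that is more convenient. Something along these lines should list the device name and IQN for each iSCSI HBA on a host (the host name below is a placeholder):

# List the iSCSI HBAs on a host along with their IQNs
Get-VMHost -Name "esxi01.lab.local" | Get-VMHostHba -Type iScsi | Select-Object Device, IScsiName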

 

Configuring a Server Cluster and the Server (Host) entries in the CMC

 

We'll now create "Servers" in the HP CMC; these are what we'll later assign to our volume (the "SAN LUN") to give our ESXi hosts access. In the CMC, go to Servers and then click Tasks -> New Server Cluster. Give the cluster a name and optional description, then click New Server. Enter the details of each ESXi host (click New Server for each host you have in your particular cluster). For each ESXi host, the Initiator Node Name is the iSCSI Name, or IQN, that we got from the host in the step above. The Controlling Server IP Address in each case should be the IP address of your vCenter Server. For this example we won't be using CHAP authentication, so leave that at "CHAP not required". Once all your ESXi hosts are added to the new cluster, click OK to finish.

 

Creating a new Server Cluster and adding each ESXi host with its corresponding iSCSI Name/IQN

 

Creating a new Volume and assigning access to our Hosts

 

Back in the CMC, with our disks now marked as Active, we can create a shiny new Volume, which is what we will be presenting to our ESXi hosts as an iSCSI LUN. Right-click "Volumes and Snapshots" and then select "Create New Volume".

 

 

Enter a Volume Name and Reported Size. You can also use the Advanced tab to choose Full or Thin provisioning, as well as the Data Protection level (though I believe the latter requires more than one VSA running).

 

 

Now we'll need to assign servers to this Volume. (We'll be assigning the whole "Server Cluster" we created earlier to this Volume to ensure all our ESXi hosts get access to it.) Click Assign and Unassign Servers, tick the box for the Server Cluster you created and ensure the Read/Write permission is selected, then click OK.

 

Assign the Server Cluster to the Volume for Read/Write access

 

Final setup and creating our Datastore with the vSphere Client

 

Go back to the vSphere Client, select one of your ESXi hosts, and bring up the Properties for your iSCSI adapter once again. We'll now use "Add Send Target Server" under the Dynamic Discovery tab to add the IP address of the P4000 VSA. Click OK, then Close once complete.
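For reference, the rough PowerCLI equivalent of this step is below – a hedged sketch only, where the host name and the VSA IP address are placeholders for your own values:

# Find the software iSCSI HBA on the host
$vmhost = Get-VMHost -Name "esxi01.lab.local"
$hba = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -match "Software" }
# Add the VSA as a Send Targets (dynamic discovery) address
New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.0.50" -Type Send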

 

 

You should be prompted to rescan the host bus adapter at this stage. Click Yes and the rescan will begin. After the scan is complete, you should see your new LUN being presented as a new device listed under your iSCSI adapter (vmhba33 in my case, for the software iSCSI adapter).

 

New device found on the iSCSI Software adapter
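If you are repeating this across several hosts, the rescan and a quick check of the presented devices can be scripted too. A rough PowerCLI equivalent (host name is again a placeholder):

# Rescan all HBAs (and VMFS) on the host, then list the SCSI disk devices it can now see
$vmhost = Get-VMHost -Name "esxi01.lab.local"
Get-VMHostStorage -VMHost $vmhost -RescanAllHba -RescanVmfs
Get-ScsiLun -VmHost $vmhost -LunType disk | Select-Object CanonicalName, Vendor, CapacityMB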

 

Now that everything is prepared, it is a simple case of creating our VMFS datastore from this LUN for our ESXi hosts to use for VM storage. Under Hosts & Clusters in the vSphere Client, go to Configuration, then Storage. Click Add Storage near the top right and follow the wizard through. You should see the new LUN being presented from our VSA, so select that and enter the details of your new datastore – capacity, VMFS file system version and datastore name. Finish off the wizard and you are done. The new datastore is created, partitioned and ready to be accessed!

 

Add Storage Wizard

 

Complete - the new datastore is added
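The datastore creation can also be scripted with New-Datastore if you'd rather not click through the wizard. A minimal sketch, where the host name, datastore name and the LUN's canonical (naa) name are placeholders you would substitute from your own environment:

$vmhost = Get-VMHost -Name "esxi01.lab.local"
# Use the canonical name of the new LUN (as seen under the iSCSI adapter or in Get-ScsiLun output)
# FileSystemVersion 5 creates a VMFS-5 datastore - adjust or omit as needed
New-Datastore -VMHost $vmhost -Name "VSA-iSCSI-DS01" -Path "naa.xxxxxxxxxxxxxxxx" -Vmfs -FileSystemVersion 5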

 

Well, that is all there is to it. To summarise, we have now achieved the following over the course of these two blog posts:

 

  • Installing and configuring the P4000 LeftHand VSA
  • Setting up the CMC
  • Creating a VSA Management Group
  • Creating a VSA Standard Cluster
  • Creating Server entries in a new Server Cluster for each of the ESXi hosts that the storage will be presented to
  • Creating a LUN / Storage Volume
  • Configuring the ESXi hosts to find our VSA in the vSphere Client
  • Adding the Storage to our ESXi hosts and Creating our VMFS Datastore using the vSphere Client

 

To conclude, I hope this series has been helpful and that you are well on your way to setting up iSCSI shared storage for your VMware Cluster! As always, if you spot anything that needs adjusting, or have any comments, please feel free to add feedback in the comments section.

 

Nominate your top VMware & Virtualization blogs for 2012

 

This morning I came across a link to vsphere-land.com regarding the top Virtualization (look at that, I spelt Virtualization with a "z"!) blogs of 2012, with voting happening between 23/01 and 07/02. I missed out on the blog nominations for the list earlier in January, but will still be voting of course. It will be a tough choice, as there are so many talented and deserving bloggers out there. My voting strategy will involve choosing five well-known blogs that I enjoy reading and think deserve a vote, along with five other blogs that deserve more attention than they currently get (in my opinion). I'll try to spread the votes out one by one, so that there is a good mix of points for both sets of blogs.

 

Best of luck to all the bloggers nominated! I see there are some nice TrainSignal VMware View and vSphere training DVDs up for grabs to voters, so make sure you get in there and cast your votes before the cut off time.

 

Cast your votes

Original post by Eric Siebert over at vSphere-Land discussing the nominations

 

 

VMware Labs – iSCSI Shared Storage how-to using the HP P4000 LeftHand VSA [Part 1/2]

 

iSCSI Shared Storage for your Lab

 

I have had a few people asking how I set up the shared iSCSI storage for the VMware lab environment I run at home – the same lab I used to study for my VCP 4 and VCP 5 exams. So, I thought I would write up a blog post detailing how to go about setting this up and trying it out for yourself.

 

You may already have NFS shared storage up and running for the ESXi hosts in your lab, but what about iSCSI? There are many different options out there. Here are a few I can think of off the top of my head:

 

  • FreeNAS VM
  • OpenFiler VM
  • HP P4000 Lefthand VSA trial
  • Hardware based – for example Iomega StorCenter IX2 series or QNAP NAS device

 

The last two options (hardware based) are less feasible for a lab environment, as you ideally don't want to pay for something you will just be testing. That being said, I was quite keen on the HP P4000 LeftHand VSA, as it offers the same kind of interface that you would use with the actual hardware version, as well as some really cool enterprise-like features, such as clustering. In fact, as I understand it, many businesses actually use the P4000 VSA in production – it was in the game before VMware came out with their own virtual shared storage solution. Both of these solutions provide highly available shared storage for your ESXi hosts. Anyway, enough of the small talk – let's get on to setting up some shared iSCSI storage for our ESXi hosts to use for running Virtual Machines.

 

Deciding where to run your P4000 VSA VM

 

First of all download the trial of the HP P4000 LeftHand VSA. Once you are signed up for the free trial, you should get two options – one version for “Laptops” and one for “ESX”. Grab the relevant version – I chose to run my VSA VMs directly in VMware Workstation 8 and allowed my ESXi VMs access to their storage. If you want to run your VSAs as VMs on your ESXi host VMs then grab the “ESX” version. Once you have it downloaded, extract the download into a convenient location. I wanted my VSA to run on faster disks in my home system, so I moved the extracted files to an SSD volume. Remember to take this into consideration for your lab too – VMs will be running on this, so plan your lab VM deployment and storage carefully. Once ready, simply right-click the VSA.vmx configuration file and select “Open with VMware Workstation”. (Or add to Inventory if you are using the ESX version and browsing the VSA with your Datastore Browser).

 

 

Configuration

 

Now that we have the VSA VM inventoried, we need to create some additional virtual disks for it to use (by default it just has a disk used for its OS). Right-click the VM and add some disks. There is one important thing to note here – the disks should be added on SCSI devices 1:0 and onwards. I added 3 x virtual disks to my VSA. Note that a storage total of more than 500GB will require your VSA VM to have more than 768MB of RAM. I chose 3 x 80GB virtual disks, meaning I would get a 160GB RAID5 volume at the end of this exercise. I found out the hard way (troubleshooting a VSA that would not work) that your VSA needs around 1GB or more of RAM if it has more than 500GB of storage! Keep this figure under 500GB and you can get away with the default 384MB of RAM, which is ideal for a home lab. So here are the details I used for each virtual disk added (a total of 3 of these):

 

  • New Virtual Disk
  • SCSI (Recommended)
  • Mode -> (Independent) -> Persistent
  • Enter size of disk – for e.g. 80GB
  • Thin provisioned (Leave “Allocate all disk space now” unticked) – to save disk space on those SSDs especially!
  • Store Virtual Disk as a Single File
  • Specify Virtual Disk filename

 

Important: If you didn't get an option to specify the Virtual Device Node, go back to "Advanced" on each disk and change it to Virtual Device Node SCSI 1:1 through 1:3 (a different node for each disk you added). If you do not make these selections, the VSA will not detect your disks or be able to use them.

 

Remember to use these Virtual Device Nodes for each disk added.

 

Once your disks are added, ensure the VM is on the right network (I used bridged networking in Workstation for my lab; your network situation will of course vary), then power up the VSA. Whilst it is powering up, we'll need to get the HP P4000 Centralized Management Console installed on a "management" PC. In your VSA download you should have also received the installer for this. Simply run the installer and go through the wizard to get it installed.

 

P4000 Centralized Management Console Installer

 

Back on your VSA console, you should now be at the login prompt – type Start to log in, then press Enter at the "Login" screen. We'll now be presented with a menu:

 

 

Navigate to Network TCP/IP Settings and choose your eth0 adapter. Configure a hostname for your VSA – in my example I used “blogvsa.noobs.local”. Don’t forget to set your VSA up to have a static IP address and enter your network details. If you have a DNS server, now would be a good time to also add an A Name Record for your VSA’s hostname and assign it the IP address you configured it with. Accept the network changes for the VSA and wait for it to apply the new settings.

 

Hostname and Network configuration.

 

Now launch the HP P4000 Centralized Management Console on the machine you installed it on, and we'll begin setting this VSA up. Once open, you should have a few options on the left, and hopefully the CMC will have already found your new VSA on the network. If not, don't stress – just use the menu option Find -> Find Systems -> Find. Once the VSA is discovered, you can close the "Find" window and view the VSA under Available Systems.

 

Expand Available Systems and locate your newly powered up VSA.

 

Next, we'll create a new Management Group and add the VSA to it. The group will exist on this VSA alone, as it is our only storage system. Right-click the VSA, choose Add to New Management Group, and work through the wizard:

  • Give the group a suitable name, then click Next.
  • Create an Administrative user – enter the details for a new admin account, then click Next.
  • Specify NTP server settings, or set the time manually, then click Next.
  • Set up your DNS Server and Domain Name on the next screen, then click Next.
  • If you have an SMTP server to use for email alerts, enter those settings on the next screen, otherwise continue.
  • To keep things simple, on the "Create Cluster" page select the default "Standard Cluster" option, continue, give it a name, then click Next.
  • The next screen requires you to specify a Virtual IP for fault tolerance or load-balanced iSCSI access. Add an IP and the correct subnet mask, then click Next.
  • The next screen allows us to create a volume. We have not set up our disks and RAID yet, so tick the option to "Skip Volume Creation" – we'll come back to that afterwards.

Finish the wizard and wait for it to create the Management Group and configure everything for you. Once complete, it should automatically log you in to the Management Group using the admin user you specified. Review the summary, then close the wizard.

 

Now, expand out your Storage Cluster under the new Management Group and find your VSA system. Select Storage and then click the Disk Setup tab. We’ll now initialize each disk that we added to the VSA earlier and add it to the RAID group for the VSA. Right-click each uninitialized disk and select “Add Disk to RAID“.

 

Add each uninitialized VSA disk to the RAID group.

 

This post is getting a little long now, so I'll end it here with our VSA configured, the Management Group and Cluster set up, and our disks initialized. In the next post [part 2/2], we'll create a new Volume with these disks and set up the iSCSI initiators from our ESXi hosts as "Servers" in the CMC. After this, we will present the new Volume to our ESXi hosts as an iSCSI LUN and create our VMFS shared storage for vSphere to use. Stay tuned, as part 2 will be coming soon! (Hopefully tomorrow!) See below for the link to the next part:

 

Edit – [part 2/2] is now up – finish off the article here.

 

PowerCLI – Fetch Interesting stats or configuration for a list of VMs

Now and then I find that I need to retrieve some useful information from a variety of VMs. This usually involves running Get-VM with various selections and search criteria. However, sometimes the information I need about a VM lives in its advanced configuration and isn't as easy to get at with a single cmdlet. I thought it would be really handy to have a PowerCLI function that easily pulls the useful information out for me and summarises it for any given VM or set of VMs.

 

With that said, I recently read a great blog post by Jonathan Medd (Basic VMware Cluster Compatibility Check), and after reading it I thought it would be a great idea to create a set of functions that provide the information I use most often. To start, I have written a function that lists the most useful or common information about VMs that I often search for. As well as speeding up the process of retrieving information about VMs, it is also good PS/PowerCLI practice for me to write more functions, as I tend to write PowerCLI reporting scripts rather than actual functions that accept input from the pipeline or other parameters. Below is my function to collect some useful information about Virtual Machines – you can specify a VM with the -VM parameter or pipe a list of VMs to it, using Get-VM | Get-VMUsefulStats. Jonathan's post also had an interesting section about the order in which output is displayed. You'll need to pipe the output to Select-Object to change the order in which the information comes back, otherwise it will be listed in the default order. This is not really a problem, just good to know if you are fussy about the order of the output!

 

So, here is the first function (Get-VMUsefulStats):

 


 

function Get-VMUsefulStats {
<#
.SYNOPSIS
Fetches interesting or useful stats about VMware Virtual Machines

.DESCRIPTION
Fetches interesting or useful stats about VMware Virtual Machines

.PARAMETER VMName
The name of the Virtual Machine to fetch information about (aliases: VM, Name)

.EXAMPLE
PS F:\> Get-VMUsefulStats -VM FS01

.EXAMPLE
PS F:\> Get-VM | Get-VMUsefulStats

.EXAMPLE
PS F:\> Get-VM | Get-VMUsefulStats | Where {$_.Name -match "FS"}

.LINK
http://www.shogan.co.uk

.NOTES
Created by: Sean Duffy
Date: 18/01/2012
#>

    [CmdletBinding()]
    param(
        [Parameter(Position=0,Mandatory=$true,HelpMessage="Name of the VM to fetch stats about",
            ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
        # Aliases let you use -VM, and let objects from Get-VM bind by their Name property
        [Alias("VM","Name")]
        [System.String]
        $VMName
    )

    process {
        # Resolve the VM once so we can read both standard and ExtensionData properties
        $VM = Get-VM $VMName

        $VMHardwareVersion  = $VM.Version
        $VMGuestOS          = $VM.OSName
        $VMvCPUCount        = $VM.NumCpu
        # sched.mem.pshare.enable only appears in the advanced config if it has been explicitly set
        $VMMemShare         = ($VM.ExtensionData.Config.ExtraConfig | Where {$_.Key -eq "sched.mem.pshare.enable"}).Value
        $VMMemoryMB         = $VM.MemoryMB
        $VMMemReservation   = $VM.ExtensionData.ResourceConfig.MemoryAllocation.Reservation
        $VMUsedSpace        = [Math]::Round($VM.UsedSpaceGB,2)
        $VMProvisionedSpace = [Math]::Round($VM.ProvisionedSpaceGB,2)
        $VMPowerState       = $VM.PowerState

        # Return one summary object per VM
        New-Object -TypeName PSObject -Property @{
            Name              = $VMName
            HW                = $VMHardwareVersion
            VMGuestOS         = $VMGuestOS
            vCPUCount         = $VMvCPUCount
            MemoryMB          = $VMMemoryMB
            MemoryReservation = $VMMemReservation
            MemSharing        = $VMMemShare
            UsedSpaceGB       = $VMUsedSpace
            ProvisionedGB     = $VMProvisionedSpace
            PowerState        = $VMPowerState
        }
    }
}

 

You can use the function to very easily retrieve information about a single VM or a list of VMs. Examples:

 

Get-VMUsefulStats -VM NOOBS-VC01

Get-VM | Get-VMUsefulStats

 

To format the output in a neat table, pipe the above to Format-Table (ft) like so:

 

Get-VM | Get-VMUsefulStats | ft
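And if you do care about the column order (as mentioned earlier), just pipe through Select-Object first with the properties in the order you want – for example, using the property names defined in the function above:

Get-VM | Get-VMUsefulStats | Select-Object Name, VMGuestOS, vCPUCount, MemoryMB, UsedSpaceGB, PowerState | ft -AutoSize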

 

The "MemSharing" value is interesting – it is something I was specifically interested in, as some VMs I have worked with in the past have needed memory page sharing specifically disabled, and this is something that has to be changed with an advanced parameter on the VM. Most (default) VMs will not have this entry at all, so it will appear blank in the output (the parameter I am referring to, for those interested, is sched.mem.pshare.enable). Of course this most likely won't be of any use to you, so feel free to omit this bit from the function, or customise the function to return information useful to your own VMware deployment.
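If you ever do need to set that advanced parameter on a VM from PowerCLI, something along these lines should do it – a hedged example only, so test it on a non-production VM first (the VM name is a placeholder):

# Disable transparent page sharing for a specific VM via its advanced configuration
Get-VM "FS01" | New-AdvancedSetting -Name "sched.mem.pshare.enable" -Value "FALSE" -Confirm:$false

Here is an example of the output for one VM.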

 

 

Anyway, I hope someone finds this useful, and please do let me know if you think of any improvements or a better way of achieving a certain result.