
Three PowerCLI scripts for information gathering – VMs, Hosts, etc

February 11th, 2014

 

I was on a vSphere upgrade review engagement recently, and part of this involved checking that the existing hardware and vSphere virtual infrastructure (VI) were compatible with the targeted upgrade.

To help myself along, I created a few PowerCLI scripts that gather information about the VI parts to CSV – host versions, build numbers, VMware Tools and hardware versions, and so on. These were built as once-off scripts, run either by copy/pasting them into your PowerCLI console or by executing them from the PowerCLI console directly.

They can easily be adapted to collect other information relating to VMs or hosts. To run them, just launch PowerCLI, connect to the vCenter Server in question (using Connect-VIServer), and then copy/paste them into the console. The output will be saved to CSV in your current working directory. If you would rather execute the scripts directly from PowerCLI, make sure you unblock the zip file once downloaded; otherwise the copy/paste option mentioned above works fine too.
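For example, connecting looks like this (the server name below is just a placeholder for your own vCenter):

    # Connect to the vCenter server before running any of the scripts;
    # "vcenter01.lab.local" is a placeholder name
    Connect-VIServer -Server vcenter01.lab.local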

There are three scripts bundled in the zip file (a rough sketch of each follows the list):

  • Gather all hosts under the connected vCenter server and output Host name, Model and Bios version results to PowerCLI window and CSV
  • Gather all hosts under the connected vCenter server and output Host name, Version and Build version results to PowerCLI window and CSV
  • Gather all VMs under the specified datacenter and output VM name and hardware version results to PowerCLI window and CSV
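The bundled scripts have a bit more to them, but here is a rough sketch of the kind of one-liners each is built around. This assumes you are already connected with Connect-VIServer; the CSV file names and the "MyDC" datacenter name are illustrative only:

    # 1. Host name, model and BIOS version for all hosts under the connected vCenter
    $bios = Get-VMHost | Select-Object Name,
        @{N="Model";E={$_.ExtensionData.Hardware.SystemInfo.Model}},
        @{N="BiosVersion";E={$_.ExtensionData.Hardware.BiosInfo.BiosVersion}}
    $bios | Format-Table -AutoSize
    $bios | Export-Csv .\hosts-bios.csv -NoTypeInformation

    # 2. Host name, version and build number for all hosts
    $builds = Get-VMHost | Select-Object Name, Version, Build
    $builds | Format-Table -AutoSize
    $builds | Export-Csv .\hosts-builds.csv -NoTypeInformation

    # 3. VM name and hardware version for all VMs under a specified datacenter
    $vms = Get-Datacenter -Name "MyDC" | Get-VM |
        Select-Object Name, @{N="HardwareVersion";E={$_.ExtensionData.Config.Version}}
    $vms | Format-Table -AutoSize
    $vms | Export-Csv .\vm-hw-versions.csv -NoTypeInformation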

Short and simple scripts, but hopefully they will come in handy for some. As mentioned above, these can easily be extended to fetch other information about items in your environment – just take a look at the way the existing info is fetched and adapt from there. Also remember that piping objects to gm (Get-Member) is your friend in PowerShell – it lets you discover all the properties and methods on an object, which you can then use to enhance the reports/outputs in your scripts.
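For example, to see everything you can pull off a host object:

    # Discover the properties and methods available on a VMHost object
    Get-VMHost | Select-Object -First 1 | Get-Member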

 

My VMware vSphere Home lab configuration

September 5th, 2012

I have always enjoyed running my own home lab for testing and playing around with the latest software and operating systems / hypervisors. Up until recently, it was all hosted on VMware Workstation 8.0 on my home gaming PC, which has an AMD Phenom II X6 (six-core) CPU and 16GB of DDR3 RAM. This has been great, and I still use it, but there are some bits and pieces I still want to be able to play with that are traditionally difficult to do on a single physical machine, such as working with VLANs and taking advantage of hardware feature sets.

 

To that end, I have been slowly building up a physical home lab environment. Here is what I currently have:

Hosts

  • 2 x HP ProLiant N40L MicroServers (AMD Turion dual-core processors @ 1.5GHz)
  • 8GB DDR3 1333MHz RAM (2 x 4GB modules)
  • Onboard Gbit NIC
  • PCI-Express x4 HP NC360T Dual Port Gbit NIC as an add-on card (modified to fit a low-profile bracket)
  • 250GB local SATA HDD (just used to host the ESXi installation)

Networking

  • As mentioned above, I am using HP NC360T PCI-Express NICs to give me a total of 3 x vmnics per ESXi host.
  • Dell PowerConnect 5324 switch (24 port Gbit managed switch)
  • 1Gbit Powerline Ethernet home plugs to uplink the Dell PowerConnect switch to the home broadband connection. This allows me to keep the lab in a remote location in the house, which keeps the noise away from the living area.

Storage

  • This is a work in progress at the moment (I am finding that the low-end 2-bay home NAS devices are not sufficient for performance, and the more capable models are too expensive to justify).
  • Repurposed custom-built PC, housed in a Silverstone SG05 Micro-ATX chassis, running FreeNAS 8.2 (original build and pics of the chassis here)
  • Intel Core 2 Duo 2.4 GHz processor
  • 4GB DDR2-800 RAM
  • 1 Gbit NIC
  • 1 x 1TB 7200 RPM SATA II drive
  • 1 x 128GB OCZ Vertex 2E SSD (SATA II)
  • As this is temporary, each drive provides 1 x datastore to the ESXi hosts. I therefore have one large datastore for general VMs, and one fast SSD-based datastore for high priority VMs or VM disks. I am limited by the fact that the Micro-ATX board only has 2 x onboard SATA ports, so I may consider purchasing an add-on card to expand these.
  • Storage is presented to the hosts as NFS (see the sketch after this list). I am currently testing ZFS vs UFS, and the use of the SSD drive as a ZFS ZIL log and/or cache drive. To make this more reliable, I will need the above-mentioned add-on card to build redundancy into the system, as I would not like to lose a drive at this time!
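As an aside, once the NFS exports exist on the FreeNAS box, mounting them on the hosts is a one-liner in PowerCLI. A minimal sketch – the NAS IP, export path and datastore name below are placeholders for your own values:

    # Mount an NFS export from the FreeNAS box on every connected host;
    # the NFS host IP, export path and datastore name are placeholders
    Get-VMHost | New-Datastore -Nfs -NfsHost 192.168.0.50 -Path /mnt/tank/vmstore -Name NFS-General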

Platform / ghetto rack

  • IKEA Lack rack (black) – cheap and expandable : )

 

To do

Currently, one host only has 4GB RAM; I have an 8GB kit waiting to be added to bring both hosts up to 8GB. I also need to add the HP NC360T dual port NIC to this host, as it is a recent addition to the home lab.

On the storage side of things, I have just taken delivery of 2 x OCZ Vertex 2 128GB SSD drives, which I picked up at bargain prices the other day (£45 each). Once I have expanded the SATA connectivity in my Micro-ATX FreeNAS box, I will look into adding these drives for some super-fast SSD storage expansion.

 

The 2 x 120GB OCZ SSDs to be used for Shared Host Storage

HP NC360T PCI-Express NIC and 8GB RAM kit for the new Microserver

 

Lastly, the Dell PowerConnect 5324 switch I am using still has its original firmware loaded (from 2005). This needs to be updated to the latest version so that I can enable Link Layer Discovery Protocol (LLDP), which is newly supported on Distributed Virtual Switches with the VMware vSphere 5.0 release. LLDP can help with the configuration and management of network components in an infrastructure, and will mainly serve to let me play with this feature in my home lab. I seem to have lost my USB-to-Serial adapter though, so the firmware upgrade will need to wait until I can source a new one off eBay.
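For when the firmware is eventually sorted, enabling LLDP on the dvSwitch side can be scripted against the vSphere API from PowerCLI. A rough sketch – the dvSwitch name "dvSwitch01" is a placeholder:

    # Enable LLDP (listen and advertise) on a distributed switch via the vSphere API;
    # "dvSwitch01" is a placeholder name
    $dvsView = Get-View -ViewType VmwareDistributedVirtualSwitch -Filter @{"Name" = "dvSwitch01"}
    $spec = New-Object VMware.Vim.VMwareDVSConfigSpec
    $spec.ConfigVersion = $dvsView.Config.ConfigVersion
    $spec.LinkDiscoveryProtocolConfig = New-Object VMware.Vim.LinkDiscoveryProtocolConfig
    $spec.LinkDiscoveryProtocolConfig.Protocol = "lldp"
    $spec.LinkDiscoveryProtocolConfig.Operation = "both"
    $dvsView.ReconfigureDvs_Task($spec) | Out-Null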