VMware T10 compliant VAAI integration and HP P2000 G3 FC storage

I’ve recently been updating firmware on some development and testing storage, and found that the HP P2000 storage array firmware TS251R004 and above enables T10 compliance on the HP P2000 G3 FC hardware.

To quote VMware’s documentation on their VAAI implementation specific to T10 compliance:

The second required component can be referred to as a VAAI plug-in specific to the VAAI filter. It implements vendor-specific VAAI functions such as ATS, XCOPY and WRITE_SAME. There were different implementations of the VAAI block primitives in vSphere 4.1, but all of the primitives in vSphere 5.0 have been ratified by T10, so any array that is T10 compliant should be able to use VAAI.

This means that you no longer need to run the HP P2000 VAAI plugin software directly on ESXi hosts. In fact, HP recommend you uninstall and remove the plugin software before you upgrade the firmware on these arrays; otherwise you could suffer performance degradation and possible loss of access to datastores.

My process was first of all to log in to all hosts and check for the presence of the VAAI plugin:

  • SSH into each host as root and run find / -name hp_vaaip_p2000 (a command sketch covering this check and the cleanup follows this list).
  • Ensure that nothing comes up with the find command. If something does (output like /usr/lib/vmware/vmkmod/hp_vaaip_p2000), use this HP document to ensure it is removed correctly: https://h20566.www2.hpe.com/hpsc/doc/public/display?sp4ts.oid=4118559&docId=mmr_kc-0123414&docLocale=en_US – this involves some setting changes and removing claim rules, as well as removal of the HP P2000 VAAI VIB itself.
  • After verifying nothing came up, check the other hosts; once happy that all hosts are clear of the plugin, upgrade the firmware on the P2000 system.
  • Ideally, reboot the ESXi hosts after the firmware update and ensure access to the datastores is still there. Check the hardware acceleration status of the datastores – they should show up as “Supported”.
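
Here is a minimal command-line sketch of the checks above, run from the ESXi shell. The claim rule and VIB commands are standard esxcli namespaces; the exact VIB name in the (commented-out) removal step is an assumption – take the real name from the vib list output and follow the HP document for the full procedure:

    # Check for the legacy HP P2000 VAAI plugin module
    find / -name hp_vaaip_p2000

    # See which VAAI claim rules and VIBs are present
    esxcli storage core claimrule list --claimrule-class=VAAI
    esxcli software vib list | grep -i vaai

    # Removal, per the HP document (VIB name is an assumption - confirm it first)
    # esxcli software vib remove -n hp-vaaip-p2000

    # After the firmware upgrade, confirm per-device hardware acceleration status
    esxcli storage core device vaai status get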

HP N54L Microserver now listed on HP website

I am a big fan of HP’s Microserver range. They make for excellent home lab hardware, and I currently have 2 x N40L models running a small vSphere 5.1 cluster for testing, blogging and study purposes.

It looks like HP have now officially listed their new Microserver range on their website – the N54L. The most notable change seems to be a much beefier CPU. The original N36Ls had a 1.3GHz AMD processor, with a slight improvement to 1.5GHz on the N40Ls. The CPU has always been the weak point for me, but it has been enough to get by on. The N54L models are now apparently packing 2.2GHz AMD Athlon NEO processors. This is a fairly big clock speed improvement over the N40L range and should make for some good gains for those using these machines as bare-metal hypervisor hosts.

The two models being listed at the moment are:

  • HP ProLiant G7 N54L 1P 2GB-U Non-hot Plug SATA 250GB 150W PS MicroServer
  • HP ProLiant G7 N54L 1P 4GB-U 150W PS MicroServer

My VMware vSphere home lab configuration

I have always enjoyed running my own home lab for testing and playing around with the latest software and operating systems / hypervisors. Up until recently, it was all hosted on VMware Workstation 8.0 on my home gaming PC, which has an AMD Phenom II x6 (hex core) CPU and 16GB of DDR3 RAM. This has been great, and I still use it, but there are some bits and pieces I still want to be able to play with that are traditionally difficult to do on a single physical machine, such as working with VLANs and taking advantage of hardware feature sets.

To that end, I have been slowly building up a physical home lab environment. Here is what I currently have:

Hosts

  • 2 x HP ProLiant N40L Microservers (AMD Turion dual-core processors @ 1.5GHz)
  • 8GB DDR3 1333MHz RAM (2 x 4GB modules)
  • Onboard Gbit NIC
  • PCI-Express x4 HP NC360T dual-port Gbit NIC as an add-on card (modified to a low-profile bracket)
  • 250GB local SATA HDD (used only to host the ESXi installation)

Networking

  • As mentioned above, I am using HP NC360T PCI-Express NICs to give me a total of 3 x vmnics per ESXi host (see the VLAN port group sketch after this list).
  • Dell PowerConnect 5324 switch (24-port Gbit managed switch)
  • 1Gbit Powerline Ethernet home plugs to uplink the Dell PowerConnect switch to the home broadband connection. This allows me to keep the lab in a remote location in the house, keeping the noise away from the living area.
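
With three vmnics per host and a managed switch, tagged VLAN port groups are straightforward to carve out on a standard vSwitch. A minimal sketch from the ESXi 5.x shell – the port group name and VLAN ID are example values, and the PowerConnect port facing the host needs to be a trunk carrying that VLAN:

    # Create a port group on the existing standard vSwitch and tag it with VLAN 20
    esxcli network vswitch standard portgroup add --portgroup-name=VLAN20-Lab --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name=VLAN20-Lab --vlan-id=20

    # Confirm the port group and its VLAN assignment
    esxcli network vswitch standard portgroup list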

Storage

  • This is a work in progress at the moment – I am finding that the low-end 2-bay home NAS devices do not offer sufficient performance, and the more capable models are too expensive to justify.
  • Repurposed custom-built Micro-ATX PC, housed in a Silverstone SG05 Micro-ATX chassis, running FreeNAS 8.2 (original build and pics of the chassis here)
  • Intel Core 2 Duo 2.4 GHz processor
  • 4GB DDR2-800 RAM
  • 1 Gbit NIC
  • 1 x 1TB 7200 RPM SATA II drive
  • 1 x 128GB OCZ Vertex 2E SSD (SATA II)
  • As this is temporary, each drive provides one datastore to the ESXi hosts. I therefore have one large datastore for general VMs, and one fast SSD-based datastore for high-priority VMs or VM disks. I am limited by the fact that the Micro-ATX board only has two onboard SATA ports, so I may consider purchasing an add-on card to expand these.
  • Storage is presented as NFS. I am currently testing ZFS vs UFS, and using the SSD as a dedicated ZFS ZIL (log) and/or L2ARC cache device (see the sketch after this list). To make this more reliable, I will need the above-mentioned add-on card to build redundancy into the system, as I would not like to lose a drive at this time!
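
For reference, below is a minimal sketch of what the ZIL/L2ARC experiment and the NFS presentation look like from the command line. The pool name (tank), SSD partition paths, share path and IP address are all assumptions for illustration – substitute your own:

    # On the FreeNAS box: attach SSD partitions to an existing pool "tank"
    zpool add tank log /dev/ada2p1     # dedicated ZIL / SLOG device
    zpool add tank cache /dev/ada2p2   # L2ARC read cache
    zpool status tank                  # verify the new vdev layout

    # On each ESXi host: mount the NFS export as a datastore
    esxcli storage nfs add --host=192.168.1.50 --share=/mnt/tank/vmstore --volume-name=nfs-general
    esxcli storage nfs list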

Platform / ghetto rack

  • IKEA Lack rack (black) – cheap and expandable : )

To do

Currently, one host only has 4GB of RAM; I have an 8GB kit waiting to be added to bring both hosts up to 8GB. I also need to add the HP NC360T dual-port NIC to this host, as it is a recent addition to the home lab.

On the storage side of things, I have just taken delivery of 2 x OCZ Vertex 2 128GB SSD drives, which I got at bargain prices the other day (£45 each). Once I have expanded the SATA connectivity in my Micro-ATX FreeNAS box, I will look into adding these drives for some super-fast SSD storage expansion.

The 2 x 120GB OCZ SSDs to be used for Shared Host Storage
HP NC360T PCI-Express NIC and 8GB RAM kit for the new Microserver

Lastly, the Dell PowerConnect 5324 switch I am using still has its original firmware loaded (from 2005). This needs to be updated to the latest version so that I can enable Link Layer Discovery Protocol (LLDP), which is newly supported on Distributed Virtual Switches in the VMware vSphere 5.0 release. LLDP can help with the configuration and management of network components in an infrastructure, and will mainly serve to let me play with this feature in my home lab. I seem to have lost my USB-to-serial adapter though, so this firmware upgrade will have to wait until I can source a new one off eBay.

Corsair XMS3 RAM compatible with HP Microserver N40L

Just a quick post today on RAM compatibility with the good old trusty home lab server, the HP ProLiant N40L Microserver. I am currently using Microservers for my home vSphere 5 lab, running ESXi 5.0 Update 1.

I had 8GB of Corsair XMS3 PC3-12800 C9 (1600MHz) RAM lying around at home and wanted to put it back to good use. It does not have ECC, but I tried it out in my Microserver and it works! Despite being a higher-voltage-rated RAM kit (around 1.65V), it works just fine in the Microserver’s 1.5V-rated DIMM slots. No need to buy an extra 8GB RAM kit with my second Microserver now.
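
If you want to double-check that the host actually sees the full capacity after a swap like this, a quick look from the ESXi shell does it (this is a standard esxcli namespace, nothing Microserver-specific):

    # Report the physical memory the ESXi host has detected
    esxcli hardware memory get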

Home labs – adding and modding a dual port Gigabit NIC to the HP Microserver N40L

I wanted to add more physical NICs to my HP Microserver N40L machine to use with vSphere. The box comes with an onboard 1Gbit NIC, but I wanted to play around with VLANs and multiple uplinks in my home lab. The problem is finding an affordable solution – most dual-port NICs that are any use with ESXi (Intel chipset based) cost almost as much as the Microserver itself, which is quite off-putting!

After trawling eBay and the VMware HCL, I found that the HP NC360T PCI Express Dual Port Gigabit network card would work well with ESXi 5.0, and that I could get these NICs (used) fairly cheaply. I picked up a used card off eBay for only £30 ($46.97 US), which was within my budget. The problem was that I could not find a card with a low-profile bracket, so I thought I would make do with the normal bracket and either remove it or modify it to fit.

The NIC itself has two 1Gbit ports, is based on the Intel 82571EB chipset, and uses a four-lane (x4) PCI Express interface. This means I could use it in the HP Microserver’s x16 PCI Express slot (which is downward compatible, of course). Apparently there are also mods out there to get this card working in the x1 slot if you don’t have the x16 slot free – but I haven’t attempted this yet.

I first tried it out without the bracket (by just removing two screws that hold the bracket to the PCB). This worked fine, but was really not a permanent solution, especially for plugging/unplugging cables whilst the machine was powered up.

not a good solution - the NIC without bracket - way too flimsy

So out came the tools as I decided to modify the existing bracket to fit the Microserver’s low-profile chassis.

The Intel card's bracket next to the Microserver's blanking plate.

HP NC360T PCI Express card with bracket removed

Step 1 – I drew a line at the point where the 90-degree bend in the bracket should be for a low-profile card. I then took a junior hacksaw and “scored” along this line (sawing a little way in to weaken the metal).

Step 2 – Now with the score mark in place, I simply used two pairs of pliers to bend the bracket along the score mark, which was now easy and accurate.

bending the bracket on the score mark

Step 3 – I marked off where the 90-degree protruding point of the bracket would end, and used the hacksaw to remove the excess. I then cut a small notch in this top piece for the screw that normally holds the PCI card in place. I attached the bracket back to the NIC and installed it in the Microserver.

The HP NIC fitted with modified bracket

And here is the final result after ESXi 5.0 recognises the new hardware –

The NIC recognised under Network adapters view - ESXi 5.0
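
You can confirm the same from the ESXi shell – the NC360T should appear as two extra vmnics. I would expect an 82571EB-based card to claim the e1000e driver, though do verify against your own output:

    # List the physical NICs ESXi has detected, with driver, link state and speed
    esxcli network nic list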

If you have managed to find any good PCI Express NICs for the HP N40L Microserver, or have any advice or experiences with mods for hardware and the Microserver, please post in the comments section!