VMware Labs – iSCSI Shared Storage how-to using the HP P4000 LeftHand VSA [Part 2/2]

 

In the last post [Part 1/2], we prepared our VSA, created a management group and cluster in the CMC, and then initialized our disks. Next up, we’ll be creating a Volume which will be presented to our ESXi hosts as an iSCSI LUN. Before we do this though, we need to make sure our hosts can see this LUN. Therefore we’ll be making entries for each of our ESXi hosts using their iSCSI initiator names (IQNs).

 

Preparing your iSCSI Adapter

 

If your ESXi hosts don’t already have a dedicated hardware iSCSI adapter, you’ll need to use the VMware Software iSCSI adapter. By default this is not enabled in ESXi 5.0, but it is simple to add. Select your first ESXi host in the Hosts & Clusters view of the vSphere Client, then go to Configuration -> Storage Adapters -> Add -> select “Add Software iSCSI Adapter”. Click OK to confirm.
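
Quick tip: if you prefer the command line, the Software iSCSI adapter can also be enabled with esxcli – a rough equivalent of the GUI step above, run from the ESXi Shell, SSH, or the vMA:

esxcli iscsi software set --enabled=true

esxcli iscsi software get

The second command simply confirms that the software iSCSI initiator is now enabled on the host.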

 

 

Now we need to find the IQN of the iSCSI adapter. In the vSphere Client, select the iSCSI adapter you are using under Storage Adapters and click Properties. This will bring up the iSCSI Initiator Properties. Click the Configure button and copy the iSCSI Name (IQN) to your clipboard.

 

Getting the iSCSI adapter IQN using the vSphere Client

 

Quick tip: you can also fetch your iSCSI adapter information (including the IQN) using esxcli. Log in to your host (via the vMA appliance, the DCUI, or SSH, for example) and issue the following command, where “vmhba33” is the name of the adapter you want information on:

 

esxcli iscsi adapter get -A vmhba33

 

Getting the iSCSI adapter IQN using ESXCLI
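
If you aren’t sure which vmhbaXX name your iSCSI adapter has been given (vmhba33 just happens to be the name on my host), you can list all iSCSI adapters on the host first and read the name from the output:

esxcli iscsi adapter list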

 

Configuring a Server Cluster and the Server (Host) entries in the CMC

 

We’ll now create “Servers” in the HP CMC, which we’ll later assign to our volume (the “SAN LUN”) to give our ESXi hosts access. In the CMC, go to Servers and then click Tasks -> New Server Cluster. Give the cluster a name and an optional description, then click New Server and enter the details of each ESXi host (click New Server once for each host in your cluster). For each ESXi host, the Initiator Node Name is the iSCSI Name (IQN) we collected from that host in the step above. The Controlling Server IP Address in each case should be the IP address of your vCenter Server. For this example we won’t be using CHAP authentication, so leave that at “CHAP not required”. Once all your ESXi hosts are added to the new cluster, click OK to finish.

 

Creating a new Server Cluster and adding each ESXi host and its corresponding iSCSI Name/IQN

 

Creating a new Volume and assigning access to our Hosts

 

Back in the CMC, now that our disks are marked as Active, we can create a shiny new Volume, which is what we will present to our ESXi hosts as an iSCSI LUN. Right-click “Volumes and Snapshots” and then select “Create New Volume”.

 

 

Enter a Volume Name and Reported Size. You can also use the Advanced tab to choose Full or Thin provisioning, as well as the Data Protection level (applicable if you have more than one VSA running, I believe).

 

 

Now we’ll need to assign servers to this Volume (we’ll assign the whole “Server Cluster” we created earlier, to ensure all our ESXi hosts get access to the volume). Click Assign and Unassign Servers, tick the box for the Server Cluster you created, and ensure the Read/Write permission is selected. Then click OK.

 

Assign the Server Cluster to the Volume for Read/Write access

 

Final setup and creating our Datastore with the vSphere Client

 

Go back to the vSphere Client, select one of your ESXi hosts, and bring up the Properties for your iSCSI adapter once again. We’ll now use “Add Send Target Server” under the Dynamic Discovery tab to add the IP address of the P4000 VSA. Click OK, then Close once complete.
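
If you’d rather script this step, the same Dynamic Discovery (Send Target) entry can be added with esxcli – a rough sketch, where 10.0.0.10 is just a placeholder for your VSA’s (or cluster VIP’s) IP address and vmhba33 is my Software iSCSI adapter name:

esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.10

esxcli iscsi adapter discovery sendtarget list

The second command lists the configured Send Target addresses so you can confirm the entry was added.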

 

 

You should be prompted to Rescan the Host Bus Adapter at this stage. Click Yes and the rescan will begin. After the scan is complete, you should see your new LUN being presented as a new device listed under your iSCSI adapter (vmhba33 in my case for the Software iSCSI adapter).

 

New device found on the iSCSI Software adapter
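
The rescan can also be kicked off from the command line if you prefer – again assuming vmhba33 is the Software iSCSI adapter on your host:

esxcli storage core adapter rescan --adapter=vmhba33

esxcli storage core device list

The device list output should now include the new volume being presented by the VSA.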

 

Now that everything is prepared, it is a simple case of creating our VMFS datastore from this LUN for our ESXi hosts to use for VM storage. Under Hosts & Clusters in the vSphere Client, go to Configuration, then Storage. Click Add Storage near the top right and follow the wizard through. You should see the new LUN being presented from our VSA, so select it and enter the details of your new datastore – capacity, VMFS file system version and datastore name. Complete the wizard and you are done. The new datastore is created, partitioned and ready to be accessed!

 

Add Storage Wizard

 

Complete - the new datastore is added
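
As a final check, you can confirm from the command line that the new VMFS datastore is mounted on the host (purely optional – the vSphere Client view above already shows it):

esxcli storage filesystem list

The new datastore should appear in the output along with its VMFS version and capacity.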

 

Well, that is all there is to it. To summarise, we have now achieved the following over the course of these two blog posts:

 

  • Installing and configuring the P4000 LeftHand VSA
  • Setting up the CMC
  • Creating a VSA Management Group
  • Creating a VSA Standard Cluster
  • Creating Server entries in a new Server Cluster for each of our ESXi hosts, so they can be presented with the storage
  • Creating a LUN / Storage Volume
  • Configuring the ESXi hosts to find our VSA in the vSphere Client
  • Adding the Storage to our ESXi hosts and Creating our VMFS Datastore using the vSphere Client

 

To conclude, I hope this series has been helpful and that you are well on your way to setting up iSCSI shared storage for your VMware Cluster! As always, if you spot anything that needs adjusting, or have any comments, please feel free to add feedback in the comments section.

 

9 thoughts on “VMware Labs – iSCSI Shared Storage how-to using the HP P4000 LeftHand VSA [Part 2/2]”

  1. @cosy

    Hi cosy,

    Hmmm, that is a tricky question, and it all depends on how your networking is setup between sites. Would be best asking that on a network orientated forum or messageboard to be honest! You’ll need to allow comms through the VLANs of course and need some routing in there. Regarding the snapshots and replication – I haven’t actually tested these features on the P4xxx systems myself – sorry I can’t be of much more help than that.

    I would suggest looking for an HP/Lefthand forum / community and asking the question there. You may need to provide more detail on your networking setup to them too. Also check out @HPStorageGuy on twitter – he works for HP and might have some good resources for you to check out 🙂

    Cheers,
    Sean

  2. Hi Sean,

    I have 2 x HP4500 connecting to 2 x ESX servers. We also thought of setting up DR,
    so we got 1 VSA and set it up in 1 x ESX box with 14TB of space in Site B.

    So Site A: Storage VLAN 192.168.5.x, ESX 192.168.3.x, users 192.168.2.x

    Site B: Storage VLAN 10.1.4.x, ESX VLAN 10.1.3.x, users 10.1.2.x

    Site A- SiteB

    What do I need to manage it all from one location?
    Do I need snapshot LUNs?
    How do I set up replication?

    AS

  3. Thank you for very useful info. I have installed it according to your procedure very fast. It works 🙂

  4. Hi Bob,

    Thanks for leaving that info here! Great to see you are on your way to finishing up the configuration and change. Hope all has gone well. I had a feeling the two VIPs (one for each cluster) could work – excellent stuff!

    Sean

  5. Hi Sean,

    Apologies for my late reply also, Sean. I have managed to work out the best way forward without having to install the SANiQ 8.3 / 9.5 coexistence patch. I configured the two new P4500 G2s in a separate array [cluster] managed by a separate 9.5 CMC.

    So to summarise, I have two autonomous systems:

    CMC SANiQ 8.3 with 2 x SANiQ 8.3 P4500 G1 nodes in cluster 1

    CMC SANiQ 9.5 with 2 x SANiQ 9.5 P4500 G2 nodes in cluster 2

    all on the same subnet with 2 different VIPs. I have now configured the ESXi hosts to connect via iSCSI to both sets of LUN targets on both clusters through VIP dynamic discovery.

    This has now enabled me to svMotion all VMs residing on the 8.3-based cluster to the 9.5-based cluster. I will now upgrade the empty 8.3-based cluster nodes, kill the 8.3 CMC, and add the upgraded nodes, now at 9.5, to the new 9.5-based cluster. A single cluster with 4 nodes and 1 CMC, all at SANiQ 9.5.

    During the svMotion stage and coexistence through ESXi connectivity, both CMCs will see both arrays [clusters] as the arrays are broadcast across the same subnet, so ensure the management is orchestrated through the 9.5 CMC, as the 8.3 CMC will not manage the 9.5 nodes.

    Thanks for your reply Sean, as your article inspired me to find a solution that guaranteed a smooth merging of the P4500 nodes.

    Regards

    Bob

  6. Hi Bob,

    Apologies for the late reply! At first thought I believe this should be possible. I’ll see if I can try it out in my lab. What I have tried and confirmed to work is two different VSA appliances in two different Storage Clusters (each in their own) with two VIPs, one for each, connecting to the same ESXi hosts. However this was under the same CMC. Is that what you are looking at doing?

    Sean

  7. Hi SHogan,

    Can you configure an ESXi 4.1 host through iSCSI to connect to two separate/isolated LeftHand systems?

    I have one system running SAN/iQ 8.3 and another system running SAN/iQ 9.5. I do not want to join them or cluster them together, but I want to Storage vMotion all VMs, vmdks etc. from the 8.3 system to the new 9.5 system for contingency, so I can totally rebuild the 8.3 system and then, when they are at the same SAN/iQ version (i.e. 9.5), bring all four nodes together under one system managed by a single CMC.

    I was not sure whether ESXi 4.1 would let its single initiator [1 vSwitch and two NICs] point to two separate LeftHand systems and hence two VIPs?

    Best Regards

    Bob
