Problems using VMotion to move VMs between ESX hosts with different CPUs

When VMs are moved to new ESX servers, the CPU identification mask may be left over from the ESX server they previously ran on. This can cause problems when VMotioning to new ESX servers that have slightly different CPUs. To get around this issue, we need to reset the CPU mask as part of the process when moving any VMs over to a different host or cluster.

The following explains how to reset the CPU Identification Mask to avoid this issue:

1) Shut down the problem VM.
2) From the VMware Infrastructure Client, go to “Edit Settings” (from the Summary tab) on the VM in question.
3) Select the “Options” tab.
4) Select “CPUID Mask” under the “Advanced” section.
5) Click the “Advanced” button.
6) Click the “Reset All to Default” button.
7) Click “OK” on all forms to apply the change.

You should now *hopefully* be able to VMotion this virtual machine across, seeing as the CPUID mask has been reset to its defaults – I would imagine this gets reconfigured depending on what CPU the VM finds when it starts up again.
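
For what it's worth, these overrides end up as cpuid.* entries in the VM's .vmx file, so you can also spot a leftover mask by eye. The lines below are purely a hypothetical example of the sort of thing you might see (the exact registers and flag characters will differ), and it is these entries that "Reset All to Default" clears out:

cpuid.1.ecx = "----:----:----:----:----:----:---H:----"
cpuid.80000001.edx = "----:0---:----:----:----:----:----:----"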

Drop any comments or additional information in the comments section below.

Allowing root SSH access to your ESX host

In order to SSH into your ESX host server via PuTTY, you need to enable root access over SSH. By default this is disabled – we will modify a configuration file and restart the SSH service to allow ourselves into the ESX host console remotely.

From the console (you will need to physically be at the machine, or at least connected via a DRAC or KVM over IP), press Alt-F1 to access the command line and log in as root.

Edit the following file:

/etc/ssh/sshd_config

You can do this by typing:

nano /etc/ssh/sshd_config

Go to the line that reads “PermitRootLogin no”

and change this to read “PermitRootLogin yes”

Press Ctrl-X to exit, press “Y” to save the changes, then hit Enter to confirm and save the file.

Now we need to restart the sshd service to enable the changes. Type:

service sshd restart

and press Enter.

You will now be able to SSH in as root.
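
As a side note, if you would rather not edit the file interactively, something along these lines should do the same job from the console (just a sketch, assuming the file still contains the default "PermitRootLogin no" line and that the service console's sed supports in-place editing):

# Flip PermitRootLogin from no to yes, then restart SSH to pick up the change
sed -i 's/^PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config
service sshd restart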

Please drop me a comment if this has helped in any way!

How to set up syslog to remotely monitor a VMware ESX host server

Here’s a quick how-to I did on setting up syslog to remotely monitor a VMware ESX host.

You’ll obviously require an operational syslog server – I use Kiwi Syslog, a freeware syslog daemon, for this purpose.

Set up syslog to monitor an ESX host remotely
The following should be configured on any new ESX hosts that are installed. It will allow syslog messages to be sent to a remote syslog server such as Kiwi Syslog.

Log in to the ESX host via PuTTY as root, or alternatively do this from the ESX server console. (Note: if you are logging in to a new ESX host, you will need to have allowed root access to the ESX server via SSH first – see the how-to above.)

nano /etc/syslog.conf

Go to the bottom line (blank) of the syslog.conf file and add this to point to your syslog server:

*.* @x.x.x.x

(Where x.x.x.x is the IP address or hostname of your syslog daemon server).
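
Note that the *.* selector simply forwards everything. If you only want a subset sent across, the usual syslog.conf facility.priority selectors apply; purely as an illustration, this would forward only local6 messages at notice level or higher (the same facility the test command further down uses):

local6.notice @x.x.x.x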

Press Ctrl-X to exit, and press “Y” to save changes, then Enter to commit your choice.

Restart syslog:

service syslog restart

If the host is a new installation, we will need to open the ESX firewall up to allow syslog out. Run the following command to open it:

esxcfg-firewall -o 514,udp,out,syslog

To reload the firewall configuration and apply changes:

esxcfg-firewall -l

Restart the syslog service once again with:

service syslog restart
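
If you want to double check the firewall before testing, you can query the current settings and look for the rule you just added (from memory the query switch on ESX 3.x is -q, so treat this as a rough pointer rather than gospel):

esxcfg-firewall -q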

If you want to send a test message to the syslog server to check that the ESX host is actually forwarding its logs, use the following command:

/usr/bin/logger -p local6.notice -t TEST "Testing SYSLOG"

Go and check the log file on your syslog server and you should see the test message that has come through.
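
If nothing shows up, it can be worth confirming that the packets are actually leaving the host. Assuming tcpdump is present in the service console (it was on the classic ESX builds I have used), something like this should show the outbound syslog traffic:

# Watch for syslog packets heading to the remote server (add -i vswif0 if it picks the wrong interface)
tcpdump -n udp port 514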

That is all there is to it! Please drop a comment or leave some feedback if this has helped you out in any way! 🙂

Installing VMware ESX using a Dell DRAC card

Here is a how-to on installing VMware ESX 3.5 using a DRAC (Dell Remote Access Controller) card to access the server. I was installing a new cluster in a Dell M1000e Blade Centre for work the other day and wrote up this process so that it is documented for anyone else doing it in the future.

Just for interest’s sake, the basic specs of the system are:

1 x Dell M1000e Blade Centre
3 x Redundant 2000W+ power supply units
16 x Dell M600 Blades (each one has 2 x quad-core Xeon CPUs and 32GB RAM).

1. Connect to the M1000e’s chassis DRAC card.
a. Browse to the M1000e chassis DRAC card (https://x.x.x.x) – use the IP for that particular blade centre’s DRAC card and log in.
b. Use the agreed DRAC user credentials, or if this is a new setup, the defaults are username: root, password: calvin.

[Screenshot: login_drac]

2. Select boot order for Blade and power it up
a. Choose the Blade server number that you will be working with from the Servers list on the left side.
b. Click on the Setup tab, choose Virtual CD/DVD as the first boot device, then click Apply.
c. Select the Power Management tab and choose Power on, then click Apply.

[Screenshot: configure_boot_order_for_blade]

3. Go to the iDRAC console of the blade server
a. Click on Launch iDRAC GUI to access the iDRAC for the blade you have just powered on.
b. You will need to log in again as this is another DRAC we are connecting to (this time the DRAC is for the actual blade server, not the chassis).

[Screenshot: launch_idrac_gui]

4. Configure Mouse
a. Click on the Console tab near the top of the screen and then click the Configuration button near the top.
b. In the mouse mode drop down, select Linux as the mouse type, then click Apply.

[Screenshot: configure_mouse]

5. Launch Console viewer
a. From the Console tab we can now select the Launch Viewer button.
b. An ActiveX popup might appear – allow it to run and the DRAC console should now appear with the server in its boot process.

6. Mount Virtual ISO media for installation disc (ESX 3.5)
a. Click on Media, and then select Virtual Media Wizard.
b. Select ISO image and then browse to the ISO for ESX 3.5 – this could be on your local drive or a network share.
c. Click the connect CD/DVD button to mount the ISO.
d. Your boot order should be configured correctly to boot off this ISO now. (*Optional* You could always press F11 whilst the server is booting to choose the boot device anyway).

[Screenshot: attach_virtual_media_iso]

7. Reboot the server if the boot from virtual CD/DVD has already passed
a. Go to Keyboard – Macros – Alt-Ctrl-Del to do this.

8. ESX install should now start.
a. Press Enter to go into graphical install mode.
b. Select Test to test the media (the ISO should generally be fine).
c. Select OK to start the install.
d. Choose the United Kingdom keyboard layout (or whatever keyboard layout you use).
e. Leave the mouse on generic 3 button USB.
f. Accept the license terms.

[Screenshot: esx_install_start]

[Screenshot: esx1]

9. Partitioning
a. For partition options, leave it on “Recommended”. It should now show the Dell virtual disk of 69GB (in this case), or whatever Dell RAID virtual disk / disk configuration you have.
b. Say “Yes” to removing all existing partitions on the disk (that is, if you don’t mind formatting and completely clearing out any existing data that may be on it).
c. Alter the partitions to match the best practice sizes (see http://vmetc.com/2008/02/12/best-practices-for-esx-host-partitions/).
d. Note: it doesn’t matter if some of the sizes end up 2-3MB out – the installer deviates from them slightly. The swap partition should be at least 1600MB though.
e. The next page is Advanced Options – leave it as is (boot from SCSI drive).

[Screenshot: esx_partitions_recommended]

10. Network Configuration
a. Set up the network configuration.
b. IP address (x.x.x.x) – whatever IP you are assigning to this particular ESX host.
c. Subnet mask: 255.255.255.0, for example.
d. Gateway: your gateway IP address (x.x.x.x).
e. Primary DNS: (x.x.x.x)
f. Secondary DNS: (x.x.x.x)
g. Hostname: set a proper hostname rather than the default localhost.localdomain, for example ESXhost01.shogan.
h. VLAN ID – leave this blank if you are not using VLANs. If you are, specify the VLAN here.
i. Create a default network for virtual machines – unless you have a specific network configuration in mind, leave this ticked.

11. Time zone
a. Set the location to your location.
b. Leave “System clock uses UTC” ticked.

12. Root password
a. Set the root password (this is your admin password!).

13. Finish installation
a. Next page is “About to Install”
b. Check the information is all correct and click Next if all looks fine.

14. Change boot order back and restart the blade server.
a. Via the iDRAC page, change the boot order back to Hard disk for the blade so that it will reboot using the server’s RAID hard disks instead of the ISO.
b. Reboot the host by pressing the Finish button back in the console.
c. Disconnect the Virtual CD from the Media option in the console menu.
d. Watch the console while the server reboots to ensure no errors are reported on startup.

If all went well, you should now have an ESX host booted to the console. Press Alt-F1 to access the command line (you will need to log in as root or any other user you set up).
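
A few quick commands from the service console will also confirm the basics are in place; a rough sketch, assuming the standard ESX 3.x esxcfg tools are available:

# Check the installed ESX version, the Service Console IP and the NICs the host has detected
vmware -v
esxcfg-vswif -l
esxcfg-nics -l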

You can now access your server via a web browser (https://x.x.x.x). From here you can download the VMware Infrastructure Client to manage the ESX host with.

This host could now be further configured and added to an ESX cluster, for example. SANs could be assigned and VMotion set up so that HA (High Availability) and DRS (Distributed Resource Scheduling) can be put to good use!