Fix for VM console error – “Unable to connect to the MKS: The operation is not allowed in the current state”

Bit of a strange one this – I have not dug deeper to find the root cause, but here is a quick fix for anyone with the issue.

[Image: The MKS console error shown when opening a VM console]

I found that we could not open VM console sessions in a vCenter 5.5 environment today. When you see the classic MKS console error in a VM, the first thought is usually that it is a DNS or port issue, but in this case I knew that DNS and ports were not the problem: RDPing directly to the vCenter Server itself, logging in with the C# client and opening VM consoles from there gave the exact same message. This was the case for the web client as well as the C# client.

The issue was either with the host that VMs were running on, or with the VMs themselves – the simple fix:

vMotion the VM to another host. As soon as this was done, I could open the console session. The underlying issue is still out there, but I have not yet had the time to dig any deeper to find the root cause. More discussion and info is available in this VMware Communities thread: //communities.vmware.com/thread/450294
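If you have more than one affected VM, a quick PowerCLI sketch like the one below could move them all off the suspect host in one go. Treat this as illustrative only – the vCenter and host names are hypothetical placeholders, and it assumes PowerCLI is installed and vMotion is configured between the hosts:

# Connect to vCenter (hypothetical server name - substitute your own)
Connect-VIServer -Server vcenter01.lab.local

# Move every VM off the suspect host to a known-good host
Get-VMHost "esx01.lab.local" | Get-VM | Move-VM -Destination (Get-VMHost "esx02.lab.local")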

Force ESXi trial license to expire ahead of time

I recently caught a question on Twitter from Steve Jin, asking if anyone knew how to force an ESXi host to expire its trial license for testing purposes.

This got me thinking a bit, and I initially thought the obvious solution would be to set the host’s system clock forward by 60 days, for example. I quickly remembered, though, that ESXi hosts seem to count trial license time based on the number of hours they are powered on. If you power your host down for a month and power it back up again, you’ll still have the same amount of trial license time left over.

The next thing I thought was that if I were building a product and protecting it with licensing, surely I would try to prevent people from tampering with the license files. If someone were to tamper with a license, I could immediately deactivate or expire it. This is where I got the idea that worked for Steve’s use case – finding the license.cfg file and entering some invalid data.

The exact solution, as Steve found, was to locate the /etc/vmware/license.cfg file on your ESXi host and tamper with the <epoc> entry, causing the license to become invalid. At this point, any remaining trial license time is invalidated and your license enters an expired state.


[Image: The license.cfg file with the <epoc> entry highlighted]

Change the string highlighted above to some random entry, save the file, then reboot your host. Once rebooted, your trial period will have expired!
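From memory, the relevant part of /etc/vmware/license.cfg looks something like the sketch below – treat this as illustrative only, as the exact structure may differ between ESXi builds. The <epoc> value is the one to corrupt:

<ConfigRoot>
  <epoc>...long encoded string – change this to any random value...</epoc>
  <mode>eval</mode>
</ConfigRoot>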

This could be really useful in some circumstances. Perhaps there is no clear documentation on how a host running VMs in your environment would react if a trial license expired, or you want to know how your third-party backup software would react to unlicensed hosts. Being able to easily test an expired license scenario can be really handy!

Solving VMware Fusion 6 and Windows 7 VM performance issues

I have been struggling with various VM performance issues over the last couple of months using VMware Fusion 5.x, as well as the latest 6.0.3. Until recently, I just didn’t have the time to dedicate to finding a fix for the performance degradation I was seeing.

I have the following specifications on my Macbook Pro Retina which I use for development purposes:

[Image: MacBook Pro Retina specifications]

I have a Windows 7 Professional VM running in VMware Fusion, and I had tried all kinds of different configurations on it – though mainly 2 vCPUs and 4GB RAM. This VM is running on the built-in 256GB SSD.

Nothing seemed to fix the performance issue I was seeing: by at least halfway through a typical work day of using Visual Studio and a few tabs of Chrome/IE/Firefox, the VM would slow down to an absolute crawl. I knew it was the VM though, as everything in OS X Mavericks (the host OS) was perfectly normal. Most of the time just restarting the Windows VM itself would not help either – I would have to reboot the whole MacBook.

The other week I decided enough was enough, and spent a bit of time googling and looking around the VMware Communities forums for a fix. Here is the combination of settings that seems to have resolved my issues now.

  • Settled on a VM spec of 3 x vCPUs (helpful for Visual Studio), and 4GB RAM.
  • Disabled App Nap for VMware Fusion (Applications -> right-click, Get Info on VMware Fusion, and tick the box that says “Prevent App Nap”).
  • Added 3 x new entries into my VM’s configuration file (.vmx file). To edit the .vmx file you’ll need to right-click your VM and select “Show Content”. This will allow you to browse the file content of the VM, and you’ll need to locate your VM’s .vmx file. Right-click this file and open it in your text editor of choice. I added the following lines to the bottom of the file:
MemTrimRate = "0"
sched.mem.pshare.enable = "FALSE"
prefvmx.useRecommendedLockedMemSize = "TRUE"

For reference, these three settings stop the guest’s memory from being trimmed (MemTrimRate), disable memory page sharing (sched.mem.pshare.enable), and encourage Fusion to keep the VM’s memory locked in host RAM (prefvmx.useRecommendedLockedMemSize). And don’t forget to disable App Nap for Fusion.

[Image: Ticking “Prevent App Nap” in the Get Info window – disable App Nap for Fusion]

Interacting with VMware vCO and the REST API using PowerShell – getting a list of workflows

Recently I needed to get a list of VMware vCO workflows from a remote server using PowerShell. A colleague of mine pointed me in the right direction by providing me with a URL to access the vCO REST API on, as well as letting me know what I needed to send in order to authenticate.

To connect and retrieve content back in the PowerShell example below, we’ll need to:

  • Access the REST API URL for Orchestrator using a web client object
  • Send basic authentication in the header of our request

Notes:

One thing I did notice is that when you use your web browser to test the URL, the result is returned to you as XML; however, when I used a web client object in PowerShell, the result came back as JSON. The PowerShell script below is therefore tailored to interpret the result as JSON. This being so, you’ll need to make sure you are using PowerShell 3.0 or above, as the ConvertFrom-Json cmdlet is only available in PowerShell 3.0 and above.

When sending your authentication details with the web client object request, make sure your username/password combo are used in this format:

Authorization: Basic username:password

This means that the header you add to your web client object should use the string above, but with the username:password part encoded using Base64. The script below takes this all into account – all you need to do is provide your username and password for vCenter Orchestrator to the PowerShell function, and it will handle the Base64 encoding and pass the values to the web client itself.
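As a quick illustration of what the function does under the hood with those values (the credentials here are hypothetical):

# Hypothetical credentials, for illustration only
$Username = "Sean"
$Password = "mypassword"
$Bytes = [System.Text.Encoding]::UTF8.GetBytes("$Username`:$Password")
$EncodedUsernamePassword = [System.Convert]::ToBase64String($Bytes)
"Authorization: Basic $EncodedUsernamePassword"
# Output: Authorization: Basic U2VhbjpteXBhc3N3b3Jk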

Anyway, enough of that, let us get onto the actual script itself. This is presented as a PowerShell function. Load it into your session (copy-paste) or add it to your PS profile for future use. Apologies for the formatting – Syntax Highlighter really messes with the formatting and nice clean indentation I normally have in my scripts!

Here is a direct download of the PowerShell script if the script paste below doesn’t work for you:


Function Get-VcoWorkflow
{

<#
.SYNOPSIS
Fetches vCO Workflow information and details from a vCenter Orchestrator server

.DESCRIPTION
Fetches vCO Workflow information and details from a vCenter Orchestrator server

.PARAMETER Username
Username for the vCO server

.PARAMETER Password
Password for the vCO server

.PARAMETER Server
The vCO server hostname or IP address

.PARAMETER PortNumber
The port to connect on

.EXAMPLE
PS F:\> Get-VcoWorkflow -Username Sean -Password mypassword -Server 192.168.60.172 -PortNumber 8281

.LINK

//www.shogan.co.uk

.NOTES
Created by: Sean Duffy
Date: 30/03/2014
#>

[CmdletBinding()]
param(
    [Parameter(Position=0,Mandatory=$true,HelpMessage="Specify your vCO username.",
    ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
    [String]
    $Username,

    [Parameter(Position=1,Mandatory=$true,HelpMessage="Specify your vCO password.",
    ValueFromPipeline=$false,ValueFromPipelineByPropertyName=$true)]
    [String]
    $Password,

    [Parameter(Position=2,Mandatory=$true,HelpMessage="Specify your vCO Server or hostname.",
    ValueFromPipeline=$false,ValueFromPipelineByPropertyName=$true)]
    [String]
    $Server,

    [Parameter(Position=3,Mandatory=$true,HelpMessage="Specify your vCO Port.",
    ValueFromPipeline=$false,ValueFromPipelineByPropertyName=$true)]
    [ValidateRange(0,65535)]
    [Int]
    $PortNumber
)

process
{
    # Craft our URL and encoded authentication details. Note we escape the colons with a backtick.
    $vCoURL = "https`://$Server`:$PortNumber/vco/api/workflows"
    $UserPassCombined = "$Username`:$Password"
    $EncodedUsernamePassword = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($UserPassCombined))
    $Header = "Authorization: Basic $EncodedUsernamePassword"

    # Ignore SSL certificate warnings (vCO appliances commonly use self-signed certificates)
    [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}

    # Create our web client object, and add the header for basic authentication with our encoded details
    $wc = New-Object System.Net.WebClient
    $wc.Headers.Add($Header)

    # Download the JSON response from the REST API, and convert it from JSON into an object
    $jsonResult = $wc.DownloadString($vCoURL)
    $jsonObject = $jsonResult | ConvertFrom-Json

    # Create a blank Report object array
    $Report = @()

    # Iterate over the results and transform the key/value pair style formatting into a proper table.
    # Note I use a hashtable to populate a new PSObject with properties, using values found in the original JSON result
    foreach ($link in $jsonObject.link)
    {
        $kvps = $link.attributes
        $HashTable = @{}
        $kvps | foreach { $HashTable[$_.name] = $_.value } # For each attribute in the link object, populate the hashtable
        $NewPSObject = New-Object PSObject -Property $HashTable # Create a new PSObject and populate its properties/values using our hashtable

        # Add our populated object to our Report array
        $Report += $NewPSObject
    }

    return $Report
}
}

I hope that this script comes in handy, and gives you an idea as to how you can retrieve and convert data from vCO and its RESTful API for use in PowerShell 🙂
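Since the function returns standard PSObjects, you can also pipe the output through the usual cmdlets to filter and format it. For example (this assumes the attributes returned by your vCO server include name and version entries – adjust the property names to match your own results):

# Hypothetical filter: list only workflows whose name mentions "clone"
Get-VcoWorkflow -Username Sean -Password mypassword -Server 192.168.60.172 -PortNumber 8281 |
    Where-Object { $_.name -like "*clone*" } |
    Format-Table name, version -AutoSize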

Results example after running the Function:
[Image: Example results after running the Get-VcoWorkflow function]

Simple Content Delivery Network (CDN) using Amazon AWS (S3 + CloudFront)


Content Delivery Networks

Having a content delivery network has many benefits for your users or clients. One of the most obvious benefits of a CDN is the ability to serve content to your users from multiple (often the most optimal) locations. Users access files that originate from one source location, but the content is delivered by the closest location(s), often with the lowest latency and highest possible speed.

Using Amazon CloudFront, you can share dynamic, static, or even streamed content to users (including full websites), using Amazon’s global network of edge locations. This means that content can be served to users at the highest possible speeds, with the lowest possible latencies. In this blog post, I will cover the steps you need to take to deploy a basic CDN using Amazon AWS. For this purpose, we will leverage a combination of Amazon S3 + CloudFront.


Setting up Amazon S3

Amazon S3 (Amazon Simple Storage Service) is essentially Amazon’s “storage for the Internet”, and as explained above, CloudFront is a content delivery network service. As such, both products sit in Amazon’s “Storage & Content Delivery” stack.


  • To get started you will of course need an Amazon AWS account. Go to //aws.amazon.com/ and register. You will need to provide credit card details, but most products have some sort of free tier that you can utilise for initial testing (usually free for up to 1 year, based on certain utilisation thresholds).
  • Once you are all signed up, you’ll need to navigate to the AWS Web Console. This is the central location you can use to manage all AWS services (among other options such as the AWS SDK and Command Line).
[Image: The central AWS Web Management Console]
  • To start, we’ll need to define an origin location for our content. This is the location our original files are kept. For this purpose, we will use Amazon S3. It allows us easy access to files that we place in something Amazon call a “bucket”. I like to think of it as a folder, or container. You can have as many buckets as you wish, however each one’s name needs to be completely unique across Amazon S3. Click on “S3” under the “Storage & Content Delivery” heading of your AWS Console to get started.
  • From here, you will be greeted with a welcome page and some explanation of what S3 is. Simply click “Create Bucket” to get going.

[Image: The “Create Bucket” button in the S3 console]

  • Provide a unique bucket name, and specify a region to use. Regions have the benefit of allowing organisations to comply with data storage regulations – for example, if you were storing client data that you were legally bound to keep within the EU, you would specify the Ireland region.

[Image: Naming the new bucket and selecting a region]

  • Your new bucket will appear in the S3 Management Console after being created. Simply click the name of the bucket to open it. For our simple CDN, we’ll just be serving up one single file – pretend this was a really large file that needed efficient distribution to many people – for example a large media file. At the top left, you’ll see an “Upload” button. Click this, and choose a file to upload as your test file. I will be using a simple image file. (By the way, Amazon have a service called “Amazon Import/Export”, which allows you to send really large amounts of data via post on portable media to Amazon for them to upload directly to your Amazon S3 or Glacier services).
  • Click “Start Upload” once you have chosen a file to test with.
  • After the file is finished uploading, it will appear in the console under your bucket name. (I called mine “image-for-distribution.png”).

[Image: The example file shown in the bucket]

  • Right-click the file, and choose the option “Make Public” for this test. This choice would be affected by the nature of the files you want to deliver to users in your own configuration, but for this simple example, this is what I am choosing.
  • Right-click the file again, and choose “Properties”. Here you can get the direct, public link to your file and test access to it in your web browser. This is simple, direct access, and is not the access we are aiming for, as we will use our CDN with CloudFront to serve the file in the final configuration. This is just to test that the direct link is working. (If you would rather script these S3 steps, see the PowerShell sketch at the end of this section.)

[Image: The file’s Properties pane with its public link]
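As an aside, if you prefer scripting to clicking through the console, the bucket and upload steps above could also be done with the AWS Tools for PowerShell module. The sketch below is illustrative only – the bucket name and file path are hypothetical, and cmdlet names/parameters may vary between module versions:

# Illustrative sketch using the AWS Tools for PowerShell module
Import-Module AWSPowerShell
Set-AWSCredentials -AccessKey "YOUR_ACCESS_KEY" -SecretKey "YOUR_SECRET_KEY"

# Create a bucket in the Ireland (eu-west-1) region
New-S3Bucket -BucketName "my-example-cdn-bucket" -Region eu-west-1

# Upload the test file and make it publicly readable in one step
Write-S3Object -BucketName "my-example-cdn-bucket" -File "C:\temp\image-for-distribution.png" `
    -Key "image-for-distribution.png" -CannedACLName public-read -Region eu-west-1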

Setting up CloudFront and your Distribution

  • Now that we know our basic file is being correctly served from Amazon S3, we’ll navigate to “CloudFront” from the main AWS Console (aws.amazon.com). A quick way to get there is by clicking the orange cube icon in the top left of your AWS page – wherever you are in the console, it’ll take you back to the main AWS console. From there just click “CloudFront“.
  • In CloudFront, we’ll want to create something called a “Distribution“. Click the “Create Distribution” button to get started.

[Image: The “Create Distribution” button in CloudFront]

  • Make sure you select “Download” type for the “delivery method” when asked on the next page, then click “Continue“.

[Image: Selecting the “Download” delivery method]

  • We’ll now select various options for our CloudFront Distribution.
    • For “Origin Domain Name”, click the text box and you’ll see a populated list of Amazon S3 buckets. The bucket you created earlier should feature here. Click it to select it.
    • The “Origin ID” should auto-populate based on the S3 bucket name you chose.
    • If you wish to restrict users to only access your content via CloudFront URLs, and not direct by S3 URLs, then choose “Yes” for “Restrict Bucket Access“.
    • If you chose “Yes” for restricting bucket access, you’ll also need to create a “Comment” and “Grant Read Permissions” on the bucket for CloudFront’s access to the S3 bucket. Click “Yes, Update Bucket Policy” to have CloudFront get read access automatically to the S3 bucket.
    • Select “HTTP and HTTPS” for “Viewer Protocol Policy“.
    • You can customise the object caching properties if you wish, but for this example, just leave the “Default Cache Behavior Settings” on their defaults.
    • Now you can set your “Distribution Settings”. Choose “Use All Edge Locations (Best Performance)” for “Price Class”. This will ensure that all edge locations around the world are used to distribute your content in the fastest, most efficient way to your users. You could also restrict this to other groups of regions, e.g. only the US and Europe – this would be a cheaper option, but not as efficient for all users globally.
    • Next, we can add an alternate CNAME for the distribution. This is highly recommended, so that you can give users URLs formatted with your own domain name instead of a long, ugly default Amazon CloudFront URL. Enter something now (for example, I will use cdn.shogan.co.uk, as I own the domain and can create this CNAME record myself in DNS). Once you have completed the distribution setup, you should take the Distribution URL and point a new CNAME record at the full URL that CloudFront assigns to your distribution.
    • Leave all other options at their defaults for now, and make sure that the last option “Distribution State” is “Enabled“, then click the “Create Distribution” button at the very bottom.

[Images: Example distribution settings, parts 1 and 2]

  • Your Distribution should now be created. Use the Navigation menu on the left side of the screen and click “Distribution” to see a list of your CloudFront Distributions.

[Image: The list of CloudFront Distributions]

  • At first the “Status” will show “InProgress“. After a few minutes this should change to “Deployed“.
  • In the meantime, look for the “Domain Name” this Distribution has been assigned, and go and create a CNAME record pointing the CNAME you specified when creating this distribution to that domain name. For example, you may have something like dxxxxxxxxxm.cloudfront.net. In my case, I specified a CNAME of cdn.shogan.co.uk, so I will create a CNAME record linking these together.


Testing

Once your CNAME record is created, type in your new CNAME, followed by a forward slash and then the name of the file you originally uploaded to the S3 bucket that this CloudFront distribution is linked to. For example, my file was called “image-for-distribution.png” and the CNAME record I made is cdn.shogan.co.uk, so to use my CloudFront CDN I would simply access the file as “cdn.shogan.co.uk/image-for-distribution.png”. If your DNS takes a while to apply/propagate, you can use the CloudFront domain name assigned to your Distribution instead (for example dxxxxxxxxxxm.cloudfront.net/yourfilename.extension) to test out your distribution. Remember to ensure your distribution is in a deployed state before testing. You should now see your file served up in your web browser via your brand spanking new Amazon AWS powered CDN!


Conclusion

That concludes the basic setup of an Amazon S3 + CloudFront powered Content Delivery Network. I hope this was useful for some of you. In forthcoming blog posts I will delve into setting up custom logging and monitoring/alerting for your CDN. Please remember to like/share/tweet this post out to friends if you thought it was useful.


Setting up DNS SRV records for an Office 365 migration (on 123-reg)

I needed to set up some DNS records for an Office 365 migration earlier and was initially slightly confused translating the settings Microsoft supplied us into those needed as input on 123-reg’s Advanced DNS configuration. The MX, TXT and CNAME records were simple enough, but it was the SRV records that needed a bit of fiddling to get right.

As an example of the SRV records, MS give you something like this:

Type Service Protocol Port Weight Priority TTL Name Target
SRV _sip _tls 443 1 100 1 Hour thedomain.co.uk sipdir.online.lync.com
SRV _sipfederationtls _tcp 5061 1 100 1 Hour thedomain.co.uk sipfed.online.lync.com

123-reg give you this interface to enter SRV records yourself:

[Image: 123-reg’s Advanced DNS interface for entering SRV records]

By looking at the examples you can start to understand how to translate the Service, Protocol and Weight items that MS give you into the 123-reg input boxes (which do not have individual fields for Service, Protocol and Weight).

In the first SRV record example –

Hostname therefore becomes: _sip._tls (the Service + the Protocol with a dot (.) between)

TTL of course becomes: 3600 (1 hr)

Priority is 100

Destination (the most confusing one) becomes: 1 443 sipdir.online.lync.com. (note that it starts with the Weight (1), then a space, then the port number (443), then the Target (sipdir.online.lync.com), followed by a dot (.))
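That forms your complete first SRV record. Applying the same translation to the second record, the two complete 123-reg entries are:

Hostname                TTL   Priority  Destination
_sip._tls               3600  100       1 443 sipdir.online.lync.com.
_sipfederationtls._tcp  3600  100       1 5061 sipfed.online.lync.com.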

By entering these, along with the other records you require, you should have a fully functional Office 365 setup on your custom domain name.