Streamlining AWS AMI image creation and management with Packer

If you want to set up quick and efficient provisioning and automation pipelines and you rely on machine images as a part of this framework, you’ll definitely want to prepare and maintain preconfigured images.

With AWS you can of course leverage Amazon's AMIs for EC2 machine images. If you're configuring autoscaling for an application, you definitely don't want your launch configurations to launch new EC2 instances from a base Amazon AMI and then install your application's prerequisites at runtime. That approach is slow and tedious, and leads to sluggish, unresponsive autoscaling.

This is where Packer comes in as a great tool to script, automate and pre-bake custom AMI images. (Packer is a tool by HashiCorp, of Terraform fame.) Packer also enables us to store our image configuration in source control and set up pipelines to test our images at creation time, so that when it comes time to launch them, we can be confident they'll work.

Packer doesn’t only work with Amazon AMIs. It supports tons of other image formats via different Builders, so if you’re on Azure or some other cloud or even on-premise platform you can also use it there.

Below I'll list out the high-level steps to create your own custom AMI using Packer. It'll be Windows Server 2012 R2 based, will enable WinRM connections at build time (to allow Packer to remote in and run various setup scripts), will handle sysprep and EC2 configuration (setting the administrator password, the EC2 computer name, etc.), and will even run some provisioning tests with Pester.

You can grab the files / policies required to set this up on your own from my GitHub repo here.

Setting up credentials to run Packer and an IAM role for your Packer build machine to assume

First things first, you need to be able to run Packer with the minimum set of permissions it needs. You can run Packer on an EC2 instance that has an EC2 role attached providing the right permissions, or, if you're running from a workstation, you'll probably want to use an IAM user access/secret key.
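If you go the access key route, the simplest way to hand the credentials to Packer is via the standard AWS environment variables, which the amazon-ebs builder picks up automatically. A minimal sketch (the key values are obviously placeholders, and the syntax below assumes a Linux/macOS shell – use $env: or setx on Windows):

# Placeholder credentials for the IAM user that holds the policy below
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="your-secret-key-here"

# Packer reads these automatically when you run packer build later on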

Here is an IAM policy that you can use for either of these. Note it also includes an iam:PassRole statement that references an AWS account number and specific role. You’ll need to update the account number to your own, and create the Role called Packer-S3-Access in your own account.

IAM Policy for user or instance running Packer:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CopyImage",
                "ec2:CreateImage",
                "ec2:CreateKeypair",
                "ec2:CreateSecurityGroup",
                "ec2:CreateSnapshot",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteKeypair",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteSnapshot",
                "ec2:DeleteVolume",
                "ec2:DeregisterImage",
                "ec2:DescribeImageAttribute",
                "ec2:DescribeImages",
                "ec2:DescribeInstances",
                "ec2:DescribeRegions",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSnapshots",
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
                "ec2:DescribeVolumes",
                "ec2:DetachVolume",
                "ec2:GetPasswordData",
                "ec2:ModifyImageAttribute",
                "ec2:ModifyInstanceAttribute",
                "ec2:ModifySnapshotAttribute",
                "ec2:RegisterImage",
                "ec2:RunInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "ec2:RequestSpotInstances",
                "ec2:CancelSpotInstanceRequests"
            ],
            "Resource": "*"
        },
        {
            "Effect":"Allow",
            "Action":"iam:PassRole",
            "Resource":"arn:aws:iam::YOUR_AWS_ACCOUNT_NUMBER_HERE:role/Packer-S3-Access"
        }
    ]
}

IAM Policy to attach to a new role called Packer-S3-Access. (Note: replace the S3 bucket name that is referenced with a bucket name of your own – this is the bucket holding the artifacts that will be provisioned into your AMI images. See a little further down for details on the bucket.)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3BucketListing",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": [
                        "",
                        "Packer/"
                    ],
                    "s3:delimiter": [
                        "/"
                    ]
                }
            }
        },
        {
            "Sid": "AllowListingOfdesiredFolder",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "Packer/*"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInFolder",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE/Packer/*"
            ]
        }
    ]
}

This will allow Packer to use the iam_instance_profile configuration value to specify the Packer-S3-Access EC2 role in your image definition file. Essentially, your temporary Packer EC2 instance assumes the Packer-S3-Access role, which grants it just enough privileges to download the bootstrapping files / artifacts you may wish to bake into your custom AMI. It's all quite secure too: the first policy only allows the Packer user or instance to pass this specific role, and the Packer build instance itself is temporary.
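If you prefer the AWS CLI over the console for this part, a rough sketch of creating the role and its matching instance profile looks like the following (the file names are assumptions – ec2-trust-policy.json is a standard EC2 trust relationship document, and packer-s3-access-policy.json contains the S3 policy above):

# Create the role with a trust policy that lets EC2 assume it
aws iam create-role --role-name Packer-S3-Access --assume-role-policy-document file://ec2-trust-policy.json

# Attach the S3 access policy shown above as an inline policy
aws iam put-role-policy --role-name Packer-S3-Access --policy-name Packer-S3-Access-Policy --policy-document file://packer-s3-access-policy.json

# Create an instance profile with the same name and add the role to it
aws iam create-instance-profile --instance-profile-name Packer-S3-Access
aws iam add-role-to-instance-profile --instance-profile-name Packer-S3-Access --role-name Packer-S3-Access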

Setting up your Packer image definition

Once the above policies and roles are in place, you can set up your main packer image definition file. This is a JSON file that will describe your image definition as well as the scripts and items to provision inside it.

Look at standardBaseImage.json in the GitHub repository to see how this is defined.

standardBaseImage.json

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "instance_type": "t2.small",
    "ami_name": "Shogan-Server-2012-Build-{{isotime \"2006-01-02\"}}-{{uuid}}",
    "iam_instance_profile": "Packer-S3-Access",
    "user_data_file": "./ProvisionScripts/ConfigureWinRM.ps1",
    "communicator": "winrm",
    "winrm_username": "Administrator",
    "winrm_use_ssl": true,
    "winrm_insecure": true,
    "source_ami_filter": {
      "filters": {
        "name": "Windows_Server-2012-R2_RTM-English-64Bit-Base-*"
      },
      "most_recent": true
    }
  }],
  "provisioners": [
    {
        "type": "powershell",
        "scripts": [
            "./ProvisionScripts/EC2Config.ps1",
            "./ProvisionScripts/BundleConfig.ps1",
            "./ProvisionScripts/SetupBaseRequirementsAndTools.ps1",
            "./ProvisionScripts/DownloadAndInstallS3Artifacts.ps1"
        ]
    },
    {
        "type": "file",
        "source": "./Tests",
        "destination": "C:/Windows/Temp"
    },
    {
        "type": "powershell",
        "script": "./ProvisionScripts/RunPesterTests.ps1"
    },
    {
        "type": "file",
        "source": "PesterTestResults.xml",
        "destination": "PesterTestResults.xml",
        "direction": "download"
    }
  ],
  "post-processors": [
    {
        "type": "manifest"
    }
  ]
}

When Packer runs, it will build out an EC2 instance as per the definition file, copy across any content you've specified, and execute the provisioning scripts defined in the file.
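With the definition saved, kicking off a build is just a case of validating and then building the template (run from the directory containing the JSON file and the ProvisionScripts folder):

# Check the template for syntax / configuration errors first
packer validate standardBaseImage.json

# Run the build - Packer launches the temporary EC2 instance, provisions it, then registers the AMI
packer build standardBaseImage.json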

The packer image definition in the repository I’ve linked above will:

  • Create a Server 2012 R2 base instance.
  • Enable WinRM for Packer to be able to connect to the temporary instance (see the user data sketch just after this list).
  • Run sysprep to generalize it.
  • Set up EC2 configuration.
  • Download a bunch of tools (including Pester for running tests once the image build is done).
  • Download any S3 artifacts you’ve placed in a specific bucket in your account and store them on the image.
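The ConfigureWinRM.ps1 user data script referenced in the builder isn't reproduced here (the real one is in the repo), but as a rough sketch, a script that enables WinRM over HTTPS so Packer can connect on port 5986 might look something like this:

<powershell>
# Rough sketch only - create a self-signed cert and an HTTPS WinRM listener so Packer can connect on 5986
winrm quickconfig -q
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation Cert:\LocalMachine\My
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $cert.Thumbprint -Force
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
New-NetFirewallRule -DisplayName "WinRM HTTPS" -Direction Inbound -Protocol TCP -LocalPort 5986 -Action Allow
Set-Service -Name WinRM -StartupType Automatic
Restart-Service -Name WinRM
</powershell>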

S3 Downloads into your AMI during build

Create a new S3 bucket and give it a unique name of your choice. Set it to private, and create a new virtual folder inside the bucket called Packer. This bucket should have the same name you specified in the Packer-S3-Access role policy in the policy definitions earlier.

Place any software installers or artifacts you would like to be baked into your image in the /Packer virtual folder.

Update the DownloadAndInstallS3Artifacts.ps1 script to reference any software installers and execute the installers. (See the commented out section for an example). This PowerShell script will download anything under the /Packer virtual folder and store it in your image under C:\temp\S3Downloads.
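To give a rough idea of what that download step looks like, the core of the script is just a Read-S3Object call from the AWS Tools for PowerShell (the bucket name below is a placeholder – it must match the one referenced in the Packer-S3-Access role policy):

# Pull everything under the Packer/ prefix down to the folder the Pester test later checks for
$bucket = "YOUR-OWN-PROVISIONING-S3-BUCKET-HERE"
$localPath = "C:\temp\S3Downloads"
Read-S3Object -BucketName $bucket -KeyPrefix "Packer" -Folder $localPath

# Example (commented out) of silently running one of the downloaded installers
# Start-Process -FilePath "$localPath\SomeInstaller.exe" -ArgumentList "/quiet" -Wait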

Testing

Finally, you can add your own Pester tests to validate tasks carried out during the Packer image creation.

Define any custom tests under the /Tests folder.

Here is a simple test that checks that the S3 download of items from /Packer was successful (the Read-S3Object cmdlet will create the folder and download items into it from your bucket):

Describe 'S3 Artifacts Downloads' {
    It 'downloads artifacts from S3' {
        "C:\temp\S3Downloads" | Should -Exist
    }
}

The main image definition file ensures that these are all copied into the image at build time (to the temp directory) and from there Pester executes them.
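The RunPesterTests.ps1 script isn't shown here, but at its heart it just needs to invoke the copied tests and write out an NUnit-format results file that the file provisioner in the template then downloads – something along these lines (paths assumed; adjust for your Pester version):

# Run all tests copied into the image and write NUnit-format results for the build server to pick up
Invoke-Pester -Script "C:\Windows\Temp\Tests" -OutputFile "PesterTestResults.xml" -OutputFormat NUnitXml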

Hook up your image build process to a build system like TeamCity and you can get it to output the results of the tests from PesterTestResults.xml.

Have fun automating and streamlining your image builds with Packer and Pester!

Changing DNS on Azure IaaS VM’s NIC forces RDP / network disconnect

I just noticed this happen to a VM I was connected to this evening.

All I did was change the NIC's primary DNS setting from automatically assigned to manual, give it a DNS server IP and a secondary backup IP, and my RDP session was instantly dropped. Other HTTPS traffic to the box stopped too.

I had to restart the VM in Azure to get connectivity back. This VM was deployed using the classic portal, but I’ve seen reports of it happening on newer ARM deployed VMs too. Here’s a thread with others that have found the same issue.

Hopefully Microsoft will resolve this soon.

Scaling Web API 2 and back-end SQL databases in Azure

I recently created a small Web API 2 project running with a back-end SQL database (Entity Framework code first), and had it deployed to an Azure web app, along with Azure SQL.

Naturally, I started it off using the free web app and one of the cheapest possible Azure SQL tiers (S0 – 10 DTUs).

After I finished working on the API, I wanted to see what sort of performance I could get out of it, by using Azure’s various scaling options.

To test I used Loader.io. This is a really nice, easy-to-use load testing service by SendGrid Labs. The free edition allows me to set up various API endpoint tests and run many concurrent connections for up to 1 minute at a time.

All my tests below were done using the same GET request test. The request always returned a collection of 5 x objects from the /Animals endpoint to keep things consistent.

My initial test was against the F1 free app tier for the Web app, with the SQL database running on S0 (10 DTUs). Here are the results of sending 500 requests per second for 1 minute.

S0-10DTU-result

The API struggled to complete the full 60k requests over 1 minute, and only completed about 8k requests, with an average response time of 4638ms. Terrible, but then again we are running on very low performance, cheap tiers. I had a look at the database performance stats and noticed that the DTUs were capped out at 100% during the 1 minute load test. At this point it definitely seems to be the database performance holding things back.

Scaling the database up to the S1 tier (20 DTUs) gives a definite improvement in response times and in the number of requests served within one minute. Looking at the database performance stats in the portal though, the DTUs are still maxing out at 100%.

S1-20DTU-result

20-DTUs-maxed out

At this point I decided I would increase database performance again, but throw more requests per second at the API (from 500/second up to 1000/second).

Scaling the database up to S2 (50 DTUs) and throwing more requests per second at the API, the total number of requests completed is now higher – up by about an extra 5k. Taking a look at the DTU performance stats, we can see they now max out at around 60%. At this point it is pretty clear that the database is no longer the bottleneck.

50-DTUs-maxed out at 60% - even with doubling the requests per second from 500 to 1000

50-DTUs-maxed out at 60%

Now I scaled the web app tier up from free, to the B1 (Basic) tier, which gives you 1 Core, 1.75GB RAM, and up to 3 x instances scaled manually. I started with just the default 1 instance and ran the 1000 req/second for 1 minute test again.

boo-test-failed-error-rate-higher-than-50% due to timeouts

The results were pretty dismal compared to the free tier now. In fact the test failed due to an error rate of greater than 50% (all caused by timeouts). It is important to remember that we have not yet scaled out from the default 1 instance though.

Scaling up to 2 x instances on the B1 tier, helped quite a bit. The test now completes, and has a much smaller timeout error rate. Many more responses were served, but the response rate was quite slow. Taking a look at the distribution of CPU time over the two instances, we can also see that the traffic is indeed being split between the two instances we’ve scaled out with.

scale-B1-basic-from-1-to-2-instances

yay-test-finished-with much smaller error rate

processor time spread over two instances during load test

Taking this one step further to 3 x instances, and re-running the test nets us the best result so far. No timeout errors, and a response time averaging around 3000ms. Much better, but still quite a high response time, and not all 60k requests are being served.

I scaled up to the B2 tier for the following run. Each instance has 2 x cores and 3.5GB RAM this time. Starting at 1 x instance and running the test on these higher specification web instances seems to now handle things a lot better.

Little to no timeout errors, with about 5000ms avg response time, but using only 1 x instance this time!

Pushing things right up to 3 x instances (2 cores and 3.5GB RAM each) nets us the best result yet. The average response time is down to 1700ms and there are no timeout errors at all. The API was able to handle 49000 requests in the 1 minute test, which is the highest number of requests it has been able to handle so far.

B2-basic-test-with-3x-instances-good-result

I scaled up to the B3 tier from here, and tried another few runs using 3 x instances (at 4 x cores and 7GB RAM each). This didn’t help things much, netting around 200ms better response time, for a much pricier tier. It therefore looks like the sweet spot for this kind of work is to scale out with medium sized instances (2 x cores each), rather than scaling up too much.

I changed the tier to S2 (2 x cores 3.5GB RAM each, but allowing up to 10 x instances scaled out) and this time, running the test gave very similar results to 3 x instances. Clearly, the instances were now no longer the bottleneck. Looking back at the database performance, I saw that the DTUs were maxing out at around 90%. It was clear that there must have been some throttling happening there now.

I changed the database DTUs to 100 using the S3 tier, and re-ran the test once more.

bingo-60k-requests

Bingo! We’re now managing to serve the test’s 1000 requests a second, and over the 1 minute test, we get all 60k requests served successfully, and have a reasonable average response time of roughly 300-400ms.

I made a quick change to the GET method in the API for this endpoint to gather items from the database asynchronously, and running the same test again, now gets us all the way down to an average response time of just 100ms over the 60k requests in one minute. Excellent!

100ms-test-result
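For reference, the change was essentially switching the controller's GET action over to async/await with Entity Framework's async query methods. A minimal sketch of the idea (the controller, context and entity names here are assumptions, not the actual project code):

using System.Data.Entity;      // for ToListAsync()
using System.Linq;
using System.Threading.Tasks;
using System.Web.Http;

public class AnimalsController : ApiController
{
    private readonly MyDbContext db = new MyDbContext(); // hypothetical EF code-first context

    // GET /Animals - returns the small collection used in the load tests
    public async Task<IHttpActionResult> GetAnimals()
    {
        // ToListAsync frees up the request thread while SQL does the work
        var animals = await db.Animals.Take(5).ToListAsync();
        return Ok(animals);
    }
}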

As you can see, by running load tests like this and trying out different scaling options for the front end and back end – scaling whichever side shows a bottleneck in the test results or performance metrics – you can fairly quickly determine the best specification for your database and web apps.


Deploying a simple linked container web app with Docker

This is a simple guide on how to deploy a multi-container ‘linked’ web app using Docker.

If you have not yet installed or set up a Docker host to run the containers on, here is my guide on setting up a basic Ubuntu 16.04 Docker host VM.

The ‘web app’ we’ll be looking at how to deploy will consist of two basic components – a MySQL database for the back-end, and a simple PHP script for the ‘web front-end’ which simply connects to the MySQL container and displays some info from a database table.

simple-web-app-linked-diagram

For the MySQL container we'll be using the official 'mysql/mysql-server' image from the Docker Hub repository, and for our web front-end, we'll be creating our own Docker image using a custom Dockerfile we'll craft ourselves, based on an Ubuntu 15.04 image.

This means we’ll be covering the following Docker basics:

  • Running docker containers
  • Linking docker containers (more secure than exposing ports directly)
  • Creating custom docker images using a Dockerfile
  • Building a custom image

Start off by creating a new directory in your home directory called 'web01' to create and store the Dockerfile we'll be using to build our custom web front-end image. Then create an empty file called 'Dockerfile' in this directory and edit it using your favourite text editor. I'm using nano for this.
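Something along these lines will do it (assuming nano, as mentioned):

mkdir ~/web01
cd ~/web01
nano Dockerfile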


This is what your new Dockerfile should look like:
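(The exact file isn't reproduced here; the sketch below follows the command descriptions that come next, and the gist URL is a placeholder for the simple PHP script – substitute the real one from the post's repo.)

FROM ubuntu:15.04

# Install Apache, PHP5 and a few extra tools in a single RUN to keep the image layer count down
RUN apt-get update && apt-get install -y apache2 php5 libapache2-mod-php5 php5-mysql curl

# Grab the simple PHP 'web app' from a gist (placeholder URL) and remove the default Apache index page
RUN curl -o /var/www/html/index.php https://gist.githubusercontent.com/your-gist-here/raw/index.php && rm /var/www/html/index.html

# Expose the web server port so it can be mapped to the Docker host
EXPOSE 80

# Run Apache in the foreground as PID 1 when the container starts
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]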


The commands do the following:

  • FROM – tells docker build to base this image build on the ubuntu:15.04 image
  • RUN – strings a few apt-get commands together to install apache, php5, and a few other tools like curl. This is important, as every RUN command in a Dockerfile creates a new image layer, and we don’t want our image to contain too many layers.
  • The last RUN command grabs the content from a gist I created which is a basic PHP script, and places it in the /var/www/html directory in the container, then deletes the default index.html file that apache places there. This is the script that will connect to our MySQL container and display some basic info (our basic ‘web app’).
  • EXPOSE – exposes port 80 so we can map this to our Docker host and access the website outside of the container.
  • CMD – runs the apache2 service with PID 1 when the container starts.

Now you can build the Dockerfile and create your own custom image, which is what will be used to start the web container later.

Use the following build command to build the new image from your custom Dockerfile (note that docker build is given the directory containing the Dockerfile, not the file itself):

docker build -t web01image ~/web01

Run ‘docker images’ after the build completes and you should see the new image listed:

docker-images

Next, you’ll run a new container using the official mysql-server image from the Docker repository. You won’t yet have this image locally, but the command will automatically download the image for you.

docker run --name db01 -e MYSQL_ROOT_PASSWORD=MyRootPassword -d mysql/mysql-server:latest

Note that I've called my container 'db01' and given it a root password of 'MyRootPassword'. The -e parameter specifies that an environment variable called MYSQL_ROOT_PASSWORD inside the container should be given the value of 'MyRootPassword'. The MySQL container then uses this environment variable to set up the root user for MySQL when the container starts.

Now that the database container is up and running (verify by running 'docker ps' to check it's running), you can deploy the custom web container using the image you created above. In this docker run command, you'll also link the web container to the db01 container you previously started up, using the --link parameter. This is what ties the two containers together.

The web container will be given environment variables with information about the networking config of the DB container. These environment variables are then accessed by the simple PHP web script to tell it where to find the database server, and what credentials to use to connect.

docker run --name=web01 --link=db01:mysql -d -p 80:80 web01image

Important: notice that the --link parameter specifies the name of the database/MySQL container. Make sure you use the exact name you gave your MySQL database container here – this ensures that the linking of the two containers is correct. The final 'web01image' argument tells Docker to base the new container on the freshly built 'web01image' image.

The -p parameter maps the exposed port 80 in the container to port 80 on the docker host, so you’ll be able to access the website by using http://dockerhost:80

Check that the new web container and previously created MySQL container are running by using the ‘docker ps’ command.

docker-ps-output

Out of interest, this is what the PHP script looks like (this is what is downloaded and placed on the web container as a RUN build step in the Dockerfile you created above):
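(The original script is pulled down from a gist during the image build; a rough equivalent sketch is below – the database and table names match the sample data created further down, and the MYSQL_* variables are the ones Docker injects for a legacy --link.)

<?php
// Connection details come from the environment variables Docker creates for the linked 'mysql' alias
$host = getenv('MYSQL_PORT_3306_TCP_ADDR');
$port = getenv('MYSQL_PORT_3306_TCP_PORT');
$pass = getenv('MYSQL_ENV_MYSQL_ROOT_PASSWORD');

$conn = new mysqli($host, 'root', $pass, '', $port);
if ($conn->connect_error) {
    die('Could not connect to the MySQL container: ' . $conn->connect_error);
}
echo 'Connected to MySQL at ' . $host . '<br />';

// Display the sample rows from the events table (created by the docker exec command further down)
$result = $conn->query('SELECT id, name, signup_date FROM testdb1.events');
if ($result) {
    while ($row = $result->fetch_assoc()) {
        echo $row['id'] . ' - ' . $row['name'] . ' - ' . $row['signup_date'] . '<br />';
    }
} else {
    echo 'No sample data found yet - create the testdb1 database to see some rows here.';
}
$conn->close();
?>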

You can see the environment variables that the PHP script grabs (at the top of the script) to establish the database connection from inside the docker container. These environment variables are created and populated by linking the web container to the db container using the --link parameter.

Lastly, you may want to create a sample database, table and some data for the simple ‘web app’ to display after it connects to the database container. Issue the following ‘docker exec’ command, which will add the sample database, create a sample table, and add some sample data.

Make sure you change the ‘MyRootPassword’ bit to whatever root MySQL password you chose when you ran the MySQL container above, and ensure you run exec against the name of the MySQL container you chose (I used db01). Keep the database name and the rest of the command intact, as the PHP script relies on these staying the same.

docker exec db01 mysql -u root -pMyRootPassword -e "create database testdb1; use testdb1; CREATE TABLE events (id INT NOT NULL PRIMARY KEY AUTO_INCREMENT, name VARCHAR(20), signup_date DATE); INSERT INTO events (id,name,signup_date) VALUES (NULL, 'MySpecialEvent', '2016-06-11');"

Finally, browse to http://dockerhostnameorip and you should see the simple PHP script display some basic info, stating it was able to connect to the MySQL server and display the sample data in the database.

simple-php-web-app-display

Setting up a basic Ubuntu 16.04 Docker host VM

I’ve used this process multiple times to create quick Docker host VMs running on VMware Workstation in my home lab. It is important to note that although I’m using VMware Workstation, the type 2 hypervisor you use here is fairly unimportant. You could just as well use VirtualBox, or Fusion for this purpose.

Download the latest Ubuntu 16.04 LTS server ISO from: http://www.ubuntu.com/download/server (I believe 16.04 comes only in 64-bit, but make sure it's 64-bit).

Create a new Virtual Machine for your Docker host using your type 2 hypervisor software (Workstation in my case).

Give the VM the following hardware spec:

  • OS – Linux/Ubuntu 64-bit
  • 1 or 2 vCPUs
  • 512 MB RAM
  • 9GB disk
  • 2 x vNICs (1st is set to the default NAT option and the 2nd should be set to Host-only)

Here is my VM’s setup:

vm-hardware-docker-host

Attach the Ubuntu ISO and start the VM up.

Install a standard Ubuntu OS using the text-based installer, and just be sure to also install OpenSSH server when prompted for features to install. After the install completes, reboot, log in with the user account you created during the install, run 'ifconfig' to check the assigned IP address, and then use your favourite SSH client to connect to that IP. Using a PuTTY session will just make copy/pasting commands into your Ubuntu VM easier.

Now you'll install Docker – the docker-engine package includes both the Docker daemon (server) and the client.
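The install commands were roughly the following – this is the docker-engine apt repository setup that Docker documented for Ubuntu 16.04 (xenial) at the time, so double-check it against the current Docker install docs before relying on it:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates

# Add Docker's apt repository key and the xenial repository
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list

# Install the docker-engine package (daemon + client)
sudo apt-get update
sudo apt-get install -y docker-engine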


Run the commands above in sequence, and after the apt-get install docker-engine step at the end, run 'sudo service docker status' to check that Docker is running. You should see it listed as active (running):

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2016-07-06 21:20:39 BST; 2h 0min ago

Run a quick ‘docker info’ command to ensure that you get information back from Docker and that everything looks OK.

docker-info

vCenter Server Appliance VM fails to boot with fsck failed message


I had a storage outage to deal with recently, and after the datastores on this storage were taken down, a vCenter Server Appliance VM on that storage ended up with some corrupted files and would not boot. On startup, I was greeted with this message:

vcenter-server-appliance-fsck-failed

The error message reads:

fsck failed.  Please repair manually and reboot.  The root
file system is currently mounted read-only.  To remount it
read-write do

After trying the mount command using the maintenance mode bash shell, I restarted and found that the appliance still did not boot properly. I found a thread on the VMware community forums where someone had the same issue and was able to run e2fsck to fix the disk issues. I tried this and found it fixed a whole heap of disk errors on the /dev/sda3 mount, but on restart I noticed more issues on the /dev/sdb2 mount, so I ran the e2fsck command again for this path, and was able to finally reboot the appliance successfully. The commands I ran to resolve were essentially:

  • mount -n -o remount,rw /
  • e2fsck -y /dev/sda3
  • e2fsck -y /dev/sdb2
  • CTRL-D to reboot after fixing the errors using e2fsck