Streamlining AWS AMI image creation and management with Packer

If you want quick and efficient provisioning and automation pipelines, and machine images are a part of that framework, you’ll definitely want to prepare and maintain preconfigured images.

With AWS you can of course leverage Amazon’s AMIs for EC2 machine images. If you’re configuring autoscaling for an application, you definitely don’t want your launch configurations launching new EC2 instances from base Amazon AMIs and then installing your application’s prerequisites at runtime. That is slow and tedious, and it leads to sluggish, unresponsive autoscaling.

Packer comes in at this point as a great tool to script, automate and pre-bake custom AMI images. (Packer is a tool by HashiCorp, of Terraform fame.) Packer also enables us to store our image configuration in source control and set up pipelines to test our images at creation time, so that when it comes time to launch them, we can be confident they’ll work.

Packer doesn’t only work with Amazon AMIs. It supports many other image formats via different Builders, so if you’re on Azure, another cloud, or even an on-premises platform, you can use it there too.

Below I’ll be listing out the high-level steps to create your own custom AMI using Packer. It’ll be Windows Server 2012 R2 based, enable WinRM connections at build time (to allow Packer to remote in and run various setup scripts), handle sysprep and EC2 configuration (such as setting the administrator password and EC2 computer name), and will even run some provisioning tests with Pester.

You can grab the files and policies required to set this up on your own from my GitHub repo.

Setting up credentials to run Packer and an IAM role for your Packer build machine to assume

First things first: you need to be able to run Packer with the minimum set of permissions it needs. You can run Packer on an EC2 instance that has an EC2 role attached providing the right permissions, or, if you’re running from a workstation, you’ll probably want to use an IAM user access/secret key.
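If you go the access/secret key route, note that the amazon-ebs builder will pick up the standard AWS environment variables, so you can keep keys out of the template itself. For example, in PowerShell (placeholder values):

# Supply AWS credentials to Packer for the current session only
$env:AWS_ACCESS_KEY_ID = "YOUR_ACCESS_KEY_ID"
$env:AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_ACCESS_KEY"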

Here is an IAM policy that you can use for either approach. Note that it also includes an iam:PassRole statement referencing an AWS account number and a specific role. You’ll need to update the account number to your own, and create the role called Packer-S3-Access in your own account.

IAM Policy for user or instance running Packer:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CopyImage",
                "ec2:CreateImage",
                "ec2:CreateKeypair",
                "ec2:CreateSecurityGroup",
                "ec2:CreateSnapshot",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteKeypair",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteSnapshot",
                "ec2:DeleteVolume",
                "ec2:DeregisterImage",
                "ec2:DescribeImageAttribute",
                "ec2:DescribeImages",
                "ec2:DescribeInstances",
                "ec2:DescribeRegions",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSnapshots",
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
                "ec2:DescribeVolumes",
                "ec2:DetachVolume",
                "ec2:GetPasswordData",
                "ec2:ModifyImageAttribute",
                "ec2:ModifyInstanceAttribute",
                "ec2:ModifySnapshotAttribute",
                "ec2:RegisterImage",
                "ec2:RunInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "ec2:RequestSpotInstances",
                "ec2:CancelSpotInstanceRequests"
            ],
            "Resource": "*"
        },
        {
            "Effect":"Allow",
            "Action":"iam:PassRole",
            "Resource":"arn:aws:iam::YOUR_AWS_ACCOUNT_NUMBER_HERE:role/Packer-S3-Access"
        }
    ]
}

IAM Policy to attach to a new role called Packer-S3-Access (note: replace the referenced S3 bucket name with a bucket of your own; this bucket will hold the artifacts that get provisioned into your AMI images). See a little further down for details on the bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3BucketListing",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": [
                        "",
                        "Packer/"
                    ],
                    "s3:delimiter": [
                        "/"
                    ]
                }
            }
        },
        {
            "Sid": "AllowListingOfdesiredFolder",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "Packer/*"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInFolder",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE/Packer/*"
            ]
        }
    ]
}

This will allow Packer to use the iam_instance_profile configuration value to specify the Packer-S3-Access EC2 role in your image definition file. Essentially, this allows your temporary Packer EC2 instance to assume the Packer-S3-Access role, which grants it just enough privileges to download the bootstrapping files / artifacts you may wish to bake into your custom AMI. It’s all quite secure, too: the policy only allows the Packer instance to pass this specific role, and the instance itself is temporary.

Setting up your Packer image definition

Once the above policies and roles are in place, you can set up your main Packer image definition file. This is a JSON file that describes your image as well as the scripts and items to provision inside it.

Look at standardBaseImage.json in the GitHub repository to see how this is defined.

standardBaseImage.json

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "instance_type": "t2.small",
    "ami_name": "Shogan-Server-2012-Build-{{isotime \"2006-01-02\"}}-{{uuid}}",
    "iam_instance_profile": "Packer-S3-Access",
    "user_data_file": "./ProvisionScripts/ConfigureWinRM.ps1",
    "communicator": "winrm",
    "winrm_username": "Administrator",
    "winrm_use_ssl": true,
    "winrm_insecure": true,
    "source_ami_filter": {
      "filters": {
        "name": "Windows_Server-2012-R2_RTM-English-64Bit-Base-*"
      },
      "most_recent": true
    }
  }],
  "provisioners": [
    {
        "type": "powershell",
        "scripts": [
            "./ProvisionScripts/EC2Config.ps1",
            "./ProvisionScripts/BundleConfig.ps1",
            "./ProvisionScripts/SetupBaseRequirementsAndTools.ps1",
            "./ProvisionScripts/DownloadAndInstallS3Artifacts.ps1"
        ]
    },
    {
        "type": "file",
        "source": "./Tests",
        "destination": "C:/Windows/Temp"
    },
    {
        "type": "powershell",
        "script": "./ProvisionScripts/RunPesterTests.ps1"
    },
    {
        "type": "file",
        "source": "PesterTestResults.xml",
        "destination": "PesterTestResults.xml",
        "direction": "download"
    }
  ],
  "post-processors": [
    {
        "type": "manifest"
    }
  ]
}

When Packer runs, it will build out an EC2 machine as per the definition file, copy across any content specified, and provision and execute any scripts defined in this file.
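With the definition file saved, you can check it and kick off the build with the Packer CLI from the same directory (assuming your credentials from earlier are in place):

packer validate standardBaseImage.json
packer build standardBaseImage.json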

The Packer image definition in the repository I’ve linked above will:

  • Create a Server 2012 R2 base instance.
  • Enable WinRM for Packer to be able to connect to the temporary instance (see the user data sketch after this list).
  • Run sysprep to generalize it.
  • Set up EC2 configuration.
  • Download a bunch of tools (including Pester for running tests once the image build is done).
  • Download any S3 artifacts you’ve placed in a specific bucket in your account and store them on the image.
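The ConfigureWinRM.ps1 user data script referenced in the builder is what makes that WinRM connection possible. A minimal sketch of the usual approach (a self-signed certificate plus an HTTPS listener; the repo’s version may differ in the details):

<powershell>
# Create a self-signed certificate and bind a WinRM HTTPS listener to it
$cert = New-SelfSignedCertificate -DnsName "packer-build" -CertStoreLocation Cert:\LocalMachine\My
winrm quickconfig -q
winrm set winrm/config/service/auth '@{Basic="true"}'
winrm create winrm/config/Listener?Address=*+Transport=HTTPS "@{Hostname=`"packer-build`";CertificateThumbprint=`"$($cert.Thumbprint)`"}"
# Open the WinRM HTTPS port in the Windows firewall
netsh advfirewall firewall add rule name="WinRM HTTPS" dir=in action=allow protocol=TCP localport=5986
</powershell>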

S3 Downloads into your AMI during build

Create a new S3 bucket and give it a unique name of your choice. Set it to private, and create a new virtual folder inside the bucket called Packer. This bucket should have the same name you specified in the Packer-S3-Access role policy in the policy definitions earlier.

Place any software installers or artifacts you would like to be baked into your image in the /Packer virtual folder.
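With the AWS CLI, creating the bucket and uploading an artifact could look like this (the bucket name is the placeholder from the policy above, and SomeInstaller.msi is just an example file name):

aws s3 mb s3://YOUR-OWN-PROVISIONING-S3-BUCKET-HERE
aws s3 cp ./SomeInstaller.msi s3://YOUR-OWN-PROVISIONING-S3-BUCKET-HERE/Packer/SomeInstaller.msi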

Update the DownloadAndInstallS3Artifacts.ps1 script to reference your software installers and execute them (see the commented-out section for an example). This PowerShell script will download anything under the /Packer virtual folder and store it in your image under C:\temp\S3Downloads.
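The download half of that script can be as simple as a single Read-S3Object call from the AWS Tools for PowerShell (a sketch; substitute your own bucket name):

# Pull everything under the Packer/ prefix into the local staging folder
Read-S3Object -BucketName "YOUR-OWN-PROVISIONING-S3-BUCKET-HERE" -KeyPrefix "Packer" -Folder "C:\temp\S3Downloads"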

Testing

Finally, you can add your own Pester tests to validate tasks carried out during the Packer image creation.

Define any custom tests under the /Tests folder.

Here is a simple test that checks that the S3 download of items from /Packer was successful (the Read-S3Object cmdlet will create the folder and download items into it from your bucket):

Describe 'S3 Artifacts Downloads' {
    It 'downloads artifacts from S3' {
        "C:\temp\S3Downloads" | Should -Exist
    }
}

The main image definition file ensures that these are all copied into the image at build time (to the temp directory) and from there Pester executes them.
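A RunPesterTests.ps1 along these lines will run the copied tests and emit the results file that the final file provisioner downloads (a sketch, assuming Pester 4.x; -EnableExit returns a non-zero exit code on test failures, which in turn fails the Packer build):

# Run all tests copied to the temp directory and write NUnit-format results
Invoke-Pester -Script "C:\Windows\Temp\Tests" -OutputFile "PesterTestResults.xml" -OutputFormat NUnitXml -EnableExit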

Hook your image build process up to a build system like TeamCity and you can have it report the test results from PesterTestResults.xml.

Have fun automating and streamlining your image builds with Packer and Pester!

Deploying a simple linked container web app with Docker

This is a simple guide on how to deploy a multi-container ‘linked’ web app using Docker.

If you have not yet installed or set up a Docker host to run the containers on, here is my guide on setting up a basic Ubuntu 16.04 Docker host VM.

The ‘web app’ we’ll be looking at how to deploy will consist of two basic components – a MySQL database for the back-end, and a simple PHP script for the ‘web front-end’ which simply connects to the MySQL container and displays some info from a database table.

simple-web-app-linked-diagram

For the MySQL container we’ll be using the official Docker repo ‘mysql-server’ image, and for our web front-end, we’ll be creating our own Docker image using a custom Dockerfile we’ll craft ourselves, based on an Ubuntu 15.04 image.

This means we’ll be covering the following Docker basics:

  • Running docker containers
  • Linking docker containers (more secure than exposing ports directly)
  • Creating custom docker images using a Dockerfile
  • Building a custom image

Start off by creating a new directory in your home directory called ‘web01’ to create and store the Dockerfile we’ll be using to build our custom web front-end image. Then create an empty file called ‘Dockerfile’ in this directory and edit it using your favourite text editor. I’m using nano for this.
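On the Docker host, that amounts to:

mkdir ~/web01
cd ~/web01
nano Dockerfile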

This is what your new Dockerfile should look like:
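Something along these lines (a sketch: the gist URL is a placeholder for the raw URL of your own PHP script, and php5-mysql is included so PHP can talk to MySQL):

FROM ubuntu:15.04

# Install Apache, PHP5 and a few tools in a single layer to keep the layer count down
RUN apt-get update && apt-get install -y apache2 php5 php5-mysql curl

# Fetch the simple PHP 'web app' from a gist and remove the default Apache page
RUN curl -o /var/www/html/index.php https://gist.githubusercontent.com/EXAMPLE/raw/index.php \
    && rm /var/www/html/index.html

# Expose the web server port so it can be mapped to the Docker host
EXPOSE 80

# Run Apache in the foreground as PID 1
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]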

The commands do the following:

  • FROM – tells docker build to base this image build on the ubuntu:15.04 image
  • RUN – strings a few apt-get commands together to install apache, php5, and a few other tools like curl. This is important, as every RUN command in a Dockerfile creates a new image layer, and we don’t want our image to contain too many layers.
  • The last RUN command grabs the content from a gist I created which is a basic PHP script, and places it in the /var/www/html directory in the container, then deletes the default index.html file that apache places there. This is the script that will connect to our MySQL container and display some basic info (our basic ‘web app’).
  • EXPOSE – exposes port 80 so we can map this to our Docker host and access the website outside of the container.
  • CMD – runs the apache2 service with PID 1 when the container starts.

Now you can build the Dockerfile and create your own custom image, which is what will be used to start the web container later.

Use the following command to build the new image from your custom Dockerfile. Note that docker build takes the directory containing the Dockerfile (the build context), not the path to the Dockerfile itself:

docker build -t web01image ~/web01

Run ‘docker images’ after the build completes and you should see the new image listed:

docker-images

Next, you’ll run a new container using the official mysql-server image from the Docker repository. You won’t yet have this image locally, but the command will automatically download the image for you.

docker run --name db01 -e MYSQL_ROOT_PASSWORD=MyRootPassword -d mysql/mysql-server:latest

Note that I’ve called my container ‘db01’ and given it a root password of ‘MyRootPassword’. The -e parameter specifies that an environment variable called MYSQL_ROOT_PASSWORD inside the container should be given the value ‘MyRootPassword’. The MySQL container then uses this environment variable to set up the root user for MySQL when the container starts.

Now that the database container is up and running (verify by running ‘docker ps’ to check it’s running), you can deploy the custom web container using the image you created above. In this docker run command you’ll also link the web container to the db01 container you previously started up, using the --link parameter. This is what links the two containers together.

The web container will be given environment variables with information about the networking config of the DB container. These environment variables are then read by the simple PHP web script to tell it where to find the database server, and what credentials to use to connect.

docker run --name=web01 --link=db01:mysql -d -p=80:80 web01image

Important: notice that the --link parameter specifies the name of the database/MySQL container. Make sure you use the exact name you gave your MySQL database container here; this ensures that the linking of the two containers is correct. The final ‘web01image’ argument tells Docker to base the new container on the newly built ‘web01image’.

The -p parameter maps the exposed port 80 in the container to port 80 on the docker host, so you’ll be able to access the website by using http://dockerhost:80

Check that the new web container and previously created MySQL container are running by using the ‘docker ps’ command.

docker-ps-output

Out of interest, this is what the PHP script looks like (this is what is downloaded and placed on the web container by a RUN build step in the Dockerfile you created above):
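A sketch of such a script, using the environment variable names that Docker’s legacy --link creates for an alias of ‘mysql’ (the actual gist may differ slightly):

<?php
// Connection details come from the environment variables injected by --link=db01:mysql
$host = getenv('MYSQL_PORT_3306_TCP_ADDR');
$port = getenv('MYSQL_PORT_3306_TCP_PORT');
$pass = getenv('MYSQL_ENV_MYSQL_ROOT_PASSWORD');

// Connect to the linked MySQL container as root, using the sample database
$conn = mysqli_connect($host, 'root', $pass, 'testdb1', $port);
if (!$conn) {
    die('Could not connect to MySQL: ' . mysqli_connect_error());
}
echo 'Connected to MySQL server at ' . $host . '<br>';

// Display the sample data from the events table
$result = mysqli_query($conn, 'SELECT id, name, signup_date FROM events');
while ($row = mysqli_fetch_assoc($result)) {
    echo $row['id'] . ' - ' . $row['name'] . ' - ' . $row['signup_date'] . '<br>';
}
mysqli_close($conn);
?>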

You can see the environment variables that the PHP script grabs (at the top of the script) to establish the database connection from inside the container. These environment variables are created and populated when the web container is linked to the db container with the --link parameter.

Lastly, you may want to create a sample database, table and some data for the simple ‘web app’ to display after it connects to the database container. Issue the following ‘docker exec’ command, which will add the sample database, create a sample table, and add some sample data.

Make sure you change the ‘MyRootPassword’ bit to whatever root MySQL password you chose when you ran the MySQL container above, and ensure you run exec against the name of the MySQL container you chose (I used db01). Keep the database name and the rest of the command intact, as the PHP script relies on these staying the same.

docker exec db01 mysql -u root -pMyRootPassword -e "create database testdb1; use testdb1; CREATE TABLE events (id INT NOT NULL PRIMARY KEY AUTO_INCREMENT, name VARCHAR(20), signup_date DATE); INSERT INTO events (id,name,signup_date) VALUES (NULL, 'MySpecialEvent', '2016-06-11');"

Finally, browse to http://dockerhostnameorip and you should see the simple PHP script display some basic info, stating it was able to connect to the MySQL server and display the sample data in the database.

simple-php-web-app-display

How to view thumbnails for files in Windows Server 2008

By default, Windows Server 2008 does not show you thumbnails for files when viewing them in Medium, Large or Extra Large Icons modes. To view thumbnails of images, for example, you will need to do the following:

– Open up Explorer, or Computer
– Click on Tools, then Folder Options (or click the Organise drop-down and select Folder Options)
– Click on the View tab
– Deselect the check box for “Always show icons, never thumbnails”
– Click Apply, then OK.

You should now be able to view your thumbnails. See below for the Folder Options dialog box.

view_thumbnails