SSM and socat Port Forwarding to Access Private VPC Resources

AWS Systems Manager Session Manager added the port forwarding feature, announced in this blog post back in 2019. In this post I'll show you how to leverage SSM and socat port forwarding to access systems in a private subnet that don't have the SSM agent installed.

You'll use an SSM-agent-enabled EC2 instance as the initial target for the SSM port forwarding session. On this instance, you'll run socat as a relay for the incoming TCP session to the other instance that does not have the SSM agent.

What is socat?

To quote the official man page, socat (SOcket CAT) is a multipurpose relay. It is a command line tool that establishes two bidirectional byte streams and transfers data between them.

You can use it to connect all sorts of channels. For example (there's a quick illustration after this list):

  • files
  • pipes
  • devices
  • sockets, such as TCP, UDP, IPv4, etc
  • SSL sockets
  • programs
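
As a quick taste of how it works (a hypothetical one-liner, assuming socat is installed), the following connects your terminal's stdin/stdout to a remote web server, letting you type a raw HTTP request by hand:

# Relay stdin/stdout to a TCP connection ('-' means standard input/output)
socat - TCP4:example.com:80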

SSM and socat Port Forwarding Example

In my example I have an AWS EMR (Elastic MapReduce) master node running a web dashboard for Ganglia in a private VPC subnet.

I don’t want to add a bastion host / jump box or provide SSH access from the public net.

SSM would provide a nice way for me to connect a remote session or port forward, using IAM authentication and removing the need for any ingress security group rules, but only if the SSM agent were available on this instance.

Since the EMR master node is not SSM agent enabled, and I can't use SSM port forwarding directly to it, I can use an interim machine with SSM as a jump box.

Example Configuration

Here is how I configured port forwarding in my use case to access Ganglia on a private EMR node.

  • The EC2 instance with the SSM agent must have an IAM instance profile attached that allows the relevant SSM access. The blog post linked above has instructions. In a nutshell though, most standard Amazon AMIs include the SSM agent, and your EC2 instance profile should include the required actions; the AmazonSSMManagedInstanceCore managed policy includes these.
  • Install socat on the SSM-agent-enabled interim machine in the private subnet. For this I connected an SSM session to get shell access and ran sudo yum install -y socat
  • Now I needed to open a source channel for the SSM port forwarding AWS CLI command to connect to, and relay that source to the destination EMR master node running Ganglia.
socat TCP4-LISTEN:8080,fork,reuseaddr TCP4:10.0.4.149:80

The command listens on port 8080 and forwards TCP to the EMR node, 10.0.4.149, on port 80. Importantly, the command uses fork and reuseaddr to allow multiple connections.
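
Before layering SSM on top, you can sanity-check the relay from the interim instance itself. Assuming Ganglia serves under /ganglia, a request to the socat listener should come back with an HTTP response header:

# Run on the interim instance; expect an HTTP status line from the EMR node
curl -I http://localhost:8080/ganglia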

  • Next, use the AWS CLI ssm start-session command to start a port forwarding session to the interim instance with the SSM agent running. Grab the instance ID for the EC2 machine and run:
aws ssm start-session --target {your-instance-id-here} --document-name AWS-StartPortForwardingSession --parameters '{"portNumber":["8080"],"localPortNumber":["8089"]}'

If you set up socat correctly to listen on port 8080, the connection should be opened and accepted.

Now you can simply open a web browser locally and direct it to http://localhost:8089/ganglia to access Ganglia on the remote EMR master node.

Accessing EMR cluster memory stats via the remote port forwarded session.

Closing

AWS SSM is a useful tool to get access to instances in a secure, audited fashion without needing to open up risky SSH access or other remote ports to the public internet.

When you're constrained and need a jump across to an instance without the SSM agent, you can leverage other tools to help. socat is one such tool that can facilitate this within the private network.

Cheap Minecraft Server in AWS with Docker and Traefik


According to the Minecraft Realms plan pricing page, you can get a Realms server at around £5.59 per month. You get some nice conveniences there but… I refuse to pay much at all when I can throw some infrastructure together myself in the cloud to create the ultimate cheap Minecraft server.

Considering that my Docker instance running Traefik hosts another three or four of my personal services alongside the Minecraft server, this solution costs me only around £1.50 a month.

I chose to go with a single AWS EC2 instance that runs Docker. Minecraft runs in a container and sits alongside other personal websites and services that I host there too.

I use Traefik to route traffic coming into this single host for various TCP ports, as well as HTTP(S) on different hostnames. This pushes the cost savings even further: I don't need multiple EC2 instances (one for each service), and I don't even need to pay for something like an application or network load balancer, as Traefik does this for me.

A Quick Review of Alternatives

There are other alternatives to consider if you’re looking for a cheap Minecraft server, so don’t take this as being the only option. Here is what I’ve used in the past before settling on my current solution:

  • Minecraft on a dedicated cloud VM. If you just want a dedicated Minecraft VM in the cloud, then DigitalOcean is a good, cheap option. You can also get fairly cheap instances from Vultr.
  • Running Minecraft on my own personal Raspberry Pi Kubernetes Cluster. I was even able to expose it over the internet for friends to play on by leveraging a Pi device as a dedicated router. I then used port forwarding to get it working through my double NAT setup. The ARM container was a little slow as a server for more than 2 or 3 players on Raspberry Pi hardware though.
  • Minecraft Server on a home PC / Workstation, with port forwarding to allow other players to connect. This is not ideal, especially on Windows machines or systems that you don’t want to leave running 24/7 as you would for a dedicated server.
  • Various other Minecraft-as-a-service providers. These are decent options in some cases. However for me price and control are important, and I much prefer to self host in this case.

Cheap Minecraft Server in AWS EC2 with Traefik

I used my Cheap Traefik EC2 Docker Hosting solution as the base. You can read that article to get access to the CDK resources required to deploy it yourself.

The cost benefits to using this particular recipe are:

  • EC2 Graviton2 ARM-based processor – slightly cheaper to run than Intel and AMD. The downside is more limited software choice. You need to make sure you use ARM-compatible packages or Docker images.
  • Spot instance – this has massive savings over a normal lifecycle EC2 instance. The downside is that it can be terminated at any time with only a couple of minutes of notice. When using these you need to make sure you have good data persistence that is not local to the EC2 instance. I personally use a mounted EFS volume. It is re-attached to a new instance from the autoscaling group if the old instance is terminated.

If you don’t use the CDK solution I mentioned above, then alternatively deploy yourself an EC2 instance. Give it an elastic IP address, set up the Security Group ingress rules accordingly, and get shell access. First thing you’ll want to install is Docker, then you’re pretty much good to go.
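
If you go the manual route, installing Docker on an Amazon Linux 2 instance looks roughly like this (a sketch; package names and commands will differ on other distros):

sudo yum update -y
sudo amazon-linux-extras install -y docker
sudo systemctl enable --now docker
# Allow the default user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker ec2-user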

Minecraft Docker Image

I found a great Minecraft Docker image that is well maintained and has the correct ARM image builds for use on Graviton2 hardware. Check out itzg/minecraft-server. There are other arch builds there that’ll run on just about any other platform.

Docker Compose Service

If you use docker-compose, then here is the simple service definition to get things running.

version: "3"

networks:
  web:
    external: true
  internal:
    external: false

services:
  mc:
    image: itzg/minecraft-server:2021.1.0-multiarch-latest
    environment:
      EULA: "TRUE"
      VERSION: "1.16.5"
      ENABLE_AUTOPAUSE: "TRUE"
      OVERRIDE_SERVER_PROPERTIES: "TRUE"
      MAX_TICK_TIME: "-1"
      TYPE: "BUKKIT"
    labels:
      - traefik.tcp.routers.mc.rule=HostSNI(`*`)
      - traefik.tcp.routers.mc.entrypoints=mc # assumes a TCP entrypoint named 'mc' in Traefik's static configuration
      - traefik.tcp.services.mc.loadbalancer.server.port=25565
    networks:
      - web
    volumes:
      - /data/mc:/data

The docker-compose definition will run a Docker container using the latest multiarch image (which will run on ARM devices). When starting, the container will prepare and run a Minecraft 1.16.5 server. It will also use Bukkit and enable autopause, meaning the game server does not tick over when there are no players connected.
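
To bring the service up and watch the first start (assuming the definition above is saved as docker-compose.yml on the Docker host):

docker-compose up -d mc
# Follow the server logs while the world generates
docker-compose logs -f mc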

Traefik Configuration

In the docker-compose definition above, you might have noticed the container labels. The labels prefixed with traefik are used to inform Traefik of how to route network traffic.

The TCP router using HostSNI on *

In our case, TCP connections are required on port 25565, and HostSNI is used to route those coming in for * (all hosts). TCP connections on port 25565 arrive at Traefik and, based on this rule, are directed to the Minecraft container.

There is one limitation to be aware of here: you can only use HostSNI with * for connections that do not use TLS. This is because Server Name Indication (SNI) is an extension of the TLS protocol.

I don't believe Minecraft supports TLS in any case though. It just means that you won't be able to run more than one Minecraft server container on the same port on the single Docker host.
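
Note that the container labels shown earlier assume Traefik has a TCP entrypoint listening on the Minecraft port. If yours doesn't have one yet, the static configuration flag would look something like this (the entrypoint name mc is my assumption, matching the router label):

# Added to the Traefik container's command arguments / static configuration
--entryPoints.mc.address=:25565

You'll also need to publish port 25565 on the Traefik container itself, for example with a ports entry in its compose service definition.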

Finishing Off Configuration

Lastly, you might want to point a convenient Host record (A record) to your AWS EC2 Elastic IP address. For example: yourmcserver.example.com -> 1.2.3.4.
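
Once DNS has propagated, a quick check that the record resolves to your EIP (assuming you have dig available):

dig +short yourmcserver.example.com
# Expect your Elastic IP, e.g. 1.2.3.4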

All being well, you should now be able to find and connect to your server.


WSL2 GUI X-Server Using VcXsrv


I needed to set up a WSL2 GUI recently on my machine (WSL2 running Ubuntu 20.04.1 LTS). I found a guide that runs through the process, but found that a few tweaks were needed. Specifically, communication to VcXsrv was being blocked by Windows Firewall.

There were also a couple of extra tweaks needed for audio passthrough using PulseAudio and setting a windowed resolution.

Setting up a WSL2 GUI X-Server in Windows

Start by installing xfce4 and goodies.

sudo apt install xfce4 xfce4-goodies

If you’re running Kali you should use:

sudo apt install kali-desktop-xfce

During the install you’ll be prompted about which display manager to use. This is up to you, though I personally chose lightdm.

Download this .zip package which contains VcXsrv and PulseAudio along with some configuration and a shortcut to launch.

Extract it to the root of your C:\ drive. You should end up with contents under C:\WSL VcXsrv.


Run the vcxsrv-64.1.20.8.1.installer.exe installer in this folder, choosing defaults for the install.

Once installed, you’ll want to enable High DPI scaling for VcXsrv in Windows.

  • Navigate to C:\Program Files\VcXsrv
  • Right-click xlaunch.exe and go to Compatibility
  • Click Change high DPI settings and choose Override high DPI scaling behavior. Ensure Application is in the dropdown.

Next, edit the startWSLVcXsrv.bat batch file and change the last line that reads ubuntu.exe run to one of:

  • ubuntu2004.exe run in the case you are using Ubuntu 20.04 from the Microsoft Store for WSL
  • ubuntu1804.exe run if you are using Ubuntu 18.04 from the Microsoft Store for WSL
  • ubuntu.exe run for when you are using standard Ubuntu from the Microsoft Store for WSL
  • kali.exe run if you installed Kali-Linux from the Microsoft Store for WSL

Pin the WSL VcXsrv shortcut somewhere convenient like the taskbar.

Opening Windows Firewall for VcXsrv and PulseAudio

Next you need to allow inbound traffic to Windows for VcXsrv and PulseAudio.

Open Windows Defender Firewall with Advanced Security and add two new Inbound Rules as follows:

  • Type: Program
  • Program path: %ProgramFiles%\VcXsrv\vcxsrv.exe for VcXsrv and %SystemDrive%\WSL VcXsrv\pulseaudio-1.1\bin\pulseaudio.exe for PulseAudio
  • Allow the connection
  • Profile: Domain, Private
  • Name: vcxsrv or pulseaudio depending which rule you are adding

I personally added the following to ExtraParams under the XLaunch node of config.xlaunch. This sets windowed mode to 1920×1080 for monitor #1 on my machine.

-screen 0 1920x1080@1

Viewing your WSL2 GUI

With all of that setup out of the way, you should be able to simply launch VcXsrv from the pinned shortcut and everything should work.

Try it out and you should get a desktop up and running in your WSL2 environment.


PulseAudio passthrough should also be available if you check your sound / volume settings. Try an audio test using alsa-utils:

sudo apt install alsa-utils
speaker-test

Kudos to this guide on reddit for most of the setup instructions. As mentioned before, I needed to configure my firewall and also added some tweaks for windowed mode.

Troubleshooting

If you find your VcXsrv Server display window is blank when launching, try the following:

  • Double-check your firewall rule is allowing inbound connections for vcxsrv.exe for the domain and private scopes.
  • With the black X-server / display window from VcXsrv still open, launch a WSL shell separately, and run the following to set your DISPLAY environment variable:
export DISPLAY=$(grep -m 1 nameserver /etc/resolv.conf | awk '{print $2}'):0

This takes the IP address of your host machine (conveniently used as a nameserver in your WSL Linux environment for DNS lookups) and sets it as the Display remote location (with :0 for the display number appended).
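
If you'd rather not export this manually each time, one option is to append it to your shell profile inside WSL (a sketch for bash; adjust for your shell of choice):

cat >> ~/.bashrc <<'EOF'
# Point X11 applications at the VcXsrv server running on the Windows host
export DISPLAY=$(grep -m 1 nameserver /etc/resolv.conf | awk '{print $2}'):0
EOF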

Now, try to launch an xfce4 session with:

xfce4-session

If all goes to plan, the session should target your machine where VcXsrv Server is running and your display window should come to life with your WSL environment desktop.

Quick and Easy ffmpeg Cheat Sheet


This post contains my ffmpeg cheat sheet.

ffmpeg is a very useful utility to have installed for common audio and video conversion and modification tasks. I find myself using it frequently for both personal and work use.

Got an audio or video file to compress or convert? Or maybe you're looking to split off a section of a video file. Did you do a screen recording of a meeting session and find you need to share it around, but the file is way too large?

ffmpeg is the clean and easy command line approach. I've put together an ffmpeg cheat sheet below that has some quick and easy usages documented.

Skip ahead to the ffmpeg Cheat Sheet section below if you already have it installed and ready to go.

Installing ffmpeg (macOS / Windows)

On macOS, you’re best off using homebrew. If you don’t have homebrew already, go get that done first. Then:

macOS

brew install ffmpeg

Windows

You can grab a release build in 7-zip archive format from here (recent as of the publish date of this post). Be sure to check the main ffmpeg downloads page for newer builds if you’re reading this page way in the future.

If you don’t have 7-zip, download and install that first to decompress the downloaded ffmpeg release archive.

Now you'll ideally want to update your user PATH to include the path that you've extracted ffmpeg to. Make sure you've moved it to a convenient location first. For example, on my Windows machine, I keep ffmpeg in D:\Tools\ffmpeg.

Open PowerShell: press Windows key + R, type powershell, and hit enter.

To ensure that the path persists whenever you run PowerShell in the future, run:

notepad $profile

This will load a start-up profile for PowerShell in Notepad. If it doesn't exist yet, it'll prompt you to create a new file. Choose yes.

In your profile notepad window, enter the following, replacing D:\Tools\ffmpeg with the path you extracted ffmpeg to on your own machine.

[Environment]::SetEnvironmentVariable("PATH", "$Env:PATH;D:\Tools\ffmpeg")

Close Notepad and save the changes. Close PowerShell, then launch it again. This time if you type ffmpeg in the PowerShell window it'll run, no matter which directory you're in.
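
A quick check that everything is wired up correctly:

ffmpeg -version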

ffmpeg Cheat Sheet

This is a list of my most useful ffmpeg commands for conversion and modification of video and audio files.

  • Convert a video from MP4 to AVI format: ffmpeg -i inputfile.mp4 outputfile.avi
  • Trim a video file (without re-encoding): ffmpeg -ss START_SECONDS -i input.mp4 -t DURATION_SECONDS -c copy output.mp4
  • Convert a WAV audio file to compressed MP3 format: ffmpeg -i input.wav -acodec libmp3lame output.mp3
  • Mux video from input1.mp4 with audio from input2.mp4: ffmpeg -i input1.mp4 -i input2.mp4 -c copy -map 0:0 -map 1:1 -shortest output.mp4
  • Resize or scale a video to 1280×720: ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4
  • Extract audio to MP3 from a video file: ffmpeg -i inputvideo.mp4 -vn -acodec libmp3lame outputaudio.mp3
  • Add a watermark or logo to the top-left of a video (change the overlay parameter for different positions): ffmpeg -i inputvideo.mp4 -i logo.png -filter_complex "overlay=5:5" -codec:a copy outputvideo.mp4
  • Convert a video to GIF: ffmpeg -i inputvideo.mp4 output.gif
  • Change the frame rate of a video: ffmpeg -i inputvideo.mp4 -filter:v 'fps=fps=15' -codec:a copy outputvideo.mp4
ffmpeg cheat sheet example - logo overlay.
Yes, that is a banana logo overlaid onto this video using ffmpeg. What of it?

This list doesn't even scratch the surface of ffmpeg's capabilities. If you dig deeper you'll find commands and parameters for just about every audio and video modification process you need.
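
As a worked example of the oversized meeting recording scenario mentioned earlier, re-encoding with H.264 at a higher CRF value usually shrinks a screen capture dramatically (filenames here are placeholders; raise crf, roughly in the 18–28 range, for smaller files at lower quality):

ffmpeg -i meeting-recording.mkv -c:v libx264 -crf 28 -preset medium -c:a aac -b:a 96k meeting-recording-small.mp4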

Remember, while it's often easy to find a free conversion tool online, there'll always be a catch or risk to using these. Whether it's being subjected to unnecessary advertising, catching potential malware, or being tracked with third-party cookies, you always take risks using free online tools. I guarantee almost every single free conversion website is using ffmpeg on its backend.

So do yourself a favour and practise using CLI tools to do things yourself.

Saga Pattern with aws-cdk, Lambda, and Step Functions


The saga pattern is useful when you have transactions that require a bunch of steps to complete successfully, with failure of steps requiring associated rollback processes to run. This post will cover the saga pattern with aws-cdk, leveraging AWS Step Functions and Lambda.

If you need an introduction to the saga pattern in an easy to understand format, I found this GOTO conference session by Caitie McCaffrey very informative.

Another useful resource with regard to the saga pattern and AWS Step Functions is this post over at theburningmonk.com.

Saga Pattern with aws-cdk

I’ll be taking things one step further by automating the setup and deployment of a sample app which uses the saga pattern with aws-cdk.

I've started using aws-cdk fairly frequently, but realise it has the issue of vendor lock-in. I found it nice to work with in the case of Step Functions, particularly in the way you construct step chains.

Saga Pattern with Step Functions

So here is the step function state machine you’ll create using the fairly simple saga pattern aws-cdk app I’ve set up.

A successful transaction run

Above you see a successful transaction run, where all records are saved to a DynamoDB table entry.

The sample data written by a successful transaction run. Each step has a 'Sample' map entry with 'Data' and a timestamp.

If one of those steps were to fail, you need to manage the rollback process of your transaction from that step backwards.

Illustrating Failure Rollback

As mentioned above, with the saga pattern you’ll want to rollback any steps that have run from the point of failure backward.

The example app has three steps:

  • Process Records
  • Transform Records
  • Commit Records

Each step is a simple lambda function that writes some data to a DynamoDB table with a primary partition key of TransactionId.
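
Once deployed, you can inspect a transaction's entry with the AWS CLI (the table name below is a placeholder; grab the generated name from the DynamoDB console, and note I'm assuming the key is stored as a string):

aws dynamodb get-item \
  --table-name YourSagaExampleTable \
  --key '{"TransactionId": {"S": "1"}}'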

As an example, TransformRecords can have a simulated failure, which causes the lambda function to throw an error.

A catch step is linked to each of the process steps to handle rollback for each of them. Here, TransformRecordsRollbackTask is run when TransformRecordsTask fails.

The rollback steps cascade backward to the first ‘business logic’ step ProcessRecordsTask. Any steps that have run up to that point will therefore have their associated rollback tasks run.

Here is what an entry looks like in DynamoDB if it failed:

A failed transaction has no written data, because the data written up to the point of failure was ‘rolled back’.

You’ll notice this one does not have the ‘Sample’ data that you see in the previously shown successful transaction. In reality, for a brief moment it does have that sample data. As each rollback step is run, the associated data for that step is removed from the table entry, resulting in the above entry for TransactionId 18.

Deploying the Sample Saga Pattern App with aws-cdk

Clone the source code for the saga pattern aws-cdk app here.

You'll need to run npm install and compile the TypeScript first. From the root of the project:

npm install && npm run build

Now you can deploy using aws-cdk.

# Check what you'll deploy / modify first with a diff
cdk diff
# Deploy
cdk deploy

With the stack deployed, you’ll now have the following resources:

  • Step Function / State Machine
  • Various Lambda functions for transaction start, finish, the process steps, and each process rollback step.
  • A DynamoDB table for the data
  • IAM role(s) created for the above

Testing the Saga Pattern Sample App

To test, head over to the Step Functions AWS Console and navigate to the newly created SagaStateMachineExample state machine.

Click New Execution, and paste the following for the input:

{
  "Payload": {
    "TransactionDetails": {
      "TransactionId": "1"
    }
  }
}

Click Start Execution.

In a few short moments, you should have a successful execution and you should see your transaction and sample data in DynamoDB.
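
If you prefer the CLI to the console, the equivalent execution start looks something like this (substitute the actual ARN of your deployed state machine):

aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:SagaStateMachineExample \
  --input '{"Payload": {"TransactionDetails": {"TransactionId": "1"}}}'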

Moving on, to simulate a random failure, try executing again, but this time with the following payload:

{
  "Payload": {
    "TransactionDetails": {
      "TransactionId": "2",
      "simulateFail": true
    }
  }
}

The lambda functions check the payload input for the simulateFail flag, and if it's found will do a Math.random() check to give a chance of failure in one of the process steps.

Taking it Further

To take this example further, you’ll want to more carefully manage step outputs using Step Function ResultPath configuration. This will ensure that your steps don’t overwrite data in the state machine and that steps further down the line have access to the data that they need.

You’ll probably also want a step at the end of the line for the case of failure (which runs after all rollback steps have completed). This can handle notifications or other tasks that should run if a transaction fails.

Minimal Cost Web Hosting With Spot, Graviton2, EFS, Traefik, & Let’s Encrypt


I'm constantly searching for minimal cost web hosting solutions. To clarify that statement, I mean 'dynamic' websites, not static. At the moment I am running this blog and a bunch of others on a Raspberry Pi Kubernetes cluster at home. I got to thinking though: what happens if I need to move? I'll have an inevitable period of downtime. Clearly self-hosting from home has its drawbacks.

I've run my personal dynamic websites from AWS before (EC2 with a single Docker instance), but used an application load balancer (ALB) to help route traffic to different hostnames. The load balancer itself adds a large chunk of cost, and storage was EBS, which is a little more difficult to manage when automating host provisioning.

A Minimal Cost Web Hosting Infrastructure in AWS

I wanted to find something that minimises costs in AWS. My goal was to go as cheap as possible. I’ve arrived at the following solution, which saves on costs for networking, compute, and storage.

Minimal cost web hosting infrastructure diagram
  • AWS spot EC2 single instance running on AWS Graviton2 (ARM).
  • EFS storage for persistence (a requirement is that containers have persistence, as I use WordPress and require MySQL etc…)
  • Elastic IP address
  • Simple Lambda Function that manages auto-attachment of a static, Elastic IP (EIP) to the single EC2 instance. (In case the spot instance is terminated due to demand/price changes for example).
  • Traefik v2 for reverse proxying of traffic hitting the single EC2 instance to containers. This allows for multiple websites / hosts on a single machine.

It isn’t going to win any high availability awards, but I’m OK with that for my own self-hosted applications and sites.

One important requirement with this solution is the ability to run dynamic sites. I know I could be doing this all a lot easier with S3/CloudFront if I were only hosting static sites.

Using this setup also allows me to easily move workloads between my home Kubernetes cluster and the cloud. This is because the docker images and tags I am using are now compatible between ARM (on Raspberry Pi) and ARM on Graviton2 AWS docker instances.

The choices I have gone with allow me to avoid ‘cloud lock in’, as I can easily switch between the two setups if needed.

Cost Breakdown

I’ve worked out the monthly costs to be roughly as follows:

  • EC2 Graviton2 ARM based instance (t4g.medium), $7.92
  • 3GB EFS Standard Storage, $0.99
  • Lambda – will only invoke when an EC2 instance change occurs, so cost not even worth calculating
  • EIP – free, as it will remain attached to the EC2 instance at all times
Current Spot Instance pricing for t4g.medium instances

If you don’t need 4GB of RAM, you can drop down to a t4g.small instance type for half the cost.

Total monthly running costs should be around $8.91.

Keep in mind that this solution will provide multiple hostname support (multiple domains/sites hosted on the same system), storage persistence, and a pretty quick and responsive ARM based Graviton2 processor.

You will need to use ARM compatible Docker images, but there are plenty out there for all the standard software like MySQL, WordPress, Adminer, etc…

How it Works

The infrastructure diagram above pretty much explains how everything fits together. But at a high level:

  • An Autoscaling Group is created, in mixed mode, but only allows a single spot instance. This EC2 instance uses a standard Amazon Linux 2 ARM based AMI (machine image).
  • When the new instance is created, a Lambda function (subscribed to EC2 lifecycle events) is invoked, locates a designated Elastic IP (EIP), and associates it with the new spot EC2 instance.
  • The EC2 machine mounts the EFS storage on startup, and bootstraps itself with software requirements, a base Traefik configuration, as well as your custom 'dynamic' Traefik configuration that you specify (there's a sketch of the mount command after this list). It then launches the Traefik container.
  • You point your various A records in DNS to the public IP address of the EIP.
  • Now it doesn’t matter if your EC2 spot instance is terminated, you’ll always have the same IP address, and the same EFS storage mounted when the new one starts up.
  • There is the question of ‘what if the spot market goes haywire?’ By default the spot price will be allowed to go all the way up to the on-demand price. This means you could potentially pay more for the EC2 instance, but it is not likely. If it did happen, you could change the instance configuration or choose another instance type.
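
For reference, the EFS mount performed during bootstrap amounts to something like the following (a sketch; fs-12345678 is a placeholder for your file system ID, and the mount point is up to you):

sudo yum install -y amazon-efs-utils
sudo mkdir -p /data
sudo mount -t efs fs-12345678:/ /data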

Deploying the Solution

As this is an AWS opinionated infrastructure choice, I’ve packaged everything into an AWS Cloud Development Kit (AWS CDK) app. AWS CDK is an open source software development framework that allows you to do infrastructure-as-code. I’ve used Typescript as my language of choice.

Clone the source from GitHub

Deploy Requirements

You’ll need the following requirements on your local machine to deploy this for yourself:

  • NodeJS installed, along with npm.
  • AWS CDK installed globally (npm install -g aws-cdk)
  • Define your own traefik_dynamic.toml configuration, and host it somewhere the EC2 instance will be able to grab it with curl. Note that the Traefik dashboard basic auth password is defined using htpasswd:
htpasswd -nb YourUsername YourSuperSecurePasswordGoesHere
  • An existing VPC in your account to use. The CDK app does not create a VPC (additional cost). You can definitely use your default account VPC that is already available in all accounts though.
  • An existing AWS Keypair
  • An existing Elastic IP address (EIP) created, and tagged with the key/value of Usage:Traefik (this is how the Lambda function identifies the right EIP to associate to the EC2 instance when it starts); see the tagging example after this list
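
If you'd rather tag the EIP from the CLI than the console (the allocation ID below is a placeholder for your own):

aws ec2 create-tags --resources eipalloc-0123456789abcdef0 --tags Key=Usage,Value=Traefik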

I haven’t set up the CDK app to pass in parameters, so you’ll just need to modify a bunch of variables at the top of aws-docker-web-with-traefik-stack.ts to substitute your specific values for the aforementioned items. For example:

const vpcId = "your-vpc-id";
const instanceType = "t4g.medium"; // t4g.small for even more cost saving
const keypairName = "your-existing-keypair-name";
const managementLocationCidr = "1.1.1.1/32"; // your home / management network address that SSH access will be allowed from. Change this!
const traefikDynContentUrl = "https://gist.githubusercontent.com/Shogan/f96a5a20183e672f9c49f278ea67503b/raw/351c52b7f2bacbf7b8dae65404b61ff4e4313d81/example-traefik-dynamic.toml"; // this should point to your own dynamic traefik config in toml format.
const emailForLetsEncryptAcmeResolver = 'email = "youremail@example.com"'; // update this to your own email address for lets encrypt certs
const efsAutomaticBackups = false; // set to true to enable automatic backups for EFS

Build and Deploy

Build the Typescript project using npm run build. This compiles the CDK and the EIP Manager Lambda function typescript.

At this point you’re ready to deploy with CDK.

If you have not used CDK before, all you really need to know is that it takes the infrastructure described by the code (TypeScript in this case) and converts it to CloudFormation language. The cdk deploy command deploys the stack (which is the collection of AWS resources defined in code).

Run:

# Check what changes will be made first
cdk diff

# Deploy
cdk deploy

Testing a Sample Application Stack

Here is a sample docker-compose stack that will install MySQL, Adminer, and a simple WordPress setup.

SSH onto the EC2 instance that is provisioned, and use docker-compose up -d to deploy the example compose stack. Just remember to edit and change the template passwords in the two environment variables.

You’ll also need to update the hostnames to your own (from .example.com), and point those A records to your Elastic public IP address.

One more thing: there is a trick to running docker-compose on ARM systems. I personally prefer to grab a Docker image that contains a pre-built docker-compose binary, and a shell script that ties it together with the docker-compose command. Here are the steps if you need them (run on the EC2 instance that you SSH onto):

sudo curl -L --fail https://raw.githubusercontent.com/linuxserver/docker-docker-compose/master/run.sh -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

For your own peace of mind, make sure you inspect that githubusercontent run.sh script yourself before downloading, as well as the docker image(s) it references and pulls down to run docker-compose.

Tear Down

To destroy the stack, simply issue the cdk destroy command. The EFS storage is marked by default with a retain policy, so it will not be deleted automatically.

cdk destroy AwsDockerWebWithTraefikStack

Closing

If you’re on the look out for a minimal cost web hosting solution, then give this a try.

The stack uses the new Graviton2 based t4g instance type (ARM) to help achieve a minimal cost web hosting setup. Remember to find compatible ARM docker images for your applications before you go all in with something like this.

The t4g instance family is also a ‘burstable’ type. This means you’ll get great performance as long as you don’t use up your burst credits. Performance will slow right down if that happens. Keep an eye on your burst credit balance with CloudWatch. For 99% of use cases you’ll likely be just fine though.
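
You can pull the credit balance from the CLI too (the instance ID and time window below are placeholders):

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2021-03-01T00:00:00Z \
  --end-time 2021-03-02T00:00:00Z \
  --period 3600 \
  --statistics Average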

Also remember that you don't need to stick to AWS. You could bolt together services from any other cloud provider to do something similar, most likely at a similar cost too.