Saga Pattern with aws-cdk, Lambda, and Step Functions


The saga pattern is useful when you have transactions that require multiple steps to complete successfully, where a failed step requires the associated rollback processes to run. This post will cover the saga pattern with aws-cdk, leveraging AWS Step Functions and Lambda.

If you need an introduction to the saga pattern in an easy to understand format, I found this GOTO conference session by Caitie McCaffrey very informative.

Another useful resource with regard to the saga pattern and AWS Step Functions is this post over at theburningmonk.com.

Saga Pattern with aws-cdk

I’ll be taking things one step further by automating the setup and deployment of a sample app which uses the saga pattern with aws-cdk.

I’ve started using aws-cdk fairly frequently, though I realise it comes with a degree of vendor lock-in. For Step Functions in particular I found it nice to work with, especially the way you construct step chains.

Saga Pattern with Step Functions

Here is the Step Functions state machine you’ll create using the fairly simple saga pattern aws-cdk app I’ve set up.

saga pattern with aws-cdk - a successful transaction run
A successful transaction run

Above you see a successful transaction run, where all records are saved to a DynamoDB table entry.

dynamodb data from sample app using saga pattern with aws-cdk
The sample data written by a successful transaction run. Each step has a ‘Sample’ map entry with ‘Data’ and a timestamp.

If one of those steps were to fail, you need to manage the rollback process of your transaction from that step backwards.

Illustrating Failure Rollback

As mentioned above, with the saga pattern you’ll want to rollback any steps that have run from the point of failure backward.

The example app has three steps:

  • Process Records
  • Transform Records
  • Commit Records

Each step is a simple lambda function that writes some data to a DynamoDB table with a primary partition key of TransactionId.

In the screenshot below, TransformRecords has a simulated failure, which causes the lambda function to throw an error.

A catch step is linked to each of the process steps to handle rollback for each of them. Above, TransformRecordsRollbackTask is run when TransformRecordsTask fails.

The rollback steps cascade backward to the first ‘business logic’ step ProcessRecordsTask. Any steps that have run up to that point will therefore have their associated rollback tasks run.
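To give an idea of how this chain and its catch/rollback wiring can be expressed with aws-cdk, here is a minimal TypeScript sketch. It assumes the Lambda functions already exist, uses current aws-cdk-lib (v2) module paths, and the construct and task names are illustrative rather than copied from the sample repository.

// saga-chain-sketch.ts - a minimal sketch, not the exact code from the sample repo.
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

export function buildSagaStateMachine(
  scope: Construct,
  fns: {
    process: lambda.IFunction; processRollback: lambda.IFunction;
    transform: lambda.IFunction; transformRollback: lambda.IFunction;
    commit: lambda.IFunction; commitRollback: lambda.IFunction;
  },
): sfn.StateMachine {
  // Helper to create a Lambda-backed task state.
  const invoke = (id: string, fn: lambda.IFunction) =>
    new tasks.LambdaInvoke(scope, id, { lambdaFunction: fn });

  // Rollback tasks cascade backwards: Commit -> Transform -> Process.
  const processRollback = invoke('ProcessRecordsRollbackTask', fns.processRollback);
  const transformRollback = invoke('TransformRecordsRollbackTask', fns.transformRollback)
    .next(processRollback);
  const commitRollback = invoke('CommitRecordsRollbackTask', fns.commitRollback)
    .next(transformRollback);

  // Each 'business logic' task catches errors and hands off to its rollback chain.
  const definition = invoke('ProcessRecordsTask', fns.process)
    .addCatch(processRollback, { resultPath: '$.error' })
    .next(invoke('TransformRecordsTask', fns.transform)
      .addCatch(transformRollback, { resultPath: '$.error' }))
    .next(invoke('CommitRecordsTask', fns.commit)
      .addCatch(commitRollback, { resultPath: '$.error' }));

  return new sfn.StateMachine(scope, 'SagaStateMachineExample', { definition });
}

A failure in any task transitions into that task’s rollback task, which then walks back through the earlier rollback tasks.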

Here is what an entry looks like in DynamoDB if it failed:

A failed transaction has no written data, because the data written up to the point of failure was ‘rolled back’.

You’ll notice this one does not have the ‘Sample’ data that you see in the previously shown successful transaction. In reality, for a brief moment it does have that sample data. As each rollback step is run, the associated data for that step is removed from the table entry, resulting in the above entry for TransactionId 18.
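As a rough idea of what a rollback handler could look like, here is a short sketch that strips a step’s data back out of the DynamoDB item using an UpdateExpression with REMOVE. The table name, attribute name, and event shape here are assumptions for illustration and won’t necessarily match the sample app.

// rollback-handler-sketch.ts - illustrative only, not the exact handler from the repo.
import { DynamoDB } from 'aws-sdk';

const documentClient = new DynamoDB.DocumentClient();
const TABLE_NAME = process.env.TABLE_NAME || 'SagaTransactions'; // assumed table name

export const handler = async (event: any) => {
  const transactionId = event.Payload.TransactionDetails.TransactionId;

  // REMOVE deletes just this step's map entry, leaving the rest of the item intact.
  await documentClient.update({
    TableName: TABLE_NAME,
    Key: { TransactionId: transactionId },
    UpdateExpression: 'REMOVE TransformRecordsSample', // assumed attribute name
  }).promise();

  return { TransactionDetails: { TransactionId: transactionId } };
};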

Deploying the Sample Saga Pattern App with aws-cdk

Clone the source code for the saga pattern aws-cdk app here.

You’ll need to install dependencies and compile the TypeScript first. From the root of the project:

npm install && npm run build

Now you can deploy using aws-cdk.

# Check what you'll deploy / modify first with a diff
cdk diff
# Deploy
cdk deploy

With the stack deployed, you’ll now have the following resources:

  • Step Function / State Machine
  • Various Lambda functions for transaction start, finish, the process steps, and each process rollback step.
  • A DynamoDB table for the data
  • IAM role(s) created for the above

Testing the Saga Pattern Sample App

To test, head over to the Step Functions AWS Console and navigate to the newly created SagaStateMachineExample state machine.

Click New Execution, and paste the following for the input:

{
    "Payload": {
      "TransactionDetails": {
        "TransactionId": "1"
      }
    }
}

Click Start Execution.

In a few short moments, you should have a successful execution and you should see your transaction and sample data in DynamoDB.

Moving on, to simulate a random failure, try executing again, but this time with the following payload:

{
    "Payload": {
      "TransactionDetails": {
        "TransactionId": "2",
        "simulateFail": true
      }
    }
}

The lambda functions check the payload input for the simulateFail flag, and if it is set they use a Math.random() check to give a chance of failure in one of the process steps.
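For reference, that check might look something like the sketch below. This is a rough illustration of the behaviour described, not the exact code from the repository.

// process-step-sketch.ts - rough illustration of the simulateFail behaviour.
export const handler = async (event: any) => {
  const details = event.Payload.TransactionDetails;

  // If the simulateFail flag is set, fail roughly half of the time.
  if (details.simulateFail && Math.random() < 0.5) {
    throw new Error(`Simulated failure for transaction ${details.TransactionId}`);
  }

  // ... otherwise write this step's 'Sample' data to the DynamoDB table ...
  return event;
};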

Taking it Further

To take this example further, you’ll want to more carefully manage step outputs using Step Function ResultPath configuration. This will ensure that your steps don’t overwrite data in the state machine and that steps further down the line have access to the data that they need.
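For example, on a LambdaInvoke task you can point resultPath at a dedicated key so the step’s output is stored alongside the original input rather than replacing it. Here is a sketch using aws-cdk-lib; the key and function names are illustrative.

// resultpath-sketch.ts - illustrative resultPath configuration on a task.
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

export function processRecordsTask(scope: Construct, fn: lambda.IFunction) {
  return new tasks.LambdaInvoke(scope, 'ProcessRecordsTask', {
    lambdaFunction: fn,
    // Strip the Lambda invoke metadata and keep only the function's payload.
    payloadResponseOnly: true,
    // Store this step's result at $.ProcessResult so the original
    // $.Payload.TransactionDetails input stays available to later steps.
    resultPath: '$.ProcessResult',
  });
}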

You’ll probably also want a final step on the failure path (which runs after all rollback steps have completed). This can handle notifications or other tasks that should run when a transaction fails.
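A sketch of what that could look like in aws-cdk: chain a notification task onto the end of the rollback path and finish with an explicit Fail state so the execution is still marked as failed. The function and state names here are assumptions.

// failure-tail-sketch.ts - illustrative failure-path ending for the state machine.
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

export function failureTail(scope: Construct, notifyFn: lambda.IFunction): sfn.IChainable {
  // A Lambda that sends the failure notification (SNS, email, Slack, etc.).
  const notifyFailure = new tasks.LambdaInvoke(scope, 'NotifyTransactionFailed', {
    lambdaFunction: notifyFn,
  });

  // Chain this after the last rollback task, e.g. processRollback.next(failureTail(...)).
  return notifyFailure.next(new sfn.Fail(scope, 'TransactionFailed', {
    cause: 'A transaction step failed and its changes were rolled back',
  }));
}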

Using Sinon stub to Replace External Service Calls in Tests

When writing tests for a service, you’ll often find there are other dependent services that need to be called in some way. You won’t want to actually invoke the real service during a test, but rather ‘stub out’ the dependent service call. Sinon stub provides an easy and customisable way to replace these external service calls in your tests.

Practical Example: AWS SQS sendMessage Sinon Stub

Let’s say you have an AWS Lambda function that drops a message onto an SQS queue. To test this function handler, your test should invoke the handler and verify that the message was sent.

This simple case already involves an external service call – the SQS sendMessage action that will drop the message onto the queue.

Here is a simple NodeJS module that wraps the SQS sendMessage call.

// sqs.ts

import AWS = require("aws-sdk");
import { AWSError } from "aws-sdk";
import { SendMessageRequest, SendMessageResult } from "aws-sdk/clients/sqs";
import { PromiseResult } from "aws-sdk/lib/request";

const sqs = new AWS.SQS({apiVersion: '2012-11-05'});

export function sendMessage(messageBody: string, queueUrl: string) : Promise<PromiseResult<SendMessageResult, AWSError>> {

  const params: SendMessageRequest = {
    QueueUrl: queueUrl,
    MessageBody: messageBody,
  };

  return sqs
    .sendMessage(params)
    .promise();
}

The actual Lambda Handler code that uses the sqs.ts module above looks like this:

// index.ts

import { sendMessage } from './sqs';
import { Context } from 'aws-lambda';

export const handler = async (event: any, context?: Context) => {

    try {
        const queueUrl = process.env.SQS_QUEUE_URL || "https://sqs.eu-west-2.amazonaws.com/0123456789012/test-stub-example";
        const sendMessageResult = await sendMessage(JSON.stringify({foo: "bar"}), queueUrl);
        return `Sent message with ID: ${sendMessageResult.MessageId}`;
    } catch (err) {
        console.log("Error", err);
        throw err;
    }
}

Next you’ll create a Sinon stub to ‘stub out’ the sendMessage function of this module (the actual code that the real AWS Lambda function would call).

Set up an empty test case that calls the Lambda handler function and checks the result.

// handler.spec.ts

import * as chai from 'chai';
import * as sinon from "sinon";
import { assert } from "sinon";

import * as sqs from '../src/sqs';
import { handler } from '../src/index';
import sinonChai from "sinon-chai";
import { PromiseResult } from 'aws-sdk/lib/request';
import { SendMessageResult } from 'aws-sdk/clients/sqs';
import { Response } from 'aws-sdk';
import { AWSError } from 'aws-sdk';

const expect = chai.expect;
chai.use(sinonChai);

const event = {
  test: "test"
};

describe("lambda-example-sqs-handler", () => {
  describe("handler", () => {

    it("should send an sqs message and return the message ID", async () => {

      // WHEN

      process.env.SQS_QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/test-queue";
      const result = await handler(event);
      
      // THEN

      expect(result).to.exist;
      expect(result).to.eql(`Sent message with ID: 123`);
    });
  });
});

Right now, running this test will fail because the test code calls the sqs.ts module, which in turn calls the real SQS service’s sendMessage.

Here is where Sinon stub will come in handy. You can replace this specific call that sqs.ts makes with a test stub.

In the describe handler section, add the following just before the ‘it‘ section.

const sendMessageStub = sinon.stub(sqs, "sendMessage");

let stubResponse : PromiseResult<SendMessageResult, AWSError> = {
  $response: new Response<SendMessageResult, AWSError>(),
  MD5OfMessageBody: '828bcef8763c1bc616e25a06be4b90ff',
  MessageId: '123',
};

sendMessageStub.resolves(stubResponse);

The code above calls sinon.stub() and passes in the sqs module object, as well as a string (“sendMessage” in this case) identifying the specific method in the module that should be stubbed.

An optional promise result can be passed to resolves() to get the stub to return data for the test. In this case, we’re having it return an object that matches the real SQS sendMessage return result. Among other things, this contains a message ID which the Lambda function includes in its response.

Add an assertion to verify that the stub method was called.

assert.calledOnce(sendMessageStub);

If you run the test again it should now pass. The stub replaces the real service call. Nice!

sinon stub test result
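If you want to go a little further, you can also assert on the arguments the stub received. The expected values below mirror the handler code above (the JSON message body and the queue URL set in the test).

// Verify the stub was called with the expected message body and queue URL.
assert.calledWith(
  sendMessageStub,
  JSON.stringify({ foo: "bar" }),
  "https://sqs.eu-west-1.amazonaws.com/123456789012/test-queue"
);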

Conclusion

Replacing dependent service function calls with stubs can be helpful in many ways. For example:

  • Preventing wasteful, real service calls, which could result in unwanted test data, logs, costs, etc…
  • Faster test runs that don’t rely on network calls.
  • Exercising only the relevant code you’re interested in testing.

Sinon provides a testing-framework-agnostic set of tools such as test spies, stubs and mocks for JavaScript. In this case, you’ve seen how it can be leveraged to make testing interconnected AWS services a breeze, with a Lambda function that calls SQS sendMessage to drop a message onto a queue.

Feel free to Download the source code for this post’s example.

AWS CodeBuild local with Docker

AWS has a handy post that shows you how to run CodeBuild locally with Docker here.

Having a local CodeBuild environment available can be extremely useful. You can very quickly test your buildspec.yml files and build pipelines without having to push changes up to a remote repository or incur AWS charges by running pipelines in the cloud.

I found a few extra useful bits and pieces whilst running a local CodeBuild setup myself and thought I would document them here, along with a summarised list of steps to get CodeBuild running locally yourself.

Get CodeBuild running locally

Start by cloning the CodeBuild Docker git repository.

git clone https://github.com/aws/aws-codebuild-docker-images.git

Now, locate the Dockerfile for the CodeBuild image you are interested in using. I wanted to use the ubuntu standard 3.0 image. i.e. ubuntu/standard/3.0/Dockerfile.

Edit the Dockerfile to remove the ENTRYPOINT directive at the end.

# Remove this -> ENTRYPOINT ["dockerd-entrypoint.sh"]

Now run a docker build in the relevant directory.

docker build -t aws/codebuild/standard:3.0 .

The image will take a while to build and once done will of course be available to run locally.

Now grab a copy of this codebuild_build.sh script and make it executable.

curl -O https://gist.githubusercontent.com/Shogan/05b38bce21941fd3a4eaf48a691e42af/raw/da96f71dc717eea8ba0b2ad6f97600ee93cc84e9/codebuild_build.sh
chmod +x ./codebuild_build.sh

Place the shell script in your local project directory (alongside your buildspec.yml file).

Now it’s as easy as running this shell script with a few parameters to get your build going locally. Just use the -i option to specify the local docker CodeBuild image you want to run.

./codebuild_build.sh -c -i aws/codebuild/standard:3.0 -a output

The following options are the ones I found most useful:

  • -c – passes in AWS configuration and credentials from the local host. Super useful if your buildspec.yml needs access to your AWS resources (most likely it will).
  • -b – use a buildspec.yml file located elsewhere. By default the script will look for buildspec.yml in the current directory. Override with this option.
  • -e – specify a file of environment variable mappings to pass in (see the example just after this list).
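If I recall correctly, the file passed with -e is just a list of KEY=value pairs, one per line (it ends up being handed to the build container as an env file). The variable names below are purely examples:

# local.env - example environment variables to pass in with -e
MY_API_BASE_URL=https://api.example.com
DEPLOY_ENVIRONMENT=local-test

Then reference it when running the script:

./codebuild_build.sh -c -b ./buildspec.yml -e ./local.env -i aws/codebuild/standard:3.0 -a output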

Testing it out

Here is a really simple buildspec.yml if you want to test this out quickly and don’t have your own handy. Save the below YAML as simple-buildspec.yml.

version: 0.2

phases:
  install:
    runtime-versions:
      java: openjdk11
    commands:
      - echo This is a test.
  pre_build:
    commands:
      - echo This is the pre_build step
  build:
    commands:
      - echo This is the build step
  post_build:
    commands:
      - bash -c "if [ \"$CODEBUILD_BUILD_SUCCEEDING\" == \"0\" ]; then exit 1; fi"
      - echo This is the post_build step
artifacts:
  files:
    - '**/*'
  base-directory: './'

Now just run:

./codebuild_build.sh -b simple-buildspec.yml -c -i aws/codebuild/standard:3.0 -a output

You should see the script start up the docker container from your local image and ‘CodeBuild’ will start executing your buildspec steps. If all goes well you’ll get an exit code of 0 at the end.

aws codebuild test run output from a local Docker container.

Good job!

This post contributes to my effort towards 100DaysToOffload.

Definitive guide to using Weave Net CNI on AWS EKS

Looking to install the Weave Net CNI on AWS EKS / Kubernetes and remove the AWS CNI? Look no further. This guide will detail and demonstrate the process.

What this guide will cover

  • Removing AWS CNI plugin
  • Installing the Weave Net CNI on AWS EKS
  • Making sure your EC2 instances will work with Weave
  • Customising Weave Net CNI including custom pod overlay network ranges
  • Removing max-pods limit on your EKS worker nodes
  • Reconfiguring pods that don’t work after switching to Weave. (E.g. those that need to talk back to the EKS master nodes that do not get the Weave overlay network)

Want the Terraform source and test scripts to jump right in?

GitHub Terraform and test environment source

Otherwise, read on for step-by-step and more information…

There are a few guides floating around that cover installing the Weave Net CNI plugin on Amazon Kubernetes clusters (EKS), however I’ve not seen them go into much depth.

Most tend to skip over some important steps and details when it comes to configuring weave and getting the pod networking functioning correctly.

There are also some important caveats that you should be aware of when replacing the AWS CNI Plugin with a different CNI, whether it be Weave, Calico, or any other.

Replacing CNI functionality

You should be 100% happy with what you’ll lose if you completely replace the AWS CNI with another CNI. The AWS CNI has some very useful functionality such as:

  • Assigning IP addresses (via ENIs) to place pods directly into your VPC network
  • VPC flow logs that make sense

However, depending on your architecture and design decisions, as well as potential VPC network limitations, you may wish to opt out of the CNI that Amazon provides and instead use a different CNI that provides an overlay network with other functionality.

AWS CNI Limitations

One of the problems I have seen in VPCs is limited CIDR ranges, and therefore subnets that are carved up into smaller numbers of IP addresses.

The Amazon AWS CNI plugin is very IP address hungry and attaches multiple Secondary Private IP addresses to EKS worker nodes (EC2 instances) to provide pods in your cluster with directly assigned IPs.

This means that you can easily exhaust subnet IP addresses with just a few EKS worker nodes running.

This limitation also means that those who want high densities of pods running on worker nodes are in for a surprise. In these scenarios the IP address limit becomes an issue for the maximum number of pods well before compute capacity does.

This page shows the maximum number of ENIs and Secondary IP addresses that can be used per EC2 instance: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt

Removing the AWS CNI plugin

Note: This process will involve you needing to replace your existing EKS worker nodes (if any) in the cluster after installing the Weave Net CNI.

Assuming you have a connection to your cluster already, the first thing to do is to remove the AWS CNI.

kubectl -n=kube-system delete daemonset aws-node

With that gone, your future EKS workers will no longer assign multiple Secondary IP addresses from your VPC subnets.

Installing CNI Genie

With the AWS CNI plugin removed, your pods won’t be able to get a network connection when starting up from this point onward.

Installing a basic deployment of CNI Genie is a quick way to get automatic CNI selection working for containers that start from this point on.

CNI genie has tons of other great features like allowing you to customise which CNI containers use when starting up and more.

For now, you’re just using it to allow containers to start-up and use the Weave Net overlay network by default.

Install CNI Genie. This manifest works with Kubernetes 1.12, 1.13, and 1.14 on EKS.

kubectl apply -f https://raw.githubusercontent.com/Shogan/terraform-eks-with-weave/master/src/weave/genie-plugin.yaml

Installing Weave

Before continuing, you should ensure your EC2 instances have source/destination network checking disabled.

Make this change in the userdata script that your instances run when starting from their autoscale groups.

# The instance ID and region are both available from the EC2 instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION_ID=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | grep -Po "(us|ca|ap|eu|sa)-(north|south)?(east|west|central)-[0-9]+")
aws ec2 modify-instance-attribute --instance-id $INSTANCE_ID --no-source-dest-check --region $REGION_ID

On to installing Weave Net CNI on AWS EKS…

Next, get a Weave Net CNI yaml manifest file. Decide what overlay network IP Range you are going to be using and fill it in for the env.IPALLOC_RANGE query string parameter value in the code block below before making the curl request.

curl --location -o ./weave-cni.yaml "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=192.168.0.0/16"

Note: the env.IPALLOC_RANGE query string param added is to specify you want a config with a custom CIDR range. This should be chosen specifically not to overlap with any network ranges shared with the VPC you’ll be deploying into.

In the example above I had a VPC and VPC peers that shared the CIDR block 10.0.0.0/8. Therefore I chose to use 192.168.0.0/16 for the Weave overlay network.

You should be aware of the network ranges you’re using and plan this out appropriately.

The config you now have as weave-cni.yaml will contain the environment variable IPALLOC_RANGE with the correct value that the Weave pods will use to set up networking on the EKS worker nodes.

Apply the Weave Net CNI resources:

Note: This manifest is pre-created to use an overlay network range of 192.168.0.0/16

kubectl apply -f https://raw.githubusercontent.com/Shogan/terraform-eks-with-weave/master/src/weave/weave-cni.yaml

Note: Don’t expect things to change suddenly. The current EKS worker nodes will need to be rotated out (e.g. drain, terminate, wait for new to appear) in order for the IP addresses that the AWS CNI has kept warm/allocated to be released.

If you have any existing EKS workers running, drain them now and terminate/replace them with new workers that include the source/destination check change made previously.

kubectl get nodes
kubectl drain nodename --ignore-daemonsets

Remove max pod limits on nodes:

Your worker nodes by default have a limit set on how many pods they can schedule. The EKS AMI sets this based on the EC2 instance type, reflecting the usual ENI / IP address limitations that apply with the AWS CNI.

Check your max pod limits with:

kubectl get nodes -o yaml | grep pods

If you’re using the standard EKS optimized AMI (or a derivative of it) then you can simply pass an option to the bootstrap.sh script included in the image, which sets up the kubelet and joins the cluster. Set --use-max-pods false as an argument to the script.

For example, your autoscale group launch configuration might get the EC2 worker nodes to join the cluster using the bootstrap.sh script. You can update it like so:

/etc/eks/bootstrap.sh --b64-cluster-ca 'YOUR_BASE64_CLUSTER_CA_DATA_HERE' --apiserver-endpoint 'https://YOUR_EKS_CLUSTER_ENDPOINT_HERE' --use-max-pods false --kubelet-extra-args '' 'YOUR_CLUSTER_NAME_HERE'

If you’re using the EKS Terraform module you can simply pass in bootstrap-extra-args, which will automatically set up your worker node userdata templates with extra bootstrap arguments for the kubelet. See the example here.

Checking the max-pods limit again after applying this change, you should see that the previous limit (based on the AWS CNI max pods for your instance type) has been removed.

You’re almost running Weave Net CNI on AWS EKS, but first you need to roll out new worker nodes.

With the Weave Net CNI installed, the kubelet service updated and your EC2 source/destination checks disabled, you can rotate out your old EKS worker nodes, replacing them with the new nodes.

kubectl drain node --ignore-daemonsets

Once the new nodes come up and start scheduling pods, if everything went to plan you should see that new pods are using the Weave overlay network (e.g. addresses from 192.168.0.0/16).
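A quick way to check is to list pods along with their IP addresses:

kubectl get pods --all-namespaces -o wide

Pods on the overlay should show 192.168.x.x addresses in the IP column. Pods that use the host network (such as the weave-net daemonset pods themselves) will still show node IP addresses, which is expected.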

A quick run-down on weave IP addresses and routes

If you get a shell to a worker node running the weave overlay network and do a listing of routes, you might see something like the following:

# ip route show
default via 10.254.109.129 dev eth0
10.254.109.128/26 dev eth0 proto kernel scope link src 10.254.109.133
169.254.169.254 dev eth0
192.168.0.0/16 dev weave proto kernel scope link src 192.168.192.0 

This routing table shows two main interfaces in use: one from the host (EC2) instance itself, eth0, and one from Weave, called weave.

When network packets are destined for the 10.254.109.128/26 address space, then traffic is routed down eth0.

If traffic on the host is destined for any address on 192.168.0.0/16, it will instead route via the weave interface ‘weave’ and the weave system will handle routing that traffic appropriately.

Otherwise, if the traffic is destined for a public IP address out on the wider internet, it goes down the default route via eth0, which in this case points at the VPC subnet’s default gateway, 10.254.109.129.

Finally, metadata URL traffic for 169.254.169.254 goes down the main host eth0 interface of course.

Caveats

For the most part everything should work great. Weave will route traffic between its overlay network and your worker node’s host network just fine.

However, some of your custom workloads or kubernetes tools might not like being on the new overlay network. For example they might need to talk to other Kubernetes nodes that do not run weave net.

This is now where the limitation of using a managed Kubernetes offering like EKS becomes a bit of a problem.

You can’t run weave on the Kubernetes master / API servers that are effectively the ‘managed’ control plane that AWS EKS hosts for you.

This means that your weave overlay network does not span the Kubernetes master nodes where the Kubernetes API runs.

If you have an application or container in the weave overlay network and the Kubernetes master node / API needs to talk to it, this won’t work.

One potential solution though is to use hostNetwork: true in your pod specification. However you should of course be aware of how this would affect your application and application security.
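As a rough sketch, that just means setting hostNetwork in the pod spec. The names and image below are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: host-network-example   # illustrative name
spec:
  # Use the node's host (VPC) network instead of the Weave overlay, so the
  # EKS managed control plane can reach this pod directly.
  hostNetwork: true
  containers:
    - name: app
      image: nginx:1.19        # illustrative image
      ports:
        - containerPort: 80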

In my case, I was running metrics-server and it stopped working after it started using Weave. I found out that the Kubernetes API needs to talk to the metrics-server service and of course this won’t work in the overlay network.

Example EKS with Weave Net CNI cluster

You can use the source code I’ve uploaded here.

There are five simple steps to deploy this example EKS cluster in your own account.

  • Modify the example.tfvars file to fit your own parameters.
  • terraform plan -var-file="example.tfvars" -out="example.tfplan"
  • terraform apply "example.tfplan"
  • ./setup-weave.sh
  • ./test-weave.sh

Warning: This will create a new VPC, subnets, NAT Gateway instance, Internet Gateway, EKS Cluster, and set of worker node autoscale groups. So be sure Terraform Destroy this if you’re just testing things out.

– Your wallet

After terraform creates all the resources, you can run the two included shell scripts. setup-weave.sh will remove the AWS CNI, install CNI genie, Weave, and deploy two simple example pods and services.

At this point you should terminate your existing worker nodes (that still use the AWS CNI) and wait for your new worker nodes to join the cluster.

test-weave.sh will wait for the hello-node test pods to become ready, and then execute a curl command inside one, talking to the other via the service and vice versa. If successful, you’ll see an HTTP 200 OK response from each service.