Minimal Cost Web Hosting With Spot, Graviton2, EFS, Traefik, & Let’s Encrypt


I’m constantly searching for minimal cost web hosting solutions. To clarify that statement, I mean ‘dynamic‘ websites, not static. At the moment I am running this blog and a bunch of others on a Raspberry Pi Kubernetes cluster at home. I got to thinking though, what happens if I need to move? I’ll have an inevitable period of downtime. Clearly self-hosting from home has its drawbacks.

I’ve run my personal dynamic websites from AWS before (EC2 with a single Docker instance), but used an Application Load Balancer (ALB) to help with routing traffic to different hostnames. The load balancer itself adds a large chunk of cost, and storage was EBS, which is a little more difficult to manage when automating host provisioning.

A Minimal Cost Web Hosting Infrastructure in AWS

I wanted to find something that minimises costs in AWS. My goal was to go as cheap as possible. I’ve arrived at the following solution, which saves on costs for networking, compute, and storage.

minimal cost web hosting Infrastructure diagram
  • A single spot EC2 instance running on AWS Graviton2 (ARM).
  • EFS storage for persistence (a requirement is that containers have persistence, as I use WordPress and require MySQL etc…)
  • Elastic IP address
  • A simple Lambda function that manages auto-attachment of a static Elastic IP (EIP) to the single EC2 instance (in case the spot instance is terminated due to demand or price changes, for example).
  • Traefik v2 to reverse proxy traffic hitting the single EC2 instance to containers. This allows for multiple websites / hosts on a single machine.

It isn’t going to win any high availability awards, but I’m OK with that for my own self-hosted applications and sites.

One important requirement with this solution is the ability to run dynamic sites. I know this would all be a lot easier with S3/CloudFront if I were only hosting static sites.

Using this setup also allows me to easily move workloads between my home Kubernetes cluster and the cloud. This is because the Docker images and tags I am using are compatible with both ARM on Raspberry Pi and ARM on Graviton2 EC2 instances running Docker.

The choices I have gone with allow me to avoid ‘cloud lock-in’, as I can easily switch between the two setups if needed.

Cost Breakdown

I’ve worked out the monthly costs to be roughly as follows:

  • EC2 Graviton2 ARM-based instance (t4g.medium), $7.92
  • 3GB EFS Standard Storage, $0.99
  • Lambda – will only be invoked when an EC2 instance change occurs, so the cost is not even worth calculating
  • EIP – free, as it will remain attached to the EC2 instance at all times
minimal cost web hosting solution - spot instance pricing chart
Current Spot Instance pricing for t4g.medium instances

If you don’t need 4GB of RAM, you can drop down to a t4g.small instance type for half the cost.

Total monthly running costs should be around $8.91.

Keep in mind that this solution will provide multiple hostname support (multiple domains/sites hosted on the same system), storage persistence, and a pretty quick and responsive ARM-based Graviton2 processor.

You will need to use ARM compatible Docker images, but there are plenty out there for all the standard software like MySQL, WordPress, Adminer, etc…

How it Works

The infrastructure diagram above pretty much explains how everything fits together. But at a high level:

  • An Auto Scaling group is created in mixed instances mode, but allows only a single spot instance. This EC2 instance uses a standard Amazon Linux 2 ARM-based AMI (machine image).
  • When a new instance is created, a Lambda function (subscribed to EC2 lifecycle events) is invoked, locates a designated Elastic IP (EIP), and associates it with the new spot EC2 instance (roughly what the CLI sketch after this list does).
  • The EC2 machine mounts the EFS storage on startup, and bootstraps itself with software requirements, a base Traefik configuration, as well as your custom ‘dynamic’ Traefik configuration that you specify. It then launches the Traefik container.
  • You point your various A records in DNS to the public IP address of the EIP.
  • Now it doesn’t matter if your EC2 spot instance is terminated; you’ll always have the same IP address, and the same EFS storage mounted when the new one starts up.
  • There is the question of ‘what if the spot market goes haywire?’ By default the spot price will be allowed to go all the way up to the on-demand price. This means you could potentially pay more for the EC2 instance, but it is not likely. If it did happen, you could change the instance configuration or choose another instance type.
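
To give a concrete idea of what the EIP manager Lambda does, here is a rough AWS CLI equivalent (the allocation and instance IDs are placeholders; the real function does this via the AWS SDK in response to the lifecycle event):

# Find the allocation ID of the EIP tagged Usage:Traefik
aws ec2 describe-addresses \
  --filters "Name=tag:Usage,Values=Traefik" \
  --query "Addresses[0].AllocationId" --output text

# Associate that EIP with the newly launched spot instance
aws ec2 associate-address \
  --allocation-id eipalloc-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --allow-reassociation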

Deploying the Solution

As this is an AWS opinionated infrastructure choice, I’ve packaged everything into an AWS Cloud Development Kit (AWS CDK) app. AWS CDK is an open source software development framework for defining your infrastructure as code. I’ve used TypeScript as my language of choice.

Clone the source from GitHub

Deploy Requirements

You’ll need the following requirements on your local machine to deploy this for yourself:

  • NodeJS installed, along with npm.
  • AWS CDK installed globally (npm install -g aws-cdk)
  • Define your own traefik_dynamic.toml configuration, and host it somewhere the EC2 instance will be able to grab it with curl. Note that the Traefik dashboard basic auth password is defined using htpasswd.
htpasswd -nb YourUsername YourSuperSecurePasswordGoesHere
  • An existing VPC in your account to use. The CDK app does not create a VPC (to avoid additional cost). You can use the default VPC that is already available in every account.
  • An existing AWS Keypair
  • An existing Elastic IP address (EIP) created, and tagged with the key/value of Usage:Traefik (this is how the Lambda function identifies the right EIP to associate with the EC2 instance when it starts). See the CLI example below if you need to create one.
Tag requirement for the Elastic IP Address
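
If you don’t already have a suitable EIP, one can be allocated and tagged with the AWS CLI along these lines (the allocation ID in the second command is a placeholder for the one returned by the first):

aws ec2 allocate-address --domain vpc
aws ec2 create-tags --resources eipalloc-0123456789abcdef0 --tags Key=Usage,Value=Traefik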

I haven’t set up the CDK app to pass in parameters, so you’ll just need to modify a bunch of variables at the top of aws-docker-web-with-traefik-stack.ts to substitute your specific values for the aforementioned items. For example:

const vpcId = "your-vpc-id";
const instanceType = "t4g.medium"; // t4g.small for even more cost saving
const keypairName = "your-existing-keypair-name";
const managementLocationCidr = "1.1.1.1/32"; // your home / management network address that SSH access will be allowed from. Change this!
const traefikDynContentUrl = "https://gist.githubusercontent.com/Shogan/f96a5a20183e672f9c49f278ea67503b/raw/351c52b7f2bacbf7b8dae65404b61ff4e4313d81/example-traefik-dynamic.toml"; // this should point to your own dynamic traefik config in toml format.
const emailForLetsEncryptAcmeResolver = 'email = "youremail@example.com"'; // update this to your own email address for lets encrypt certs
const efsAutomaticBackups = false; // set to true to enable automatic backups for EFS

Build and Deploy

Build the TypeScript project using npm run build. This compiles both the CDK app and the EIP manager Lambda function TypeScript.

At this point you’re ready to deploy with CDK.

If you have not used CDK before, all you really need to know is that it takes the infrastructure described by the code (TypeScript in this case) and converts it to CloudFormation templates. The cdk deploy command deploys the stack (which is the collection of AWS resources defined in code).

Run:

# Check what changes will be made first
cdk diff

# Deploy
cdk deploy

Testing a Sample Application Stack

Here is a sample docker-compose stack that will install MySQL, Adminer, and a simple WordPress setup.

SSH onto the EC2 instance that is provisioned, and use docker-compose up -d to deploy the example compose stack. Just remember to edit and change the template passwords in the two environment variables.

You’ll also need to update the hostnames to your own (from .example.com), and point those A records to your Elastic public IP address.
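
A quick sanity check before expecting Let’s Encrypt to issue certificates is to confirm each hostname resolves to your EIP (the hostname below is just an example):

dig +short wordpress.example.com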

One more thing, there is a trick to running docker-compose on ARM systems. I personally prefer to grab a Docker image that contains a pre-built docker-compose binary, and a shell script that ties it together with the docker-compose command. Here are the steps if you need them (run on the EC2 instance that you SSH onto):

sudo curl -L --fail https://raw.githubusercontent.com/linuxserver/docker-docker-compose/master/run.sh -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

For your own peace of mind, make sure you inspect that githubusercontent run.sh script yourself before downloading, as well as the docker image(s) it references and pulls down to run docker-compose.

Tear Down

To destroy the stack, simply issue the cdk destroy command. The EFS storage is marked by default with a retain policy, so it will not be deleted automatically.

cdk destroy AwsDockerWebWithTraefikStack
deleting the minimal cost web hosting solution cdk stack

Closing

If you’re on the lookout for a minimal cost web hosting solution, then give this a try.

The stack uses the new Graviton2-based t4g instance type (ARM) to help achieve a minimal cost web hosting setup. Remember to find compatible ARM Docker images for your applications before you go all in with something like this.

The t4g instance family is also a ‘burstable’ type. This means you’ll get great performance as long as you don’t use up your burst credits. Performance will slow right down if that happens. Keep an eye on your burst credit balance with CloudWatch. For 99% of use cases you’ll likely be just fine though.
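
If you want to check the credit balance from the command line rather than the CloudWatch console, something like the following works (the instance ID and time range are placeholders):

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2020-11-01T00:00:00Z --end-time 2020-11-02T00:00:00Z \
  --period 3600 --statistics Average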

Also remember that you don’t need to stick to AWS. You could bolt together services from any other cloud provider to do something similar, most likely at a similar cost too.

Regain Cluster Access After Expired Kubernetes Certificates

Recently I had to renew expired Kubernetes certificates on my home lab cluster after getting locked out from managing it. I wasn’t tracking their age and all of a sudden I found they had expired. Kubeadm has a feature to auto-renew certificates during control plane upgrades, but unfortunately I had not done an upgrade on my home cluster in the last year. I realised what had happened when issuing a kubectl command to the cluster and receiving an error along the lines of x509: certificate has expired or is not yet valid.
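
As an aside, kubeadm can report certificate expiry dates, so you can catch this before getting locked out. On the kubeadm version used for the renew command in this post, it sits under alpha (newer releases drop the alpha prefix):

sudo kubeadm alpha certs check-expiration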

Preparation

To regain access I needed to SSH onto a master node in the cluster and do the following:

Move / backup old certificate and kubeadm config files:

sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
sudo mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
sudo mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old

sudo mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
sudo mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
sudo mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
sudo mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old

Renew Expired Kubernetes Certificates

Now use kubeadm to renew all certificates.

sudo kubeadm alpha certs renew all

Inspect the generated certificates under /etc/kubernetes/pki to make sure they are generated correctly, e.g. expiry dates a year in the future. For example, checking the new kube-apiserver certificate details:

sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text -certopt no_subject,no_version,no_serial,no_signame,no_issuer,no_pubkey,no_sigdump,no_header,no_aux
Renew Expired Kubernetes Certificates - checking new certificate details

Regenerate Kubeconfig Files

Now you can regenerate the kubeconfig files for the kubelet, controller-manager, scheduler, etc…

sudo kubeadm alpha phase kubeconfig all --apiserver-advertise-address 10.0.0.50

Note: you can leave the --apiserver-advertise-address option off, but it will then default to the address of your system’s default network interface.

Check that all the .conf files under /etc/kubernetes are re-created, i.e.:

  • admin.conf
  • kubelet.conf
  • controller-manager.conf
  • scheduler.conf

You can also update your kubectl config with the admin.conf file that was generated if that is the config/context you were using from the master node.

mv $HOME/.kube/config $HOME/.kube/config.old
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
chmod 600 $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config

If the Kubernetes services on the master (kubelet etc.) have stopped as a result of the expired certificates and recent attempts to restart them (as was my case), reboot the master node at this point. All services that were failing to start because of expired certificates should now start correctly.

Reconnect and check everything is working as expected:

kubectl get nodes
kubeadm token list

Using Sinon stub to Replace External Service Calls in Tests

When writing tests for a service you’ll often find that there are other dependent services that may need to be called in some way or another. You won’t want to actually invoke the real service during testing, but rather ‘stub out’ the dependent service call. Sinon stub provides an easy and customisable way to replace these external service calls for your tests.

Practical Example: AWS SQS sendMessage Sinon Stub

Let’s say you have an AWS Lambda function that drops a message onto an SQS queue. To test this function handler, your test should invoke the handler and verify that the message was sent.

This simple case already involves an external service call – the SQS sendMessage action that will drop the message onto the queue.

Here is a simple NodeJS module that wraps the SQS sendMessage call.

// sqs.ts

import AWS = require("aws-sdk");
import { AWSError } from "aws-sdk";
import { SendMessageRequest, SendMessageResult } from "aws-sdk/clients/sqs";
import { PromiseResult } from "aws-sdk/lib/request";

const sqs = new AWS.SQS({apiVersion: '2012-11-05'});

export function sendMessage(messageBody: string, queueUrl: string) : Promise<PromiseResult<SendMessageResult, AWSError>> {

  var params = {
    QueueUrl: queueUrl,
    MessageBody: messageBody,
  } as SendMessageRequest;

  return sqs
    .sendMessage(params)
    .promise()
    .then(res => res)
    .catch((err) => { throw err; });
}

The actual Lambda Handler code that uses the sqs.ts module above looks like this:

// index.ts

import { sendMessage } from './sqs';
import { Context } from 'aws-lambda';

export const handler = async (event: any, context?: Context) => {

    try {
        const queueUrl = process.env.SQS_QUEUE_URL || "https://sqs.eu-west-2.amazonaws.com/0123456789012/test-stub-example";
        const sendMessageResult = await sendMessage(JSON.stringify({foo: "bar"}), queueUrl);
        return `Sent message with ID: ${sendMessageResult.MessageId}`;
    } catch (err) {
        console.log("Error", err);
        throw err;
    }
}

Next you’ll create a Sinon stub to ‘stub out’ the sendMessage function of this module (the actual code that the real AWS Lambda function would call).

Set up an empty test case that calls the Lambda handler function to test the result.

// handler.spec.ts

import * as chai from 'chai';
import * as sinon from "sinon";
import { assert } from "sinon";

import * as sqs from '../src/sqs';
import { handler } from '../src/index';
import sinonChai from "sinon-chai";
import { PromiseResult } from 'aws-sdk/lib/request';
import { SendMessageResult } from 'aws-sdk/clients/sqs';
import { Response } from 'aws-sdk';
import { AWSError } from 'aws-sdk';

const expect = chai.expect;
chai.use(sinonChai);

const event = {
  test: "test"
};

describe("lambda-example-sqs-handler", () => {
  describe("handler", () => {

    it("should send an sqs message and return the message ID", async () => {

      // WHEN

      process.env.SQS_QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/test-queue";
      const result = await handler(event);
      
      // THEN

      expect(result).to.exist;
      expect(result).to.eql(`Sent message with ID: 123`);
    });
  });
});
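
For reference, a typical way to run this spec (assuming mocha and ts-node are installed as dev dependencies, and the spec lives under a test/ directory) is:

npx mocha -r ts-node/register "test/**/*.spec.ts"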

Right now, running this test will fail because the test invokes the sqs.ts module, which in turn calls the real SQS service’s sendMessage.

Here is where Sinon stub will come in handy. You can replace this specific call that sqs.ts makes with a test stub.

In the describe("handler") block, add the following just before the ‘it‘ test case.

const sendMessageStub = sinon.stub(sqs, "sendMessage");

let stubResponse : PromiseResult<SendMessageResult, AWSError> = {
  $response: new Response<SendMessageResult, AWSError>(),
  MD5OfMessageBody: '828bcef8763c1bc616e25a06be4b90ff',
  MessageId: '123',
};

sendMessageStub.resolves(stubResponse);

The code above calls sinon.stub() and passes in the sqs module object, as well as a string (“sendMessage” in this case) identifying the specific method in the module that should be stubbed.

An optional promise result can be passed in to resolves() to get the stub to return data for the test. In this case, we’re having it return an object that matches the real SQS sendMessage return result. Among other things, this contains a message ID which the Lambda function includes in its response.

Add an assertion to verify that the stub method was called.

assert.calledOnce(sendMessageStub);

If you run the test again it should now pass. The stub replaces the real service call. Nice!

sinon stub test result

Conclusion

Replacing dependent service function calls with stubs can be helpful in many ways. For example:

  • Preventing wasteful, real service calls, which could result in unwanted test data, logs, costs, etc…
  • Faster test runs that don’t rely on network calls.
  • Exercising only the relevant code you’re interested in testing.

Sinon provides a testing-framework-agnostic set of tools such as test spies, stubs and mocks for JavaScript. In this case, you’ve seen how it makes testing interconnected AWS services a breeze: a Lambda function that calls SQS sendMessage to drop a message onto a queue.

Feel free to Download the source code for this post’s example.

HashiCorp Waypoint Server on Raspberry Pi

waypoint server running on raspberry pi

This evening I finally got a little time to play around with Waypoint. This wasn’t a straightforward install of Waypoint on my desktop though. I wanted to run and test HashiCorp Waypoint Server on Raspberry Pi. Specifically on my Pi Kubernetes cluster.

Out of the box Waypoint is simple to set up locally, whether you’re on Windows, Linux, or Mac. The binary is written in the Go programming language, which is common across HashiCorp software.

There is even an ARM binary available which lets you run the CLI on Raspberry Pi straight out of the box.

Installing HashiCorp Waypoint Server on Raspberry Pi hosted Kubernetes

I initially ran into some issues after assuming that waypoint install --platform=kubernetes -accept-tos would pull down an ARM Docker image for my Pi-based Kubernetes hosts.

My Kubernetes cluster also has the nfs-client-provisioner setup, which fulfills PersistentVolumeClaim resources with storage from my home FreeNAS Server Build. I noticed that PVCs were not being honored because they did not have the specific storage-class of nfs-storage that my nfs-client-provisioner required.

Fixing the PVC Issue

Looking at the waypoint CLI command, it’s possible to generate the YAML for the Kubernetes resources it would deploy by adding the --show-yaml flag. So I fetched a base YAML resource definition:

waypoint install --platform=kubernetes -accept-tos --show-yaml

I modified the volumeClaimTemplates section to include my required PVC storageClassName of nfs-storage.

volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-storage
      resources:
        requests:
          storage: 1Gi

That sorted out the pending PVC issue in my cluster.

Fixing the ARM Docker Issue

Looking at the Docker image that the waypoint install command for Kubernetes gave me, I could see right away that it was not right for ARM architecture.
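
A quick way to check whether an image actually publishes an ARM variant is to inspect its manifest list (the image reference below is illustrative, and docker manifest may require experimental CLI features on older Docker versions):

docker manifest inspect hashicorp/waypoint:latest | grep architecture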

To get a basic Waypoint server deployment for development and testing purposes on my Raspberry Pi Kubernetes Cluster, I created a simple Dockerfile for armhf builds.

To get things moving quickly, I based it on the hypriot/rpi-alpine image and did the following in my Dockerfile:

  • Added a few tools, such as cURL.
  • Added a RUN command to download the waypoint ARM binary (currently 0.1.3) from HashiCorp releases and place it in /usr/bin/waypoint.
  • Setup a /data volume mount point.
  • Created a waypoint user.
  • Created the entrypoint for /usr/bin/waypoint.

You can get my ARM Waypoint Server Dockerfile on Github, and find the built armhf Docker image on Docker Hub.
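
If you’d rather build and push your own image from that Dockerfile instead of using mine, the usual flow applies, run from an ARM machine such as one of the Pis (the repository name and tag below are placeholders):

docker build -t yourrepo/waypoint:0.1.3-armhf .
docker push yourrepo/waypoint:0.1.3-armhf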

Now it is just a simple case of updating the image in the generated YAML StatefulSet to use the ARM image with the ARM waypoint binary embedded.

containers:
- name: server
  image: shoganator/waypoint:0.1.3.20201026-armhf
  imagePullPolicy: Always

With the YAML updated, I simply ran kubectl apply to deploy it to my Kubernetes Cluster. i.e.

kubectl apply -f ./waypoint-armhf.yaml

Now Waypoint Server was up and running on my Raspberry Pi cluster. It just needed bootstrapping, which is expected for a new installation.

Hashicorp Waypoint Server on Raspberry Pi - pod started.

Configuring Waypoint CLI to Connect to the Server

Next I needed to configure my internal jumpbox to connect to Waypoint Server to verify everything worked.

Things may differ for you here slightly, depending on how your cluster is setup.

Waypoint on Kubernetes creates a LoadBalancer resource. I’m using MetalLB in my cluster, so I get a virtual LoadBalancer, and the EXTERNAL-IP MetalLB assigned to the waypoint service for me was 10.23.220.90.

My cluster is running on its own dedicated network in my house. I use another Pi as a router / jumpbox. It has two network interfaces, and the internal interface is on the Kubernetes network.

By getting an SSH session to this Pi, I could verify the Waypoint Server connectivity via its LoadBalancer resource.

curl -i --insecure https://10.23.220.90:9702

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 3490
Content-Type: text/html; charset=utf-8
Last-Modified: Mon, 19 Oct 2020 21:11:45 GMT
Date: Mon, 26 Oct 2020 14:27:33 GMT

Bootstrapping Waypoint Server

On a first time run, you need to bootstrap Waypoint. This also sets up a new context for you on the machine you run the command from.

The Waypoint LoadBalancer has two ports exposed: 9702 for HTTPS, and 9701 for the Waypoint CLI to communicate with over TCP.

With connectivity verified using curl, I could now bootstrap the server with the waypoint bootstrap command, pointing to the LoadBalancer EXTERNAL-IP and port 9701.

waypoint server bootstrap -server-addr=10.23.220.90:9701 -server-tls-skip-verify
waypoint context list
waypoint context verify

This command gives back a token as a response and sets up a waypoint CLI context on the machine it was run from.

Waypoint context setup and verified from an internal kubernetes network connected machine.

Using Waypoint CLI from a machine external to the Cluster

I wanted to use Waypoint from a management or workstation machine outside of my Pi Cluster network. If you have a similar network setup, you could also do something similar.

As mentioned before, my Pi Router device has two interfaces: a wireless interface, and a physical network interface. To get connectivity over ports 9701 and 9702 I used some iptables rules. Importantly, my Kubernetes facing network interface is on 10.0.0.1 in the example below:

sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 9702 -j DNAT --to-destination 10.23.220.90:9702
sudo iptables -t nat -A POSTROUTING -p tcp -d 10.23.220.90 --dport 9702 -j SNAT --to-source 10.0.0.1
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 9701 -j DNAT --to-destination 10.23.220.90:9701
sudo iptables -t nat -A POSTROUTING -p tcp -d 10.23.220.90 --dport 9701 -j SNAT --to-source 10.0.0.1

These rules forward traffic destined for ports 9701 and 9702 that hits the wlan0 interface to the MetalLB IP 10.23.220.90.

The source and destination network address translation will translate the ‘from’ address of the TCP reply packets to make them look like they’re coming from 10.0.0.1 instead of 10.23.220.90.
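
Note that iptables rules added this way do not survive a reboot by default. One option, assuming the iptables-persistent package is installed on the Pi router, is to save the current rules:

sudo sh -c 'iptables-save > /etc/iptables/rules.v4'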

Now, I can simply set up a Waypoint CLI context on a machine on my ‘normal’ network. This network has visibility of my Raspberry Pi Router’s wlan0 interface. I used my previously generated token in the command below:

waypoint context create -server-addr=192.168.7.31:9701 -server-tls-skip-verify -server-auth-token={generated-token-here} rpi-cluster
waypoint context verify rpi-cluster
Connectivity verified from my macOS machine, external to my Raspberry Pi Cluster with Waypoint Server running there.

Concluding

Waypoint Server is super easy to get running locally if you’re on macOS, Linux or Windows.

With a little bit of extra work you can get HashiCorp Waypoint Server on Raspberry Pi working, thanks to the versatility of the Waypoint CLI!

Testing HashiCorp Boundary Locally


HashiCorp just announced a new open source product called Boundary. HashiCorp Boundary claims to provide secure access to hosts and other systems without needing to manage user credentials or expose wider networks.

I’ll be testing out the newly released open source version 0.1 in this post.

Installation

I’m on macOS, so I used Homebrew to install a precompiled binary and add it to my system PATH for the quickest route to testing. There are binaries available for other operating systems too.

brew install hashicorp/tap/boundary
brew upgrade hashicorp/tap/boundary

Running boundary reveals the various CLI commands.

the boundary CLI command options

Bootstrapping a Boundary Development Environment

Boundary should be deployed in a HA configuration using multiple controllers and workers for a production environment. However for local testing and development, you can spin up an ‘all-in-one’ environment with Docker.

The development or local environment will spin up the following:

  • Boundary controller server
  • Boundary worker server
  • Postgres database for Boundary

Data will not be persisted with this type of a local testing setup.

Start a boundary dev environment with default options using the boundary dev command.

boundary dev

You can change the default admin credentials by passing in some flags with the above command if you prefer. E.g.

boundary dev -login-name="johnconnor" -password="T3rmin4at3d"

After a minute or so you should get output providing details about your dev environment.

hashicorp boundary dev environment

Log in to the admin UI with your web browser using http://127.0.0.1:9200 along with the default admin/password credentials (or your chosen credentials).

hashicorp boundary organizations screen in the admin UI

Boundary Roles and Grants

Navigate to Roles -> Administration -> Grants.

The Administration Role has the grant:

id=*;type=*;actions=*

If you’re familiar with AWS IAM policies, this format may look similar. id represents resource IDs and actions represents the types of actions that can be performed. For this Administration role, the wildcard asterisk * means that users with this role can do anything with any resource.

Host Sets, Hosts and Targets

Navigate to Projects -> Generated project scope. Then click Host Catalogs -> Generated host catalog -> Host Sets -> Generated host set. On the Hosts tab, click on Generated host. You can view the Type, ID and Address along with other details of this sample host.

Being a local environment, the address for this host is simply localhost.

To establish a session to a host with HashiCorp Boundary (for example an SSH session), you create a Target.

A Target is tied to a host set; the host set provides the host addressing information, while the Target defines the type of connection (e.g. TCP) and the port.

Explore the Targets page and note the default tcp target with default port 22.

boundary target

Connecting to a Target

Using the boundary CLI in another shell session, authenticate with your local dev auth-method-id and admin credentials.

boundary authenticate password -auth-method-id=ampw_1234567890 \
    -login-name=admin -password=password

Your logged in session should be valid for 1 week.

Now you can get details about the default Generated target using its ID:

boundary targets read -id ttcp_1234567890

To connect and establish an SSH session to this sample host, simply run the boundary connect command, passing in the target ID.

boundary connect ssh -target-id ttcp_1234567890

Exec Command and Wrapping TCP Sessions With Other Clients

The -exec flag is used to wrap a Boundary TCP session with a designated client or tool, for example curl.

As a quick test, you can use the default Target to perform a request against another host using curl.

Update the default TCP target port from 22 to 443. Then use the boundary connect command and -exec flag to curl this blog 🙂

boundary targets update tcp -default-port 443 -id ttcp_1234567890
boundary connect -exec curl -target-id ttcp_1234567890 \
     -- -vvsL --output /dev/null www.shogan.co.uk

Viewing Sessions

Session details are available in the Sessions page. You can also cancel or terminate sessions here.
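
Sessions can also be listed from the CLI if you prefer (the scope ID below is the dev mode generated project scope; adjust it for your own environment):

boundary sessions list -scope-id p_1234567890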

Thoughts

HashiCorp Boundary already looks like it provides a ton of value out of the box. To me it seems like it offers much of the functionality that proprietary cloud services such as AWS SSM Session Manager (along with its various AWS service integrations) provide.

If you’re looking to avoid cloud services lock-in when it comes to tooling like this, then Boundary already looks like a great option.

Of course HashiCorp will be looking to commercialise Boundary in the future. However, if you look at their past actions with tools like Terraform and Vault, I’m willing to bet they’ll keep the vast majority of valuable features in the open source version. They’ll most likely provide a convenient and useful commercial offering that larger enterprises might want to pay for.

Quick and Easy Local NPM Registry With Verdaccio and Docker


Sometimes it can be useful to be able to npm publish libraries or projects you’re working on to a local npm registry for use in other development projects.

This post is a quick how-to showing how you can get up and running with a private, local npm registry using Verdaccio and docker-compose.

Verdaccio claims to be a zero-config npm registry, and that is pretty much correct. You can have it up and running in under 5 minutes. Here’s how:

Local NPM Registry Quick Start

Clone verdaccio docker-examples and then change directory into the docker-examples/docker-local-storage-volume directory.

git clone https://github.com/verdaccio/docker-examples.git
cd docker-examples/docker-local-storage-volume

This particular sample docker-compose configuration gives you a locally run Verdaccio instance along with persistence via a local volume mount.

From here you can be up and running by simply issuing the following docker-compose command:

docker-compose up -d

However if you do want to make a few tweaks to the configuration, simply load up the conf/config.yaml file in your editor.

I wanted to change the max_body_size to a higher value to allow for larger npm packages to be published locally, so I added:

max_body_size: 500mb

If you haven’t yet started the local docker container, start it up with docker-compose up.

Usage

Now all you need to do is configure your local npm settings to use Verdaccio on http://localhost:4873 (the default host and port that Verdaccio is configured to listen on when running in Docker locally).
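
For example, to point npm at the local registry globally (the per-project scoped approach shown further down is the alternative):

npm config set registry http://localhost:4873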

Then add an npm user for local development:

npm adduser --registry http://localhost:4873

To use your new registry at a project level, you can create a .npmrc file in your local projects with the following content:

@shogan:registry=http://localhost:4873

Of course replace the scope of @shogan with the package scope of your choosing.

To publish a module / package locally:

npm publish --registry http://localhost:4873
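
Installing a package you’ve published locally works the same way from any project pointed at the registry, for example (the package name is just an illustration):

npm install @shogan/example-package --registry http://localhost:4873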

Other Examples

There are lots more Verdaccio samples and configurations that you can use in the docker-examples repository. Take a look through it; there are also Kubernetes resources to deploy if you prefer running Verdaccio there for a local development setup.

Also refer to the verdaccio configuration page for more examples and documentation on the possible config options.