Hashicorp Waypoint Server on Raspberry Pi

This evening I finally got a little time to play around with Waypoint. This wasn’t a straightforward install of Waypoint on my desktop though. I wanted to run and test HashiCorp Waypoint Server on Raspberry Pi. Specifically on my Pi Kubernetes cluster.

Out of the box, Waypoint is simple to set up locally, whether you’re on Windows, Linux, or Mac. The binary is written in the Go programming language, which is common across HashiCorp software.

There is even an ARM binary available which lets you run the CLI on Raspberry Pi straight out of the box.

Installing Hashicorp Waypoint Server on Raspberry Pi hosted Kubernetes

I ran into some issues initially, as I had assumed that waypoint install --platform=kubernetes -accept-tos would pull down an ARM Docker image for my Pi-based Kubernetes hosts. It does not.

My Kubernetes cluster also has the nfs-client-provisioner setup, which fulfills PersistentVolumeClaim resources with storage from my home FreeNAS Server Build. I noticed that PVCs were not being honored because they did not have the specific storage-class of nfs-storage that my nfs-client-provisioner required.

Fixing the PVC Issue

Looking at the waypoint CLI command, it’s possible to generate the YAML for the Kubernetes resources it would deploy by adding the -show-yaml flag. So I fetched a base YAML resource definition:

waypoint install --platform=kubernetes -accept-tos --show-yaml

I modified the volumeClaimTemplates section to include my required PVC storageClassName of nfs-storage.

  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-storage
      resources:
        requests:
          storage: 1Gi

That sorted out the pending PVC issue in my cluster.
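If you’d rather script that change than edit the YAML by hand, a sed one-liner can insert the line. The sketch below first writes out a minimal, illustrative copy of the generated volumeClaimTemplates section (the file name and contents here are my own stand-ins, not the exact generated output), then patches it:

```shell
# Write a minimal, illustrative copy of the generated claim template.
cat > waypoint-armhf.yaml <<'EOF'
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF

# Insert the storageClassName the nfs-client-provisioner expects, within spec.
sed -i '/accessModes/i\      storageClassName: nfs-storage' waypoint-armhf.yaml

grep -n 'storageClassName' waypoint-armhf.yaml
```

The same approach works on the real generated file: save the -show-yaml output, patch it, then apply it.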

Fixing the ARM Docker Issue

Looking at the Docker image that the waypoint install command for Kubernetes gave me, I could see right away that it was not right for the ARM architecture.

To get a basic Waypoint server deployment for development and testing purposes on my Raspberry Pi Kubernetes Cluster, I created a simple Dockerfile for armhf builds.

To get things moving quickly, I based it on the hypriot/rpi-alpine image and did the following in the Dockerfile:

  • Added a few tools, such as cURL.
  • Added a RUN command to download the Waypoint ARM binary (currently 0.1.3) from HashiCorp releases and place it in /usr/bin/waypoint.
  • Set up a /data volume mount point.
  • Created a waypoint user.
  • Set the entrypoint to /usr/bin/waypoint.
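Sketched out, those steps look something like the Dockerfile below. Treat it as illustrative: the release URL layout and Alpine package names are assumptions on my part, not the exact published Dockerfile.

```dockerfile
# Illustrative armhf Waypoint server image; URL and package names assumed.
FROM hypriot/rpi-alpine

RUN apk add --no-cache curl unzip

# Fetch the Waypoint ARM binary (0.1.3 at the time of writing)
RUN curl -fsSL -o /tmp/waypoint.zip \
      https://releases.hashicorp.com/waypoint/0.1.3/waypoint_0.1.3_linux_arm.zip \
 && unzip /tmp/waypoint.zip -d /usr/bin \
 && rm /tmp/waypoint.zip

# Data volume mount point and non-root user
RUN adduser -D waypoint && mkdir -p /data && chown waypoint /data
VOLUME /data
USER waypoint

ENTRYPOINT ["/usr/bin/waypoint"]
```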

You can get my ARM Waypoint Server Dockerfile on GitHub, and find the built armhf Docker image on Docker Hub.

Now it is just a simple case of updating the image in the generated YAML StatefulSet to use the ARM image with the ARM waypoint binary embedded.

- name: server
  image: shoganator/waypoint:
  imagePullPolicy: Always

With the YAML updated, I ran kubectl apply to deploy it to my Kubernetes cluster:

kubectl apply -f ./waypoint-armhf.yaml

Now Waypoint Server was up and running on my Raspberry Pi cluster. It just needed bootstrapping, which is expected for a new installation.

Hashicorp Waypoint Server on Raspberry Pi - pod started.
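You can watch the rollout with the usual kubectl commands. The resource names below are what the install generated for me; yours may differ, so check the YAML you applied:

```shell
kubectl get statefulset waypoint-server
kubectl rollout status statefulset/waypoint-server
kubectl get svc waypoint   # note the EXTERNAL-IP and exposed ports
```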

Configuring Waypoint CLI to Connect to the Server

Next I needed to configure my internal jumpbox to connect to Waypoint Server to verify everything worked.

Things may differ for you here slightly, depending on how your cluster is setup.

Waypoint on Kubernetes creates a LoadBalancer resource. I’m using MetalLB in my cluster, so I get a virtual LoadBalancer, and the EXTERNAL-IP MetalLB assigned to the waypoint service for me was

My cluster is running on its own dedicated network in my house. I use another Pi as a router/jumpbox. It has two network interfaces, and the internal interface is on the Kubernetes network.

By getting an SSH session to this Pi, I could verify the Waypoint Server connectivity via its LoadBalancer resource.

curl -i --insecure

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 3490
Content-Type: text/html; charset=utf-8
Last-Modified: Mon, 19 Oct 2020 21:11:45 GMT
Date: Mon, 26 Oct 2020 14:27:33 GMT

Bootstrapping Waypoint Server

On a first run, you need to bootstrap Waypoint. This also sets up a new context for you on the machine you run the command from.

The Waypoint LoadBalancer has two ports exposed: 9702 for HTTPS, and 9701 for the Waypoint CLI to communicate with over TCP.

With connectivity verified using curl, I could now bootstrap the server with the waypoint server bootstrap command, pointing to the LoadBalancer EXTERNAL-IP and port 9701.

waypoint server bootstrap -server-addr= -server-tls-skip-verify
waypoint context list
waypoint context verify

This command gives back a token as a response and sets up a waypoint CLI context on the machine it was run from.

Waypoint context setup and verified from an internal kubernetes network connected machine.

Using Waypoint CLI from a machine external to the Cluster

I wanted to use Waypoint from a management or workstation machine outside of my Pi Cluster network. If you have a similar network setup, you could also do something similar.

As mentioned before, my Pi router device has two interfaces: a wireless interface, and a physical network interface. To get connectivity over ports 9701 and 9702 I used some iptables rules. Importantly, my Kubernetes-facing network interface is on in the example below:

sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 9702 -j DNAT --to-destination
sudo iptables -t nat -A POSTROUTING -p tcp -d --dport 9702 -j SNAT --to-source
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 9701 -j DNAT --to-destination
sudo iptables -t nat -A POSTROUTING -p tcp -d --dport 9701 -j SNAT --to-source

These rules redirect traffic destined for ports 9701 and 9702 that hits the wlan0 interface to the MetalLB IP

The source and destination network address translation will translate the ‘from’ address of the TCP reply packets to make them look like they’re coming from instead of

Now, I can simply set up a Waypoint CLI context on a machine on my ‘normal’ network. This network has visibility of my Raspberry Pi router’s wlan0 interface. I used my previously generated token in the command below:

waypoint context create -server-addr= -server-tls-skip-verify -server-auth-token={generated-token-here} rpi-cluster
waypoint context verify rpi-cluster
Connectivity verified from my macOS machine, external to my Raspberry Pi Cluster with Waypoint Server running there.


Waypoint Server is super easy to get running locally if you’re on macOS, Linux or Windows.

With a little bit of extra work you can get HashiCorp Waypoint Server on Raspberry Pi working, thanks to the versatility of the Waypoint CLI!

Testing HashiCorp Boundary Locally

HashiCorp just announced a new open source product called Boundary, which claims to provide secure access to hosts and other systems without needing to manage user credentials or expose wider networks.

I’ll be testing out the newly released open source version 0.1 in this post.


I’m on macOS, so I used Homebrew to get a precompiled binary installed and added to my system PATH for the quickest route to testing. There are binaries available for other operating systems too.

brew install hashicorp/tap/boundary
brew upgrade hashicorp/tap/boundary

Running boundary reveals the various CLI commands.

the boundary CLI command options

Bootstrapping a Boundary Development Environment

Boundary should be deployed in an HA configuration using multiple controllers and workers for a production environment. However, for local testing and development, you can spin up an ‘all-in-one’ environment with Docker.

The development or local environment will spin up the following:

  • Boundary controller server
  • Boundary worker server
  • Postgres database for Boundary

Data will not be persisted with this type of local testing setup.

Start a boundary dev environment with default options using the boundary dev command.

boundary dev

You can change the default admin credentials by passing in some flags with the above command if you prefer. E.g.

boundary dev -login-name="johnconnor" -password="T3rmin4at3d"

After a minute or so you should get output providing details about your dev environment.

hashicorp boundary dev environment

Log in to the admin UI with your web browser using along with the default admin/password credentials (or your chosen credentials).

hashicorp boundary organizations screen in the admin UI

Boundary Roles and Grants

Navigate to Roles -> Administration -> Grants.

The Administration Role has the grant:


If you’re familiar with AWS IAM policies, this format may look familiar. id represents resource IDs, and actions represents the types of actions that can be performed. For this Administration role, the wildcard asterisk * means that users with this role can do anything with any resource.
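For contrast, a narrower grant could limit a role to read-only access on a single host set, using the same id/actions format. The ID below is a hypothetical example, not one from the dev environment:

```
id=hsst_1234567890;actions=read,list
```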

Host Sets, Hosts and Targets

Navigate to Projects -> Generated project scope. Then click Host Catalogs -> Generated host catalog -> Host Sets -> Generated host set. On the Hosts tab, click on Generated host. You can view the Type, ID and Address, along with other details of this sample host.

Being a local environment, the address for this host is simply localhost.

To establish a session to a host, you need a Target. For example, to create an SSH session to a host using HashiCorp Boundary, you create a Target that references a host set. The host set provides the host addressing information, while the target defines the type of connection, e.g. TCP, and the port.

Explore the Targets page and note the default tcp target with default port 22.

boundary target

Connecting to a Target

In another shell session, use the boundary CLI to authenticate with your local dev auth-method-id and the admin credentials.

boundary authenticate password -auth-method-id=ampw_1234567890 \
    -login-name=admin -password=password

Your logged in session should be valid for 1 week.

Now you can get details about the default Generated target using its ID:

boundary targets read -id ttcp_1234567890

To connect and establish an SSH session to this sample host simply run the boundary connect command, passing in the target ID.

boundary connect ssh -target-id ttcp_1234567890

Exec Command and Wrapping TCP Sessions With Other Clients

The -exec flag is used to wrap a Boundary TCP session with a designated client or tool, for example curl.

As a quick test, you can use the default Target to perform a request against another host using curl.

Update the default TCP target port from 22 to 443. Then use the boundary connect command and -exec flag to curl this blog 🙂

boundary targets update tcp -default-port 443 -id ttcp_1234567890
boundary connect -exec curl -target-id ttcp_1234567890 \
     -- -vvsL --output /dev/null www.shogan.co.uk

Viewing Sessions

Session details are available in the Sessions page. You can also cancel or terminate sessions here.
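The same is possible from the CLI. Both IDs below are placeholders: the scope ID comes from your generated project, and the session ID from an active session:

```shell
boundary sessions list -scope-id p_1234567890
boundary sessions cancel -id ss_1234567890
```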


HashiCorp Boundary already looks like it provides a ton of value out of the box. To me it seems to offer much of the functionality that proprietary cloud services such as AWS SSM Session Manager (along with its various AWS service integrations) provide.

If you’re looking to avoid cloud services lock-in when it comes to tooling like this, then Boundary already looks like a great option.

Of course, HashiCorp will be looking to commercialise Boundary in the future. However, if you look at their past actions with tools like Terraform and Vault, I’m willing to bet they’ll keep the vast majority of valuable features in the open source version. They’ll most likely provide a convenient and useful commercial offering in the future that larger enterprises might want to pay for.