Hashicorp Waypoint Server on Raspberry Pi

waypoint server running on raspberry pi

This evening I finally got a little time to play around with Waypoint. This wasn’t a straightforward install of Waypoint on my desktop though. I wanted to run and test HashiCorp Waypoint Server on Raspberry Pi. Specifically on my Pi Kubernetes cluster.

Out of the box, Waypoint is simple to set up locally, whether you’re on Windows, Linux, or Mac. The binary is written in the Go programming language, which is common across HashiCorp software.

There is even an ARM binary available which lets you run the CLI on Raspberry Pi straight out of the box.

Installing Hashicorp Waypoint Server on Raspberry Pi hosted Kubernetes

I initially ran into some issues, though, when I assumed that waypoint install --platform=kubernetes -accept-tos would pull down an ARM Docker image for my Pi-based Kubernetes hosts.

My Kubernetes cluster also has the nfs-client-provisioner set up, which fulfills PersistentVolumeClaim resources with storage from my home FreeNAS Server Build. I noticed that Waypoint’s PVCs were not being honored because they did not request the specific storage class of nfs-storage that my nfs-client-provisioner requires.
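If you hit the same thing, a quick way to see why a claim is stuck is to describe it. The PVC name below is only an example; check the real name in your own cluster first:

# List the PVCs in the namespace you installed Waypoint into
kubectl get pvc

# Describe a pending claim to see why it has not been bound
# (example name - use the real one from 'kubectl get pvc' output)
kubectl describe pvc data-waypoint-server-0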

Fixing the PVC Issue

Looking at the waypoint CLI, it’s possible to generate the YAML for the Kubernetes resources it would deploy by adding the --show-yaml flag to the install command. So I fetched a base YAML resource definition:

waypoint install --platform=kubernetes -accept-tos --show-yaml

I modified the volumeClaimTemplates section to include my required PVC storageClassName of nfs-storage.

volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-storage
      resources:
        requests:
          storage: 1Gi

That sorted out the pending PVC issue in my cluster.

Fixing the ARM Docker Issue

Looking at the Docker image that the waypoint install command for Kubernetes gave me, I could see right away that it was not built for the ARM architecture.

To get a basic Waypoint server deployment for development and testing purposes on my Raspberry Pi Kubernetes Cluster, I created a simple Dockerfile for armhf builds.

To get things moving quickly, I based it on the hypriot/rpi-alpine image and did the following in my Dockerfile (there’s a rough build sketch after this list):

  • Added a few tools, such as cURL.
  • Added a RUN command to download the waypoint ARM binary (currently 0.1.3) from HashiCorp releases and place it in /usr/bin/waypoint.
  • Set up a /data volume mount point.
  • Created a waypoint user.
  • Set the entrypoint to /usr/bin/waypoint.
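For reference, building and pushing the image can then be done from the Pi itself (or any other ARM host with Docker). The repository and tag below are just examples, so substitute your own:

# Build the armhf image on an ARM host and push it to your registry
docker build -t yourrepo/waypoint:0.1.3-armhf .
docker push yourrepo/waypoint:0.1.3-armhf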

You can get my ARM Waypoint Server Dockerfile on Github, and find the built armhf Docker image on Docker Hub.

Now it is just a simple case of updating the image in the generated YAML StatefulSet to use the ARM image with the ARM waypoint binary embedded.

containers:
- name: server
  image: shoganator/waypoint:0.1.3.20201026-armhf
  imagePullPolicy: Always

With the YAML updated, I simply ran kubectl apply to deploy it to my Kubernetes cluster:

kubectl apply -f ./waypoint-armhf.yaml

Now Waypoint Server was up and running on my Raspberry Pi cluster. It just needed bootstrapping, which is expected for a new installation.

Hashicorp Waypoint Server on Raspberry Pi - pod started.

Configuring Waypoint CLI to Connect to the Server

Next I needed to configure my internal jumpbox to connect to Waypoint Server to verify everything worked.

Things may differ for you here slightly, depending on how your cluster is setup.

Waypoint on Kubernetes creates a LoadBalancer resource. I’m using MetalLB in my cluster, so I get a virtual LoadBalancer, and the EXTERNAL-IP MetalLB assigned to the waypoint service for me was 10.23.220.90.
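You can check the EXTERNAL-IP that MetalLB assigned by listing the services after the install (the exact namespace depends on how you ran the install, so the catch-all below is the lazy option):

# Find the Waypoint LoadBalancer service and its EXTERNAL-IP
kubectl get svc --all-namespaces | grep -i waypoint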

My cluster is running on its own dedicated network in my house. I use another Pi as a router / jumpbox. It has two network interfaces, and the internal interface is on the Kubernetes network.

By getting an SSH session to this Pi, I could verify the Waypoint Server connectivity via its LoadBalancer resource.

curl -i --insecure https://10.23.220.90:9702

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 3490
Content-Type: text/html; charset=utf-8
Last-Modified: Mon, 19 Oct 2020 21:11:45 GMT
Date: Mon, 26 Oct 2020 14:27:33 GMT

Bootstrapping Waypoint Server

On a first time run, you need to bootstrap Waypoint. This also sets up a new context for you on the machine you run the command from.

The Waypoint LoadBalancer exposes two ports: 9702 for HTTPS, and 9701 for the Waypoint CLI to communicate with over TCP.

With connectivity verified using curl, I could now bootstrap the server with the waypoint bootstrap command, pointing to the LoadBalancer EXTERNAL-IP and port 9701.

waypoint server bootstrap -server-addr=10.23.220.90:9701 -server-tls-skip-verify
waypoint context list
waypoint context verify

The bootstrap command returns a token and sets up a Waypoint CLI context on the machine it was run from.

Waypoint context setup and verified from an internal kubernetes network connected machine.

Using Waypoint CLI from a machine external to the Cluster

I wanted to use Waypoint from a management or workstation machine outside of my Pi cluster network. If your network is set up along similar lines, you could do something like the following.

As mentioned before, my Pi router device has two interfaces: a wireless interface, and a physical network interface. To get connectivity over ports 9701 and 9702 I used some iptables rules. Importantly, my Kubernetes-facing network interface is on 10.0.0.1 in the example below:

sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 9702 -j DNAT --to-destination 10.23.220.90:9702
sudo iptables -t nat -A POSTROUTING -p tcp -d 10.23.220.90 --dport 9702 -j SNAT --to-source 10.0.0.1
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 9701 -j DNAT --to-destination 10.23.220.90:9701
sudo iptables -t nat -A POSTROUTING -p tcp -d 10.23.220.90 --dport 9701 -j SNAT --to-source 10.0.0.1

These rules forward traffic destined for ports 9701 and 9702 that hits the wlan0 interface on to the MetalLB IP 10.23.220.90.

The SNAT rules then rewrite the source address of the forwarded packets to 10.0.0.1, so the TCP replies from 10.23.220.90 route back via the Pi router rather than trying to reach the original client directly.
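These iptables rules won’t survive a reboot by themselves. Assuming your Pi router runs a Debian-based OS such as Raspberry Pi OS, one easy option is the iptables-persistent package:

# Install the persistence helper and save the currently loaded rules
sudo apt install iptables-persistent
sudo netfilter-persistent save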

Now I can simply set up a Waypoint CLI context on a machine on my ‘normal’ network. This network has visibility of my Raspberry Pi router’s wlan0 interface. I used my previously generated token in the command below:

waypoint context create -server-addr=192.168.7.31:9701 -server-tls-skip-verify -server-auth-token={generated-token-here} rpi-cluster
waypoint context verify rpi-cluster
Connectivity verified from my macOS machine, external to my Raspberry Pi Cluster with Waypoint Server running there.

Concluding

Waypoint Server is super easy to get running locally if you’re on macOS, Linux or Windows.

With a little bit of extra work you can get HashiCorp Waypoint Server on Raspberry Pi working, thanks to the versatility of the Waypoint CLI!

Cheap S3 Cloud Backup with BackBlaze B2

white and blue fiber optic cables in a FC storage switch

I’ve been constantly evolving my cloud backup strategies to find the ultimate cheap S3 cloud backup solution.

The reason for sticking to “S3” is that there are tons of cloud storage services that implement the S3 API. Sticking to this means that one can generally use the same backup/restore scripts for just about any of them.

The S3 client tooling available can of course be leveraged everywhere too (s3cmd, aws s3, etc…).

BackBlaze B2 gives you 10GB of storage free for a start. If you don’t have too much to backup you could get creative with lifecycle policies and stick within the 10GB free limit.

a lifecycle policy to delete objects older than 7 days.

Current Backup Solution

This is the current solution I’ve setup.

I have a bunch of files on a FreeNAS storage server that I need to backup daily and send to the cloud.

I’ve setup a private BackBlaze B2 bucket and applied a lifecycle policy that removes any files older than 7 days. (See example screenshot above).

I leveraged a FreeBSD jail to install my S3 client (s3cmd) tooling, and mount my storage to that jail. You can follow the steps below if you would like to set up something similar:

Step-by-step setup guide

Create a new jail.

Enable VNET, DHCP, and Auto-start. Mount the FreeNAS storage path you’re interested in backing up as read-only to the jail.

The first step in a clean/base jail is to get s3cmd compiled and installed, as well as gpg for encryption support. You can use portsnap to get everything downloaded and ready for compilation.

portsnap fetch
portsnap extract # skip this if you've already run extract before
portsnap update

cd /usr/ports/net/py-s3cmd/
make -DBATCH install clean
# Note -DBATCH will take all the defaults for the compile process and prevent tons of pop-up dialogs asking to choose. If you don't want defaults then leave this bit off.

# make install gpg for encryption support
cd /usr/ports/security/gnupg/ && make -DBATCH install clean

The compile and install process takes a number of minutes. Once complete, you should be able to run s3cmd --configure to set up your defaults.

For BackBlaze you’ll need to configure s3cmd to use a specific endpoint for your region. Here is a page that describes the settings you’ll need in addition to your access / secret key.
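As a rough example (the endpoint hostname below is a placeholder; use the exact S3 endpoint shown on your own B2 bucket details page), you can also pass the endpoint settings straight to s3cmd to test connectivity before committing them to your config:

# List buckets using a region-specific B2 S3 endpoint - substitute your own endpoint and keys
s3cmd --access_key="your-keyID" --secret_key="your-applicationKey" \
      --host="s3.us-west-002.backblazeb2.com" \
      --host-bucket="%(bucket)s.s3.us-west-002.backblazeb2.com" \
      ls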

After gpg was compiled and installed you should find it under the path /usr/local/bin/gpg, so you can use this for your s3cmd configuration too.

Double check s3cmd and gpg are installed with simple version checks.

gpg --version
s3cmd --version
quick version checks of gpg and s3cmd

A simple backup shell script

Here is a quick and easy shell script to demonstrate compressing a directory path and all of its contents, then uploading the archive to a bucket with s3cmd (the paths and bucket name are placeholders; adjust them to your own setup).

#!/bin/sh
# Change to the directory you want to back up (placeholder path - adjust to your own mount point)
cd /mnt/data-to-backup || exit 1

DATESTAMP=$(date "+%Y-%m-%d")
TIMESTAMP=$(date "+%Y-%m-%d-%H-%M-%S")

# Compress the current directory, then upload the archive to the bucket
tar --exclude='./some-optional-stuff-to-exclude' -zcvf "/root/$TIMESTAMP-backup.tgz" .
s3cmd put "/root/$TIMESTAMP-backup.tgz" "s3://your-bucket-name-goes-here/$DATESTAMP/$TIMESTAMP-backup.tgz"

Scheduling the backup script is an easy task with crontab. Run crontab -e and then set up your desired schedule. For example, daily at 25 minutes past 1 in the morning:

25 1 * * * /root/backup-script.sh

My home S3 backup evolution

I’ve gone from using Amazon S3, to Digital Ocean Spaces, to where I am now with BackBlaze B2. BackBlaze is definitely the cheapest option I’ve found so far.

Amazon S3 is overkill for simple home cloud backup solutions (in my opinion). You can change to use infrequent access or even glacier tiered storage to get the pricing down, but you’re still not going to beat BackBlaze on pure storage pricing.

Digital Ocean Spaces was nice for a short while, but they have an annoying minimum charge of $5 per month just to use Spaces. This rules it out for me as I was hunting for the absolute cheapest option.

BackBlaze currently has very cheap storage costs for B2: just $0.005 per GB per month, and only $0.01 per GB of download (only really needed if you want to restore some backup files, of course). At those rates, 100GB of backups costs roughly $0.50 per month to keep in the cloud.

Concluding

You can of course get more technical and convince a willing friend/family member to host a private S3-compatible storage service like MinIO for you, but I doubt many would want to go to that level of effort.

So, if you’re looking for a cheap S3 cloud backup solution with minimal maintenance overhead, definitely consider the above.

This is post #4 in my effort towards 100DaysToOffload.

This blog runs on ARM microarchitecture

raspberry pi devices

Specifically, at any one point in time, this site is powered by one of a bunch of different ARM Cortex-A72 processors. In other words, it runs across a bunch of Raspberry Pi 4 devices.

There is a long history of where this blog has been hosted. Back in 2008 it was running in a virtual machine on a Dell OptiPlex PC.

Since then I’ve moved it to various hosting services, cloud services and such, until recently when I spun down my Digital Ocean Kubernetes cluster and migrated this site to my own personal Raspberry Pi Kubernetes cluster.

I don’t need anything powerful to run Shogan.tech. Most of my web traffic comes in steadily over the working week during typical working hours.

I have done some basic load testing though and the setup I have is capable of handling a few tens of clients per second, or as in this particular test, around 500 clients over 1 minute.

load test on this site with 500 clients over 1 minute making requests to the main page.

The response times may not be brilliant, but they’re OK. Especially considering the route a typical request/response takes:

  • A request hits my ‘outer’ router from the internet and goes through some firewall rules.
  • Then, the request enters my ‘inner’ network router and is routed over a WiFi link to a Raspberry Pi device running a bunch of iptables rules.
  • This Raspberry Pi ‘router’ directs the request through another physical network interface into my dedicated Raspberry Pi cluster.
  • The request hits an IP address being used as a software Load Balancer (Metal LB) where Kubernetes directs it to the backing NGINX ingress service (and hence pod).
  • The Ingress Controller figures out which pod to direct the request to and sends it there.
  • Finally the request hits the actual container running this site, and the software serves the response back to the requesting client.
  • Not forgetting that the web container also makes a request to the database container!

ARM technology for me personally has been great. I’ve been able to play with cheap hardware and come up with interesting use cases for it.

I’ve enjoyed hacking away on Raspberry Pi devices since 2013. I’ve used them for fun electronics projects, hosting bespoke servers for friends, playing Minecraft multiplayer with my kids, and more.

Playing Minecraft on a server running on Kubernetes with Raspberry Pi hardware

The future of computing with ARM

When I look back at this request/response lifecycle, I’m always impressed that a tiny Raspberry Pi board the size of a credit card is responsible for doing this.

To me, ARM architecture has seemingly been slowly changing the computing landscape over the last 5 years, accelerating in pace in the last year or so.

Let’s take a look at some notable cases of this:

Fugaku (Super computer)

The big news in the supercomputing space earlier this week was that Fugaku, a supercomputer built in Japan, is now online.

This supercomputer is built with the Fujitsu A64FX microprocessor (which is based on ARM architecture).

Even though it’s not yet fully online, it already leads the TOP500 with a peak performance of 0.54 exaFLOPS.

Microsoft Surface hardware

Microsoft have been making big moves within the ARM processor space. Here are some notable points:

Amazon AWS EC2 instances powered by ARM

AWS have started ramping up their own processor production with their Graviton chips.

These power newer generation EC2 instances and have allowed AWS to focus on improvements that they know their customers will benefit from.

In a competitive cloud space, this gives AWS an advantage where they can design their own processors to deliver faster performance in key areas like compression, video encoding, machine learning, and more.

Another key advantage becomes clear when you consider the plethora of recent Intel vulnerabilities that have been patched out, resulting in slower processor performance across various providers.

AWS can design their new chips with multi-tenancy and security as first class considerations. (Always-on 256-bit DRAM encryption etc…)

Apple macOS on ARM

Of course, the big news of the last couple of weeks has been Apple announcing they will be moving their Mac hardware over to ARM too.

In addition to longer term benefits they can realize with their own chip designs, this also allows them to unify their mobile and desktop ecosystem.

Soon users will be able to run their iOS and macOS apps on the same hardware.

It remains to be seen how the transition goes, but there is no doubt they’ll be breaking out Rosetta 2 to help support existing software on the new platform and ease the move from Intel to ARM for their customers.

Thoughts

Considering these examples of massive investments into ARM technology, I think there is certainly a big change coming to the CPU landscape in the near future.

For us consumers, more competition means better prices and more options. Cloud pricing will continue to reduce. But how will the software landscape change?

Software houses will certainly need to be on the ball and get their existing apps ready for ARM if they’re not already on it.

What about the future of x86? I personally can’t see the PC gaming market changing very soon. I love my Steam library collection of games. Those won’t work on ARM any time soon. The same goes for a lot of enterprise software.

So with cheaper hardware, decreasing power requirements, processors that are designed for specific workloads, and more competition across the board on the horizon, I have one closing thought.

As long as we don’t pay the price in performance loss for power efficiency, and we don’t end up with a massive chasm for software compatibility, I’ll be happy.

This is post #2 in my effort towards 100DaysToOffload.

Saving £500 on a new Apple Mac Mini with 32GB RAM

mac mini internals

I purchased a new Apple Mac Mini recently and didn’t want to fall victim to Apple’s “RAM Tax”.

I used Apple’s site to configure a Mac Mini with a quad core processor, 32GB RAM, and a 512GB SSD.

I was shocked to see they added £600.00 to the price of a base model with 8GB RAM. They’re effectively charging all of this money for 24GB of extra RAM. This memory is nothing special; it’s pretty standard 2666MHz DDR4 SODIMM modules, the same stuff that is used in generic laptops.

I decided to cut back my order to the base model with 8GB of RAM. I ordered a Crucial 32GB Kit (2 x 16GB DDR4-2666 SODIMM modules running at 1.2 volts with a CAS latency of 19). This kit cost me just over £100.00 online.

The Crucial 2 x 16GB DDR4-2666 SODIMM kit

In total I saved around £500.00 for the trouble of about 30 minutes of work to open up the Mac Mini and replace the RAM modules myself.

The Teardown Process

Use the iFixit Guide

You can use my photos and brief explanations below if you would like to follow the steps I took to replace the RAM, but honestly, you’re better off following iFixit’s excellent guide here.

Follow along Here

If you want to compare or follow along in my format, then read on…

Get a good tool kit with hex screw drivers. I used iFixit’s basic kit.

iFixit basic tool kit

Flip the Mac Mini upside down.

Pry open the back cover carefully with a plastic prying tool.

Undo the 6 x hex screws on the metal plate under the black plastic cover. Be careful to remember the positions of these, as there are two different types: 3 x short screws, and 3 x longer ones.

opening the mac mini

Very carefully, move the cover to the side, revealing the WiFi antenna connector. Unscrew the small hex screw holding the metal tab on the cable. Use a plastic levering tool to carefully pop the antenna connector off.

Next, unscrew 4 x screws that hold the blower fan to the exhaust port. You can see one of the screws in the photo below. Two of the screws are angled at a 45 degree orientation, so carefully undo those, and use tweezers to catch them as they come out.

Carefully lift the blower fan up, and disconnect its cable using a plastic pick or prying tool. The trick is to lift from underneath the back of the cable’s connector and it’ll pop off.

mac mini blower fan removal

Next, disconnect the main power cable at the top right of the photo below. This requires a little bit of wiggling to loosen and lift it as evenly as possible.

Now disconnect the LED cable (two pin). It’s very delicate, so do this as carefully as possible.

There are two main hex screws to remove from the motherboard central area now. You can see them removed below near the middle (where the brass/gold coloured rings are).

With everything disconnected, carefully push the inner motherboard and its tray out, using your thumbs on the fan’s exhaust port. You should ideally position your thumbs on the screw hole areas of the fan exhaust port. It’ll pop out, then just very carefully push it all the way out.

The RAM area is protected by a metal ‘cage’. Unscrew its 4 x hex screws and slowly lift the cage off the RAM retainer clips.

Carefully push the RAM module retainer clips to the side (they have a rubber grommet type covering over them), and the existing SODIMM modules will pop loose.

mac mini SODIMM RAM modules and slots

Remove the old modules and replace with your new ones. Make sure you align the modules in the correct orientation. The slots are keyed, so pay attention to that. Push them down toward the board once aligned and the retainer clips will snap shut and lock them in place.

Replace the RAM ‘cage’ with its 4 x hex screws.

Reverse the steps you took above to insert the motherboard tray back into the chassis and re-attach all the cables and connectors in the correct order.

Make sure you didn’t miss any screws or cables when reconnecting everything.

Finally boot up and enjoy your cheap RAM upgrade.

Raspberry Pi Kubernetes Cluster with OpenFaaS for Serverless Functions (Part 4)

Getting Started with OpenFaaS

This is the fourth post in this series. The focus will be on getting OpenFaaS set up on your Raspberry Pi Kubernetes cluster nice and quickly.

Here are some links to previous posts in this series:

OpenFaaS is an open source project that provides a scalable platform to easily deploy event-driven functions and microservices.

It has great support to run on ARM hardware, which makes it an excellent fit for the Raspberry Pi. It’s worth mentioning that it is of course designed to run across a multitude of different platforms other than the Pi.

Getting Started

You’ll work with a couple of different CLI tools that I chose for the speed at which they can get you up and running:

  • faas-cli – the main CLI for OpenFaaS
  • arkade – a Golang-based CLI tool for quick and easy one-liner installs of various apps / software for Kubernetes

There are other options like Helm or standard YAML files for Kubernetes that you could also use. Find more information about these here.

I have a general purpose admin and routing dedicated Pi in my Raspberry Pi stack that I use for doing admin tasks in my cluster. This made for a great bastion host that I could use to run the following commands:

Install arkade

# Important! Before running these scripts, always inspect the remote content first, especially as they're piped into sh with 'sudo'

# MacOS or Linux
curl -SLsf https://dl.get-arkade.dev/ | sudo sh

# Windows using Bash (e.g. WSL or Git Bash)
curl -SLsf https://dl.get-arkade.dev/ | sh

Install faas-cli

# Important! Before running these scripts, always inspect the remote content first, especially as they're piped into sh with 'sudo'

# MacOS
brew install faas-cli

# Using curl
curl -sL https://cli.openfaas.com | sudo sh

Deploying OpenFaaS

Using arkade, deploy OpenFaaS with:

arkade install openfaas

If you followed my previous articles in this series to set your cluster up, then you’ll have a LoadBalancer service type available via MetalLB. However, in my case (with the above command), I did not deploy a LoadBalancer service, as I already use a single Ingress Controller for external traffic coming into my cluster.

The assumption is that you have an Ingress Controller set up for the remainder of the steps. However, you can get by without one, accessing OpenFaaS via the external gateway NodePort service instead.
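If you do go the NodePort route, you can see which port the external gateway service is listening on with:

# The gateway-external service exposes the OpenFaaS gateway on a NodePort on every node
kubectl get svc -n openfaas gateway-external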

The arkade install will output a command to fetch your password. By default, OpenFaaS ships with Basic Authentication, so next you’ll grab the admin password you can use to access the system.

Grab the generated admin password and login with faas-cli:

PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin

OpenFaaS Gateway Ingress

OpenFaaS will have deployed with two Gateway services in the openfaas namespace.

  • gateway (ClusterIP)
  • gateway-external (NodePort)

Instead of relying on the NodePort service, I chose to create an Ingress Rule to send traffic from my cluster’s Ingress Controller to OpenFaaS’ ClusterIP service (gateway).

You’ll want SSL, so set up a Kubernetes secret to hold your certificate details for the hostname you choose for your ingress rule. Here is a template you can use for your OpenFaaS ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: openfaas
spec:
  rules:
  - host: openfaas.foo.bar
    http:
      paths:
      - backend:
          serviceName: gateway
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - openfaas.foo.bar
    secretName: openfaas.foo.bar
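If you haven’t created that TLS secret yet, kubectl can generate it from your certificate and key files (the file names here are placeholders):

# Create the TLS secret referenced by secretName, in the openfaas namespace
kubectl -n openfaas create secret tls openfaas.foo.bar --cert=./fullchain.pem --key=./privkey.pem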

With the TLS secret in place in the openfaas namespace, deploy the ingress rule with:

kubectl -n openfaas apply -f ./the_above_ingress_rule.yml

You should now be able to access the OpenFaaS UI with something like https://openfaas.foo.bar/ui/

The OpenFaaS Web UI

Creating your own Functions

Life is far more fun on the CLI, so get started with some basics first:

  • faas-cli store list --platform armhf – show some basic functions available for armhf (Pi)
  • faas-cli store deploy figlet --platform armhf – deploy the figlet function that converts text to ASCII representations of that text
  • echo "hai" | faas-cli invoke figlet – pipe the text ‘hai’ into the faas-cli invoke command to invoke the figlet function and get it to generate the equivalent in ASCII text.

Now, create your own function using one of the many templates available. You’ll be using the incubator template for python3 HTTP. This includes a newer function watchdog (more about that below), which gives more control over the HTTP / event lifecycle in your functions.

Grab the python3 HTTP template for armhf and create a new function with it:

# Grab incubator templates for Python, including Python HTTP. Will figure out it needs the armhf ones based on your architecture!

faas template pull https://github.com/openfaas-incubator/python-flask-template
faas-cli new --lang python3-http-armhf your-function-name-here
Success – a new, python3 HTTP function ready to go

A basic file structure gets scaffolded out. It contains a YAML file with configuration about your function. E.g.

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  your-function-name-here:
    lang: python3-http-armhf
    handler: ./your-function-name-here
    image: your-function-name-here:latest

This YAML drives how your function is built and deployed.

A folder with your function handler code is also created alongside the YAML. For Python it contains handler.py and requirements.txt (for Python library requirements).

def handle(event, context):
    # TODO implement
    return {
        "statusCode": 200,
        "body": "Hello from OpenFaaS!"
    }

As you used the newer function templates with the latest OF Watchdog, you get full access to the event and context in your handler without any extra work. Nice!

Build and Deploy your Custom Function

Run the faas up command to build and publish your function. This will do a docker build / tag / push to a registry of your choice and then deploy the function to OpenFaaS. Before running it, update your your-function-name-here.yml file to specify your desired Docker registry/repo/tag and OpenFaaS gateway address.

faas up -f your-function-name-here.yml

Now you’re good to go. Execute your function by doing a GET request to the function URL, using faas invoke, or by using the OpenFaaS UI!
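For example, assuming the ingress hostname used earlier, either of these should hit your new function:

# Invoke via the CLI, piping in a request body
echo "test" | faas-cli invoke your-function-name-here

# Or call the function route on the gateway directly
curl https://openfaas.foo.bar/function/your-function-name-here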

Creating your own OpenFaaS Docker images

You can convert most Docker images to run on OpenFaaS by adding the function watchdog to your image. This is a very small HTTP server written in Golang.

It becomes the entrypoint which forwards HTTP requests to your target process via STDIN or HTTP. The response goes back to the requester by STDOUT or HTTP.

Read and find out more at these URLs:

Hopefully this gave you a good base to get started with OpenFaaS. We covered everything from deployment and configuration, to creating your own custom functions and images. Have fun experimenting!

Multipurpose FreeNAS Server Build

multipurpose freenas server build

There is something magical about building your own infrastructure from scratch. And when I say scratch, I mean using bare metal. This is a run through of my multipurpose FreeNAS server build process.

After scratching the itch recently with my Raspberry Pi Kubernetes Cluster, I got a hankering to do it again, and this build was soon in the works.

Part of my motivation came from my desire to reduce our reliance on cloud technology at home. Don’t get me wrong, I am an advocate for using the cloud where it makes sense. My day job revolves around designing and managing various clients’ cloud infrastructure.

At home, this was more about taking control of our own data.

Skip ahead to the specifications part if you would like to know right away what hardware I used.

The initial hardware
Note: I got this Gigabyte B450 motherboard, but soon found out it did not support ECC.

Final specifications:

These are the final specifications I decided on. Scroll down to see the details about each area.

The Goals

The final home server build would need to meet many requirements:

  • It should provide a resilient, large shared storage pool for network file storage across multiple Windows PCs at home.
  • Support NFS storage for sharing persistent volumes to my Raspberry Pi Kubernetes Cluster.
  • It should be able to run Plex for home and remote media streaming.
  • It must be able to run Nextcloud for home and remote mobile file storage.
  • Run services in Virtual Machines, Jails, or Docker containers. For example, I like to run Pi-hole as a DNS server for all my home equipment and devices.

The Decision Process

I started out my search looking at two products. Unraid and FreeNAS.

I have had experience running FreeNAS in the past for home lab setups. I never really used it seriously with the goal of making it reliable though.

This time around, all my files would be at stake, so I did a fair bit of research into the features and offerings of both products.

Unraid performed quite well for me. But what pushed me away from it was the fact that it is a paid-for, closed-source, commercial product.

Unraid does make it super easy to bundle storage together and expand that storage in future if need be. However, FreeNAS’ use of ZFS and its various other features were what won me over.

The Build Details

Having settled on FreeNAS, I went about researching which hardware I would need. My goal here was to not spend too much money, but at the same time not cheap out and compromise on reliability.

CPU, Motherboard, RAM

ECC (Error Checking and Correction) RAM is very important for ZFS, so this is basically what my build hinged on.

I found that AMD Ryzen CPUs support ECC, and so do most Ryzen compatible motherboards.

Importantly, in my research I found that Ryzen APU CPUs do not support ECC. Make sure you do not get an APU if ECC is important to you.

Additionally, many others report much better stability running FreeNAS on AMD Ryzen Generation 2 chips and above. With this in mind, I decided I would use at least an AMD Ryzen 2xxx CPU.

On the ECC topic, I only found evidence of single bit error correction working on AMD Ryzen systems.

I also made an initial mistake here in my build buying a Gigabyte B450M DS3H motherboard. The product specs seem to indicate that it supports ECC, and so did a review I found on Anandtech. In reality the Gigabyte board does not support the ECC feature. Rather it ‘supports’ ECC memory by allowing the system to boot with ECC RAM installed, but you don’t get the actual error checking and correction!

I figured this out after booting the system with Fedora Rawhide as well as a couple of Ubuntu Server releases and running the edac-utils package. In all cases edac-utils failed to find ECC support or any memory controller.

Checking ECC support with edac-utils
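If you want to run the same check yourself, it’s quick to do from any live Linux environment. On Ubuntu the edac-utils package provides the edac-util tool (package and tool names can vary slightly by distro):

# Report the memory controllers EDAC can see, plus any correctable/uncorrectable error counts
sudo apt install edac-utils
edac-util -v

# The kernel log is another quick indicator of whether EDAC/ECC was initialised
dmesg | grep -i EDAC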

The Asus board I settled on supports ECC and edac-utils confirmed this.

The motherboard also has an excellent EFI BIOS. I found it easy to get to the ECC and Virtualization settings.

the Asus Prime X470-Pro EFI BIOS

Storage

I used 4 x Western Digital 3TB Red hard drives for the RAIDZ1 main storage pool.

Western Digital 3TB Red hard drives

The SSD storage pool consists of 2 x Crucial MX500 250GB SSD SATA drives in a mirror configuration. This configuration is for running Virtual Machines and the NFS storage for my Kubernetes cluster.

Graphics Card

Ruling out APUs also meant I would need a discrete graphics card for console / direct access, and to install the OS initially. I settled on a cheap PCI Express graphics card off eBay for this.

A cheap AMD Radeon HD 6450 1GB DVI DisplayPort PCI-Express Graphics Card I used for the FreeNAS build.

Having chosen a beefy six-core Ryzen 2600 CPU, I decided I didn’t need to get a fancy graphics card for live media encoding (Plex can take advantage of one, though). If media encoding speed and efficiency are important to you, then consider something like an NVIDIA or AMD card.

For me, the six core CPU does a fine job at encoding media for home and remote streaming over Plex.

Network

I wanted to use this system to serve file storage for my home PCs and equipment. Besides this, I also wanted to export and share storage to my Raspberry Pi Kubernetes cluster, which runs on its own dedicated network.

The simple solution for me here was multihoming the server onto the two networks. So I would need two network interface cards, with at least 1Gbit/s capability.

The motherboard already has an Intel NIC onboard, so I added two more ports with an Intel Pro Dual Port Gigabit PCI Express x4 card.

Intel dual port NIC

Configuration Highlights

I’ll detail the highlights of my configuration for each service the multipurpose FreeNAS Server build hosts.

Main System Setup

The boot device is a 120GB M.2 NVMe SSD. I installed FreeNAS 11.3 using a bootable USB drive.

FreeNAS Configuration

I created two Storage Pools. Both are encrypted. Besides the obvious protection encryption provides, this also makes it easier to recycle drives later on if I need to.

FreeNAS storage pool configuration
  • Storage Pool 1
    • 4 x Western Digital Red 3TB drives, configured with RAIDZ1. (1 disk’s worth of storage is effectively lost for parity, giving roughly 8-9 TB of usable space).
    • Deduplication turned off
    • Compression enabled
  • Storage Pool 2
    • 2 x Crucial MX500 250GB SSD drives, configured in a Mirror (1 disk mirrors the other, providing a backup if one fails).
    • Deduplication turned off
    • Compression enabled

The network is set to use the onboard NIC to connect to my main home LAN. One of the ports on the Intel dual port NIC connects to my Raspberry Pi Kubernetes Cluster network and is assigned a static IP address on that network.

Windows Shares

My home network’s storage shares are simple Windows SMB Shares.

I created a dedicated user in FreeNAS which I configured in the SMB share configuration ACLs to give access.

Windows machines then simply mount the network location / path as mapped drives.

I also enabled Shadow Copies, which FreeNAS supports so that Windows clients can use them.

FreeNAS Windows SMB share

Pi-hole Configuration

I set up a dedicated Ubuntu Server 18.04 LTS virtual machine using FreeNAS’ built-in VM support (bhyve). Before doing this, I enabled virtualization support in the motherboard BIOS settings (SVM Mode = Enabled).

I used the standard installation method for Pi-Hole. I made sure the VM was using a static IP address and was bridged to my home network. Then I reconfigured my home DHCP server to dish out the Pi-hole’s IP address as the primary DNS server to all clients.

For the DNS upstream servers that Pi-hole uses, I chose to use the Quad9 (filtered, DNSSEC) ones, and enabled DNSSEC.

pi-hole upstream DNS configuration with DNSSEC

NextCloud

NextCloud has a readily available plugin for FreeNAS. However, out of the box you get no SSL. You’ll need to set up your networking at home to allow remote access, and you’ll also need to get an SSL certificate. I used Let’s Encrypt.

I detailed my full process in this blog post.

Plex

Plex was a straightforward setup. Simply install the Plex FreeNAS plugin from the main Plugins page and follow the wizard. It will install and configure a jail to run Plex.

To mount your media, you need to stop the Plex jail and edit it to add your media location on your storage. Here is an example of my mount point; it maps the directory where I keep all my media into the Plex jail’s filesystem.

Plex jail mount point

NFS Storage for Kubernetes

Lastly, I setup an NFS share / export for my Raspberry Pi Kubernetes Cluster to use for Persistent Volumes to attach to pods.

NFS shares for Kubernetes in FreeNAS

The key points here were that I allowed the two network ranges I wanted to have access to this storage from. (10.0.0.0/8 is my Kubernetes cluster network). I also configured a Mapall user of ‘root’, which allows the storage to be writeable when mounted by pods/containers in Kubernetes. (Or any other clients that mount this storage).

I was happy with this level of access for this particular NFS storage share from these two networks.

Next, I installed the NFS external-storage provisioner for Kubernetes on my Pi cluster. I needed to use the ARM-specific deployment manifest, as Pis of course have ARM CPUs.

I modified the deployment manifest to point it to my FreeNAS machine’s IP address and NFS share path.

The kubernetes nfs client provisioner manifest configured for NFS storage provisioning.

With that done, pods can now request persistent storage with a PersistentVolumeClaim (PVC). The NFS client provisioner will create a directory for each claim on the NFS export and mount that into your pod.
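As a quick sanity check (a minimal sketch; the claim name is arbitrary), you can create a small test claim against the nfs-storage class straight from the command line and watch it bind:

# Create a 1Gi test claim against the nfs-storage storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should go to Bound, and a matching directory should appear on the FreeNAS NFS export
kubectl get pvc test-nfs-claim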

Final Thoughts

So far the multipurpose FreeNAS server build has been very stable. It has been happily serving our home media streaming, storage, and shared storage needs.

It’s also providing persistent storage for my Kubernetes lab environment, which is great, as I prefer not to use the not-so-durable microSD cards on the Raspberry Pis themselves for storage.

The disk configuration size seems fine for our needs. At the moment we’re only using ~20% of the total storage, so there is plenty of room to grow.

I’m also happy with the ability to run custom VMs or Jails for additional services, though I might need to add another 16GB of ECC RAM in the future to support more as ZFS does well with plenty of memory.