The other option is to use the deployment manifest resources with the kubectl apply command.
There are environment variables that you can use to point it to multiple clusters and tweak other bits of the configuration.
The main variable you may wish to tweak is CLUSTERS, which takes a comma-separated list of Kubernetes API server URLs. Use this to populate the dashboard view with all the clusters you have access to.
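For example, in the deployment manifest’s container spec (the URLs here are placeholders for your own API servers):

env:
  - name: CLUSTERS
    value: "https://cluster-a.example.net:6443,https://cluster-b.example.net:6443"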
The tool only requires read-only access to the cluster, so keep this in mind if you’re deploying it manually.
If you’re using the Helm chart, specify rbac.create = true to create the read-only ClusterRole and ClusterRoleBinding automatically.
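For example (the release and chart names here are placeholders):

helm install <release-name> <chart-path> --set rbac.create=true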
There are plenty of nifty features for a simple operational view. You can filter, move the cluster sections around, and change themes.
It’s even got an old-school CRT-style theme you can enable, though I’m not sure the flicker and scan-line effects are my cup of tea!
There is something magical about building your own infrastructure from scratch. And when I say scratch, I mean using bare metal. This is a run through of my multipurpose FreeNAS server build process.
After scratching the itch recently with my Raspberry Pi Kubernetes Cluster, I got a hankering to do it again, and this build was soon in the works.
Part of my motivation came from my desire to reduce our reliance on cloud technology at home. Don’t get me wrong, I am an advocate for using the cloud where it makes sense. My day job revolves around designing and managing various clients’ cloud infrastructure.
At home, this was more about taking control of our own data.
Skip ahead to the juicy specifications part below if you’d like to know right away what hardware I used.
These are the final specifications I decided on. Scroll down to see the details about each area.
I also made an initial mistake in this build by buying a Gigabyte B450M DS3H motherboard. The product specs seem to indicate that it supports ECC, and so did a review I found on AnandTech. In reality the Gigabyte board does not support the ECC feature; rather, it ‘supports’ ECC memory by allowing the system to boot with ECC RAM installed, but you don’t get the actual error checking and correction!
I figured this out after booting it up with Fedora Rawhide as well as a couple of Ubuntu Server releases and running the edac-utils package. In all cases edac-utils failed to find ECC support or any memory controller.
The Asus board I settled on supports ECC and edac-utils confirmed this.
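To check for yourself, run edac-util (from the edac-utils package) and look for a memory controller; the exact output wording varies by distribution:

sudo edac-util -v
# no ECC support: edac-util: Error: No memory controller data found.
# working ECC: reports mc0 with correctable/uncorrectable error counters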
The motherboard also has an excellent EFI BIOS. I found it easy to get to the ECC and Virtualization settings.
I used 4 x Western Digital 3TB Red hard drives for the RAIDZ1 main storage pool.
The SSD storage pool consists of 2 x Crucial MX500 250GB SSD SATA drives in a mirror configuration. This configuration is for running Virtual Machines and the NFS storage for my Kubernetes cluster.
Ruling out APUs also meant I would need a discrete graphics card for console/direct access, and to install the OS initially. I settled on a cheap PCI Express graphics card from eBay for this.
Having chosen a beefy six-core Ryzen 2600 CPU, I decided I didn’t need a fancy graphics card for live media encoding (Plex handles this well enough on the CPU). If media encoding speed and efficiency are important to you, consider something like an NVIDIA or AMD card.
For me, the six core CPU does a fine job at encoding media for home and remote streaming over Plex.
I wanted to use this system to serve file storage for my home PCs and equipment. Besides this, I also wanted to export and share storage to my Raspberry Pi Kubernetes cluster, which runs on its own dedicated network.
The simple solution for me here was to multihome the server onto the two networks, so I would need two network interfaces with at least 1Gbit/s capability.
The motherboard already has an Intel NIC onboard, so I added two more ports with an Intel Pro Dual Port Gigabit PCI Express x4 card.
I’ll detail the highlights of my configuration for each service the multipurpose FreeNAS Server build hosts.
Main System Setup
The boot device is the 120GB M.2 NVMe SSD. I installed FreeNAS 11.3 using a bootable USB drive.
I created two Storage Pools. Both are encrypted. Besides the obvious protection encryption provides, this also makes it easier to recycle drives later on if I need to.
Storage Pool 1
4 x Western Digital Red 3TB drives, configured with RAIDZ1. (1 disk’s worth of storage is effectively lost for parity, giving roughly 8-9 TB of usable space).
Deduplication turned off
Storage Pool 2
2 x Crucial MX500 250GB SSD drives, configured in a Mirror (1 disk mirrors the other, providing a backup if one fails).
Deduplication turned off
The network is set to use the onboard NIC to connect to my main home LAN. One of the ports on the Intel dual port NIC connects to my Raspberry Pi Kubernetes Cluster network and is assigned a static IP address on that network.
My home network’s storage shares are simple Windows SMB Shares.
I created a dedicated user in FreeNAS and gave it access via the ACLs in the SMB share configuration.
Windows machines then simply mount the network location / path as mapped drives.
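For example, from a Windows command prompt (the hostname and share name here are placeholders):

net use Z: \\freenas\media /persistent:yes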
I also enabled Shadow Copies, which FreeNAS supports so that Windows clients can use them to restore previous versions of files.
I set up a dedicated Ubuntu Server 18.04 LTS virtual machine using FreeNAS’ built-in VM support (bhyve). Before doing this, I enabled virtualization support in the motherboard BIOS settings (SVM Mode = Enabled).
I used the standard installation method for Pi-hole. I made sure the VM had a static IP address and was bridged to my home network, then reconfigured my home DHCP server to dish out the Pi-hole’s IP address as the primary DNS server to all clients.
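That standard method is Pi-hole’s official one-line installer, run from inside the VM:

curl -sSL https://install.pi-hole.net | bash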
For the DNS upstream servers that Pi-hole uses, I chose to use the Quad9 (filtered, DNSSEC) ones, and enabled DNSSEC.
Nextcloud has a readily available plugin for FreeNAS. However, out of the box you get no SSL. You’ll need to set up your networking at home to allow remote access, and you’ll also need to get an SSL certificate; I used Let’s Encrypt.
Plex was a simple setup. Install the Plex FreeNAS plugin from the main Plugins page and follow the wizard; it will install and configure a jail to run Plex.
To mount your media, you need to stop the Plex jail and edit it to add your media location on your storage. Here is an example of my mount point; it mounts the media directory I use to keep all my media into the Plex jail’s filesystem.
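You can add the mount by editing the jail in the FreeNAS UI, or from a shell with iocage. A sketch, assuming a jail named plex and media stored at /mnt/pool1/media (both names are assumptions; adjust to your own pool and jail):

iocage stop plex
# the destination path is relative to the jail's root and must already exist inside it
iocage fstab -a plex /mnt/pool1/media /media nullfs ro 0 0
iocage start plex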
NFS Storage for Kubernetes
Lastly, I set up an NFS share/export for my Raspberry Pi Kubernetes Cluster to use for Persistent Volumes to attach to pods.
The key points here: I allowed access from the two network ranges that need this storage (10.0.0.0/8 is my Kubernetes cluster network), and I configured a Mapall user of ‘root’, which allows the storage to be writeable when mounted by pods/containers in Kubernetes (or any other clients that mount this storage).
I was happy with this level of access for this particular NFS storage share from these two networks.
I modified the deployment manifest to point it to my FreeNAS machine’s IP address and NFS share path.
With that done, pods can now request persistent storage with a Persistent Volume Claim (PVC). The NFS client provisioner will create a directory for the pod (named after the pod itself) on the NFS mount and mount that to your pod.
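As a sketch, a minimal claim looks like this (the storageClassName is an assumption; check what your provisioner’s manifest actually created with kubectl get storageclass):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi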
So far the multipurpose FreeNAS server build has been very stable. It has been happily serving our home media streaming, storage, and shared storage needs.
It’s also providing persistent storage for my Kubernetes lab environment, which is great, as I prefer not to use the not-so-durable microSD cards on the Raspberry Pis themselves for storage.
The disk configuration size seems fine for our needs. At the moment we’re only using ~20% of the total storage, so there is plenty of room to grow.
I’m also happy with the ability to run custom VMs or jails for additional services, though I might need to add another 16GB of ECC RAM in the future to support more, since ZFS does well with plenty of memory.
The FreeNAS Nextcloud plugin installation works great with automatic configuration thanks to a recent pull request. However, you don’t get SSL enabled by default, which is critical for a system exposed to the internet.
In this post you’ll see how to:
Install the Nextcloud plugin in a FreeNAS BSD jail
Add an extra NAT port for SSL to the jail
Configure NGINX inside the jail by adding a customised configuration with SSL enabled
Apply a free SSL certificate using Let’s Encrypt and DNS-01 challenge validation
Look at some options for setting up home networking for public access
Start off by installing the Nextcloud plugin in a jail. Choose NAT for the networking mode; it defaults to port 8282:80 (HTTP).
Stop the jail once it’s running and edit it. Add another NAT rule to point 8443 to 443 for SSL.
The reason for selecting port 8443 for Nextcloud is that the FreeNAS web UI already listens on port 443 for SSL.
An alternative could be to use DHCP instead of NAT for the jail. I chose NAT for my setup as I prefer using one internal IP address for everything I run on the FreeNAS server.
Shell into the Nextcloud jail, and rename the default nginx configuration.
NGINX will load all .conf files in this directory. Hence the reason you’ll create a new configuration for your SSL setup here.
mv /usr/local/etc/nginx/conf.d/nextcloud.conf /usr/local/etc/nginx/conf.d/nextcloud-ssl.conf
Populate it with the contents of the gist below, but replace the server_name, ssl_certificate, and ssl_certificate_key values with your own hostname and certificate paths.
Generate a free SSL certificate with Let’s Encrypt
To configure the Nextcloud plugin on FreeNAS with SSL, you don’t need to break the bank on certificate costs from traditional CAs. Let’s Encrypt is free, but you’ll need to renew your certificate every three months.
DNS-01 challenge certificate generation is a great way to get Let’s Encrypt SSL certificates without a public web server.
It entails creating a TXT record on the domain you own, with its value set to a code that certbot gives you during the request process.
Install certbot if you don’t already have it installed. On a Debian-based system:
sudo apt-get install certbot
Request a certificate for your desired hostname using certbot with dns as the preferred challenge.
sudo certbot -d yournextcloud.example.net --manual --preferred-challenges dns certonly
Follow the prompts until you receive a code to set up your TXT record with. Go to your DNS provider’s control panel and create the record, using the code you’re given as its value.
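For DNS-01, the record name is always _acme-challenge under the hostname you requested, so the record you create looks something like this:

_acme-challenge.yournextcloud.example.net.  300  IN  TXT  "<code from certbot>"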
After creating the record, finish the certificate request. Let’s Encrypt will confirm the DNS TXT record and issue your certificate. You’ll get a chain file called fullchain.pem, along with a private key file called privkey.pem.
Upload the SSL certificate files to Nextcloud
Upload both to your Nextcloud Jail. Use SCP to copy them up, renaming them as follows:
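As a sketch, assuming you can SSH into the jail as root and that your nextcloud-ssl.conf points at /usr/local/etc/ssl/ inside the jail (both assumptions; use whatever paths you set in the config):

scp fullchain.pem root@<jail-ip>:/usr/local/etc/ssl/fullchain.pem
scp privkey.pem root@<jail-ip>:/usr/local/etc/ssl/privkey.pem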
In this post I’ll explain how I created an AWS EC2 Spot Instance Termination Simulator to run on any EC2 instance.
It can signal typical monitoring scripts, applications, interrupt handlers, and more, just like a real Spot Termination signal would do.
Looking around, there aren’t really any straightforward ways of testing this behaviour.
The obvious way is to set your spot instance bid price right at the threshold of the current market price, so that a slight increase in demand causes your spot instance(s) to be terminated.
The problem with that approach is of course that you can’t easily predict when the price will move, or by how much.
How EC2 Spot Instance Termination warnings work
When your spot instance bid price is surpassed by the current market price, a Termination Notice becomes available on one of the instance’s metadata endpoints. For example, http://169.254.169.254/latest/meta-data/spot/termination-time.
There is one other, newer endpoint at http://169.254.169.254/latest/meta-data/spot/instance-action that could also be used.
At this point the endpoint returns an HTTP 200 response along with a timestamp of when a shutdown signal will be sent to your instance’s OS. The newer endpoint returns a JSON string containing an action and a time. Outside of a pending termination, both endpoints simply return a 404 not found response.
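You can poll either endpoint like any other metadata path; the documented response shapes look like this:

curl -s http://169.254.169.254/latest/meta-data/spot/termination-time
# e.g. 2015-01-05T18:02:00Z while a termination is pending; 404 otherwise

curl -s http://169.254.169.254/latest/meta-data/spot/instance-action
# e.g. {"action": "terminate", "time": "2015-01-05T18:02:00Z"} while pending; 404 otherwise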
The tools out there tend to monitor this endpoint in order to warn and take action when a Spot Termination notice is received.
The two minute warning feature was added by Amazon back in 2015, announced in this blog post.
The service is a NodePort service, exposing the port on the host it runs on.
List the service and the pod (and find the kubernetes node the pod is running on):
kubectl get svc ec2-spot-termination-simulator
kubectl get pod spot-term-simulator-xxxxxx -o wide
Take note of the NodePort the service is listening on for the Kubernetes node. E.g. port 30626.
Alternatively, if you want to run the simulator outside of Kubernetes, you can publish the same port directly with Docker:

docker run -p 30626:80 -e PORT=80 shoganator/ec2-spot-termination-simulator
Proxying the EC2 metadata service
Perform some trickery to proxy traffic destined to 169.254.169.254 on port 80 to localhost (where the container runs). This is so that the fake service can take the place of the real one.
SSH onto the Kubernetes node and run:
sudo ifconfig lo:0 169.254.169.254 up
sudo socat TCP4-LISTEN:80,fork TCP4:127.0.0.1:30626
The first command creates an alias for the localhost interface at 169.254.169.254, effectively taking over the EC2 metadata service address and sending that traffic to 127.0.0.1 instead.
The second forwards TCP port 80 traffic (usually destined for the EC2 metadata service) to 127.0.0.1 on NodePort 30626, where the ec2-spot-termination-simulator pod is exposed on this host. Substitute the correct port in your case.
As a result, requests to the metadata endpoint should return a 200 OK response with a timestamp from the fake service, just as the real one would.
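From the node itself you can confirm this:

curl -i http://169.254.169.254/latest/meta-data/spot/termination-time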
Looking at how the kube-spot-termination-notice-handler service works specifically:
This service runs as a DaemonSet, meaning one instance per host. The instance on the node where you set up the simulator should immediately drain that node. The instance won’t actually be terminated, as this was of course just a simulated termination.
If you’re not running on Kubernetes and are using a different spot termination handler, don’t worry. The system you’re using to monitor the EC2 instance metadata endpoint should still take action at this point.
The proxied web service is now returning a legitimate-looking termination-time notice on http://169.254.169.254/latest/meta-data/spot/termination-time.
This is the third post in this series, and the focus will be on completing the Raspberry Pi Kubernetes cluster by adding a worker node. You’ll also set up MetalLB, a software-based load-balancer implementation designed for bare-metal Kubernetes clusters.
Here are some handy links to other parts in this blog post series:
By now you should have 1 x Pi running as the dedicated Pi network router, DHCP, DNS and jumpbox, as well as 1 x Pi running as the cluster Master Node.
Of course it’s always best to have more than one master node, but as this is just an experimental/fun setup, one is just fine. The same goes for worker nodes; in my case I added two workers, each a Pi 4 with 4GB RAM.
Joining a Worker Node to the Cluster
Start off by completing the setup steps as per the Common Setup section in Part 2 with your new Pi.
Once your new Worker Pi is ready and on the network with its own static DHCP lease, join it to the cluster (currently only the Master Node) by using the kubeadm join command you noted down when you first initialised your cluster in Part 2.
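If you no longer have it handy, the command takes this general shape (the address, token, and hash below are placeholders; running kubeadm token create --print-join-command on the master will regenerate the real one):

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>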
After a few moments, SSH back to your master node and run kubectl get nodes. You should see the new worker node added, and after it pulls down and starts the weave net CNI image its status will change to Ready.
Setting up MetalLB
The problem with a ‘bare metal’ Kubernetes cluster (or any self-installed, manually configured k8s cluster for that matter) is that it doesn’t have any load-balancer implementation to handle LoadBalancer service types.
When you run Kubernetes on top of a cloud hosting platform like AWS or Azure, LoadBalancer services are backed natively by that platform’s own load-balancer offerings, e.g. classic or application load balancers with AWS.
However, with a Raspberry Pi cluster, you don’t have anything fancy like that to provide LoadBalancer services for your applications you run.
MetalLB provides a software based implementation that can work on a Pi cluster.
Install version 0.8.3 of MetalLB by applying the following manifest with kubectl:
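At the time of writing, the release manifest lived in the MetalLB repository (check the MetalLB docs if this URL has since moved):

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.8.3/manifests/metallb.yaml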
Update the addresses section to use whichever range of IP addresses you would like to assign for use with MetalLB. Note that I only used 10 addresses for mine, as shown below.
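A sketch of the Layer 2 ConfigMap, saved as metallb-config.yaml; the 10-address range below includes 10.23.220.88, which you’ll see again later in this post, so substitute a free range from your own network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.23.220.80-10.23.220.89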
Apply the configuration:
kubectl apply -f ./metallb-config.yaml
Setup Helm in the Pi Cluster
First of all you’ll need an ARM-compatible version of Helm. Download it and move it to a directory in your system PATH. I’m using my Kubernetes master node as a convenient location to run kubectl and helm commands from, so I did this on my master node.
Note: it uses a custom image from jessestuart/tiller (as this is ARM-compatible). The command also replaces the older API spec for the Tiller deployment with the apps/v1 version, as the older beta one was removed in Kubernetes 1.16.
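The widely shared workaround for Helm 2 on Kubernetes 1.16 looked like the following; treat it as a sketch rather than the exact command I ran:

kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --tiller-image jessestuart/tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -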
Deploy an Ingress Controller with Helm
Now that you have something to fulfill LoadBalancer service types (MetalLB), and you have Helm configured, you can deploy an NGINX Ingress Controller with a LoadBalancer service type for your Pi cluster.
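With Helm 2 and the old stable chart repository, the deployment was along these lines (the release name nginx-ingress is an assumption, chosen to match the deployment name patched below):

helm install stable/nginx-ingress --name nginx-ingress --set controller.service.type=LoadBalancer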
If you list out your new ingress controller pods, though, you might find a problem with them running: they’ll likely be trying to use x86-architecture images instead of ARM ones. I manually patched my NGINX Ingress Controller deployment to point it at an ARM-compatible Docker image:
kubectl set image deployment/nginx-ingress-controller nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:0.26.1
After a few moments the new pods should now show as running:
Now to test everything, you can grab the external IP that should have been assigned to your NGINX ingress controller LoadBalancer service and test the default NGINX backend HTTP endpoint that returns a simple 404 message.
List the service and get the EXTERNAL-IP (this should sit in the range you configured MetalLB with):
kubectl get service --selector=app=nginx-ingress
Curl the NGINX Ingress Controller LoadBalancer service endpoint with a simple GET request:
curl -i http://10.23.220.88
You’ll see the default 404 not found response, which indicates that the controller did indeed receive your request from the LoadBalancer service and directed it appropriately down to the default backend pod.
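The response looks roughly like this (headers trimmed):

HTTP/1.1 404 Not Found
Content-Type: text/plain; charset=utf-8

default backend - 404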
At this point you’ve configured:
A Raspberry Pi Kubernetes network Router / DHCP / DNS server / jumpbox
Kubernetes master node running the master components for the cluster
Kubernetes worker nodes
MetalLB load-balancer implementation for your cluster
Helm client and Tiller agent for ARM in your cluster
NGINX ingress controller
Recall that in Part 1 you set up some iptables rules on the Router Pi as an optional step?
These PREROUTING and POSTROUTING rules forward packets destined for the Router Pi’s external IP address to a specific IP address in the Kubernetes network. In fact, the example I provided was what I used to forward traffic from the Pi router all the way to my NGINX Ingress Controller load balancer service.
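For reference, the shape of those rules is roughly this (eth0/eth1 are assumptions for the router’s external and cluster-facing interfaces):

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.23.220.88:80
sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE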
Revisit this section if you’d like to achieve something similar (access services inside your cluster from outside the network), and replace the 10.23.220.88 IP address in the example I provided with the IP address of your own ingress controller service backed by MetalLB in your cluster.
Also remember that at this point you can add as many worker nodes to the cluster as you like using the kubeadm join command used earlier.