SpotiPod – Spotify Streaming Device from an iPod Classic


I recently came across the sPot: Spotify in a 4th-gen iPod (2004) project on hackaday.io by Guy Dupont. This post is my go at building Guy’s project from the ground up – the SpotiPod.

Here is my version, up and running, albeit with some hardware leaking out the side for now…

The main components required in terms of hardware are:

  • iPod Classic (4th gen) – at least the device case and clickwheel components
  • A Raspberry Pi Zero W
  • 2″ LCD display (see note further on about an alternative)
  • 3.7V 1000mAh LiPo battery
  • Adafruit boost module (to boost 3.7V to the 5V required by the Pi, LCD, etc.)
  • Adafruit USB charge controller module

Once it’s all connected and configured, you’ll be able to load up your own Spotify playlists and libraries, browse them, and play them all from your iPod Classic device over a Bluetooth speaker or remote system.

The SpotiPod build almost complete.
Some wire reduction still required before I can completely close it up.

SpotiPod High-level Build

I won’t be getting into the low-level parts of the build, so for specific details I would recommend viewing Guy’s project log and branching off the posts there.

Initial Raspberry Pi Zero W setup with 2″ LCD display

I started off loading Raspbian Lite OS onto an 8GB microSD card and booting up a barebones Pi Zero W. The W version is important because it has WiFi and Bluetooth on the board.

My first task was to configure SSH to start automatically on boot, which is useful when there’s no display to work with. I also added some of the required software components to begin testing and messing around with, including the Redis server, compiling the C++ application that Guy wrote to interface with the iPod’s clickwheel, and setting up some of the X11 components for a basic UI.
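For anyone following along: on Raspbian you can enable SSH before first boot by creating an empty file named ssh on the boot partition of the freshly flashed microSD card (the mount path below is a placeholder for wherever that partition mounts on your machine):

# with the SD card's boot partition mounted, e.g. at /media/boot
touch /media/boot/ssh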

Soldering up the connection for the 2″ LCD display was my first hardware connection task. My soldering iron might come out once a year, so I’m a bit of a novice in this area, but I managed the task on my first go.

Connecting the LCD to Raspberry Pi Zero via Composite Connector
First LCD test successful

I posted more details here on the 2″ LCD connection and configuration in software.

Connecting the Charge Controller and Voltage Boost Modules

Next up I focused on getting the power delivery working. The goal was to be able to charge the LiPo battery via USB and have the circuit supply 5V to all SpotiPod components where required.

I familiarised myself with the Adafruit module documentation and pinout diagrams before connecting the circuit up.

  • Adafruit Powerboost 1000 module
  • Adafruit Charge Controller

Here is a useful diagram to follow, posted by Kakoub on the hackaday.io project page, re-hosted here in case the imgur upload ever disappears:


So after getting the power delivery connections soldered in place, and precariously placing all the components away from each other to prevent shorts, here is where I was at:

Powerboost and Charge modules connected

Testing the Click Wheel and Software

The click wheel connectivity is made easier with an 8-pin FPC cable breakout board. I hooked this up next, soldering the four wires required for power and data.

The click wheel ribbon connector could then snap into the breakout board’s ribbon connector. This is by far the most delicate part of the build in my opinion; the ribbon cable is super thin and fragile.

Connecting the 8 pin FPC breakout board. My soldering skills slowly coming back with a bit of practice…

Testing the interface was a case of compiling the clickwheel program with gcc over an SSH connection, then executing it with everything connected.

If wired up correctly, the program will output touch and click data to stdout.
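As a rough sketch of that step (the source file name here is illustrative; use whatever Guy’s repository provides):

# compile the clickwheel interface program, then run it
gcc clickwheel.c -o clickwheel
./clickwheel   # may need sudo depending on how GPIO access is configured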

iPod Classic click wheel testing over SSH

Gutting the old iPod Classic

I managed to snag an old iPod Classic 4th gen off eBay for about £20. It wasn’t working, but had the two bits I needed – the chassis, and the clickwheel.

Breaking out my trusty iFixit essentials toolkit, I set about opening it up to remove the unnecessary components.

The method I found to work was to pry it open on the left and right edges using the pry tool. Once you can get into one of the edges, slide the tool around, and things get easier.

Insulating the case and components

With most of the electronics connected, I began insulating things with polyimide tape. Most importantly, the metal iPod case. I put down about three layers of tape and tried to cover all parts as best I could.

Insulating the iPod classic shell for the SpotiPod build.

Installing into the iPod Classic Case

This is the tricky part. It’s quite difficult to squeeze all the SpotiPod hardware in.

I started out by strengthening all my soldering connections with a bit of hot glue.

Adding hot glue to the soldered connections

After a bit of arranging, squeezing, and coercing, everything fits… Mostly!

Things are still not perfect though. I need to reduce my wire lengths before I can get the case to fully close.

For now though, everything works and I have a fully functional SpotiPod!

Tips and Tricks

I’ve put together a list of things that might help if you do this yourself. These are bits that I recall getting snagged on:

  • Make sure you set up all the X11 and software dependencies correctly. Getting Openbox and the frontend application to launch on startup can be tricky, and this is crucial. Pay attention to your /etc/X11/xinit/xinitrc and /etc/xdg/openbox/autostart configurations (see the sketch after this list).
  • You don’t have to use the more expensive Adafruit composite LCD display. Ricardo’s build at RSFlightronics uses a much cheaper LCD and some creative approaches to get display output working.
  • Watch out for the click wheel ribbon orientation when you connect it to the breakout board!
  • Use thin and short length wires for connections where possible. Not too short though as it is useful to be able to open the device up and put the two halves side-by-side.
  • Make sure you have a Spotify Premium subscription. I can’t remember exactly which part needs it, but I’m sure that creating your own app to get your client and secret keys, or some of the scopes required for the app, will only work on Premium. (It might even have been Spotify Connect.)
  • You’ll need to configure your own Spotify app using the Spotify Developer portal. Keep your client and secret keys to yourself, and remember to set up environment variables with these that the Openbox session can access.
  • The frontend/UI application has a hardcoded reference to the Spotify Connect device as “Spotifypod”. Keep things simple by setting your raspotify configuration to use this name too; otherwise you’ll need to update the code as well.
  • If you’re struggling to get the software side working at first, it can really help to set up VNC while you debug things. This gives you a desktop environment on the Pi Zero where you can execute scripts or programs in an X session, just as Openbox would.
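To illustrate the autostart point above, here is a minimal sketch of an /etc/xdg/openbox/autostart script. The file path is the standard Openbox convention; the variable names and frontend path are placeholders for whatever your own setup uses:

# /etc/xdg/openbox/autostart
# export the Spotify app credentials so the frontend session can read them
export SPOTIFY_CLIENT_ID="your-client-id"          # placeholder
export SPOTIFY_CLIENT_SECRET="your-client-secret"  # placeholder

# launch the frontend UI in the background (placeholder path)
python3 /home/pi/spotipod/frontend.py &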

Thanks again to Guy Dupont for his excellent SpotiPod project and idea. Putting this all together really makes for a fun and rewarding hardware/software hacking experience.

Tiny 2″ TFT Composite Video on Raspberry Pi Zero

This weekend I wanted to test composite video on the Raspberry Pi Zero. I had a Raspberry Pi Zero W and an NTSC/PAL (Television) TFT Display – 2.0″ Diagonal from Adafruit.

This display is ridiculously small. It’s quite something to boot up Raspbian with the PIXEL desktop environment on a Pi Zero with this little 2″ display.

Hardware Wire Up

The display only needs 4 connections.

  • Power (I’m using 5V from an Adafruit PowerBoost 1000 Basic module)
  • Ground (connected to pin 14 / ground on the Pi Zero and to negative on the boost module)
  • Video signal (yellow) from the TFT display board to the TV pin on the Pi Zero
  • White from the TFT display board to the other pin, next to the TV pin

Connecting composite video on the Raspberry Pi Zero using the TV pin

Here’s what everything looked like after connecting the boost module (3.7V to 5V conversion), the battery charge controller, LiPo, and 2″ TFT display.

boost module, battery charge controller, LiPO, and TFT display all connected to the Raspberry Pi Zero

Powering up and Configuration

Power up and enjoy the tiny display outputting the boot up sequence.

The display is meant to run at 320×240, so after booting up I edited /boot/config.txt to set this up along with some overscan tweaks.

sudo nano /boot/config.txt

Set the following in the /boot/config.txt file:

# these overscan settings are what worked well for me
overscan_left=-26
overscan_right=-26
overscan_top=-16
overscan_bottom=-24

framebuffer_width=320
framebuffer_height=240
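One related /boot/config.txt option worth knowing about, in case your display defaults to the wrong standard (not something I needed to change myself): sdtv_mode selects the composite output standard.

# 0 = NTSC, 2 = PAL
sdtv_mode=0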

Although it is really expensive for what it is, the 2.0″ TFT display is great for small electronics projects that call for full display output. It’s simple and easy to connect, and doesn’t take up too much space either.

Puck.js Duplo Block Police Siren Build


I picked up a Puck.js a while ago and after trying out a few basic bits of code, sadly let it start to gather dust on my shelf. That changed this weekend as I browsed the sample project listings for something simple to build, picking up the Puck.js Duplo police siren build to try.

It should be a fun toy for my youngest to play with, as he really enjoys Duplo.

What is the Puck.js?

The Puck.js and its button case.

The Puck.js is an open source JavaScript microcontroller. It has a variety of features such as:

  • Button
  • Magnetometer
  • Accelerometer
  • Gyro
  • IR & RGB LEDs
  • Temperature and light sensor
  • FET output
  • Programmable NFC tag
  • 9 IO pins

The best part about the Puck.js for me is how easy it is to deploy and run code on. Using Web Bluetooth and its included puck.js library, you can write code in a Web IDE, connect over Bluetooth, and have your code running in seconds.
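For example, a first test after connecting in the Web IDE can be as small as this (standard Espruino API):

// flash the Puck.js red LED twice a second
setInterval(function () {
  LED1.toggle();
}, 500);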

Building the Puck.js Duplo Police Siren Project

If you want to try it out yourself, the tutorial page itself is the best resource to begin with. There is a video available there to follow along with.

Here is the Thingiverse page where you can get the model. It currently only lists an OpenSCAD (.scad) version of the file, so you’ll need to download OpenSCAD and open it there.

Here is a gist for the file including the cut-out operation that subtracts the innards from the block to make room for the puck.js and piezo to fit into.

I printed the block using my Elegoo Mars resin 3D printer. My first go seems to be slightly loose fitting, so I might shrink the model to 99% size and try again for a second iteration.

I used a new resin that is meant to be easily rinsed/washed after printing with water. The quality on the top of the block doesn’t look as good as usual, so I’m not sure if this resin is to blame for that or not.

puck.js duplo block 3D print

Connecting the Components

The LEDs connect to D1/D2 and D30/D31. The piezo goes on D28/D29. I used RGB LEDs, so I snipped the other legs off, leaving just the blue and common cathode terminals to connect up.

After a bit of dodgy soldering, it works!
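For reference, here’s a minimal sketch of the siren idea in Espruino JavaScript: alternate two tones on the piezo while flashing the LEDs. The official tutorial’s code is the one to follow; which pin of each pair acts as signal versus ground below is my assumption.

// hold one pin of each pair low to act as ground (assumed wiring)
digitalWrite(D2, 0);
digitalWrite(D31, 0);
digitalWrite(D28, 0);

// alternate the two LEDs and the two siren tones
var phase = false;
setInterval(function () {
  phase = !phase;
  digitalWrite(D1, phase);    // one LED on...
  digitalWrite(D30, !phase);  // ...the other off
  analogWrite(D29, 0.5, { freq: phase ? 660 : 440 }); // square wave tone on the piezo
}, 400);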

Installing Everything Into the Block

With a little bit of coercion, the whole lot fits in. I used a bit of hot glue on four edges of the Puck.js to keep it in place while keeping it easy to remove if needed.

After that, I added a layer of scrap paper with the piezo in-between, and glued that in too.

Here is the final result.

The Puck.js may be fairly pricey, but it includes a lot of IO. Its battery and use of Bluetooth LE make it ideal for projects where battery life is a concern; the battery is super cheap and can last up to around a year if used carefully.

I had fun making this project. It’s a great project to get started with the Puck.js, and hopefully I’ll find some more use cases soon where I can use more of these great little devices.

Jarvis standing desk and workspace configuration

Jarvis standing desk setup

I recently invested in a standing desk for my home office. On the recommendation of a friend, I purchased the Jarvis Laminate Standing Desk from fully.com.

I’ve been working predominantly from home for around 1.5 years now (at least 4 days a week), and since going fully remote at 5 days a week after COVID-19 I decided to invest more in the home office.

Having noticed myself slouching at my desk on more than one occasion, I got curious about standing desks. After reading about the apparent benefits of standing more (as opposed to sitting at a desk all day), I pulled the trigger on this fully motorised setup.

Here are the specifications and complete configuration I went for:

  • Jarvis Laminate Standing Desk (160 x 80 cm, black finish)
  • Jarvis Lifting Column, Mid Range
  • Jarvis Frame EU, Wide, Long Foot Programmable
  • WireTamer Cable Trays (2x)
  • Topo Mini Anti-fatigue mat

The total for the desk parts and standing mat came up to around £600 excluding VAT, which I think is a great price considering the health benefits it should bring about over the longer term.

The build

I got stuck in one evening after work, thinking it might be a two-hour job. I was very wrong. Here is a photo of the chaos I unleashed in the office after taking down my old desk and beginning assembly of the Jarvis.

Note the crossed legs, socks, and beer as I take a breather, almost admitting defeat for the night.

Soon I was making progress though.

jarvis standing desk progress
jarvis desk assembled and upright

Multi-monitor configuration

I’ve got two computers that needed setting up. One is my PC, the other is an Apple Mac Mini (the main work machine). So next on the list was a decent adjustable multi-monitor stand. I ended up getting:

  • FLEXIMOUNTS F6D Dual monitor mount LCD arm

This was the strongest arm I could find. At the time I couldn’t find anything rated for my 34″ Acer Predator ultra wide LCD (it’s pretty heavy).

Although this mount is only rated for LCDs up to 30″, it seems to cope with my Acer Predator on one side and my LG 5K Retina 27″ display on the other.

the monitor dual mount setup on the desk

Around midnight that evening I finally had everything configured and in order. I booted up both machines and raised the desk to try it out.

Adjusting to, and actually working at the desk

I’m not fully committed to standing all day (at least not yet). I tend to spend around 3 hours standing each day, and the remainder sitting. I’m slowly increasing the standing time as I go on.

The Topo Mini anti-fatigue mat definitely helps. I also noticed wearing my light, minimalist running shoes feels good while standing and working too.

The Topo Mini mat

Some more angles of the full setup

Lastly, here are some photos of the final setup. Certainly a far cry from the days of this home office setup of mine circa 2009!

In summary, I’m very happy with the new setup. My office is a little narrow and crammed, but this configuration helps me move around a lot more and I feel better for it.

This is post #8 in my effort towards 100DaysToOffload.

Building a Pi Kubernetes Cluster – Part 3 – Worker Nodes and MetalLB


This is the third post in this series, and the focus will be on completing the Raspberry Pi Kubernetes cluster by adding a worker node. You’ll also set up MetalLB, a software-based load-balancer implementation designed for bare-metal Kubernetes clusters.

Here are some handy links to other parts in this blog post series:

By now you should have 1 x Pi running as the dedicated Pi network router, DHCP, DNS and jumpbox, as well as 1 x Pi running as the cluster Master Node.

Of course it’s always best to have more than 1 x Master node, but as this is just an experimental/fun setup, one is just fine. The same applies to the Worker nodes, although in my case I added two workers, each a Pi 4 with 4GB RAM.

Joining a Worker Node to the Cluster

Start off by completing the setup steps as per the Common Setup section in Part 2 with your new Pi.

Once your new Worker Pi is ready and on the network with its own static DHCP lease, join it to the cluster (currently only the Master Node) by using the kubeadm join command you noted down when you first initialised your cluster in Part 2.

E.g.

sudo kubeadm join 10.0.0.50:6443 --token kjx8lp.wfr7n4ie33r7dqx2 \
     --discovery-token-ca-cert-hash sha256:25a997a1b37fb34ed70ff4889ced6b91aefbee6fb18e1a32f8b4c8240db01ec3

After a few moments, SSH back to your master node and run kubectl get nodes. You should see the new worker node added; after it pulls down and starts the Weave Net CNI image, its status will change to Ready.

kubernetes worker node added to cluster

Setting up MetalLB

The problem with a ‘bare metal’ Kubernetes cluster (or any self-installed, manually configured k8s cluster for that matter) is that it doesn’t have any load-balancer implementation to handle LoadBalancer service types.

When you run Kubernetes on top of a cloud hosting platform like AWS or Azure, LoadBalancer services are backed natively by the platform’s own load-balancer offerings, e.g. Classic or Application Load Balancers on AWS.

However, with a Raspberry Pi cluster, you don’t have anything fancy like that to provide LoadBalancer services for the applications you run.

MetalLB provides a software based implementation that can work on a Pi cluster.

Install version 0.8.3 of MetalLB by applying the following manifest with kubectl:

kubectl apply -f https://gist.githubusercontent.com/Shogan/d418190a950a1d6788f9b168216f6fe1/raw/ca4418c7167a64c77511ba44b2c7736b56bdad48/metallb.yaml

Make sure the MetalLB pods are now up and running in the metallb-system namespace that was created.
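A quick way to check:

kubectl -n metallb-system get pods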

metallb pods running

Now you will create a ConfigMap that will contain the settings your MetalLB setup will use for the cluster load-balancer services.

Create a file called metallb-config.yaml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.23.220.88-10.23.220.98

Update the addresses section to use whichever range of IP addresses you would like to assign for use with MetalLB. Note that I only used a small range, as shown above, for mine.

Apply the configuration:

kubectl apply -f ./metallb-config.yaml

Setting up Helm in the Pi Cluster

First of all you’ll need an ARM-compatible version of Helm. Download it and move it to a directory that is in your system PATH. I’m using my Kubernetes master node as a convenient location to run kubectl and helm commands from, so I did this on my master node.

Install Helm Client

export HELM_VERSION=v2.9.1
wget https://kubernetes-helm.storage.googleapis.com/helm-$HELM_VERSION-linux-arm.tar.gz
tar xvzf helm-$HELM_VERSION-linux-arm.tar.gz
sudo mv linux-arm/helm /usr/bin/helm
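You can confirm the client binary works on the ARM device before continuing:

helm version --client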

Install Helm Tiller in the Cluster

Use the following command to initialise the tiller component in your Pi cluster.

helm init --tiller-image=jessestuart/tiller --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Note: it uses a custom image from jessestuart/tiller (as this is ARM compatible). The command also replaces the older api spec for the deployment with the apps/v1 version, as the older beta one is no longer applicable with Kubernetes 1.16.

Deploy an Ingress Controller with Helm

Now that you have something to fulfill LoadBalancer service types (MetalLB), and you have Helm configured, you can deploy an NGINX Ingress Controller with a LoadBalancer service type for your Pi cluster.

helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.service.type=LoadBalancer

If you list your new ingress controller pods, though, you might find a problem with them running: they’ll likely be trying to use x86 architecture images instead of ARM. I manually patched my NGINX Ingress Controller deployment to point it at an ARM-compatible Docker image.

kubectl set image deployment/nginx-ingress-controller nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:0.26.1

After a few moments the new pods should now show as running:

new nginx ingress pods running with ARM image

Now to test everything, you can grab the external IP that should have been assigned to your NGINX ingress controller LoadBalancer service and test the default NGINX backend HTTP endpoint that returns a simple 404 message.

List the service and get the EXTERNAL-IP (this should sit in the range you configured MetalLB with):

kubectl get service --selector=app=nginx-ingress

Curl the NGINX Ingress Controller LoadBalancer service endpoint with a simple GET request:

curl -i http://10.23.220.88

You’ll see the default 404 Not Found response, which indicates that the controller did indeed receive your request from the LoadBalancer service and directed it appropriately down to the default backend pod.
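The response should look roughly like this (exact headers will vary):

HTTP/1.1 404 Not Found
Content-Type: text/plain; charset=utf-8

default backend - 404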

the nginx default backend 404 response

Concluding

At this point you’ve configured:

  • A Raspberry Pi Kubernetes network Router / DHCP / DNS server / jumpbox
  • Kubernetes master node running the master components for the cluster
  • Kubernetes worker nodes
  • MetalLB load-balancer implementation for your cluster
  • Helm client and Tiller agent for ARM in your cluster
  • NGINX ingress controller

Recall that in part 1 you set up some iptables rules on the Router Pi as an optional step.

Those PREROUTING and POSTROUTING rules forward packets destined for the Router Pi’s external IP address on to a specific IP address in the Kubernetes network. In actual fact, the example I provided was what I used to forward traffic from the Pi router all the way to my NGINX Ingress Controller load-balancer service.

Revisit this section if you’d like to achieve something similar (access services inside your cluster from outside the network), and replace the 10.23.220.88 IP address in the example I provided with the IP address of your own ingress controller service backed by MetalLB in your cluster.

Also remember that at this point you can add as many worker nodes to the cluster as you like using the kubeadm join command used earlier.

Building a Raspberry Pi Kubernetes Cluster – Part 2 – Master Node


The Kubernetes Master node is one that runs what are known as the master processes: the kube-apiserver, kube-controller-manager and kube-scheduler.

In this post we’ll go through some common setup that all nodes (masters and workers) in your cluster should get, and then on top of that, the specific setup that will finally configure a single node in the cluster to be the master.

If you would like to jump to the other parts in this series, here are the links:

By now you should have some sort of stack or collection of Raspberry Pis going. As mentioned in the previous post, I used a Raspberry Pi 3 for my router/DHCP server for the Kubernetes Pi cluster network, and Raspberry Pi 4s with 4GB RAM each for the master and worker nodes. Here is how my stack looks now:

The stack of Raspberry Pis in my cluster: Router Pi at the bottom, master and future worker nodes above. They’re sitting on top of the USB power hub and 8-port gigabit network switch.

Common Setup

This setup will be used for both masters and workers in the cluster.

Start by writing the official Raspbian Buster Lite image to your microSD card (I used the 26th September 2019 version), though as you’ll see next, I also updated the Pi’s firmware and OS using the rpi-update command.

After attaching your Pi (master) to the network switch, it should pick up an IP address from the DHCP server you set up in part 1.

SSH into the Pi and complete the basic setup such as setting a hostname and ensuring it gets a static IP address lease from DHCP by editing your dnsmasq configuration (as per part 1).

Note: as the new Pi is running on a different network behind your Pi Router, you can SSH into your Pi Router (like a bastion host or jump box) and then SSH into the new Master Pi node from there.

Now update it:

sudo rpi-update

After the update completes, reboot the Pi.

sudo reboot now

SSH back into the Pi, then download and install Docker. I used version 19.03 here, though at the moment it is not ‘officially’ supported.

export VERSION=19.03
curl -sSL get.docker.com | sh && sudo usermod pi -aG docker && newgrp docker
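It’s worth confirming Docker is functional on the ARM board before moving on:

docker version
docker run --rm hello-world   # optional end-to-end test; pulls a small multi-arch image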

Kubernetes nodes should have swap disabled, so do that next. Additionally, you’ll enable control groups (cgroups) for resource isolation.

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove
sudo systemctl disable dphys-swapfile.service

sudo sed -i -e 's/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
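After that sed command, the single-line /boot/cmdline.txt should end with the new cgroup flags. It will look something like this, though your existing parameters will differ:

console=serial0,115200 console=tty1 root=PARTUUID=... rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory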

Installing kubeadm and other Kubernetes components

Next you’ll install the kubeadm tool (which helps us create our cluster quickly), as well as a bunch of other components required, such as the kubelet (the main node agent that registers nodes with the API server, among other things), kubectl, and kubernetes-cni (to provision container networking).
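The usual Debian-based install route at the time looked like the following; treat this as a sketch, as the Kubernetes apt repository details may well have changed since:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni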

Next up, install the legacy iptables package and configure bridged network traffic so that it traverses the iptables rules to come.

Note: when I built my cluster initially I discovered problems with iptables later on, where the kube-proxy and kubelet services had trouble populating all their required iptables rules using the pre-installed version of iptables. Switching to legacy iptables fixed this.

The error I ran into (hopefully those searching it will come across this post too) was:

proxier.go:1423] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.6.0: Couldn't load target `KUBE-MARK-DROP':No such file or directory

Set the bridge sysctl and switch iptables to the legacy version:

sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy

Lastly to finish off the common (master or worker) node setup, reboot.

sudo reboot now

Master Node Setup

Now you can configure this Pi as a master Kubernetes node. SSH back in after the reboot and pull down the various node component docker images, then initialise it.

Important: Make sure you change the 10.0.0.50 IP address in the below code snippet to match whatever IP address you reserved for this master node in your dnsmasq leases configuration. This is the IP address that the master API server will advertise out with.

Note: In my setup I am using 192.168.0.0/16 as the pod CIDR (overlay network). This is specifically to keep it separate from my internal Pi network of 10.0.0.0/8.

sudo kubeadm config images pull -v3
sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.50 --pod-network-cidr=192.168.0.0/16

# capture text and run as normal user. e.g.:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Once the kubeadm commands complete, the init command will output a bunch of commands to run. Copy and enter them afterwards to set up the kubectl configuration under $HOME/.kube/config.

You’ll also see a kubeadm join command/token. Take note of that and keep it safe. You’ll use this to join other workers to the cluster later on.

kubeadm join 10.0.0.50:6443 --token yi4hzn.glushkg39orzx0fk \
    --discovery-token-ca-cert-hash sha256:xyz0721e03e1585f86e46e477de0bdf32f59e0a6083f0e16871ababc123

Installing the CNI (Weave)

You’ll setup Weave Net next. At a high level, Weave Net creates a virtual container network that connects your containers that are scheduled across (potentially) many different hosts and enables their automatic discovery across these hosts too.

Kubernetes has a pluggable architecture for container networking, and Weave Net is one implementation of this.

Note: the command below assumes you’re using an overlay/container network of 192.168.0.0/16. Change this if you’re not using this range.

On your Pi master node:

curl --location -o ./weave-cni.yaml "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=192.168.0.0/16"
kubectl apply -f ./weave-cni.yaml

After a few moments waiting for your node to pull down the weave net container images, check that the weave container(s) are running and that the master node is showing as ready. Here is how that should look…

kubectl -n kube-system get pods
kubectl get nodes
pi@korben:~ $ kubectl -n kube-system get pods | grep weave
weave-net-cfxhr                  2/2     Running   20         10d
weave-net-chlgh                  2/2     Running   17         23d
weave-net-rxlg8                  2/2     Running   13         23d

pi@korben:~ $ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
korben   Ready    master   23d   v1.16.2

That is pretty much it for the master node setup. You now have a single master node running the Kubernetes master components / API server, and have successfully provisioned and configured container networking with Weave Net.

As a result of deploying Weave Net, you now have a DaemonSet that will ensure that any new node that joins the cluster will automatically get the Weave Net CNI. All other nodes in the cluster will automatically update to ‘know’ about the new node and subsequently containers in the cluster will be able to talk to each other over the overlay network.