How to set up a basic Kubernetes cluster and add an NGINX Ingress Controller on DigitalOcean

Most of the steps in this how-to post can be applied to any Kubernetes cluster to get an NGINX Ingress Controller deployed, so you don’t necessarily have to be running Kubernetes on DigitalOcean. With that said, let’s go through the process of setting up a Kubernetes cluster with an NGINX Ingress Controller on DigitalOcean.

DigitalOcean have just officially announced their own Kubernetes offering, so this guide covers the initial deployment of a basic worker node pool on DigitalOcean, and then moves on to deploying an Ingress Controller setup.

If you’re thinking of signing up on DigitalOcean, consider using my referral link below. It’ll net you $100 of credit to spend over 60 days, and if you stick with them I’ll get a small $25 credit to my own account. Win win!

My Referral link to sign up with DigitalOcean

Note: If you already have a Kubernetes cluster setup and configured, then you can skip the initial cluster and node pool provisioning step below and move on to the Helm setup part.

Deploy a Kubernetes node pool on DigitalOcean

You could do all of this through the web console (which makes things really easy), but here I’ll be providing the doctl commands to do it via the command line.

First of all, if you don’t have it already, download and set up the latest doctl release. Make sure it’s available in your PATH.

Initialise / authenticate doctl. Provide your own API key when prompted.

doctl auth init

Right now, the help documentation in doctl version 1.12.2 does not display the kubernetes-related commands and their arguments, but they’re available and do work.

Create a new Kubernetes cluster with just a single node of the smallest size (you can adjust this to your liking of course). I want a nice cheap cluster with a single node for now.

doctl k8s cluster create example-cluster --count=1 --size=s-1vcpu-2gb

The command above will provision a new cluster with a default node pool in the NYC region and wait for the process to finish before completing. It’ll also update your kubeconfig file if it detects one on your system.

output of the doctl k8s cluster create command

Once it completes, it’ll return and you’ll see the ID of your new cluster along with some other details output to the screen.

Viewing the Kubernetes console in your browser should also show it ready to go. You can download the config from the web console too if you wish.

Kubeconfig setup

If you’re new to configuring kubectl to manage Kubernetes, follow the guide here to use the kubeconfig file that DigitalOcean provides you with.
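If you’d rather pull the cluster credentials with doctl than download the file from the web console, something along these lines should work (a sketch – the kubernetes kubeconfig subcommands were still quite new in doctl at the time of writing, so treat the exact syntax as an assumption for your version):

# List your clusters to confirm the new one is present
doctl kubernetes cluster list

# Merge the cluster's credentials into your local ~/.kube/config
doctl kubernetes cluster kubeconfig save example-cluster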


Handling different cluster contexts

With kubectl configured, test that it works. Make sure you’re in your new cluster’s context.

kubectl config use-context do-nyc1-example-cluster

If you’re on a Windows machine, use PowerShell, and have multiple Kubernetes clusters, here is a simple set of functions I usually add to my PowerShell profile – one for each cluster context – that allows easy switching of contexts without having to type out the full kubectl command each time:

Open your PowerShell profile with:

notepad $profile

Add the following (one for each context you want) – make sure you replace the context names with your own cluster names:

function kubecontext-minikube { kubectl config use-context minikube }
function kubecontext-seank8s { kubectl config use-context sean.k8s.local }
function kubecontext-digitalocean { kubectl config use-context do-nyc1-example-cluster }

Simply enter the function name and hit enter in your PS session to switch contexts.

If you didn’t have any prior clusters set up in your kubeconfig file, your new DigitalOcean cluster context should already be selected by default.
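Whichever method you use, you can always confirm which context is active with kubectl itself:

# Show all contexts known to kubectl; the current one is marked with an asterisk
kubectl config get-contexts

# Print just the name of the active context
kubectl config current-context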

Deploy Helm to your cluster

Time to set up Helm. Follow this guide to install and configure Helm using kubectl.


Deploy the Helm nginx-ingress chart to enable an Ingress Controller on DigitalOcean in your Kubernetes cluster

Now that you have Helm set up, you can easily deploy an Ingress Controller to your cluster using the nginx-ingress Helm chart (package).

helm install --name nginx-ingress stable/nginx-ingress --set service.type=LoadBalancer --namespace default

When you specify a service.type of “LoadBalancer”, DigitalOcean will provision a Load Balancer that fronts this Kubernetes Service on your cluster. After a few moments the Helm deployment should complete (it runs asynchronously in the background).

You can monitor the progress of the service setup in your cluster with the following command:

kubectl --namespace default get services -o wide -w nginx-ingress-controller

Open the Web console, go to Networking, and then look for Load Balancers.

You should see your new NGINX load balancer. This will direct any traffic through to your worker pool node(s) and into the Kubernetes Service resource that fronts the pods running NGINX Ingress.

view of the digitalocean load balancer

At this point you should be able to hit the IP address in your web browser and get the default nginx backend for ingress (with a 404 response).
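For example, a quick check from the command line (203.0.113.10 is a placeholder – substitute your own Load Balancer IP address):

curl -i http://203.0.113.10/
# Expect an HTTP/1.1 404 Not Found response served by the ingress controller's
# default backend (a body along the lines of "default backend - 404")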

Great! This means it’s all working so far.

Create a couple of basic web deployments inside your cluster

Next up you’ll create a couple of very simple web server Deployments running in single pods in your cluster’s node pool.

Issue the following kubectl command to create two simple web deployments using Google’s official GCR hello-app image. You’ll end up with two deployments and two pods running separately hosted “hello-app” web apps.

kubectl run web-example1 --image=gcr.io/google-samples/hello-app:2.0 --port=8080
kubectl run web-example2 --image=gcr.io/google-samples/hello-app:2.0 --port=8080

Confirm they’re up and running with 1 pod each:

kubectl get deployments
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
web-example1                    1         1         1            1           12m
web-example2                    1         1         1            1           23m

Now you need a service to back the new deployment’s pods. Expose each deployment with a simple NodePort service on port 8080:

kubectl expose deployment/web-example1 --type="NodePort" --port 8080
kubectl expose deployment/web-example2 --type="NodePort" --port 8080

A NodePort service will effectively assign a port number from your cluster’s service node port range (by default between 30000 and 32767), and each node in your cluster will proxy that specific port through to your Service on the port you specify. Nodes are not exposed externally by default, so creating a NodePort service does not expose your service externally either.
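For reference, the Service that kubectl expose generates for web-example1 looks roughly like the sketch below – the run: web-example1 selector label comes from kubectl run, and this is an illustrative approximation rather than the exact object stored in your cluster:

apiVersion: v1
kind: Service
metadata:
  name: web-example1
spec:
  type: NodePort
  selector:
    run: web-example1   # label applied by `kubectl run web-example1`
  ports:
  - port: 8080          # the Service port you exposed
    targetPort: 8080    # the container port on the pod
    # nodePort: 30697   # assigned automatically from 30000-32767 if omitted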

Check the services are up and running and have node ports assigned:

kubectl get services
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
web-example1                    NodePort       10.245.125.151   <none>           8080:30697/TCP               13m
web-example2                    NodePort       10.245.198.91    <none>           8080:31812/TCP               24m

DNS pointing to your Load Balancer

Next you’ll want to set up a DNS record pointing to your NGINX Ingress Controller Load Balancer IP address. Grab the IP address of the newly provisioned Load Balancer for Ingress from the DigitalOcean web console.

Create an A record to point to this IP address.
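If your domain’s DNS happens to be hosted on DigitalOcean, you could also create the record with doctl – a hedged example using a hypothetical subdomain and a placeholder IP (substitute your own domain, record name and Load Balancer address):

doctl compute domain records create yourfancydomainnamehere.com \
  --record-type A \
  --record-name example-ingress \
  --record-data 203.0.113.10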

Create your Ingress Rules

With DNS set up, create a new YAML file called fanout.yaml with the content below.

This specification creates a Kubernetes Ingress resource, which your Ingress Controller uses to determine how to route the incoming HTTP requests that arrive at your Load Balancer.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example-ingress.yourfancydomainnamehere.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web-example1
          servicePort: 8080
      - path: /web2/*
        backend:
          serviceName: web-example2
          servicePort: 8080

Make sure you update the host value under the first rule to point to your new DNS record (the one fronting your Ingress Controller Load Balancer), i.e. change “example-ingress.yourfancydomainnamehere.com” to your own host / A record that points to your own Load Balancer IP address.

The configuration above is a typical “fanout” ingress setup. It provides two rules for two different paths on the host DNS you set up, allowing you to route HTTP traffic to different services based on the hostname and path.

This is super useful as you can front multiple different services with a single Load Balancer.

  • example-ingress.yourfancydomainnamehere.com/* -> routes to the web-example1 service backing your first simple web deployment. Any request that does not match another rule will be directed to this service (*).
  • example-ingress.yourfancydomainnamehere.com/web2/* -> routes to your web-example2 service. Requests to your hostname with a path under /web2/ will go to this service.
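With the host value updated, apply the manifest and confirm the Ingress resource was created:

kubectl apply -f fanout.yaml

# The controller should pick the new rules up within a few seconds
kubectl get ingress simple-fanout-example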

Testing

Try browsing to the first hostname using your own DNS record, and try different combinations that match the rules you defined in your Ingress resource over HTTP. You should get the web-example1 “hello-app” served from your web-example1 pod for any request that does not match /web2/*, e.g. /foo.

For /web2/* you should get the web-example2 “hello-app” default web page. It’ll also display the name of the pod it was served from (in my case web-example2-75fd68f658-f8xcd).
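You can also test from the command line – a couple of examples using the placeholder hostname from the manifest (substitute your own DNS record):

# Should be served by web-example1 (no other rule matches /foo)
curl http://example-ingress.yourfancydomainnamehere.com/foo

# Should be served by web-example2
curl http://example-ingress.yourfancydomainnamehere.com/web2/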

Conclusion

Congratulations! You now have a single Load Balancer fronting an NGINX Ingress Controller on DigitalOcean Kubernetes.

You can now expose multiple Kubernetes services / deployments from a single Ingress and avoid the need to run multiple Load Balancers (each costing you money!).

Provision your own Kubernetes cluster with private network topology on AWS using kops and Terraform – Part 2

Getting Started

If you followed and completed the previous blog post, you should now have a Kubernetes cluster up and running in your own private AWS VPC, provisioned with the help of kops and Terraform.

In this blog post, you’ll cover the following items:

  • Set up upstream DNS for your cluster
  • Get a Kubernetes Dashboard service and deployment running
  • Deploy a basic metrics dashboard for Kubernetes using heapster, InfluxDB and Grafana

Upstream DNS

In order for services running in your Kubernetes cluster to be able to resolve services outside of your cluster, you’ll now configure upstream DNS.

Containers that are started in the cluster will have their local resolv.conf files automatically set up with what you define in your upstream DNS ConfigMap.

Create a ConfigMap with the details of your own DNS server to use as an upstream resolver. You can also add external ones, such as Google DNS (see the example below):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"yourinternaldomain.local": ["10.254.1.1"]}
  upstreamNameservers: |
    ["10.254.1.1", "8.8.8.8", "8.8.4.4"]

Save your ConfigMap as kube-dns.yaml and apply it to enable it.

kubectl apply -f kube-dns.yaml

You should now see it listed in Config Maps under the kube-system namespace.
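You can also check it from the command line:

kubectl --namespace kube-system get configmap kube-dns -o yaml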

Kubernetes Dashboard

Deploying the Kubernetes dashboard is as simple as running one kubectl command.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

You can then start a dashboard proxy using kubectl to access it right away:

kubectl proxy

Head on over to the following URL to access the dashboard via the proxy you ran:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

You can also access the Dashboard via the API server internal elastic load balancer that was set up in part 1 of this blog post series. E.g.

https://your-internal-elb-hostname/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default

Heapster, InfluxDB and Grafana (now deprecated)

Note: Heapster is now deprecated, and there are alternative options you could look at instead, such as the one the official Kubernetes git repo refers you to (metrics-server). Nevertheless, here are the instructions to follow should you wish to enable Heapster and get a nice Grafana dashboard that showcases your cluster, node and pod metrics…

Clone the official Heapster git repo down to your local machine:

git clone https://github.com/kubernetes/heapster.git

Change directory to the heapster directory and run:

kubectl create -f deploy/kube-config/influxdb/
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml

These commands will essentially launch deployments and services for grafana, heapster, and influxdb.
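Once the manifests have been created, you can check that everything came up in the kube-system namespace – look for the heapster, monitoring-influxdb and monitoring-grafana deployments and services (the names used by the repo’s manifests):

kubectl --namespace kube-system get deployments,services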

The Grafana service should attempt to get a LoadBalancer from AWS via your Kubernetes cluster, but if this doesn’t happen, edit the monitoring-grafana service YAML configuration and change the type to LoadBalancer. E.g.

"type": "LoadBalancer",

Save the monitoring-grafana service definition and your cluster should automatically provision a public facing ELB and set it up to point to the Grafana pod.
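If you prefer not to edit the service definition by hand, you could patch the type in place instead – a sketch assuming the default monitoring-grafana service name in kube-system from the Heapster manifests:

kubectl --namespace kube-system patch service monitoring-grafana \
  -p '{"spec": {"type": "LoadBalancer"}}'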

Note: if you want it available on an internal load balancer instead, you’ll need to create your grafana service using the aws-load-balancer-internal annotation instead.
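For reference, that annotation lives in the service metadata – a sketch of the relevant fragment for the same monitoring-grafana service (the accepted value has varied between Kubernetes versions; older releases used a CIDR like the one shown, while newer ones accept "true"):

metadata:
  name: monitoring-grafana
  namespace: kube-system
  annotations:
    # tells the AWS cloud provider to provision an internal-facing ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"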

Grafana dashboard for Kubernetes with Heapster

Now that you have Heapster running, you can also get some metrics displayed directly in your Kubernetes dashboard too.

You may need to restart the dashboard pods to see the new performance stats in the dashboard though. If this doesn’t work, delete the dashboard deployment, service, pods and role, and then re-deploy the dashboard using the same process you followed earlier.

Once it’s up and running, use the DNS name of the new ELB to access Grafana’s dashboard, log in with admin/admin, and change the default admin password to something secure. You can now view cluster and performance stats both in the Kubernetes dashboard and in Grafana.

Closing off

This concludes part two of this series. To sum up, you configured upstream DNS, deployed the Kubernetes dashboard, and set up Heapster to surface metrics in the dashboard, with InfluxDB storing the metric data and Grafana providing a front-end service for viewing dashboards.

Provision your own Kubernetes cluster with private network topology on AWS using kops and Terraform – Part 1

Goals

In this post series I’ll be covering how to provision a brand new self-hosted Kubernetes environment in AWS (on top of EC2 instances) with a specific private networking topology, as follows:

  • Deploy into an existing VPC
  • Use existing VPC Subnets
  • Use private networking topology (Calico), with a private/internal ELB to access the API servers/cluster
  • Don’t use Route 53 AWS DNS services or external DNS; instead, use the Kubernetes gossip DNS service for internal cluster name resolution, and allow upstream DNS to be set up to point at your own private DNS servers for outside-of-cluster DNS lookups

This is a more secure setup than a traditional / standard kops-provisioned Kubernetes cluster, placing the API servers on a private subnet, yet it still allows you the flexibility of using load balanced services in your cluster to expose web services or APIs to the public internet if you wish.

Set up your workstation with the right tools

You need a Linux or macOS based machine to work from (your management station/machine), because kops only runs on these platforms right now.

  • Install pip for Python if you don’t already have it:
sudo apt install python-pip
  • Use pip to install the awscli:
pip install awscli --upgrade --user
  • Create yourself an AWS credentials file (~/.aws/credentials) and set it up to use an access key and secret key for the kops IAM user you created earlier.
  • Set up the following environment variables to reference later, making sure you fill in the values you require for this new cluster – change the VPC ID, S3 state store bucket name, and cluster NAME.
export ZONES=us-east-1b,us-east-1c,us-east-1d
export KOPS_STATE_STORE=s3://your-k8s-state-store-bucket
export NAME=yourclustername.k8s.local
export VPC_ID=vpc-yourvpcidgoeshere
  • A note on the exports above: ZONES specifies the Availability Zones where the master nodes in the k8s cluster will be placed. You’ll definitely want these spread out for maximum availability.

Set up your S3 state store bucket for the cluster

You can either create this manually, or create it with Terraform. Here is a simple Terraform script that you can throw into your working directory to create it. Just change the name of the bucket to your desired S3 bucket name for this cluster’s state storage.

Remember to use the name for this bucket that you specified in your KOPS_STATE_STORE export variable.

resource "aws_s3_bucket" "state_store" {
  bucket        = "${var.name}-${var.env}-state-store"
  acl           = "private"
  force_destroy = true

  versioning {
    enabled = true
  }

  tags {
    Name        = "${var.name}-${var.env}-state-store"
    Infra       = "${var.name}"
    Environment = "${var.env}"
    Terraformed = "true"
  }
}

Terraform plan and apply your S3 bucket if you’re using Terraform, passing in variables for name/env to name it appropriately…

terraform plan
terraform apply

Generate a new SSH private key for the cluster

  • Generate a new SSH key. By default it will be created in ~/.ssh/id_rsa
ssh-keygen -t rsa

Generate the initial Kubernetes cluster configuration and output it to Terraform script

Use the kops tool to create a cluster configuration, but instead of provisioning it directly, output it as a Terraform script. This is important, as you’ll want to change values in this output file to provision the cluster into your existing VPC and subnets. You also want to change the ELB from a public-facing ELB to internal only.

kops create cluster --master-zones=$ZONES --zones=$ZONES --topology=private --networking=calico --vpc=$VPC_ID --target=terraform --out=. ${NAME}

Above, you ran the kops create cluster command and specified a private topology with Calico networking. You also designated an existing VPC ID, and told the tool to generate a Terraform script as output in the current directory instead of actually creating the cluster against AWS right now.
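At this point nothing has actually been created in AWS – the cluster spec only lives in your S3 state store, alongside the generated kubernetes.tf in your working directory. A quick sanity check (assuming the exports from earlier are still set in your shell):

# Lists the cluster configurations kops has stored in the S3 state store
kops get clusters --state=${KOPS_STATE_STORE}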

Change the default editor kops uses if you prefer something other than vim, e.g. for nano:

export EDITOR=nano

Edit the cluster configuration:

kops edit cluster ${NAME}

In the cluster YAML, change the loadBalancer type value from Public to Internal.

While you are still in the editor for the cluster config, you also need to change the entire subnets section to reference your existing VPC subnets, with egress pointing to your NAT instances. Remove the current subnets section and add the following template, updating it to reference your own private subnet IDs for each availability zone and the correct NAT instance for each (you might use one NAT instance for all subnets, or you may have multiple). The Utility subnets should be your public subnets, and the Private subnets your private ones. Make sure that you reference subnets for the correct VPC you are deploying into.

subnets:
- egress: nat-2xcdc5421df76341
  id: subnet-b32d8afg
  name: us-east-1b
  type: Private
  zone: us-east-1b
- egress: nat-04g7fe3gc03db1chf
  id: subnet-da32gge3
  name: us-east-1c
  type: Private
  zone: us-east-1c
- egress: nat-0cd542gtf7832873c
  id: subnet-6dfb132g
  name: us-east-1d
  type: Private
  zone: us-east-1d
- id: subnet-234053gs
  name: utility-us-east-1b
  type: Utility
  zone: us-east-1b
- id: subnet-2h3gd457
  name: utility-us-east-1c
  type: Utility
  zone: us-east-1c
- id: subnet-1gvb234c
  name: utility-us-east-1d
  type: Utility
  zone: us-east-1d
  • Save and exit the file from your editor.
  • Output a new terraform config over the existing one to update the script based on the newly changed ELB type and subnets section.
kops update cluster --out=. --target=terraform ${NAME}
  • The updated file is now output to kubernetes.tf in your working directory
  • Run a terraform plan from your terminal and make sure that the changes will not affect any existing infrastructure, and will not create or change any subnets or VPC-related infrastructure in your existing VPC. It should only list a number of new infrastructure items it is going to create.
  • Once happy, run terraform apply from your terminal
  • Once terraform has run with the new kubernetes.tf file, the certificate will only allow connections to the standard named cluster endpoint (the cert is only valid for api.internal.clustername.k8s.local, for example). You now need to re-run kops update and output to Terraform again.
kops update cluster $NAME --target=terraform --out=.
  • This will update the cluster state in your S3 bucket with the new certificate details, but not actually change anything in the local kubernetes.tf file (you shouldn’t see any changes here). However, you can now run a rolling update with the --cloudonly, --force and --yes options:
kops rolling-update cluster $NAME --cloudonly --force --yes

This will roll all the masters and nodes in the cluster (the created autoscaling groups will initialise new nodes from the launch configurations), and when the ASGs bring up new instances, they’ll get the new certs applied from the S3 state storage bucket. You can then access the ELB endpoint over HTTPS, and you should get an auth prompt popup.

Find the endpoint on the internal ELB that was created. The rolling update may take around 10 minutes to complete, and as mentioned before, will terminate old instances in the Autoscaling group and bring new instances up with the new certificate configuration.
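Once the rolling update completes, you can check the cluster is healthy from your management machine – assuming kops exported the new cluster context into your kubeconfig, the NAME and KOPS_STATE_STORE exports are still set, and you have network access to the internal ELB:

# Validates that the masters and nodes registered correctly
kops validate cluster --name=${NAME} --state=${KOPS_STATE_STORE}

# All nodes should report a Ready status
kubectl get nodes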

Tag your public subnets to allow auto provisioning of ELBs for Load Balanced Services

In order to allow Kubernetes to automatically create load balancers (ELBs) in AWS for services that use the LoadBalancer configuration, you need to tag your utility subnets with a special tag to allow the cluster to find these subnets automatically and provision ELBs for any services you create on-the-fly.

Tag the subnets that you are using as utility subnets (public) with the following tag:

Key: kubernetes.io/role/elb Value: (Don’t add a value, leave it blank)
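You can add the tag in the AWS console, or via the AWS CLI – a sketch reusing the example utility (public) subnet IDs from the subnets template earlier (substitute your own IDs):

aws ec2 create-tags \
  --resources subnet-234053gs subnet-2h3gd457 subnet-1gvb234c \
  --tags Key=kubernetes.io/role/elb,Value=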

Tag your private subnets for internal-only ELB provisioning for Load Balanced Services

In order to allow Kubernetes to automatically create load balancers (ELBs) in AWS for services that use the LoadBalancer configuration with a private-facing setup, you need to tag the private subnets that the cluster operates in with a special tag so that k8s can find these subnets automatically.

Tag the subnets that you are using as private (where your nodes and master nodes should be running now) with the following two tags:

Key: kubernetes.io/cluster/{yourclusternamehere.k8s.local} Value: shared
Key: kubernetes.io/role/internal-elb Value: 1

As an example for the above, the key might end up being “kubernetes.io/cluster/yourclusternamehere.k8s.local” if your cluster is named “yourclusternamehere.k8s.local” (remember, you named your cluster when you created the EXPORT value for NAME on your local workstation).
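Again, the AWS CLI can apply both tags in one go – a sketch reusing the example private subnet IDs from the subnets template earlier (substitute your own subnet IDs and cluster name):

aws ec2 create-tags \
  --resources subnet-b32d8afg subnet-da32gge3 subnet-6dfb132g \
  --tags Key=kubernetes.io/cluster/yourclusternamehere.k8s.local,Value=shared \
         Key=kubernetes.io/role/internal-elb,Value=1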

Closing off

This concludes part one of this series for now.

As a summary, you should now have a kubernetes cluster up and running in your private subnets, spread across availability zones, and you’ve done it all using kops and Terraform.

Straighten things out by creating a git repository and committing your Terraform artifacts for the cluster so they’re stored in version control. Watch out for the artifacts that kops outputs alongside the Terraform script, such as the private certificate files – these should be kept safe.

Part two should be coming soon, where we’ll run through some more tasks to continue setting the cluster up like setting upstream DNS, provisioning the Kubernetes Dashboard service/pod and more…

Now reading: Cocos2d for iPhone 0.99 Beginner’s Guide

I was recently offered a copy of Pablo Ruiz’s “Cocos2d for iPhone 0.99 Beginner’s Guide” eBook to read through and provide comments / feedback on. Needless to say, I was quite excited to get stuck in; however, I am still on holiday in South Africa, so for now I am just downloading the eBook and will save it for when I am back in the UK.

I actually can’t wait to have a read through. cocos2d is by far the most fun I have had programming with, and I’m sure this book will be a valuable asset.

You can grab a copy over at PacktPub if you are interested in learning about programming with (imo) the best 2D gaming engine for iOS. At the moment it is on special for around £25.00, which is not bad at all for a guide covering a lot of what cocos2d has to offer.