How to set up a basic Kubernetes cluster and add an NGINX Ingress Controller on DigitalOcean

Most of the steps in this how-to post can be applied to any Kubernetes cluster to get an NGINX Ingress Controller deployed, so you don’t necessarily have to be running Kubernetes on DigitalOcean. With that said, let’s go through the process of setting up a Kubernetes cluster with an NGINX Ingress Controller on DigitalOcean.

DigitalOcean has just officially announced its own Kubernetes offering, so this guide covers the initial deployment of a basic worker node pool on DigitalOcean, and then moves on to deploying an Ingress Controller setup.

If you’re thinking of signing up on DigitalOcean, consider using my referral link below. It’ll net you $100 of credit to spend over 60 days, and if you stick with them I’ll get a small $25 credit to my own account. Win win!

My Referral link to sign up with DigitalOcean

Note: If you already have a Kubernetes cluster set up and configured, then you can skip the initial cluster and node pool provisioning step below and move on to the Helm setup part.

Deploy a Kubernetes node pool on DigitalOcean

You could simply do this with the Web UI console (which makes things really simple), but here I’ll be providing the doctl commands to do this via the command line.

First of all, if you don’t have it already, download and set up the latest doctl release. Make sure it’s available in your PATH.
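On Linux that’s just a case of grabbing the release archive and dropping the binary onto your PATH. A minimal sketch, assuming the v1.12.2 Linux amd64 release (check the doctl releases page for the correct asset name for your platform and version):

# download and extract the doctl binary (asset name is an assumption - adjust for your platform/version)
wget https://github.com/digitalocean/doctl/releases/download/v1.12.2/doctl-1.12.2-linux-amd64.tar.gz
tar -xzf doctl-1.12.2-linux-amd64.tar.gz
# move it somewhere on your PATH and confirm it runs
sudo mv doctl /usr/local/bin/
doctl version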

Initialise / authenticate doctl. Provide your own API key when prompted.

doctl auth init

Right now, the help documentation in doctl version 1.12.2 does not display the kubernetes-related commands or their arguments, but they’re available and do work.

Create a new Kubernetes cluster with just a single node of the smallest size (you can adjust this to your liking of course). I want a nice cheap cluster with a single node for now.

doctl k8s cluster create example-cluster --count=1 --size=s-1vcpu-2gb

The command above will provision a new cluster with a default node pool in the NYC region and wait for the process to finish before completing. It’ll also update your kubeconfig file if it detects one on your system.

output of the doctl k8s cluster create command

Once it completes, it’ll return and you’ll see the ID of your new cluster along with some other details output to the screen.
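You can also confirm this from the command line (the doctl kubernetes command group aliases to k8s, though the exact output columns may vary between versions):

# list your clusters and their status
doctl k8s cluster list
# with your kubeconfig updated, check that the worker node has registered and is Ready
kubectl get nodes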

Viewing the Kubernetes console in your browser should also show it ready to go. You can download the config from the web console too if you wish.

Kubeconfig setup

If you’re new to configuring kubectl to manage Kubernetes, follow the guide here to use your kube config file that DigitalOcean provides you with.
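If you prefer the command line, doctl can also emit the cluster’s kubeconfig for you (subcommand availability varies by doctl version – doctl k8s cluster kubeconfig --help will show what yours supports):

# print the kubeconfig for the new cluster; merge or redirect it into ~/.kube/config as appropriate
doctl k8s cluster kubeconfig show example-cluster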


Handling different cluster contexts

With kubectl configured, test that it works. Make sure you’re in your new cluster’s context.

kubectl config use-context do-nyc1-example-cluster

If you’re on a Windows machine, use PowerShell, and have multiple Kubernetes clusters, here is a simple set of functions I usually add to my PowerShell profile – one for each cluster context – that allows easy switching of contexts without having to type out the full kubectl command each time:

Open your PowerShell profile with:

notepad $profile

Add the following (one for each context you want) – make sure you replace the context names with your own cluster names:

function kubecontext-minikube { kubectl config use-context minikube }
function kubecontext-seank8s { kubectl config use-context sean.k8s.local }
function kubecontext-digitalocean { kubectl config use-context do-nyc1-example-cluster }

Simply enter the function name and hit enter in your PS session to switch contexts.

If you didn’t have any prior clusters set up in your kubeconfig file, you should have your new DigitalOcean cluster context selected already by default.
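You can check which contexts kubectl knows about (and which one is currently selected) with:

kubectl config get-contexts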

Deploy Helm to your cluster

Time to set up Helm. Follow this guide to install and configure Helm using kubectl.


Deploy the Helm nginx-ingress chart to enable an Ingress Controller on DigitalOcean in your Kubernetes cluster

Now that you have Helm set up, you can easily deploy an Ingress Controller to your cluster using the nginx-ingress Helm chart (package).

helm install --name nginx-ingress stable/nginx-ingress --set controller.service.type=LoadBalancer --namespace default

When you specify a service type of “LoadBalancer”, DigitalOcean will provision a Load Balancer that fronts this Kubernetes service on your cluster. After a few moments the Helm deployment should complete (it runs async in the background).

You can monitor the progress of the service setup in your cluster with the following command:

kubectl --namespace default get services -o wide -w nginx-ingress-controller

Open the Web console, go to Networking, and then look for Load Balancers.

You should see your new NGINX load balancer. This will direct any traffic through to your worker pool node(s) and into the Kubernetes Service resource that fronts the pods running NGINX Ingress.

view of the digitalocean load balancer

At this point you should be able to hit the IP address in your web browser and get the default nginx backend for ingress (with a 404 response).

Great! This means it’s all working so far.
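You can verify the same from the command line (the IP below is a placeholder – substitute your own Load Balancer’s address):

# expect an HTTP 404 from the default backend - that means NGINX Ingress is answering
curl -i http://203.0.113.10/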

Create a couple of basic web deployments inside your cluster

Next up you’ll create a couple of very simple web server Deployments running in single pods in your cluster’s node pool.

Issue the following kubectl command to create two simple web deployments using Google’s official GCR hello-app image. You’ll end up with two deployments and two pods running separately hosted “hello-app” web apps.

kubectl run web-example1 --image=gcr.io/google-samples/hello-app:2.0 --port=8080
kubectl run web-example2 --image=gcr.io/google-samples/hello-app:2.0 --port=8080

Confirm they’re up and running with 1 pod each:

kubectl get deployments
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
web-example1                    1         1         1            1           12m
web-example2                    1         1         1            1           23m

Now you need a service to back each new deployment’s pods. Expose each deployment with a simple NodePort service on port 8080:

kubectl expose deployment/web-example1 --type="NodePort" --port 8080
kubectl expose deployment/web-example2 --type="NodePort" --port 8080

A NodePort service will effectively assign a port number from your cluster’s service node port range (by default between 30000 and 32767), and each node in your cluster will proxy that specific port into your Service on the port you specify. Nodes are not available externally by default, so creating a NodePort service does not expose your service externally either.

Check the services are up and running and have node ports assigned:

kubectl get services
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
web-example1                    NodePort       10.245.125.151   <none>           8080:30697/TCP               13m
web-example2                    NodePort       10.245.198.91    <none>           8080:31812/TCP               24m

DNS pointing to your Load Balancer

Next you’ll want to set up a DNS record pointing to your NGINX Ingress Controller Load Balancer IP address. Grab the IP address of the newly provisioned Load Balancer for Ingress from the DigitalOcean web console.
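Alternatively, you can pull the IP straight out of the Service that fronts the Ingress Controller with kubectl:

kubectl --namespace default get service nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'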

Create an A record to point to this IP address.

Create your Ingress Rules

With DNS set up, create a new YAML file called fanout.yaml. This specification will create a Kubernetes Ingress resource which your Ingress Controller will use to determine how to route the incoming HTTP requests that arrive at your Ingress Controller’s Load Balancer.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example-ingress.yourfancydomainnamehere.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web-example1
          servicePort: 8080
      - path: /web2/*
        backend:
          serviceName: web-example2
          servicePort: 8080

Make sure you update the host value under the first rule to point to your new DNS record (the one that fronts your Ingress Controller Load Balancer), i.e. change “example-ingress.yourfancydomainnamehere.com” to your own host / A record that points to your own Load Balancer IP address.

The configuration above is a typical “fanout” ingress setup. It provides two rules for two different paths on the host DNS you set up, allowing you to route HTTP traffic to different services based on the hostname/path.

This is super useful as you can front multiple different services with a single Load Balancer.

  • example-ingress.yourfancydomainnamehere.com/* -> points to your first web deployment via the web-example1 service you exposed. Any request that does not match another rule will be directed to this service (*).
  • example-ingress.yourfancydomainnamehere.com/web2/* -> points to your web-example2 service. If you hit your hostname with a path under /web2/, the request will go to this service.
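Once you’ve updated the host value, apply the Ingress resource to your cluster and check that it registers:

kubectl apply -f fanout.yaml
kubectl get ingress simple-fanout-example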

Testing

Try browsing to the first hostname using your own DNS record, and try different combinations that match the rules you defined in your Ingress resource over HTTP. You should get the web-example1 “hello-app” served from your web-example1 pod for any request that does not match /web2/*, e.g. /foo.

For /web2/* you should get the web-example2 “hello-app” default web page. It’ll also display the name of the pod it was served from (in my case web-example2-75fd68f658-f8xcd).
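For example, with curl (substituting your own hostname):

# should be served by web-example1 (the default backend for the host)
curl http://example-ingress.yourfancydomainnamehere.com/foo
# should be served by web-example2
curl http://example-ingress.yourfancydomainnamehere.com/web2/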

Conclusion

Congratulations! You now have a single Load Balancer fronting an NGINX Ingress Controller on DigitalOcean Kubernetes.

You can now expose multiple Kubernetes services / deployments from a single Ingress and avoid the need to have multiple Load Balancers running (and costing you money!).

Editing a webapp or site’s HTTP headers with Lambda@Edge and CloudFront


Putting CloudFront in front of a static website hosted in an S3 bucket is an excellent way of serving up your content and ensuring it is geographically performant no matter where your users are, by leveraging caching and CloudFront’s geographically placed edge locations. You can go one step further and customise your HTTP headers with Lambda@Edge and CloudFront.

The basic Cloudfront and S3 origin setup goes a little something like this:

  • Place your static site files in an S3 bucket that is set up for static web hosting
  • Create a CloudFront distribution that uses the S3 bucket content as the origin
  • Add a cache behaviour to the distribution

This is an excellent way of hosting a website or webapp that can be delivered anywhere in the world with ultra low latency, and you don’t even have to worry about running your own webserver to host the content. Your content simply sits in an S3 bucket and is delivered by CloudFront (and can be cached too).

Modifying HTTP headers with Lambda@Edge and CloudFront

But what happens if you want to get a little more technical and serve up custom responses for any HTTP requests for your website content? Traditionally you’d need a custom webserver that you could use to modify the HTTP request/response lifecycle (such as Varnish / Nginx).

That was the case until Lambda@Edge was announced.

I was inspired to play around with Lambda@Edge after reading Julia Evans’ blog post about Cloudflare Workers, where she set up something similar to add a missing Content-Type header to responses from her blog’s underlying web host. I wanted to see how easy it was to do the same in an AWS setup with S3-hosted content and CloudFront.

So here is a quick guide on how to modify your site / webapp’s HTTP responses when you have CloudFront sitting in front of it.

Note: you can run Lambda@Edge functions on all of these CloudFront events (not just the one this post uses):

  • After CloudFront receives a request from a viewer (viewer request)
  • Before CloudFront forwards the request to the origin (origin request)
  • After CloudFront receives the response from the origin (origin response)
  • Before CloudFront forwards the response to the viewer (viewer response)

You can even return a custom response from Lambda@Edge without sending the request on to the CloudFront origin at all.

Of course, the only events that are guaranteed to always fire are the viewer-type events. This is because origin request and origin response events only happen when the requested object is not already cached at an edge location. In that case CloudFront forwards the request to the origin and receives a response back (hopefully!), and those events you can act upon too.

How to edit HTTP responses with Lambda@Edge

Create a new Lambda function and make sure it is placed in the us-east-1 region (AWS requires that Lambda@Edge functions be created in the US East / N. Virginia region). When you deploy the function, it is replicated to regions across the world, each with its own replicated version of the Lambda@Edge function.

Fun fact: your CloudWatch logs for Lambda@Edge will appear in the region your content is requested from – i.e. the region of the edge location that ends up serving your content.

You’ll need to create a new IAM Role for the function to leverage, so use the Lambda@Edge role template.

Select the Node.js 6.10 runtime for the function. In the code editor, set up the following Node.js handler function, which will do the actual header manipulation work:

exports.handler = (event, context, callback) => {
    // grab the response CloudFront is about to return
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    // header keys are lowercased; each value is an array of {key, value} pairs
    headers['x-sean-example'] = [{key: 'X-Sean-Example', value: 'Lambda @ Edge was here!'}];

    // hand the modified response back to CloudFront
    callback(null, response);
};

basic lambda function configuration

The function will receive an event for every request passing through. From that event you retrieve the CloudFront response (event.Records[0].cf.response) and set your required header(s) by adding a lowercased header-name key whose value is a list of key/value pairs.

Make sure you publish a version of the Lambda function, as you’ll need to attach it to your CloudFront behavior by an ARN that includes the version number (you can’t use $LATEST, so make sure you use a numerical version that you have published). Since this handler modifies the response, attach it to a response event on your distribution’s cache behavior – origin response or viewer response.
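If you prefer the CLI, publishing a version looks something like this (the function name is a placeholder for your own):

# publish an immutable version; note the Version field in the output for the ARN
aws lambda publish-version --function-name my-edge-header-function --region us-east-1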

Now if you make a new request to your content, you should see the new header being added by Lambda@Edge!

HTTP header view showing modified header and value.

Lambda@Edge is a great way to easily modify CloudFront distribution-related events in the HTTP lifecycle. You can keep response times super low, as the Lambda functions are executed at the edge location closest to your users. It also helps you keep your infrastructure as simple as possible by avoiding complicated / custom web servers that would otherwise just add unnecessary operational overhead.

Running an S3 API compatible object storage server (Minio) on the Raspberry Pi

I’ve recently become interested in hosting my own local S3 API compatible object storage server at home.

So tonight I set about setting up Minio.


Minio is an object storage server that is S3 API compatible. This means I’ll be able to use my working knowledge of the Amazon S3 API and tools, but interact with my own, locally hosted storage service running on a Raspberry Pi.

I had heard about Zenko before (an S3 API compatible object storage server) but was searching around for something really lightweight that I could easily run on ARM architecture – i.e. my Raspberry Pi model 3 I have sitting on my desk right now. In doing so, Minio was the first that I found that could easily be compiled to run on the Raspberry Pi.

The goal right now is to have a local object storage service that is compatible with S3 APIs that I can use for home use. This has a bunch of cool use cases, and the ones I am specifically interested in right now are:

  • Being able to write scripts that interact with S3, but test them locally with Minio before even having to think about deploying them to the cloud. A local object storage API is going to be free and fast. Plus it’s great knowing that you’re fully in control of your own data.
  • Setting up a publicly exposable object storage service that I can target with serverless functions, which I plan to run on demand in the cloud to do processing and then output artifacts to my home object storage service.

The second use case above is the one I intend to use for ffmpeg-processed video. Basically I want to be able to process video from online services using something like AWS Lambda (probably with ffmpeg bundled into the function) and output the resulting files to my home storage system.

The object storage service will receive these output files from Lambda and I’ll have a cronjob or rsync setup to then sync the objects placed into my storage bucket(s) to my home Plex media share.

This means I’ll be able to remotely queue up stuff to watch via a simple interface I’ll expose (or a message queue of some sort) to be processed by Lambda, and by the time I’m home everything will be ready to watch in Plex.

Normally I would be more interested in running the Docker image for Minio, but at home I want something that is really cheap to run, so compiling Minio for the Raspberry Pi makes total sense here: this device is super cheap to leave powered on 24/7, as opposed to something beefier running as a Docker host or a lightweight Kubernetes home cluster.

Here’s the quick-start guide to get it running on a Raspberry Pi

You’ll basically download Go, extract it, set it up on your PATH, then use it to compile Minio’s source code into an ARM-compatible binary that you can run on your Pi.

# download and install Go (the ARMv6 build works on the Raspberry Pi)
wget https://dl.google.com/go/go1.10.3.linux-armv6l.tar.gz
sudo tar -C /usr/local -xzf go1.10.3.linux-armv6l.tar.gz
export PATH=$PATH:/usr/local/go/bin # put into ~/.profile
source ~/.profile
# fetch and compile Minio from source (the binary lands in ~/go/bin)
go get -u github.com/minio/minio
# create a data directory and start the server
mkdir ~/minio-data
cd ~/go/bin
./minio server ~/minio-data/

And you’re up and running! It’s that simple to get going quickly.

Running interactively you’ll get a default access and secret key in the terminal, so head on over to the Web UI / interface to check things out: http://your-raspberry-pi-ip-or-hostname:9000/minio/

Enter your credentials to login.

Of course at this stage you can also start using your S3 API compatible command line tools to start working with your new object storage server too.
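For example, the AWS CLI works against Minio if you point it at your server’s endpoint (the hostname below is a placeholder – use your Pi’s address, along with the access and secret keys Minio printed at startup):

# store the Minio credentials in a dedicated profile
aws configure --profile minio
# create a bucket, then list buckets, against the local Minio endpoint
aws --profile minio --endpoint-url http://raspberrypi.local:9000 s3 mb s3://test-bucket
aws --profile minio --endpoint-url http://raspberrypi.local:9000 s3 ls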

Nice!

Streamlining your Kubernetes development process with Draft (and Helm)

Draft is a tool built for developers who do their dev work against a Kubernetes environment (whether it be a live cluster or a Minikube instance).

It really helps speed up development time by streamlining the code -> build -> run -> test dev cycle. It does this by scaffolding out a Dockerfile and a Helm chart template pack customised for your app with a single command, and then building and deploying your application image to your Kubernetes environment with a second.

Setting up Draft and a basic .NET Core Web API project

First off, make sure you have already set up your kubectl configuration to talk to your Kubernetes cluster, and have also set up and configured Helm.

Set the Draft binary up in a known system path on your machine after downloading it from the Draft Releases page.

Run draft init to initialise Draft. It’ll drop its configuration in a subdirectory of your user profile directory called .draft.

Create a new .NET Core 2.1 ASP.NET project and select Web API as the type.

Open a shell and navigate over to the root project directory of your new .NET Core 2.1 app. E.g. cd solution\projectname

Run draft create to setup Draft with your new project. This is where the Draft magic happens. Essentially, Draft will:

  • Detect your application code language. (In this case csharp)
  • Create a Dockerfile for your app
  • Set up a Helm chart and necessary template structure to easily deploy your app into Kubernetes direct from your development machine

You should see output similar to this:

PS C:\git\draftdotnetcorewebapi\draftdotnetcorewebapi> draft create
--> Draft detected JSON (97.746232%)
--> Could not find a pack for JSON. Trying to find the next likely language match...
--> Draft detected XML (1.288026%)
--> Could not find a pack for XML. Trying to find the next likely language match...
--> Draft detected csharp (0.914658%)
--> Ready to sail

At this point you could run draft up, and if you have a container registry set up for Draft on your machine already, it would build and push your Docker image and then deploy your app into Kubernetes. However, if you don’t yet have a container registry set up for Draft, you’ll need to do that first:

draft config set registry docker.io/yourusernamehere

PS, just make sure your local development machine has credentials set up for your container registry, e.g. Docker Hub.
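For Docker Hub that’s just a case of logging in locally so the image push can authenticate:

docker login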

Run your app with Draft (and help from Helm)

Now run draft up:

PS C:\git\draftdotnetcorewebapi\draftdotnetcorewebapi> draft up
Draft Up Started: 'draftdotnetcorewebapi': 01CH1KFSSJWDJJGYBEB3AZAB01
draftdotnetcorewebapi: Building Docker Image: SUCCESS ⚓  (45.0376s)
draftdotnetcorewebapi: Pushing Docker Image: SUCCESS ⚓  (10.0875s)
draftdotnetcorewebapi: Releasing Application: SUCCESS ⚓  (3.3175s)
Inspect the logs with `draft logs 01CH1KFSSJWDJJGYBEB3AZAB01`

Awesome. Draft built your application into a Docker image, pushed that image up to your container registry and then released your application using the Helm Chart it scaffolded for you when you initially ran draft create.

Take a look at Kubernetes. Your application is running.

kubectl get deployments

PS C:\git\draftdotnetcorewebapi\draftdotnetcorewebapi> kubectl get deployments
NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
draftdotnetcorewebapi-csharp   1         1         1            1           7m

Iterating on your application

So your app is up and running in Kubernetes, now what?

Let’s make some changes to the Helm chart so that it deploys using a LoadBalancer service (or NodePort if you’re using Minikube). Let’s also add a new API controller called NamesController that simply returns a JSON array of static names for a GET request.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace draftdotnetcorewebapi.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class NamesController : ControllerBase
    {
        // GET api/names - returns a static JSON array of names
        [HttpGet]
        public ActionResult<IEnumerable<string>> Get()
        {
            return new string[] { "Wesley", "Jean-Luc", "Damar", "Guinan" };
        }
    }
}

Change your charts/csharp/values.yaml file to look like this (use NodePort if you’re trying this out with Minikube):

# Default values for c#.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  pullPolicy: IfNotPresent
service:
  name: dotnetcore
  type: LoadBalancer
  externalPort: 8080
  internalPort: 80
resources:
  limits:
    cpu: 1
    memory: 256Mi
  requests:
    cpu: 250m
    memory: 256Mi
ingress:
  enabled: false

Run draft up again. Your app will get built and released again. This time you’ll have a LoadBalancer service exposed and your updated application with the new API endpoint will be available within seconds.

This time, however, Draft was clever enough to know that it didn’t need a new Helm release. Using Helm, it determined that an existing release was already in place and instead did a helm upgrade under the covers. Test it for yourself with helm list:

PS C:\git\draftdotnetcorewebapi\draftdotnetcorewebapi> helm list
NAME                            REVISION        UPDATED                         STATUS          CHART                           NAMESPACE
draftdotnetcorewebapi           2               Wed Jun 27 23:10:11 2018        DEPLOYED        csharp-v0.1.0                   default

Check the service’s External IP / URL and try it out by tacking on /api/names on the end to try out the new Names API endpoint.

PS C:\git\draftdotnetcorewebapi\draftdotnetcorewebapi> kubectl get service draftdotnetcorewebapi-csharp -o wide
NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)          AGE       SELECTOR
draftdotnetcorewebapi-csharp   LoadBalancer   100.66.92.87   aezzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz.us-east-2.elb.amazonaws.com   8080:31381/TCP   32m       app=draftdotnetcorewebapi-csharp
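For example (the hostname is a placeholder – use your own service’s External IP / URL):

# should return the JSON array of names from the new controller
curl http://your-loadbalancer-hostname-here.us-east-2.elb.amazonaws.com:8080/api/names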

Draft clean up

To take your app down and delete the Helm release, simply issue a draft delete on the command line.


PS C:\git\draftdotnetcorewebapi\draftdotnetcorewebapi> draft delete
app 'draftdotnetcorewebapi' deleted

That’s all there is to it.

Draft really helps ease the monotony and pain of setting up a new project and getting it all working with Docker and Kubernetes. It vastly improves your development cycle times too. Check it out and start using it to save time!

Setting up Helm for Kubernetes (with RBAC) and Deploying Your First Chart

I was pointed to Helm the other day and decided to have a quick look at it. I tasked myself with setting it up in a sandbox environment and deploying a pre-packaged application (a.k.a. a chart, or Helm package) into my Kubernetes sandbox environment.

Helm 101

The best way to think about Helm is as a ‘package manager for Kubernetes’. You install Helm as a CLI tool (it’s written in Go), and all the operations it provides will feel very similar to those of common package managers like npm.

Helm has a few main concepts.

  • As mentioned above, a ‘Chart’ is a package for Helm. It contains the resource definitions required to run an app/tool/service on a Kubernetes cluster.
  • A ‘Repository’ is where charts are stored and shared from.
  • A ‘Release’ is an instance of a chart running in your Kubernetes cluster. You can create multiple releases for multiple instances of your app/tool/service.

More info about Helm and its concepts can be found in the Helm Quickstart guide. If, however, you wish to get stuck right in, read on…

This is a quick run-down of the tasks involved in setting it up and deploying a chart (I tried out kube-slack, which provides Slack notifications for failed Kubernetes operations, posting from my sandbox environment to my Slack channel).

Setting up Helm

Download and unzip the latest Helm binary for your OS. I’m using Windows, so I grabbed that binary, unblocked it, and put it in a folder found in my path. Running a PowerShell session, I can simply type:

helm

Helm executes and provides a list of possible options.

Before you continue with initialising Helm, you should create a service account in your cluster that Helm will use to manage releases across namespaces (or in a particular namespace you wish it to operate in). For testing, it’s easiest to set up the service account to use the default built-in “cluster-admin” role. (To be more secure you should set up Tiller with restricted permissions, and even restrict it by namespace too.)

To set up the basic SA with the cluster-admin role, you’ll need a ClusterRoleBinding to go with it. Here is the config you need to set both up.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tillersa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tillersa-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tillersa
    namespace: kube-system

Run kubectl create and point to this config to set up the SA and ClusterRoleBinding:

kubectl create -f .\tillersa-and-cluster-rolebinding.yaml

Now you can do a helm initialisation.

helm init --service-account tillersa --tiller-namespace kube-system

If all went well, you’ll get a message stating it was initialised and setup in your cluster.

Run:

kubectl get pods -n=kube-system

and you should see your new tiller-deploy pod running.

Deploying Charts with Helm

Run helm list to see that you currently have no chart releases deployed.

helm list

You can search the public Helm repository for charts (applications/tools/etc) that you can now easily deploy into your cluster.

helm search

Search for ‘grafana’ with helm. We’ll deploy that to the cluster in this example.

helm search grafana

Next up you might want to inspect and discover more about the chart you’re going to install. This is useful to see what sort of configuration parameters you can pass to it to customise it to your requirements.

helm inspect stable/grafana

Choose a namespace in your cluster to deploy to and a service type for Grafana (to customise it slightly to your liking), and then run the following, replacing the service.type and service.port values with your own. For example, you could use a ClusterIP service instead of LoadBalancer like I did:

helm install --name sean-grafana-release stable/grafana --set service.type=LoadBalancer --set service.port=8088 --namespace sean-dev

Helm will report back on the deployment it started for your release.

The command is not synchronous so you can run helm status to report on the status of a release.

helm status sean-grafana-release

Check on deployments in your namespace with kubectl or the Kubernetes dashboard and you should find Grafana running happily along.

In my case I used a LoadBalancer service, so my cluster being AWS based spun up an ELB to front Grafana. Checking the ELB endpoint on port 8088 as I specified in my Helm install command sure enough shows my new Grafana app’s login page.

The chart ensures all the necessary components are set up and created in your cluster to run Grafana: the deployment, the service, the service account, secrets, etc.

In this case the chart outputs instructions on how to retrieve your Grafana admin password for login. You can see how to get that in the output of your release.
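As a sketch, retrieving the password from a bash shell usually looks something like this – the secret name here assumes the chart’s release-name-grafana naming convention, so check your own release output for the exact command:

kubectl get secret --namespace sean-dev sean-grafana-release-grafana -o jsonpath="{.data.admin-password}" | base64 --decode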

Tidy Up

To clean up and delete your release simply do:

helm delete sean-grafana-release

Concluding

Done!

There is plenty more to explore with Helm. If you wish to change your Helm configuration after the fact, look into using the --upgrade parameter with helm init. helm reset can be used to remove Helm from your cluster, and there are many, many more options and scenarios that could be covered.

Explore further with the helm command to see available commands and do some digging.

Next up for me I’ll be looking at converting one of my personal applications into a chart that I can deploy into Kubernetes.