Fast Batch S3 Bucket object deletion from the shell

This is a quick post showing a nice and fast batch S3 bucket object deletion technique.

I recently had an S3 bucket that needed cleaning up. It had a few million objects in it; with the path-separating forward slashes included, that meant there were around 5 million or so keys to iterate.

The goal was to delete every object that did not have a .zip file extension. Effectively I wanted to leave only the .zip file objects behind (of which there were only a few thousand), but get rid of all the other millions of objects.

My first attempt was straightforward and naive: iterate every single key, check whether it ends in .zip, and delete it if it doesn't. However, every one of these iterations ended up being its own HTTP request, which turned out to be a very slow process. Definitely not fast batch S3 bucket object deletion…
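
For illustration, the naive approach was essentially a loop like the following (a rough sketch of the idea, using the same placeholder bucket name as the rest of this post):

# One delete-object call (i.e. one HTTP request) per key - correct, but painfully slow
aws s3api list-objects --output text --bucket the-bucket-name-here --query 'Contents[].[Key]' |
while read -r key; do
  case "$key" in
    *.zip) ;; # keep the .zip objects
    *) aws s3api delete-object --bucket the-bucket-name-here --key "$key" ;;
  esac
done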

I fired up about 20 shells all iterating over objects and deleting like this but it still would have taken days.

I then stumbled upon a really cool technique on serverfault that you can use in two stages.

  1. Iterate the bucket objects and stash all the keys in a file.
  2. Iterate the lines in the file in batches of 1000 and call delete-objects on each batch – effectively deleting the objects 1000 at a time (the maximum number of keys allowed in a single delete request).

In between stage 1 and stage 2, I just had to clean up the large text file of object keys to remove any lines that were .zip objects. For this I used Sublime Text and a simple regex search and replace (replacing matches with an empty string to remove those lines).

So here is the process I used to delete everything in the bucket except the .zip objects. This took around 1-2 hours for the object key path collection and then the delete run.

Get all the object key paths

Note: you will need to have Pipe Viewer (pv) installed first. Pipe Viewer is a great little utility that you can place into any normal pipeline between two processes, and it gives you a progress indicator right in the shell.

aws s3api list-objects --output text --bucket the-bucket-name-here --query 'Contents[].[Key]' | pv -l > all-the-stuff.keys


Remove any object key paths you don’t want to delete

Open your all-the-stuff.keys file in Sublime or any other text editor with regex find and replace functionality.

The regex search for Sublime Text:

^.*\.zip\n

Find and replace all .zip object paths with the above regex string, replacing results with an empty string. Save the file when done. Make sure you use the correctly edited file for the following deletion phase!
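
If you'd rather avoid the text editor step entirely, the same filtering can be done in the shell (a simple alternative sketch; it writes the filtered list to a new file, which you'd then use in the deletion step below instead of all-the-stuff.keys):

# Drop every line ending in .zip, keeping only the keys we actually want to delete
grep -v '\.zip$' all-the-stuff.keys > keys-to-delete.keys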

Iterate all the object keys in batches and call delete

tail -n+0 all-the-stuff.keys | pv -l | grep -v -e "'" | tr '\n' '\0' | xargs -0 -P1 -n1000 bash -c 'aws s3api delete-objects --bucket the-bucket-name-here --delete "Objects=[$(printf "{Key=%q}," "$@")],Quiet=false"' _

This one-liner effectively:

  • tails the large text file (mine was around 250MB) of object keys
  • passes this into Pipe Viewer for a progress indication
  • filters out (grep -v) any keys containing single quotes, as a precaution against shell quoting issues in the delete command
  • translates (tr) every newline character (effectively every line ending) into a null character ‘\0’
  • chops these up into groups of 1000 with xargs and passes each group of 1000 key paths as arguments to the aws s3api delete-objects command. This command accepts an Objects array parameter, which is where the 1000 object key paths are fed in.
  • finally, quiet mode is disabled to show the result of the delete requests in the shell, but you can also set this to true to suppress that output.

Effectively you end up calling aws s3api delete-objects, passing in 1000 objects to delete at a time.

This is how it can get through the work so quickly.

Nice!

Kubernetes Ingress Controller with NGINX Reverse Proxy and Wildcard SSL from Let’s Encrypt

This is a pattern I’ve used with success for access to apps running in a number of Kubernetes clusters that were restricted to only having a single ingress load balancer.

The Scenario

  • Kubernetes clusters (EKS) are on the internal network only (in this case private subnets in an AWS VPC).
  • IAM permissions are locked down to prevent creation of security groups (we can only use existing, pre-defined security groups), so the Kubernetes LoadBalancer service type is off-limits: the k8s control plane needs to create security groups automatically for these, and that operation fails under the restricted IAM permissions on the cluster. We do have one Elastic Load Balancer, created via the LoadBalancer service type when the cluster was initially bootstrapped with an nginx ingress controller (service type == LoadBalancer), before the permissions were locked down again.
  • The Ingress Controller that is running is backed by an internal facing Elastic Load Balancer (ELB), created initially as described above.
  • Applications run across namespaces in each cluster, and the Ingress Controller must be able to provide dynamic access for users of these internal applications that sit on the network outside the k8s cluster.
  • DNS and ingress must be dynamic enough to allow the same apps to run in different namespaces, use the same URL path, but with differing hostnames. SSL must also be provided for all of these apps using a wildcard SSL certificate. E.g.
    • namespace1.cluster.foo.bar/app1
    • namespace2.cluster.foo.bar/app1
    • namespace3.cluster.foo.bar/app1
    • namespace1.cluster.foo.bar/app2
    • namespace2.cluster.foo.bar/app2
    • namespace3.cluster.foo.bar/app2
  • Once the wildcard CNAME DNS record is created, it is difficult to re-point it to a new location if changes are needed (we are reliant on a 3rd party to manage DNS).

A Solution with Reverse Proxying

There are of course a number of ways to approach this, such as running cert-manager inside the cluster with the letsencrypt issuer, or, if you are running your own PKI with Vault, the vault issuer.

cert-manager wouldn’t work well here as services are not publicly accessible for HTTP-01 certificate verification.

It could also be possible to terminate SSL at the ingress controller level in the cluster with the SSL certificate loaded there.

One additional requirement that I didn’t mention above though was that developers who are pushing their apps into the clusters need to be able to ‘dynamically’ configure their own personal ‘dev’ namespaces / ingress rules.

They configure their ingress easily enough with the Kubernetes Ingress resource when they deploy their apps (using Helm), however hostnames are not so easy for them to configure. Route53 is not in use here, and not allowed in this environment, and programmatic access to DNS is not possible.

A reverse proxy with NGINX

This layer exists more or less just to allow easy re-pointing of the wildcard CNAME DNS entry to the Kubernetes cluster. As DNS is not easily configured (it is handled by another team/resource), we can simply leave it pointed at the NGINX elastic load balancer, and then just re-point requests using NGINX configuration if we need to.

It’s worth pointing out that this NGINX layer could be hosted on a multitude of places, including as a containerised solution, or it could even be replaced by a lambda function with API Gateway that could do the reverse proxying instead.

Environments are designated by namespaces in each ‘class’ of cluster. For example a non-production EKS cluster will have namespaces for non-production environments.

Hostnames need to be used to help the ingress rules match correctly with designated paths.

I configured an internal load balancer and setup a fleet of NGINX instances behind it.

Here is a quick runbook of how to set up NGINX and certbot on a vanilla Amazon Linux 2 EC2 instance. Use whichever automation you prefer, such as baking your own AMI with Packer, or using Terraform or Ansible, but the runbook of steps to install NGINX and certbot is effectively:

# nginx
sudo amazon-linux-extras install nginx1.12
sudo systemctl enable nginx
sudo systemctl start nginx

# certbot
sudo wget -r --no-parent -A 'epel-release-*.rpm' https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/
sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm
sudo yum-config-manager --enable epel*
yum repolist all
sudo yum install -y certbot

# request / generate letsencrypt wildcard cert using dns challenge interactively
sudo certbot -d '*.your.domain.here' --manual --preferred-challenges dns certonly
# The command above is interactive - omit it from automation and run it manually if you're using DNS-01 like I am here; certbot will give you a dynamically generated TXT record value for DNS-01 that you'll need to create in DNS.
sudo systemctl restart nginx

Once NGINX is installed and your certs are generated, you’ll need to configure /etc/nginx/nginx.conf to point to the correct certificate files.
A wildcard CNAME record is created once-off that points anyhost.cluster.foo.bar to the internal ELB hostname for the reverse proxy NGINX instances (these sit outside of the cluster as standard EC2 hosts for now). For example:

[CNAME] *.cluster.foo.bar -> internal-nginx-reverse-proxy-fleet-xxxx-xxxx.us-east-2.elb.amazonaws.com

I used certbot (letsencrypt) to issue a wildcard SSL certificate for the NGINX fleet servers for *.cluster.foo.bar. DNS-01 challenge type was used, as everything here is in a private, internal network, not accessible by letsencrypt services.

A TXT record just needs to be created with your DNS to verify to letsencrypt that you own the domain in question.

In the NGINX configuration, the generated certificate is loaded up for port 443 and the following location rule is setup to proxy_pass the requests sent to the NGINX fleet back to the Kubernetes Ingress Controller ELB.

location / {
  proxy_set_header Host $host;
  proxy_pass http://internal-ingress-controller-xxxxx.us-east-2.elb.amazonaws.com;
}

The proxy_set_header directive is important, as it takes the Host header that the NGINX fleet instance receives from the client and sends it along with the proxied request back to the Kubernetes ingress controller. The ingress rules need to match both hostname AND path in the requests to find the correct service inside the cluster/namespace.

SSL is now effectively terminated at the NGINX fleet layer with a wildcard SSL certificate and services inside the cluster don’t need to worry about configuring their own individual SSL certificates.
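
Putting the pieces together, the relevant part of the NGINX configuration ends up looking roughly like the sketch below. The certificate paths assume certbot's default output location for the wildcard certificate, and the ELB hostname is the same placeholder used above - adjust both to suit your environment.

server {
  listen 443 ssl;
  server_name *.cluster.foo.bar;

  # Wildcard certificate generated by certbot (default letsencrypt output paths)
  ssl_certificate /etc/letsencrypt/live/cluster.foo.bar/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/cluster.foo.bar/privkey.pem;

  location / {
    # Preserve the hostname the client requested so the ingress rules can match on it
    proxy_set_header Host $host;
    proxy_pass http://internal-ingress-controller-xxxxx.us-east-2.elb.amazonaws.com;
  }
}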

Ingress Rule Configuration

Now, developers can deploy their apps, and customise their ingress rules to use both hostname and path to setup access for their apps running in the cluster(s).

For example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: some-app
  name: some-app
  namespace: namespace1
spec:
  rules:
  - host: namespace1.cluster.foo.bar
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 8083
        path: /app1
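
Once an ingress rule like this is deployed, a quick way to check the end-to-end path is to curl the app through the proxied hostname, or to hit the ingress controller ELB directly with an explicit Host header (the hostnames below are just the examples used in this post):

# Through the NGINX reverse proxy fleet (SSL terminated with the wildcard certificate)
curl https://namespace1.cluster.foo.bar/app1

# Straight at the ingress controller ELB, bypassing the proxy layer
curl -H "Host: namespace1.cluster.foo.bar" http://internal-ingress-controller-xxxxx.us-east-2.elb.amazonaws.com/app1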

There are definitely other ways of doing this. Cleaner possibly, more automated in some ways, however with the constraints in play here (internal EKS, private only networks, no public internet access into the cluster), I think this is a good solution that makes life fairly pleasant for the developers that need to deploy their apps to these Kubernetes clusters.

Useful NGINX Ingress Controller Configurations for Kubernetes using Helm

My favourite Ingress Controller for Kubernetes is definitely the official NGINX Ingress Controller. It provides tons of customisation and is under active development with great community support. This post will dive into some of the more useful nginx ingress controller configurations and options available.

If you use the official stable/nginx-ingress chart for Helm, the default values you’ll get with installation are not always the best choices.

This is my collection of useful / common configuration options I tend to change when installing an ingress controller. A few of these options are geared towards AWS deployments, but otherwise the rest of the options are generic enough to apply to any platform you may be running on.

Useful nginx ingress controller options for Kubernetes

AWS only configuration options

  • Use an internal (private) Elastic Load Balancer for Ingress. Annotate with: service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  • Specify the kind of AWS Load Balancer to use with the Ingress Controller. Annotate with: service.beta.kubernetes.io/aws-load-balancer-type: nlb for a Network Load Balancer (without this annotation you get a Classic ELB).

Common configuration options

  • controller.service.type (default == LoadBalancer) – specifies the type of controller service to create. Useful to open up the Ingress Controller for North/South traffic with differing models of access. E.g. Cluster only with ClusterIP, NodePort for specific host only access, or LoadBalancer to expose with a public or internal facing Load Balancer.
  • controller.scope.enabled (default == disabled / watch all namespaces) – where the controller should look out for ingress rule resources. Useful to limit the namespace(s) that the Ingress Controller works in.
  • controller.scope.namespace – namespace to watch for ingress rules if the controller.scope.enabled option is toggled on.
  • controller.minReadySeconds – how many seconds a pod needs to be ready before killing the next, during update – useful for when updating/upgrading the Ingress Controller deployment.
  • controller.replicaCount (default == 1) – definitely set this higher than 1. You want at least 2 for replicaCount to ensure there is always a controller running when draining nodes or updating your ingress controller.
  • controller.service.loadBalancerSourceRanges (default == []) – Useful to lock your Ingress Controller Load Balancer down. For example, you might not want Ingress open to 0.0.0.0/0 (all internet) and instead assign a value that restricts ingress access to an IP range you own. Using helm, you can specify an array with typical array square brackets e.g. [10.0.0.0/8, 172.0.0.0/8]
  • controller.service.enableHttp (default == true) – Useful to disable insecure HTTP (and leave only HTTPS)
  • controller.stats.enabled (default == false) – Enables controller stats page – Useful for stats and debugging. Not a good idea for production though. The controller stats service can be locked down if required by specific CIDR range.

To deploy the NGINX Ingress Controller helm chart and specify some of the above customisations, you can create a yaml file and populate it with the following example configuration (replace/change as required):

controller:
  replicaCount: 2
  service:
    type: "LoadBalancer"
    loadBalancerSourceRanges: [10.0.0.0/8]
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
  stats:
    enabled: true

Install with helm like so:

helm install -f ingress-custom.yaml stable/nginx-ingress --name nginx-ingress --namespace example

If you’re using an internal elastic load balancer (like the above example yaml configuration), don’t forget to make sure your private subnets are tagged with the following key/value:

key = "kubernetes.io/role/internal-elb"
value = "1"
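
If you need to add that tag yourself, something along these lines does the job (the subnet IDs are placeholders - substitute your own private subnet IDs):

aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1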

Enjoy customising your own ingress controller!

Editing a webapp or site’s HTTP headers with Lambda@Edge and CloudFront

Putting CloudFront in front of a static website that is hosted in an S3 bucket is an excellent way of serving up your content and ensuring it performs well no matter where your users are, by leveraging caching and CloudFront’s geographically distributed edge locations.

The setup goes a little something like this:

  • Place your static site files in an S3 bucket that is set up for static web hosting
  • Create a CloudFront distribution that uses the S3 bucket content as the origin
  • Add a cache behaviour to the distribution

This is an excellent way of hosting a website or webapp that can be delivered anywhere in the world with ultra low latency, and you don’t even have to worry about running your own webserver to host the content. Your content simply sits in an S3 bucket and is delivered by CloudFront (and can be cached too).
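
As a rough sketch of what the S3 side of that setup looks like from the CLI (bucket name, region and build directory are placeholders here; the CloudFront distribution itself is usually easiest to create through the console):

# Create the bucket and enable static website hosting on it
aws s3 mb s3://my-static-site-bucket --region us-east-1
aws s3 website s3://my-static-site-bucket --index-document index.html --error-document error.html

# Upload the static site content
aws s3 sync ./site-build s3://my-static-site-bucket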

But what happens if you want to get a little more technical and serve up custom responses for any HTTP requests for your website content? Traditionally you’d need a custom webserver that you could use to modify the HTTP request/response lifecycle (such as Varnish / Nginx).

That was the case until Lambda@Edge was announced.

I was inspired to play around with Lambda@Edge after reading Julia Evans’ blog post about Cloudflare Workers, where she set up something similar to add a missing Content-Type header to responses from her blog’s underlying web host. I wanted to see how easy it was to handle in an AWS setup with S3 hosted content and CloudFront.

So here is a quick guide on how to modify your site / webapp’s HTTP responses when you have CloudFront sitting in front of it.

Note: you can run Lambda@Edge functions on any of the following CloudFront events:

  • After CloudFront receives a request from a viewer (viewer request)
  • Before CloudFront forwards the request to the origin (origin request)
  • After CloudFront receives the response from the origin (origin response)
  • Before CloudFront forwards the response to the viewer (viewer response)
  • You can return a custom response from Lambda@Edge without even sending a request to the CloudFront origin at all.

Of course, the only ones that are guaranteed to always run are the viewer events. This is because the origin request and origin response events only happen when the requested object is not already cached in an edge location; in that case CloudFront forwards a request to the origin, receives a response back from the origin (hopefully!), and you can act on both of those events too.

How to edit HTTP responses with Lambda@Edge

Create a new Lambda function and make sure it is placed in the us-east-1 region (AWS requires that Lambda@Edge functions are created in the US East / N. Virginia region). When you deploy the function to Lambda@Edge, it is replicated to regions across the world, each with its own replica of the function.

Fun fact: your CloudWatch logs for Lambda@Edge will appear in the relevant region where your content is requested from – i.e. based on the region the edge location exists in that ends up serving up your content.

You’ll need to create a new IAM Role for the function to leverage, so use the Lambda@Edge role template.

Select the Node.js 6.10 runtime for the function. In the code editor, set up the following handler function, which will do the actual header manipulation work:

exports.handler = (event, context, callback) => {
    // Grab the response object from the CloudFront event record
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    // Add a custom header: the lower-cased header name is the key, and the value is a
    // list containing the properly-cased header name and its value
    headers['x-sean-example'] = [{key: 'X-Sean-Example', value: 'Lambda @ Edge was here!'}];

    // Return the (modified) response back to CloudFront
    callback(null, response);
};


The function will receive an event for every request passing through. In that event you simply retrieve the CloudFront response from event.Records[0].cf.response and set your required header(s) by adding an entry keyed on the header name, with the header's key/value pair as its value.

Make sure you publish a version of the Lambda function, as you’ll need to attach it to your CloudFront behavior by ARN that includes the version number. (You can’t use $LATEST, so make sure you use a numerical version number that you have published).

Now if you make a new request to your content, you should see the new header being added by Lambda@Edge!
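
A quick curl against your distribution will confirm it (the domain below is just a placeholder for your CloudFront distribution's domain name):

curl -I https://d1234example.cloudfront.net/index.html

# Expect to see the custom header in the response, e.g.:
# x-sean-example: Lambda @ Edge was here!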

Lambda@Edge is a great way to easily modify CloudFront Distribution related events in the HTTP lifecycle. You can keep response times super low as the Lambda functions are executed at the edge location closest to your users. It also helps you to keep your infrastructure as simple as possible by avoiding the use of complicated / custom web servers that would otherwise just add unnecessary operational overhead.

Running an S3 API compatible object storage server (Minio) on the Raspberry Pi

I’ve recently become interested in hosting my own local S3 API compatible object storage server at home.

So tonight I set about setting up Minio.


Minio is an object storage server that is S3 API compatible. This means I’ll be able to use my working knowledge of the Amazon S3 API and tools, but to interact with my own, locally hosted storage service running on a Raspberry Pi.

I had heard about Zenko before (an S3 API compatible object storage server) but was searching around for something really lightweight that I could easily run on ARM architecture – i.e. my Raspberry Pi model 3 I have sitting on my desk right now. In doing so, Minio was the first that I found that could easily be compiled to run on the Raspberry Pi.

The goal right now is to have a local object storage service that is compatible with S3 APIs that I can use for home use. This has a bunch of cool use cases, and the ones I am specifically interested in right now are:

  • Being able to write scripts that interact with S3, but test them locally with Minio before even having to think about deploying them to the cloud. A local object storage API is going to be free and fast. Plus it’s great knowing that you’re fully in control of your own data.
  • Setting up a publicly exposable object storage service that I can target with serverless functions that I plan to run on demand in the cloud to do processing, outputting the artifacts to my home object storage service.

The second use case above is the one I intend to use for ffmpeg-processed video. Basically, I want to be able to process video from online services using something like AWS Lambda (probably with ffmpeg bundled into the function) and output the resulting files to my home storage system.

The object storage service will receive these output files from Lambda and I’ll have a cronjob or rsync setup to then sync the objects placed into my storage bucket(s) to my home Plex media share.

This means I’ll be able to remotely queue up stuff to watch via a simple interface I’ll expose (or a message queue of some sort) to be processed by Lambda, and by the time I’m home everything will be ready to watch in Plex.
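
For the sync step, something like the MinIO client's mirror command on a cron schedule would do the trick (a sketch only, assuming an mc host alias named homeminio has been configured and the Plex library lives at the path shown):

# Mirror new objects from the Minio bucket down to the Plex media share
mc mirror homeminio/videos /mnt/plex/media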

Normally I would be more interested in running the Docker image for Minio, but at home I want something that is really cheap to run, so compiling Minio for the Raspberry Pi makes total sense here: the Pi is super cheap to leave powered on 24/7, as opposed to running something beefier as a Docker host or a lightweight Kubernetes home cluster.

Here’s the quick start up guide to get it running on Raspberry Pi

You’ll basically download Go, extract it, set it up on your path, then use it to compile Minio’s source code into an ARM compatible binary that you can run on your pi.

wget https://dl.google.com/go/go1.10.3.linux-armv6l.tar.gz
sudo tar -C /usr/local -xzf go1.10.3.linux-armv6l.tar.gz
export PATH=$PATH:/usr/local/go/bin # also add this line to ~/.profile to persist it
source ~/.profile
go get -u github.com/minio/minio
mkdir ~/minio-data
cd ~/go/bin
./minio server ~/minio-data/

And you’re up and running! It’s that simple to get going quickly.

Running interactively you’ll get a default access and secret key in the terminal, so head on over to the Web UI / interface to check things out: http://your-raspberry-pi-ip-or-hostname:9000/minio/

Enter your credentials to login.

Of course at this stage you can also start using your S3 API compatible command line tools to start working with your new object storage server too.
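
For example, the standard AWS CLI works against Minio if you point it at the local endpoint (configure the access key and secret key that Minio printed at startup first, and substitute your Pi's hostname):

aws configure # enter the Minio access key and secret key when prompted
aws --endpoint-url http://your-raspberry-pi-ip-or-hostname:9000 s3 mb s3://test-bucket
aws --endpoint-url http://your-raspberry-pi-ip-or-hostname:9000 s3 cp ./somefile.txt s3://test-bucket/
aws --endpoint-url http://your-raspberry-pi-ip-or-hostname:9000 s3 ls s3://test-bucket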

Nice!

Provision your own Kubernetes cluster with private network topology on AWS using kops and Terraform – Part 2

Getting Started

If you managed to follow and complete the previous blog post, then you managed to get a Kubernetes cluster up and running in your own private AWS VPC using kops and Terraform to assist you.

In this blog post, you’ll cover the following items:

  • Setup upstream DNS for your cluster
  • Get a Kubernetes Dashboard service and deployment running
  • Deploy a basic metrics dashboard for Kubernetes using heapster, InfluxDB and Grafana

Upstream DNS

In order for services running in your Kubernetes cluster to be able to resolve services outside of your cluster, you’ll now configure upstream DNS.

Containers that are started in the cluster will have their local resolv.conf files automatically set up with what you define in your upstream DNS ConfigMap.

Create a ConfigMap with details about your own DNS server to use as the upstream. You can also add external ones like Google DNS (as in the example below):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"yourinternaldomain.local": ["10.254.1.1"]}
  upstreamNameservers: |
    ["10.254.1.1", "8.8.8.8", "8.8.4.4"]

Save your ConfigMap as kube-dns.yaml and apply it to enable it.

kubectl apply -f kube-dns.yaml

You should now see it listed in Config Maps under the kube-system namespace.
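
A couple of quick checks to confirm the new DNS configuration is in place (the pod name below is just a placeholder for any running pod in your cluster, and assumes its image has nslookup available):

kubectl get configmap kube-dns -n kube-system -o yaml

# Confirm that a pod can resolve a host in your internal domain via the stub domain / upstream servers
kubectl exec -it some-running-pod -- nslookup somehost.yourinternaldomain.local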

Kubernetes Dashboard

Deploying the Kubernetes dashboard is as simple as running one kubectl command.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

You can then start a dashboard proxy using kubectl to access it right away:

kubectl proxy

Head on over to the following URL to access the dashboard via the proxy you ran:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

You can also access the Dashboard via the API server internal elastic load balancer that was set up in part 1 of this blog post series. E.g.

https://your-internal-elb-hostname/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default

Heapster, InfluxDB and Grafana (now deprecated)

Note: Heapster is now deprecated and there are alternative options you could instead look at, such as what the official Kubernetes git repo refers you to (metrics-server). Nevertheless, here are the instructions you can follow should you wish to enable Heapster and get a nice Grafana dashboard that showcases your cluster, nodes and pods metrics…

Clone the official Heapster git repo down to your local machine:

git clone https://github.com/kubernetes/heapster.git

Change directory to the heapster directory and run:

kubectl create -f deploy/kube-config/influxdb/
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml

These commands will essentially launch deployments and services for grafana, heapster, and influxdb.

The Grafana service should attempt to get a LoadBalancer from AWS via your Kubernetes cluster, but if this doesn’t happen, edit the monitoring-grafana service YAML configuration and change the type to LoadBalancer. E.g.

"type": "LoadBalancer",

Save the monitoring-grafana service definition and your cluster should automatically provision a public facing ELB and set it up to point to the Grafana pod.

Note: if you want it available on an internal load balancer instead, you’ll need to create your grafana service using the aws-load-balancer-internal annotation instead.
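
That would look something like the following fragment of the monitoring-grafana service definition (a sketch only, using the same internal load balancer annotation shown in the ingress controller examples elsewhere on this blog):

metadata:
  name: monitoring-grafana
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer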

Grafana dashboard for Kubernetes with Heapster

Now that you have Heapster running, you can also get some metrics displayed directly in your Kubernetes dashboard too.

You may need to restart the dashboard pods to access the new performance stats in the dashboard though. If this doesn’t work, delete the dashboard deployment, service, pods, role, and then re-deploy the dashboard using the same process you followed earlier.

Once it’s up and running, use the DNS name of the new ELB to access Grafana’s dashboard, log in with admin/admin, then change the default admin password to something secure and save. You can now see cluster stats/performance stats in Kubernetes, as well as in Grafana.

Closing off

This concludes part two of this series. To sum up, you managed to configure upstream DNS, deploy the Kubernetes dashboard, and set up Heapster to allow you to see metrics in the dashboard, as well as deploying InfluxDB for storing the metric data and Grafana as a front-end service for viewing dashboards.