Useful NGINX Ingress Controller Configurations for Kubernetes using Helm

My favourite Ingress Controller for Kubernetes is definitely the official NGINX Ingress Controller. It provides tons of customisation and is under active development with great community support. This post will dive into some of the more useful nginx ingress controller configurations and options available.

If you use the official stable/nginx-ingress chart for Helm, the default values you’ll get with installation are not always the best choices.

This is my collection of useful / common configuration options I tend to change when installing an ingress controller. A few of these options are geared towards AWS deployments, but otherwise the rest of the options are generic enough to apply to any platform you may be running on.

Useful nginx ingress controller options for Kubernetes

AWS only configuration options

  • Use an internal (private) Elastic Load Balancer for Ingress. Annotate with: service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  • Specify the kind of AWS Load Balancer to use with the Ingress Controller. Annotate with: service.beta.kubernetes.io/aws-load-balancer-type: nlb to get a Network Load Balancer instead of the default Classic ELB (see the snippet after this list).
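For illustration, both of these annotations go under controller.service.annotations in your Helm values file, something like this (values as per the notes above):

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-type: nlb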

Common configuration options

  • controller.service.type (default == LoadBalancer) – specifies the type of controller service to create. Useful to open up the Ingress Controller for North/South traffic with differing models of access. E.g. Cluster only with ClusterIP, NodePort for specific host only access, or LoadBalancer to expose with a public or internal facing Load Balancer.
  • controller.scope.enabled (default == disabled / watch all namespaces) – controls whether the controller watches only a specific namespace for ingress rule resources instead of all of them. Useful to limit the namespace(s) that the Ingress Controller works in (see the short example after this list).
  • controller.scope.namespace – namespace to watch for ingress rules if the controller.scope.enabled option is toggled on.
  • controller.minReadySeconds – how many seconds a newly created pod must be ready before the next old pod is killed during a rolling update – useful when updating/upgrading the Ingress Controller deployment.
  • controller.replicaCount (default == 1) – definitely set this higher than 1. You want at least 2 for replicaCount to ensure there is always a controller running when draining nodes or updating your ingress controller.
  • controller.service.loadBalancerSourceRanges (default == []) – Useful to lock your Ingress Controller Load Balancer down. For example, you might not want Ingress open to 0.0.0.0/0 (all internet) and instead assign a value that restricts ingress access to an IP range you own. Using helm, you can specify an array with typical array square brackets e.g. [10.0.0.0/8, 172.0.0.0/8]
  • controller.service.enableHttp (default == true) – Useful to disable insecure HTTP (and leave only HTTPS)
  • controller.stats.enabled (default == false) – Enables the controller stats page – useful for stats and debugging. Not a good idea for production though. The controller stats service can be locked down to a specific CIDR range if required.
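A few of these options are not shown in the larger example further down, so here is a rough sketch of how they look in a values file (the namespace and minReadySeconds values are just placeholders):

controller:
  minReadySeconds: 10
  scope:
    enabled: true
    namespace: "example"
  service:
    enableHttp: false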

To deploy the NGINX Ingress Controller helm chart and specify some of the above customisations, you can create a yaml file and populate it with the following example configuration (replace/change as required):

controller:
  replicaCount: 2
  service:
    type: "LoadBalancer"
    loadBalancerSourceRanges: [10.0.0.0/8]
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
  stats:
    enabled: true

Install with helm like so:

helm install -f ingress-custom.yaml stable/nginx-ingress --name nginx-ingress --namespace example
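Note that the above uses Helm 2 syntax. If you are running Helm 3, the --name flag no longer exists and the release name is passed as the first argument instead (add --create-namespace if the target namespace does not exist yet):

helm install nginx-ingress stable/nginx-ingress -f ingress-custom.yaml --namespace example --create-namespace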

If you’re using an internal elastic load balancer (like the above example yaml configuration), don’t forget to make sure your private subnets are tagged with the following key/value:

key = "kubernetes.io/role/internal-elb"
value = "1"
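If you need to add the tag yourself, the AWS CLI can do it, for example (the subnet ID below is a placeholder – repeat for each private subnet):

aws ec2 create-tags --resources subnet-0123456789abcdef0 --tags Key=kubernetes.io/role/internal-elb,Value=1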

Enjoy customising your own ingress controller!

Another update! ESXi Host Backup & Restore GUI Utility (PowerCLI based) updated to 1.3

The other weekend I managed to get some spare time to do another update to my ESXi 5.0 / 5.1 Host Backup & Restore GUI utility; this time it has been updated to version 1.3. I didn’t post up the changes at first, as the update was done by special request from one of my blog readers (thanks Flavio!). However, after receiving more comments from a few others having a similar issue to the one Flavio had, I thought I should definitely post the updated version here, which should hopefully solve the issues some people are seeing.

 

The changes are based on feedback left in the comments about the utility, relating to exceptions thrown when some users try to back up their host configurations. Specifically, the exception message: “Exception caught: Get-VMHost VMHost with name ‘xxx’ was not found using the specified filter(s).”

You can check the utility out over on its page here.

Updates (17-02-2013) – version 1.3:

  • Hosts are retrieved using a new method (for both backup and restore options)
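For anyone curious about the kind of change involved: the error above suggests a name-filtered Get-VMHost lookup failing. Purely as an illustration (this is not the utility’s actual code, and the host name and backup path are placeholders), a more forgiving approach in PowerCLI might look like this:

# Fetch all hosts and match on name, rather than relying on the Get-VMHost -Name filter
$hostName = "esxi01.lab.local"
$vmHost = Get-VMHost | Where-Object { $_.Name -eq $hostName }
if ($vmHost) {
    # Back up the host configuration bundle to a local folder
    Get-VMHostFirmware -VMHost $vmHost -BackupConfiguration -DestinationPath "C:\Backups"
}
else {
    Write-Warning "Host $hostName was not found"
}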

 

Distributed Virtual Switch 5.1 Health Check for VLAN configuration issues

With the announcement of vSphere 5.1, one of the new features introduced was the network health check capability now available for Distributed Virtual Switches (version 5.1 of the switch). This area has already been covered in detail by two bloggers I know of, namely Chris Wahl at Wahlnetwork, and Rickard Nobel.

However, this is one feature I was really looking forward to testing out myself, and had been preparing for by getting some physical Microserver Hosts up and running in my home lab with multiple NICs and VLAN support. The other day I had a chance to play around with the Network health check functionality with one of my hosts uplinked to a DVS I had created in vCenter.

This evening I was reminded of how useful this feature actually is. I had plugged one uplink from my Dell PowerConnect 5324 switch into the dual port NIC in the host and left the other NIC disconnected, as I was short one cable. Tonight (a day later) I connected this up and was immediately notified of an issue on the uplink via the VLAN health status! I had, of course, forgotten to set up the port trunking on the Dell switch (VLANs 8 and 10) for this port, having set it up the day before only on the one port that was connected.

 

Here is a breakdown of what I saw using the vSphere Web Client after selecting my DVS and then choosing the Health tab under “Monitor”. (vCenter also has alarms, set up when you enable the feature, that trigger to alert you of the issue.)

 

 

A quick change on my switch to set the VLANs up on this particular uplink port meant I was soon up and running again.

 

As you can see, the Health Check feature is really useful, providing vSphere admins with an easy way to verify network port configurations on the physical networking hardware without having to log in to another interface and check themselves, or rely on another team to do this for them. For more detail, or instructions on how to set this up, I recommend checking out the two blog posts I linked to above by Chris Wahl and Rickard Nobel.
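If you would rather switch the feature on with PowerCLI than click through the Web Client, something along these lines should do it for a 5.1 distributed switch (a sketch only – the switch name is a placeholder, and it is worth double-checking the health check config class names against your PowerCLI / vSphere API version):

# Enable the VLAN/MTU health check on a vDS, checking once a minute
$vds = Get-VDSwitch -Name "dvSwitch0"
$vlanMtuCheck = New-Object VMware.Vim.VMwareDVSVlanMtuHealthCheckConfig
$vlanMtuCheck.Enable = $true
$vlanMtuCheck.Interval = 1
$vds.ExtensionData.UpdateDVSHealthCheckConfig(@($vlanMtuCheck))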

 

My VMware vSphere Home lab configuration

I have always enjoyed running my own home lab for testing and playing around with the latest software and operating systems / hypervisors. Up until recently, it was all hosted on VMware Workstation 8.0 on my home gaming PC, which has an AMD Phenom II x6 (hex core) CPU and 16GB of DDR3 RAM. This has been great, and I still use it, but there are some bits and pieces I still want to be able to play with that are traditionally difficult to do on a single physical machine, such as working with VLANs and taking advantage of hardware feature sets.

 

To that end, I have been slowly building up a physical home lab environment. Here is what I currently have:

Hosts

  • 2 x HP Proliant N40L Microservers (AMD Turion Dual Core processors @ 1.5GHz)
  • 8GB DDR3 1333MHz RAM (2 x 4GB modules)
  • Onboard Gbit NIC
  • PCI-Express 4x HP NC360T Dual Port Gbit NIC as an add-on card (modified to low-profile bracket)
  • 250GB local SATA HDD (just used to host the ESXi installations)

Networking

  • As mentioned above, I am using HP NC360T PCI-Express NICs to give me a total of 3 x vmnics per ESXi host.
  • Dell PowerConnect 5324 switch (24 port Gbit managed switch)
  • 1Gbit Powerline Ethernet home plugs to uplink the Dell PowerConnect switch to the home broadband connection. This allows me to keep the lab in a remote location in the house, which keeps the noise away from the living area.

Storage

  • This is a work in progress at the moment (I am finding that the low-end 2-bay home NAS devices are not sufficient performance-wise, and the more capable models are too expensive to justify).
  • Repurposed Micro-ATX custom built PC, housed in a Silverstone SG05 micro-ATX chassis running FreeNAS 8.2 (Original build and pics of the chassis here)
  • Intel Core 2 Duo 2.4 GHz processor
  • 4GB DDR2-800 RAM
  • 1 Gbit NIC
  • 1 x 1TB 7200 RPM SATA II drive
  • 1 x 128GB OCZ Vertex 2E SSD (SATA II)
  • As this is temporary, each drive provides 1 x Datastore to the ESXi hosts. I therefore have one large datastore for general VMs, and one fast SSD based datastore for high priority VMs or VM disks. I am limited by the fact that the Micro-ATX board only has 2 x onboard SATA ports, so I may consider purchasing an add-on card to expand these.
  • Storage is presented as NFS. I am currently testing ZFS vs UFS, and the use of the SSD drive as a ZFS intent log (ZIL) and/or cache (L2ARC) device – a rough example of the zpool commands involved follows this list. To make this more reliable, I will need the above-mentioned add-on card to build redundancy into the system, as I would not like to lose a drive at this time!
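As a rough idea of what adding the SSD as a log or cache device looks like from the FreeNAS shell (the pool and device names below are placeholders):

zpool add tank log ada2      # use the SSD (or a partition on it) as a dedicated ZIL / log device
zpool add tank cache ada2    # or use it as an L2ARC read cache device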

Platform / ghetto rack

  • IKEA Lack rack (black) – cheap and expandable : )

 

To do

Currently, one host only has 4GB RAM; I have an 8GB kit waiting to be added to bring both hosts up to 8GB. I also need to add the HP NC360T dual port NIC to this host, as it is a recent addition to the home lab.

On the storage side of things, I just managed to take delivery of 2 x OCZ Vertex 2 128GB SSD drives which I got at bargain prices the other day (£45 each). Once I have expanded SATA connectivity in my Micro-ATX FreeNAS box I will look into adding these drives for some super fast SSD storage expansion.

 

The 2 x 120GB OCZ SSDs to be used for Shared Host Storage
HP NC360T PCI-Express NIC and 8GB RAM kit for the new Microserver

 

Lastly, the Dell PowerConnect 5324 switch I am using still has the original firmware loaded (from 2005). This needs to be updated to the latest version so that I can enable Link Layer Discovery Protocol (LLDP) – which is newly supported with the VMware vSphere 5.0 release on Distributed Virtual Switches. This can help with the configuration and management of network components in an infrastructure, and will mainly serve to allow me to play with this feature in my home lab. I seem to have lost my USB-to-Serial adapter though, so this firmware upgrade will need to wait until I can source a new one off ebay.

 

PowerCLI – Fetch Interesting stats or configuration for a list of VMs

Now and then I find that I need to retrieve some useful information from a variety of VMs; this usually involves doing a Get-VM with some selections and criteria to search on. However, sometimes the information I require about a VM is listed in the advanced configuration and is not as easy to get to with a single cmdlet. I thought it would be really handy to have a PowerCLI function that would easily pull the useful information out for me and summarise it for any given VM or set of VMs.

 

With that said, I recently read a great blog post by Jonathan Medd (Basic VMware Cluster Compatibility Check), and after reading it I thought it would be a great idea to create a set of functions that provide me with the information I often use. To start, I thought I would write a function that lists the most useful or common information about VMs that I often search for. As well as speeding up the process of retrieving information about VMs, I thought it would also be good PS/PowerCLI practice for me to write more functions, as I tend to write PowerCLI reporting scripts rather than actual functions that accept input from the pipeline or other parameters.

Below is my function to collect some useful information about Virtual Machines – you can specify a VM with the -VM parameter or pipe a list of VMs to it, using Get-VM | Get-VMUsefulStats. Jonathan’s post also had an interesting section about the order in which output is displayed. You’ll need to pipe the output to Select-Object to change the order the information is fed back in, otherwise it will be listed in the default order. This is not really a problem anyway – just good to know if you are fussy about the order in which the output comes back!

 

So, here is the first function (Get-VMUsefulStats):

 


 

function Get-VMUsefulStats {
<#
.SYNOPSIS
Fetches interesting or useful stats about VMware Virtual Machines

.DESCRIPTION
Fetches interesting or useful stats about VMware Virtual Machines

.PARAMETER VMName
The name of the Virtual Machine to fetch information about (alias: VM)

.EXAMPLE
PS F:\> Get-VMUsefulStats -VM FS01

.EXAMPLE
PS F:\> Get-VM | Get-VMUsefulStats

.EXAMPLE
PS F:\> Get-VM | Get-VMUsefulStats | Where {$_.Name -match "FS"}

.LINK
http://www.shogan.co.uk

.NOTES
Created by: Sean Duffy
Date: 18/01/2012
#>

[CmdletBinding()]
param(
[Parameter(Position=0,Mandatory=$true,HelpMessage="Name of the VM to fetch stats about",
ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
[Alias("VM","Name")]
[System.String]
$VMName
)

process {
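# Resolve the supplied VM name (or a piped VM's Name property) to a full VM object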

$VM = Get-VM $VMName

$VMHardwareVersion = $VM.Version
$VMGuestOS = $VM.ExtensionData.Config.GuestFullName
$VMvCPUCount = $VM.NumCpu
$VMMemShare = ($VM.ExtensionData.Config.ExtraConfig | Where {$_.Key -eq "sched.mem.pshare.enable"}).Value
$VMMemoryMB = $VM.MemoryMB
$VMMemReservation = $VM.ExtensionData.ResourceConfig.MemoryAllocation.Reservation
$VMUsedSpace = [Math]::Round($VM.UsedSpaceGB,2)
$VMProvisionedSpace = [Math]::Round($VM.ProvisionedSpaceGB,2)
$VMPowerState = $VM.PowerState

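# Build and return a summary object from the values gathered above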
New-Object -TypeName PSObject -Property @{
Name = $VMName
HW = $VMHardwareVersion
VMGuestOS = $VMGuestOS
vCPUCount = $VMvCPUCount
MemoryMB = $VMMemoryMB
MemoryReservation = $VMMemReservation
MemSharing = $VMMemShare
UsedSpaceGB = $VMUsedSpace
ProvisionedGB = $VMProvisionedSpace
PowerState = $VMPowerState
}
}
}

 

You can use the cmdlet to very easily retrieve information about a single VM or list of VMs. Examples:

 

Get-VMUsefulStats -VM NOOBS-VC01

Get-VM | Get-VMUsefulStats

 

To format the output in a neat table, pipe the above to Format-Table (ft) like so:

 

Get-VM | Get-VMUsefulStats | ft
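And to control the column order mentioned earlier, pipe through Select-Object with the properties listed in the order you want, for example:

Get-VM | Get-VMUsefulStats | Select-Object Name,VMGuestOS,vCPUCount,MemoryMB,UsedSpaceGB,PowerState | ft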

 

The “MemSharing” value is interesting – it is something I was specifically interested in, as some VMs I have worked with in the past have needed memory (page) sharing to be specifically disabled, and this is something that has to be changed with an advanced parameter on the VM. Most (default) VMs will therefore not have this entry at all and it will appear blank in the output. (The parameter I am referring to, for those interested, is sched.mem.pshare.enable.) Of course this particular value most likely won’t be of any use to you, so feel free to omit it, or customise the function to return information useful to the VMs in your own VMware deployment. Here is an example of the output for one VM.
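Incidentally, if you ever do need to turn memory page sharing off for a VM, a quick way to set that advanced parameter with PowerCLI looks something like the following (the VM name is just a placeholder, and the VM will typically need a power cycle for the change to take effect):

New-AdvancedSetting -Entity (Get-VM "FS01") -Name "sched.mem.pshare.enable" -Value "FALSE" -Confirm:$false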

 

 

Anyway, I hope someone finds this useful, and please do let me know if you think of any improvements or better way of achieving a certain result.