Multipurpose FreeNAS Server Build


There is something magical about building your own infrastructure from scratch. And when I say scratch, I mean using bare metal. This is a run through of my multipurpose FreeNAS server build process.

After scratching the itch recently with my Raspberry Pi Kubernetes Cluster, I got a hankering to do it again, and this build was soon in the works.

Part of my motivation came from my desire to reduce our reliance on cloud technology at home. Don’t get me wrong, I am an advocate for using the cloud where it makes sense. My day job revolves around designing and managing various clients’ cloud infrastructure.

At home, this was more about taking control of our own data.

Skip ahead to the juicy specifications part if you would like to know what hardware I used right away.

The initial hardware
Note: I got this Gigabyte B450 motherboard, but soon found out it did not support ECC.

Final specifications:

These are the final specifications I decided on. Scroll down to see the details about each area.

The Goals

The final home server build would need to meet many requirements:

  • Provide a resilient, large shared storage pool for network file storage across multiple Windows PCs at home.
  • Support NFS storage for sharing persistent volumes to my Raspberry Pi Kubernetes Cluster.
  • Run Plex for home and remote media streaming.
  • Run Nextcloud for home and remote mobile file storage.
  • Run services in Virtual Machines, Jails, or Docker containers. For example, I like to run Pi-hole as a DNS server for all my home equipment and devices.

The Decision Process

I started my search by looking at two products: Unraid and FreeNAS.

I have run FreeNAS in the past for home lab setups, but I never really used it seriously with the goal of making it reliable.

This time around, all my files would be at stake, so I did a fair bit of research into the features and offerings of both products.

Unraid performed quite well for me, but what pushed me away from it was the fact that it is a paid-for, closed-source, commercial product.

Unraid does make it super easy to bundle storage together and expand that storage in future if need be. However, FreeNAS’ use of ZFS and its various other features were what won me over.

The Build Details

Having settled on FreeNAS, I went about researching which hardware I would need. My goal here was to not spend too much money, but at the same time not cheap out and compromise on reliability.

CPU, Motherboard, RAM

ECC (Error-Correcting Code) RAM is very important for ZFS, so this is basically what my build hinged on.

I found that AMD Ryzen CPUs support ECC, and so do most Ryzen compatible motherboards.

Importantly, in my research I found that Ryzen APU CPUs do not support ECC. Make sure you do not get an APU if ECC is important to you.

Additionally, many others report much better stability running FreeNAS on AMD Ryzen Generation 2 chips and above. With this in mind, I decided I would use at least an AMD Ryzen 2xxx CPU.

On the ECC topic, I only found evidence of single bit error correction working on AMD Ryzen systems.

I also made an initial mistake in my build by buying a Gigabyte B450M DS3H motherboard. The product specs seemed to indicate that it supports ECC, and so did a review I found on AnandTech. In reality, the Gigabyte board does not support the ECC feature. Rather, it ‘supports’ ECC memory only in the sense that the system will boot with ECC RAM installed; you don’t get the actual error checking and correction!

I figured this out after booting it up with Fedora Rawhide as well as a couple of Ubuntu Server distributions and running the edac-utils package. In all cases edac-utils failed to find ECC support or any memory controller.

Checking ECC support with edac-utils

The Asus board I settled on supports ECC and edac-utils confirmed this.
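
For reference, the check I ran looked roughly like this (a sketch assuming a Debian/Ubuntu-based live environment; package names and sysfs layout can vary by distro):

sudo apt install edac-utils
edac-util -v                        # reports detected memory controllers and corrected/uncorrected error counts
ls /sys/devices/system/edac/mc/     # no mc0 entry here means no ECC-capable memory controller was detected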

The motherboard also has an excellent EFI BIOS. I found it easy to get to the ECC and Virtualization settings.

the Asus Prime X470-Pro EFI BIOS

Storage

I used 4 x Western Digital 3TB Red hard drives for the RAIDZ1 main storage pool.

Western Digital 3TB Red hard drives

The SSD storage pool consists of 2 x Crucial MX500 250GB SSD SATA drives in a mirror configuration. This configuration is for running Virtual Machines and the NFS storage for my Kubernetes cluster.

Graphics Card

The crossing out of APUs also meant I would need a discrete graphics card for console / direct access, and to install the OS initially. I settled on a cheap PCI Express graphics card off eBay for this.

A cheap AMD Radeon HD 6450 1GB DVI DisplayPort PCI-Express Graphics Card I used for the FreeNAS build.

Having chosen a beefy six-core Ryzen 2600 CPU, I decided I didn’t need a fancy graphics card for live media encoding (although Plex can do much better with one). If media encoding speed and efficiency are important to you, then consider something like an NVIDIA or AMD card.

For me, the six core CPU does a fine job at encoding media for home and remote streaming over Plex.

Network

I wanted to use this system to serve file storage for my home PCs and equipment. Besides this, I also wanted to export and share storage to my Raspberry Pi Kubernetes cluster, which runs on its own dedicated network.

The simple solution for me here was multihoming the server onto the two networks. So I would need two network interface cards, with at least 1Gbit/s capability.

The motherboard already has an Intel NIC onboard, so I added two more ports with an Intel Pro Dual Port Gigabit PCI Express x4 card.

Intel dual port NIC

Configuration Highlights

I’ll detail the highlights of my configuration for each service the multipurpose FreeNAS Server build hosts.

Main System Setup

The boot device is the 120GB M.2 NVMe SSD. I installed FreeNAS 11.3 using a bootable USB drive.

FreeNAS Configuration

I created two Storage Pools. Both are encrypted. Besides the obvious protection encryption provides, this also makes it easier to recycle drives later on if I need to.

FreeNAS storage pool configuration
  • Storage Pool 1
    • 4 x Western Digital Red 3TB drives, configured with RAIDZ1. (1 disk’s worth of storage is effectively lost for parity, giving roughly 8-9 TB of usable space).
    • Deduplication turned off
    • Compression enabled
  • Storage Pool 2
    • 2 x Crucial MX500 250GB SSD drives, configured in a Mirror (1 disk mirrors the other, providing a backup if one fails).
    • Deduplication turned off
    • Compression enabled
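
For reference, the rough ZFS equivalent of these two pools from the command line is shown below (illustrative only – the device names are examples, and the FreeNAS UI additionally handles partitioning, swap, and the GELI encryption layer for you):

zpool create tank raidz1 ada0 ada1 ada2 ada3    # 4 x 3TB WD Red, one disk's worth of space used for parity
zfs set compression=lz4 tank
zfs set dedup=off tank
zpool create ssdpool mirror ada4 ada5           # 2 x 250GB SSDs mirrored, for VMs and Kubernetes NFS storage
zfs set compression=lz4 ssdpool
zfs set dedup=off ssdpool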

The network is set to use the onboard NIC to connect to my main home LAN. One of the ports on the Intel dual port NIC connects to my Raspberry Pi Kubernetes Cluster network and is assigned a static IP address on that network.

Windows Shares

My home network’s storage shares are simple Windows SMB Shares.

I created a dedicated user in FreeNAS and configured it in the SMB share’s ACLs to grant access.

Windows machines then simply mount the network location / path as mapped drives.
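
For example, mapping a share from a Windows command prompt looks something like this (the server and share names are placeholders):

net use Z: \\freenas\media /persistent:yes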

I also enabled Shadow Copies, which FreeNAS supports so that Windows clients can use Shadow Copies (Previous Versions) on the share.

FreeNAS Windows SMB share

Pi-hole Configuration

I set up a dedicated Ubuntu Server 18.04 LTS Virtual Machine using FreeNAS’ built-in VM support (bhyve). Before doing this, I enabled virtualization support in the motherboard BIOS settings (SVM Mode = Enabled).

I used the standard installation method for Pi-hole (shown below). I made sure the VM was using a static IP address and was bridged to my home network, then reconfigured my home DHCP server to hand out the Pi-hole’s IP address as the primary DNS server to all clients.
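
That is, the documented one-liner from the Pi-hole project:

curl -sSL https://install.pi-hole.net | bash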

For the DNS upstream servers that Pi-hole uses, I chose to use the Quad9 (filtered, DNSSEC) ones, and enabled DNSSEC.

pi-hole upstream DNS configuration with DNSSEC

Nextcloud

Nextcloud has a readily available plugin for FreeNAS. However, out of the box you get no SSL. You’ll need to set up your networking at home to allow remote access, and you’ll also need to get an SSL certificate. I used Let’s Encrypt.

I detailed my full process in this blog post.
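
As a rough sketch only (the domain here is a placeholder, and the linked post covers the exact steps I followed), requesting a certificate with Let’s Encrypt’s certbot looks something like this:

certbot certonly --standalone -d nextcloud.example.com    # requires port 80 to be reachable from the internet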

Plex

Plex was a simple setup: install the Plex FreeNAS plugin from the main Plugins page and follow the wizard. It will install and configure a jail to run Plex.

To mount your media, stop the Plex jail and edit it to add your media location on your storage. Here is an example of my mount point; it mounts the media directory where I keep all my media into the Plex jail’s filesystem.

Plex jail mount point
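
From the shell, adding a mount point like this with iocage looks roughly as follows (a sketch only – the jail name and dataset paths are examples, and the target directory must already exist inside the jail):

iocage stop plex
iocage fstab -a plex /mnt/tank/media /media nullfs ro 0 0    # mount the host media dataset read-only inside the jail
iocage start plex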

NFS Storage for Kubernetes

Lastly, I set up an NFS share / export for my Raspberry Pi Kubernetes Cluster to use for Persistent Volumes to attach to pods.

NFS shares for Kubernetes in FreeNAS

The key points here were that I allowed access from the two network ranges I wanted to reach this storage (10.0.0.0/8 is my Kubernetes cluster network). I also configured a Mapall user of ‘root’, which allows the storage to be writable when mounted by pods/containers in Kubernetes (or any other clients that mount this storage).

I was happy with this level of access for this particular NFS storage share from these two networks.
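
A quick way to sanity-check the export from a client on one of those networks (the IP address and paths are examples):

sudo mkdir -p /mnt/test
sudo mount -t nfs 10.0.0.2:/mnt/ssdpool/kubernetes /mnt/test
touch /mnt/test/write-check    # should succeed thanks to the Mapall root user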

Next, I installed the NFS external-storage provisioner for Kubernetes on my Pi Cluster. I needed to use the ARM-specific deployment manifest, as Raspberry Pis of course have ARM CPUs.

I modified the deployment manifest to point it to my FreeNAS machine’s IP address and NFS share path.

The Kubernetes NFS client provisioner manifest configured for NFS storage provisioning.

With that done, pods can now request persistent storage with a Persistent Volume Claim (PVC). The NFS client provisioner will create a directory for the volume on the NFS share and mount that into your pod.
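
For example, a test claim can be created like this (a sketch – the storage class name depends on how the provisioner was deployed; managed-nfs-storage is the project’s default):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-claim    # should report Bound once the provisioner has created the backing directory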

Final Thoughts

So far the multipurpose FreeNAS server build has been very stable. It has been happily serving our home media streaming, storage, and shared storage needs.

It’s also providing persistent storage for my Kubernetes lab environment, which is great, as I prefer not to use the not-so-durable microSD cards on the Raspberry Pis themselves for storage.

The disk configuration size seems fine for our needs. At the moment we’re only using ~20% of the total storage, so there is plenty of room to grow.

I’m also happy with the ability to run custom VMs or Jails for additional services, though I might need to add another 16GB of ECC RAM in the future to support more, as ZFS does well with plenty of memory.

Checking if your SSD supports “TRIM” using FreeNAS 8.x

I have been playing with the newer versions of FreeNAS for shared storage on my home VMware vSphere lab recently (after having last used it on version 7.x). I added a spare OCZ Vertex Plus 120GB SSD to my mini-ITX based FreeNAS box and was wondering how TRIM would be handled, if at all, with FreeNAS.

 

To check whether your SSD supports TRIM under FreeNAS, open up a shell session to your FreeNAS box – e.g. via PuTTY or the web GUI. Then issue the following command, specifying your SSD drive where /dev/ada0 is used as an example below. Note that we are using the CAM control program that comes with FreeBSD. Please exercise caution with this command, as it has the potential to cause damage if not used correctly!

 

camcontrol identify /dev/ada0

 

If you need to check disk/device names to figure out which one is your SSD, you can use the GUI: go to Storage -> View Disks, then check the name column for the device name of each disk and use /dev/diskname in the command above. Running the command returns a list of disk information; check the “data set management (TRIM)” row to see whether TRIM support is enabled.
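
Alternatively, you can stay in the shell to identify the drives and filter for the TRIM row directly (the device name below is an example):

camcontrol devlist                       # lists attached devices and their adaX names
camcontrol identify ada0 | grep -i trim  # shows the "data set management (TRIM)" support row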

 

I have not yet worked out a way to see whether TRIM is actually being used though – so if anyone has any suggestions or ideas as to how to check that it is in use, please let me know!

 

VMware Labs – iSCSI Shared Storage how-to using the HP P4000 LeftHand VSA [Part 2/2]

 

In the last post [Part 1/2], we prepared our VSA, created a management group and cluster in the CMC, and then initialized our disks. Next up, we’ll be creating a Volume which will be presented to our ESXi hosts as an iSCSI LUN. Before we do this though, we need to make sure our hosts can see this LUN. Therefore we’ll be making entries for each of our ESXi hosts using their iSCSI initiator names (IQNs).

 

Preparing your iSCSI Adapter

 

If your ESXi hosts don’t already have a dedicated iSCSI adapter you’ll need to use the VMware Software iSCSI adapter. By default this is not enabled in ESXi 5.0. This is simple to fix – we just need to get it added. Select your first ESXi host in the Hosts & Clusters view of the vSphere Client, and click Configuration -> Storage Adapters -> Add -> Select “Add Software iSCSI Adapter”. Click OK to confirm.
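
If you prefer the command line, the software iSCSI adapter can also be enabled with esxcli (run via SSH or the vMA, for example):

esxcli iscsi software set --enabled=true
esxcli iscsi software get    # confirms whether the software iSCSI initiator is now enabled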

 

 

Now we need to find the IQN of the iSCSI adapter. In the vSphere client, select the iSCSI adapter you are using and select Properties on it under Storage Adapters.  This will bring up the iSCSI Initiator Properties. Click the Configure button and copy the iSCSI Name (IQN) to your clipboard.

 

Getting the iSCSI adapter IQN using the vSphere Client

 

Quick tip: you can also fetch your iSCSI adapter information (including the IQN) using esxcli. Login to your host using the vMA appliance, the DCUI, or SSH for example and issue the following command, where “vmhba33” is the name of the adapter you want to fetch info on:

 

esxcli iscsi adapter get -A vmhba33

 

Getting the iSCSI adapter IQN using ESXCLI
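
If you are unsure of the adapter name to pass in, you can list the host’s iSCSI adapters first:

esxcli iscsi adapter list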

 

Configuring a Server Cluster and the Server (Host) entries in the CMC

 

We’ll now create “Servers” in the HP CMC which are what we’ll be adding to our “SAN LUN” later on to allow our ESXi hosts access. In the CMC go to Servers and then click Tasks -> New Server Cluster. Give the Cluster a name and optional description, then click New Server. Enter the details of  each ESXi host (Click New Server for each host you have in your specific cluster). For each ESXi host, the Initiator Node Name is the iSCSI Name, or IQN we got from each ESXi host in the step above. The Controlling Server IP Address in each case should be the IP address of your vCenter Server. For this example we won’t be using CHAP authentication, so leave that at “CHAP not required”. Once all your ESXi hosts are added to the new cluster, click OK to finish.

 

Creating a new Server Cluster and adding each ESXi host and its corresponding iSCSI Names/IQNs

 

Creating a new Volume and assigning access to our Hosts

 

Back in the CMC, with our disks now marked as Active, we’ll be able to create a shiny new Volume, which is what we will be presenting to our ESXi hosts as an iSCSI LUN. Right-click “Volumes and Snapshots” and then select “Create New Volume”.

 

 

Enter a Volume Name and Reported Size. You can also use the Advanced Tab to choose Full or Thin provisioning options, as well as Data Protection level (if you had more than one VSA running I believe).

 

 

Now we’ll need to assign servers to this Volume (we’ll be assigning the whole “Server Cluster” we created earlier to this Volume to ensure all our ESXi hosts get access to the volume). Click Assign and Unassign Servers, then tick the box for the Server Cluster you created and ensure the Read/Write permission is selected. Then click OK.

 

Assign the Server Cluster to the Volume for Read/Write access

 

Final setup and creating our Datastore with the vSphere Client

 

Go back to the vSphere client, go to one of your ESXi hosts, and bring up the Properties for your iSCSI adapter once again. We’ll now use “Add Send Target Server” under the Dynamic Discovery tab to add the IP address of the P4000 VSA. Click OK then Close once complete.
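
The equivalent can also be done with esxcli if you prefer (the adapter name and VSA IP address below are examples):

esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.50:3260
esxcli storage core adapter rescan --adapter=vmhba33    # rescan the adapter for the newly presented LUN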

 

 

You should be prompted to Rescan the Host Bus Adapter at this stage. Click Yes and the rescan will begin. After the scan is complete, you should see your new LUN being presented as a new device listed under your iSCSI adapter (vmhba33 in my case for the Software iSCSI adapter).

 

New device found on the iSCSI Software adapter

 

Now that everything is prepared, it is a simple case of creating our VMFS datastore from this LUN for our ESXi hosts to use for VM storage. Under Hosts & Clusters in the vSphere Client, go to Configuration, then Storage. Click Add Storage near the top right, and follow the wizard through. You should see the new LUN being presented from our VSA, so select that and enter the details of your new Datastore – capacity, VMFS file system version and Datastore Name. Complete the wizard and you are done. The new datastore is created, partitioned and ready to be accessed!

 

Add Storage Wizard

 

Complete - the new datastore is added

 

Well, that is all there is to it. To summarise, we have now achieved the following over the course of these two blog posts:

 

  • Installing and configuring the P4000 LeftHand VSA
  • Setting up the CMC
  • Creating a VSA Management Group
  • Creating a VSA Standard Cluster
  • Creating Server entries in a new Server Cluster for each of our ESXi hosts so the storage can be presented to them
  • Creating a LUN / Storage Volume
  • Configuring the ESXi hosts to find our VSA in the vSphere Client
  • Adding the Storage to our ESXi hosts and Creating our VMFS Datastore using the vSphere Client

 

To conclude, I hope this series has been helpful and that you are well on your way to setting up iSCSI shared storage for your VMware Cluster! As always, if you spot anything that needs adjusting, or have any comments, please feel free to add feedback in the comments section.

 

VMware Labs – iSCSI Shared Storage how-to using the HP P4000 LeftHand VSA [Part 1/2]

 

iSCSI Shared Storage for your Lab

 

I have had a few people asking how I set up my Shared iSCSI storage for my own VMware Lab environment I run at home – the same lab I used to study for my VCP 4 and VCP 5 exams. So, I thought I would write up a blog post detailing how to go about setting this up and trying it out for yourself.

 

You have NFS shared storage up and running for your ESXi hosts in your lab, but what about iSCSI? There are many different options out there. Here are a few I can think of off the top of my head:

 

  • FreeNAS VM
  • OpenFiler VM
  • HP P4000 Lefthand VSA trial
  • Hardware based – for example Iomega StorCenter IX2 series or QNAP NAS device

 

The hardware-based options are less feasible for a lab environment, as you ideally don’t want to pay for something you will only be testing. That being said, I was quite keen on the HP P4000 LeftHand VSA, as it offers the same kind of interface that you would use with the actual hardware version, as well as some really cool enterprise-like features, such as clustering. In fact, as I understand it, many businesses actually use the P4000 VSA in production – it was in the game before VMware came out with their own virtual shared storage solution. Both of these solutions provide highly available shared storage for your ESXi hosts. Anyway, enough of the small talk – let’s get on to setting up some shared iSCSI storage for our ESXi hosts to use for running Virtual Machines.

 

Deciding where to run your P4000 VSA VM

 

First of all download the trial of the HP P4000 LeftHand VSA. Once you are signed up for the free trial, you should get two options – one version for “Laptops” and one for “ESX”. Grab the relevant version – I chose to run my VSA VMs directly in VMware Workstation 8 and allowed my ESXi VMs access to their storage. If you want to run your VSAs as VMs on your ESXi host VMs then grab the “ESX” version. Once you have it downloaded, extract the download into a convenient location. I wanted my VSA to run on faster disks in my home system, so I moved the extracted files to an SSD volume. Remember to take this into consideration for your lab too – VMs will be running on this, so plan your lab VM deployment and storage carefully. Once ready, simply right-click the VSA.vmx configuration file and select “Open with VMware Workstation”. (Or add to Inventory if you are using the ESX version and browsing the VSA with your Datastore Browser).

 

 

Configuration

 

Now that we have the VSA VM inventoried, we need to create some additional virtual disks for it to use (by default it just has a disk used for its OS). Right-click the VM and add some disks. There is one important thing you should note here – the disks should be added on SCSI devices 1:0 and onwards. I added 3 x Virtual Disks to my VSA. (Note that a storage total of more than 500GB will require your VSA VM to have more than 768MB of RAM.) I chose 3 x 80GB Virtual Disks, meaning I would get a RAID5 160GB volume at the end of this exercise. I found out the hard way (troubleshooting a VSA that would not work) that your VSA needs around 1GB or more of RAM if you have more than 500GB of storage on it! Keep this figure under 500GB and you can get away with the default 384MB of RAM, which is ideal for a home lab. So here are the details I used for each Virtual Disk added (a total of 3 of these):

 

  • New Virtual Disk
  • SCSI (Recommended)
  • Mode -> (Independent) -> Persistent
  • Enter size of disk – for e.g. 80GB
  • Thin provisioned (Leave “Allocate all disk space now” unticked) – to save disk space on those SSDs especially!
  • Store Virtual Disk as a Single File
  • Specify Virtual Disk filename

 

Important: if you didn’t get an option to specify the Virtual Device Node, go back to “Advanced” on each disk and change the device node to x, where x is Virtual Device Node SCSI 1:1 to 1:3 (a different node for each disk you added). If you do not make these selections, the VSA will not detect your disks or be able to use them.

 

Remember to use these Virtual Device Nodes for each disk added.

 

Once your disks are added, ensure the VM is on the right network (I used bridged networking in Workstation for my lab); your network situation will of course vary. Then power up the VSA. Whilst it is powering up, we’ll need to get the HP P4000 Centralized Management Console installed on a “management” PC. In your VSA download you should have also received the installer for this. Simply run the installer and go through the wizard to get it installed.

 

P4000 Centralized Management Console Installer

 

Back at your VSA console, you should now be at the login prompt – type Start to log in, then press Enter at the “Login” screen. We’ll now be presented with a menu:

 

 

Navigate to Network TCP/IP Settings and choose your eth0 adapter. Configure a hostname for your VSA – in my example I used “blogvsa.noobs.local”. Don’t forget to set your VSA up to have a static IP address and enter your network details. If you have a DNS server, now would be a good time to also add an A Name Record for your VSA’s hostname and assign it the IP address you configured it with. Accept the network changes for the VSA and wait for it to apply the new settings.

 

Hostname and Network configuration.

 

Now launch your HP P4000 Centralized Management Console from the machine you installed it on, and we’ll begin setting this VSA up. Once open, you should have a few options to the left, and hopefully, the CMC would have already found your new VSA on the network. If not, don’t stress – just use the menu option Find -> Find Systems -> Find. Once the VSA is discovered, you can then close the “Find” window and view the VSA under Available Systems.

 

Expand Available Systems and locate your newly powered up VSA.

 

Next, we’ll create a new Management Group and add the VSA to it. The group will exist on this VSA as it is our only storage system. Right-click the VSA, choose Add to New Management Group, and work through the wizard:

  • Give the group a suitable name, then click Next.
  • Create an Administrative user – enter the details for a new admin account, then click Next.
  • Specify NTP server settings, or set the time manually, then click Next.
  • Set up your DNS Server and Domain Name on the next screen, then click Next.
  • If you have an SMTP server to use for email alerts, enter those settings on the next screen, or continue.
  • To keep things simple, on the “Create Cluster” page select the default “Standard Cluster” option, continue, give it a name, then click Next.
  • The next screen requires you to specify a Virtual IP for Fault Tolerance or load-balanced iSCSI access. Add an IP and the correct subnet mask, then click Next.
  • The next screen allows us to create a volume. We have not set up our disks and RAID yet, so check the option to “Skip Volume Creation” – we’ll come back to that afterwards.

Finish the wizard and wait for it to create the Management Group and configure everything for you. Once complete, it should automatically log in to the Management Group using the admin user you specified. Review the summary and close the wizard.

 

Now, expand out your Storage Cluster under the new Management Group and find your VSA system. Select Storage and then click the Disk Setup tab. We’ll now initialize each disk that we added to the VSA earlier and add it to the RAID group for the VSA. Right-click each uninitialized disk and select “Add Disk to RAID“.

 

Add each uninitialized VSA disk to the RAID group.

 

This post is getting a little long now, so I’ll end it off here with our VSA configured, the Management Group and Cluster set up, and our disks initialized. In the next post [part 2/2], we’ll create a new Volume with these disks and set up our iSCSI initiators from our ESXi hosts as “Servers” in the CMC. After this, we will present the new Volume to our ESXi hosts as an iSCSI LUN and create our VMFS shared storage for vSphere to use. Stay tuned, as part 2 will be coming soon! (Hopefully tomorrow!) See below for the next section:

 

Edit – [part 2/2] is now up – finish off the article here.

 

How to set up a VMware vSphere Lab in Virtual Machines, with DRS and HA

 

I recently wrote a (reasonably!) lengthy article on how to set up your own VMware vSphere lab or test environment consisting entirely of Virtual Machines, running off of one piece of host hardware. This is really handy, as a lot of people new to virtualization often think they need to purchase full-on server equipment to create a whitebox, or find second-hand servers on eBay. Even more often, they make the mistake of overlooking the CPU feature set required to run vSphere – hardware virtualization – buying 64-bit capable servers (good), but lacking the Intel VT or AMD-V feature set required for vSphere (bad!).
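
If you are unsure whether an existing machine exposes these features, a quick check on a Linux host looks like this (vmx indicates Intel VT-x, svm indicates AMD-V; a result of 0 means neither is present or exposed):

egrep -c '(vmx|svm)' /proc/cpuinfo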

 

This is where running everything virtualized comes in really handy. As well as keeping your hardware and lab requirements/size down, you have everything you need in one installation of VMware Workstation. You’ll also be able to test out some really cool features that vSphere / vCenter Server has to offer – such as HA (High Availability) and DRS (Distributed Resource Scheduler). In the article I also make reference to a few best practices to follow when configuring the real deal for production use. I hope this comprehensive guide is useful for those of you looking to set something like this up!

 

A VMware lab consisting of nested VMs running on virtualized ESXi hypervisors.

 

Read the article here on Simple-Talk.com to get started and see how it’s all done!