WSL2 GUI X-Server Using VcXsrv

wsl2 gui desktop

I needed to set up a WSL2 GUI recently on my machine (WSL2 running Ubuntu 20.04.1 LTS). I found a guide that runs through the process, but a few tweaks needed to be made along the way. Specifically, communication to VcXsrv was being blocked by Windows Firewall.

There were also a couple of extra tweaks needed for audio passthrough using PulseAudio and setting a windowed resolution.

Setting up a WSL2 GUI X-Server in Windows

Start by installing xfce4 and goodies.

sudo apt install xfce4 xfce4-goodies

If you’re running Kali you should use:

sudo apt install kali-desktop-xfce

During the install you’ll be prompted about which display manager to use. This is up to you, though I personally chose lightdm.

Download this .zip package which contains VcXsrv and PulseAudio along with some configuration and a shortcut to launch.

Extract it to the root of your C:\ drive. You should end up with contents under C:\WSL VcXsrv.

WSL2 GUI vcxsrv package contents

Run the vcxsrv-64.1.20.8.1.installer.exe installer in this folder, choosing defaults for the install.

Once installed, you’ll want to enable High DPI scaling for VcXsrv in Windows.

  • Navigate to C:\Program Files\VcXsrv
  • Right-click xlaunch.exe and go to Compatibility
  • Click Change high DPI settings and choose Override high DPI scaling behavior, making sure Application is selected in the dropdown.

Next, edit the startWSLVcXsrv.bat batch file and change the last line that reads ubuntu.exe run to one of:

  • ubuntu2004.exe run if you are using Ubuntu 20.04 from the Microsoft Store for WSL
  • ubuntu1804.exe run if you are using Ubuntu 18.04 from the Microsoft Store for WSL
  • ubuntu.exe run if you are using standard Ubuntu from the Microsoft Store for WSL
  • kali.exe run if you installed Kali Linux from the Microsoft Store for WSL

Pin the WSL VcXsrv shortcut somewhere convenient like the taskbar.

Opening Windows Firewall for VcXsrv and PulseAudio

Next you need to allow inbound traffic to Windows for VcXsrv and PulseAudio.

Open Windows Defender Firewall with Advanced Security and add two new Inbound Rules as follows:

  • Type: Program
  • Program path: %ProgramFiles%\VcXsrv\vcxsrv.exe for VcXsrv and %SystemDrive%\WSL VcXsrv\pulseaudio-1.1\bin\pulseaudio.exe for PulseAudio
  • Allow the connection
  • Profile: Domain, Private
  • Name: vcxsrv or pulseaudio, depending on which rule you are adding
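If you prefer to script this, the same two rules can be created from an elevated PowerShell prompt. This is a sketch assuming the paths above; adjust them if you extracted the package elsewhere:

# Run from an elevated PowerShell prompt
New-NetFirewallRule -DisplayName "vcxsrv" -Direction Inbound -Action Allow -Profile Domain,Private -Program "$Env:ProgramFiles\VcXsrv\vcxsrv.exe"
New-NetFirewallRule -DisplayName "pulseaudio" -Direction Inbound -Action Allow -Profile Domain,Private -Program "$Env:SystemDrive\WSL VcXsrv\pulseaudio-1.1\bin\pulseaudio.exe"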

I personally added the following to ExtraParams under the XLaunch node of config.xlaunch. This sets windowed mode to 1920×1080 for monitor #1 on my machine.

-screen 0 1920x1080@1
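For reference, here is roughly what config.xlaunch looks like with that parameter in place. This is abridged, and the other attributes in your file will differ depending on the settings you chose:

<?xml version="1.0" encoding="UTF-8"?>
<XLaunch WindowMode="Windowed" ClientMode="NoClient" Display="0"
         Clipboard="True" ExtraParams="-screen 0 1920x1080@1"/>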

Viewing your WSL2 GUI

With all of that setup out of the way, you should be able to simply launch VcXsrv from the pinned shortcut and everything should work.

Try it out and you should get a desktop up and running for your WSL2 environment.

WSL2 gui example with audio settings open

PulseAudio passthrough should also be available if you check your sound / volume settings. Try an audio test using alsa-utils:

sudo apt install alsa-utils
speaker-test

Kudos to this guide on reddit for most of the setup instructions. As mentioned before, I needed to configure my firewall and also add some tweaks for windowed mode.

Troubleshooting

If you find your VcXsrv Server display window is blank when launching, try the following:

  • Double-check your firewall rule is allowing inbound connections for vcxsrv.exe for the domain and private scopes.
  • With the blank X-server / display window from VcXsrv still open, launch a WSL shell separately, and run the following to set your DISPLAY environment variable:
export DISPLAY=$(grep -m 1 nameserver /etc/resolv.conf | awk '{print $2}'):0

This takes the IP address of your Windows host (conveniently used as the nameserver in your WSL Linux environment for DNS lookups) and sets it as the remote display location, with :0 appended for the display number.

Now, try to launch an xfce4 session with:

xfce4-session

If all goes to plan, the session should target your machine where VcXsrv Server is running and your display window should come to life with your WSL environment desktop.

ZFS ARC Assisting Frequent, Random IO from Web Traffic

ZFS includes some great features that help deliver lightning fast read speeds from storage. Two such features are the Adaptive Replacement Cache (ARC) and a second possible cache tier, L2ARC. Together these features can dramatically decrease the number of reads required from magnetic storage.

Usually ZFS will assign all but 1GB of system memory to ARC. When files are retrieved they are cached in system RAM dedicated to ARC. This allows them to be retrieved very quickly the next time they are read.

A second tier of cache named L2ARC is also possible. This is where you dedicate solid state disks to act as another tier of cache. L2ARC only really makes sense when you are fronting slower magnetic spinning disks with cache and don’t have enough RAM to support a large enough ARC. It doesn’t make much sense if your storage pools already run on solid state drives.
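If you do decide to experiment with L2ARC in front of a magnetic pool, attaching a cache device is a one-liner. The pool and device names below are illustrative:

# Attach an SSD as an L2ARC cache device to the pool 'tank'
zpool add tank cache /dev/ada3
# The device should now appear under a 'cache' section
zpool status tank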

Another point to consider is that if your requirements mandate a large ARC size, system memory starts to get very expensive. If you are using magnetic storage tiers, SSDs for L2ARC become a much cheaper option than simply dumping piles of money into more memory.

My personal FreeNAS server has 16GB of RAM, with much of that dedicated to ARC.

Analyzing ZFS ARC hit rates for Web Traffic to this Blog

I’ve been running this blog on my personal Kubernetes Raspberry Pi Cluster, which has its storage provisioned via NFS from my FreeNAS storage server.

The storage and cache breakdown is:

  • 4 x 3TB SATA spinning disks, RAIDZ1
  • 2 x 250GB SSD SATA disks, ZFS Mirror
  • 16GB RAM, around 4GB used for jails and VMs, the remaining for ARC
  • No L2ARC. NFS storage is provided from the SSD based storage pool

This blog has a bunch of static files along with its WordPress installation, plus a MySQL database. Both of these sets of files are provisioned from NFS on the ZFS mirror storage pool, using Kubernetes PVs.

The SSD storage is already quite fast, but the cache features of ZFS really help here when serving frequent random IO generated from web traffic. Just take a look at these two charts:

ZFS ARC hit ratio with ARC requests chart.
ARC Hit Ratio and ARC Requests. Notice the really high (almost 100%) ARC hit ratio.

Instead of constantly reading from the SSDs and causing unnecessary wear, web traffic file requests are mostly served from ARC.

From these two graphs, I can tell that right now 16GB of system memory is perfectly fine for my home storage and web traffic serving needs. ARC is handling these workloads perfectly, with a very high cache hit ratio too.
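If you want to check your own hit rate from a shell on a FreeBSD-based system like FreeNAS, the kernel exposes the raw ARC counters via sysctl. A quick sketch to compute the ratio:

# Read the cumulative ARC hit/miss counters from the ZFS kstats
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
# Hit ratio as a percentage of all ARC lookups
echo "scale=2; 100 * $hits / ($hits + $misses)" | bc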

Secondly, I certainly don’t need L2ARC. The web content already sits on SSD storage, and even if the files were on magnetic storage, the main ARC would still serve almost all web traffic.

Conclusion

ZFS ARC impresses all around. On its busiest day of web traffic last year, this blog saw 12,000 sessions and around 13,500 page views in just a few hours. ZFS ARC happily served the site’s storage needs from system memory.

In fact, the ARC hit ratio actually increased to just over 99% at this point in time!

ZFS ARC hit ratio above 99%
Over 99% ARC hit ratio

Puck.js Duplo Block Police Siren Build

soldering iron, pexels

I picked up a Puck.js a while ago and after trying out a few basic bits of code, sadly let it start to gather dust on my shelf. That changed this weekend as I browsed the sample project listings for something simple to build, picking up the Puck.js Duplo police siren build to try.

It should be a fun toy for my youngest to play with, as he really enjoys Duplo.

What is the Puck.js?

puck.js duplo build start - the puck.js out of its case.
The Puck.js and its button case.

The Puck.js is an open source JavaScript microcontroller. It has a variety of features such as:

  • Button
  • Magnetometer
  • Accelerometer
  • Gyro
  • IR & RGB LEDs
  • Temperature and light sensor
  • FET output
  • Programmable NFC tag
  • 9 IO pins

The best part about the Puck.js for me is how accessible it is to run and deploy code to. Using Web Bluetooth and its included puck.js library, you can write code in a Web IDE, connect over Bluetooth, and have your code running in seconds.

Building the Puck.js Duplo Police Siren Project

If you want to try it out yourself, the actual tutorial page itself is the best resource to begin with. There is a video available there to follow along with.

Here is the Thingiverse page where you can get the model. It currently only lists a .scad version of the file, so you’ll need to download OpenSCAD to open it.

Here is a gist for the file including the cut-out operation that subtracts the innards from the block to make room for the puck.js and piezo to fit into.

I printed the block using my Elegoo Mars resin 3D printer. My first go came out slightly loose fitting, so I might shrink the model to 99% size and try again for a second iteration.

I used a new resin that is meant to be easily rinsed/washed with water after printing. The quality on the top of the block doesn’t look as good as usual, so I’m not sure whether this resin is to blame.

puck.js duplo block 3D print

Connecting the Components

The LEDs connect to D1/D2 and D30/D31. The piezo goes on D28/D29. I used RGB LEDs, so I snipped the other legs off, leaving just the blue and common cathode terminals to connect up.
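The tutorial page has the actual code to flash across, but to give an idea of how simple Espruino makes this, here is a minimal siren-style sketch based on the wiring above. The pin roles are my assumption from that wiring; the official project code differs:

// Blue LEDs on D1 and D30, piezo on D28; the paired pins act as ground returns
digitalWrite([D2, D31, D29], 0);

var on = false;
setInterval(function () {
  on = !on;
  digitalWrite(D1, on ? 1 : 0);  // alternate the two flashing LEDs
  digitalWrite(D30, on ? 0 : 1);
  // Two-tone siren: drive the piezo with a 50% duty cycle square wave
  analogWrite(D28, 0.5, { freq: on ? 700 : 500 });
}, 400);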

After a bit of dodgy soldering, it works!

Installing Everything Into the Block

With a little bit of coercion, the whole lot fits in. I used a bit of hot glue on four edges of the Puck.js to keep it in place, but keep it easy to remove if needed.

After that, I added a layer of scrap paper with the piezo in-between, and glued that in too.

Here is the final result.

The Puck.js may be fairly pricey, but it includes a lot of IO. Its battery and use of Bluetooth LE make it ideal for projects where battery life is a concern. The battery is super cheap and can last up to around a year if used carefully.

I had fun making this project. It’s a great way to get started with the Puck.js, and hopefully I’ll soon find more use cases for these great little devices.

Quick and Easy ffmpeg Cheat Sheet

mixing board

This post contains my ffmpeg cheat sheet.

ffmpeg is a very useful utility to have installed for common audio and video conversion and modification tasks. I find myself using it frequently for both personal and work use.

Got an audio or video file to compress or convert? Or maybe you’re looking to split off a section of a video file. Did you do a screen recording of a meeting session and find you need to share it around, but it’s way too large in filesize?

ffmpeg is the clean and easy command line approach. I’ve put together a ffmpeg cheat sheet below that has some quick and easy usages documented.

Skip ahead to the ffmpeg Cheat Sheet section below if you already have it installed and ready to go.

Installing ffmpeg (macOS / Windows)

On macOS, you’re best off using homebrew. If you don’t have homebrew already, go get that done first. Then:

macOS

brew install ffmpeg

Windows

You can grab a release build in 7-zip archive format from here (recent as of the publish date of this post). Be sure to check the main ffmpeg downloads page for newer builds if you’re reading this page way in the future.

If you don’t have 7-zip, download and install that first to decompress the downloaded ffmpeg release archive.

Now you’ll ideally want to update your user PATH to include the folder you extracted ffmpeg to. Make sure you’ve moved it to a convenient location first. For example, on my Windows machine I keep ffmpeg in D:\Tools\ffmpeg.

Open PowerShell: press Windows key + R, type powershell, and hit Enter.

To ensure that the path persists whenever you run PowerShell in the future, run:

notepad $profile

This will load a start-up profile for PowerShell in Notepad. If it doesn’t exist yet, you’ll be prompted to create a new file. Choose Yes.

In your profile notepad window, enter the following, replacing D:\Tools\ffmpeg with the path you extracted ffmpeg to on your own machine.

[Environment]::SetEnvironmentVariable("PATH", "$Env:PATH;D:\Tools\ffmpeg")

Close Notepad and save the changes. Close PowerShell, then launch it again. This time, typing ffmpeg in the PowerShell window will run it no matter which directory you’re in.

ffmpeg Cheat Sheet

This is a list of my most useful ffmpeg commands for conversion and modification of video and audio files.

  • Convert a video from MP4 to AVI format:
    ffmpeg -i inputfile.mp4 outputfile.avi
  • Trim a video file (without re-encoding):
    ffmpeg -ss START_SECONDS -i input.mp4 -t DURATION_SECONDS -c copy output.mp4
  • Convert a WAV audio file to compressed MP3 format:
    ffmpeg -i input.wav -acodec libmp3lame output.mp3
  • Mux video from input1.mp4 with audio from input2.mp4:
    ffmpeg -i input1.mp4 -i input2.mp4 -c copy -map 0:0 -map 1:1 -shortest output.mp4
  • Resize or scale a video to 1280×720:
    ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4
  • Extract audio to MP3 from a video file:
    ffmpeg -i inputvideo.mp4 -vn -acodec libmp3lame outputaudio.mp3
  • Add a watermark or logo to the top-left of a video (change the overlay parameter for different positions):
    ffmpeg -i inputvideo.mp4 -i logo.png -filter_complex "overlay=5:5" -codec:a copy outputvideo.mp4
  • Convert a video to GIF:
    ffmpeg -i inputvideo.mp4 output.gif
  • Change the frame rate of a video:
    ffmpeg -i inputvideo.mp4 -filter:v 'fps=fps=15' -codec:a copy outputvideo.mp4
ffmpeg cheat sheet example - logo overlay.
Yes, that is a banana logo overlayed onto this video using ffmpeg. What of it?
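As a worked example of the screen recording scenario mentioned earlier, a typical shrink-and-share re-encode looks like this. The filenames and CRF value are just illustrative; raise the CRF for a smaller (but lower quality) file:

# Re-encode an oversized recording to H.264 with AAC audio
ffmpeg -i meeting-recording.mp4 -vcodec libx264 -crf 28 -acodec aac meeting-recording-small.mp4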

This list doesn’t even scratch the surface of the capabilities of ffmpeg. If you dig deeper you’ll find commands and parameters for just about every audio and video modification process you need.

Remember, while it’s often easy to find a free conversion tool online, there’ll always be a catch or risk in using it. Whether it’s being subjected to unnecessary advertising, catching potential malware, or being tracked with third-party cookies, you always take risks using free online tools. I guarantee almost every single free conversion website is using ffmpeg on its backend.

So do yourself a favour and practice using CLI tools to do things yourself.

Saga Pattern with aws-cdk, Lambda, and Step Functions

steps

The saga pattern is useful when you have transactions that require a bunch of steps to complete successfully, with failure of steps requiring associated rollback processes to run. This post will cover the saga pattern with aws-cdk, leveraging AWS Step Functions and Lambda.

If you need an introduction to the saga pattern in an easy to understand format, I found this GOTO conference session by Caitie McCaffrey very informative.

Another useful resource with regard to the saga pattern and AWS Step Functions is this post over at theburningmonk.com.

Saga Pattern with aws-cdk

I’ll be taking things one step further by automating the setup and deployment of a sample app which uses the saga pattern with aws-cdk.

I’ve started using aws-cdk fairly frequently, though I realise it comes with vendor lock-in. I found it nice to work with for step functions, particularly in the way you construct step chains.

Saga Pattern with Step Functions

So here is the step function state machine you’ll create using the fairly simple saga pattern aws-cdk app I’ve set up.

saga pattern with aws-cdk - a successful transaction run
A successful transaction run

Above you see a successful transaction run, where all records are saved to a DynamoDB table entry.

dynamodb data from sample app using saga pattern with aws-cdk
The sample data written by a successful transaction run. Each step has a ‘Sample’ map entry with ‘Data’ and a timestamp.

If one of those steps were to fail, you need to manage the rollback process of your transaction from that step backwards.

Illustrating Failure Rollback

As mentioned above, with the saga pattern you’ll want to rollback any steps that have run from the point of failure backward.

The example app has three steps:

  • Process Records
  • Transform Records
  • Commit Records

Each step is a simple lambda function that writes some data to a DynamoDB table with a primary partition key of TransactionId.

In the screenshot below, TransformRecords has a simulated failure, which causes the lambda function to throw an error.

A catch step is linked to each of the process steps to handle rollback for each of them. Above, TransformRecordsRollbackTask is run when TransformRecordsTask fails.

The rollback steps cascade backward to the first ‘business logic’ step ProcessRecordsTask. Any steps that have run up to that point will therefore have their associated rollback tasks run.
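The repository linked below has the real code, but as a rough illustration of how the step chain and rollback catches fit together in aws-cdk (TypeScript), here is a sketch. The construct names and lambda wiring are illustrative, not the sample app’s exact code:

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as sfn from '@aws-cdk/aws-stepfunctions';
import * as tasks from '@aws-cdk/aws-stepfunctions-tasks';

export class SagaStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // Hypothetical helper: one lambda function per step / rollback step
    const fn = (name: string) => new lambda.Function(this, name, {
      runtime: lambda.Runtime.NODEJS_12_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    const processTask = new tasks.LambdaInvoke(this, 'ProcessRecordsTask', { lambdaFunction: fn('ProcessFn') });
    const transformTask = new tasks.LambdaInvoke(this, 'TransformRecordsTask', { lambdaFunction: fn('TransformFn') });
    const processRollback = new tasks.LambdaInvoke(this, 'ProcessRecordsRollbackTask', { lambdaFunction: fn('ProcessRollbackFn') });
    const transformRollback = new tasks.LambdaInvoke(this, 'TransformRecordsRollbackTask', { lambdaFunction: fn('TransformRollbackFn') });

    // Rollback steps cascade backward toward the first business-logic step
    transformRollback.next(processRollback);

    // Each process step catches its own failure and routes to its rollback task
    const definition = processTask
      .addCatch(processRollback, { resultPath: '$.error' })
      .next(transformTask.addCatch(transformRollback, { resultPath: '$.error' }));

    new sfn.StateMachine(this, 'SagaStateMachineExample', { definition });
  }
}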

Here is what an entry looks like in DynamoDB if it failed:

A failed transaction has no written data, because the data written up to the point of failure was ‘rolled back’.

You’ll notice this one does not have the ‘Sample’ data that you see in the previously shown successful transaction. In reality, for a brief moment it does have that sample data. As each rollback step is run, the associated data for that step is removed from the table entry, resulting in the above entry for TransactionId 18.

Deploying the Sample Saga Pattern App with aws-cdk

Clone the source code for the saga pattern aws-cdk app here.

You’ll need to npm install and compile the TypeScript first. From the root of the project:

npm install && npm run build

Now you can deploy using aws-cdk.

# Check what you'll deploy / modify first with a diff
cdk diff
# Deploy
cdk deploy

With the stack deployed, you’ll now have the following resources:

  • Step Function / State Machine
  • Various Lambda functions for transaction start, finish, the process steps, and each process rollback step.
  • A DynamoDB table for the data
  • IAM role(s) created for the above

Testing the Saga Pattern Sample App

To test, head over to the Step Functions AWS Console and navigate to the newly created SagaStateMachineExample state machine.

Click New Execution, and paste the following for the input:

{
  "Payload": {
    "TransactionDetails": {
      "TransactionId": "1"
    }
  }
}

Click Start Execution.

In a few short moments, you should have a successful execution and you should see your transaction and sample data in DynamoDB.
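If you prefer the CLI to the console, the same execution can be started with the AWS CLI. The state machine ARN here is illustrative; grab yours from the cdk deploy output or the console:

aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:SagaStateMachineExample \
  --input '{"Payload": {"TransactionDetails": {"TransactionId": "1"}}}'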

Moving on, to simulate a random failure, try executing again, but this time with the following payload:

{
  "Payload": {
    "TransactionDetails": {
      "TransactionId": "2",
      "simulateFail": true
    }
  }
}

The lambda functions check the payload input for the simulateFail flag, and if it is found they do a Math.random() check to give a chance of failure in one of the process steps.
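To make that concrete, here is a hypothetical shape for one of the step handlers. The table name environment variable, attribute names, and failure odds are assumptions for illustration, not the repo’s exact code:

// Hypothetical step handler (Node.js, aws-sdk v2)
import { DynamoDB } from 'aws-sdk';
const db = new DynamoDB.DocumentClient();

export const handler = async (event: any) => {
  const { TransactionId, simulateFail } = event.Payload.TransactionDetails;

  // Simulate a random failure when the flag is set
  if (simulateFail && Math.random() < 0.4) {
    throw new Error('TransformRecords step failed (simulated)');
  }

  // Record this step's 'Sample' map entry against the transaction
  await db.update({
    TableName: process.env.TABLE_NAME!,
    Key: { TransactionId },
    UpdateExpression: 'SET TransformRecords = :step',
    ExpressionAttributeValues: {
      ':step': { Sample: { Data: 'Data', Timestamp: new Date().toISOString() } },
    },
  }).promise();

  return event; // pass the payload through to the next step
};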

Taking it Further

To take this example further, you’ll want to more carefully manage step outputs using Step Function ResultPath configuration. This will ensure that your steps don’t overwrite data in the state machine and that steps further down the line have access to the data that they need.

You’ll probably also want a step at the end of the line for the case of failure (which runs after all rollback steps have completed). This can handle notifications or other tasks that should run if a transaction fails.

How I’ve Started To De-Google My Life

About a year ago I embarked on a quest to “de-google” my life as much as possible. While it hasn’t been a completely successful mission, I feel happier knowing I’m not giving away as much of my data to Google as I was before.

It hasn’t been an easy process, and it’s not yet complete either. For example, the analytics on this very site still run in Google Analytics. I realise the irony in this, but rest assured it’s something I plan on changing.

If you’re looking for ways to “de-google” your own life, or are after some quick wins to get started, read on.

Some Quick Wins

I started off by changing the amount of data that Google keeps on my account. For example, web browsing, YouTube viewing, and location history data are all generally kept forever by default.

If you’re looking for the quickest things you can do to reduce the amount of data that Google holds about you in these areas, try these quick wins for a start:

Using your Google account, go to your Activity Controls Page here.

  • Turn off Web & App Activity.
  • Turn off Location History and enable Auto-delete.
  • Disable YouTube History and enable Auto-delete.
  • Make sure Ad personalisation is disabled.

These can all be controlled from your single Activity Controls Page linked above.

de-google your life by limiting google activity controls.
Limiting various Google Product activity tracking and storage

Moving on, here are the main Google Products I used and what I did to migrate away from them, or at least limit my usage of them…

Migrating Away from Gmail

I started with email, as it is one of the biggest and most difficult things to move away from, in my opinion.

After evaluating options, and even considering self-hosted email, I settled on moving over to Protonmail.

Protonmail has proven to be a good replacement for email and my calendar. For contacts, I now use Nextcloud (see below).

I set up a new Protonmail account and forwarded my Gmail inbox over to the new Protonmail address.

The next step I took was to start changing all my online account email addresses to the new one. Now, whenever an email arrives in my Protonmail inbox that has been forwarded from Gmail, I make an effort to log in to that account and change the email address.

Eventually I’ll have little to no email coming in via Gmail, and I’ll be able to shut down and delete the account entirely.

I used a free Protonmail account for a while, but after a few months decided that I really liked the product and switched to a paid subscription.

Now I’ve configured some of my personal domains in Protonmail and use it for hosted email on those too. I benefit from end-to-end encryption, as well as features like DKIM.

Dumping Chrome

Next, I stopped using Chrome. I installed Mozilla Firefox and used the import feature to bring in most of my Chrome bookmarks.

The few browser extensions I use are all available as Firefox Add-ons, so I had little to no friction with this move.

I do prefer Chrome’s developer features, but have managed fine so far with Firefox on that front.

Google Drive Replacement

Next up, Google Drive storage. Thankfully, a free Google account doesn’t give you much storage space, so I did not have many files in Google Drive that needed migrating.

hard drive
Self-hosting your files keeps you in control of your own data, but can also be a risk. Make sure you have solid backup and off-site storage contingency.

I now self-host Nextcloud. I’ve written a blog post on setting up Nextcloud as a FreeNAS hosted jail.

Now I benefit from around 10TB of “cloud” storage, and can rest assured it’s under my own control.

Using the Nextcloud mobile app, I can also have all of my mobile phone photos and videos synchronised automatically to my Nextcloud instance.

I have the most important folders backed up to multiple locations out of Nextcloud.

YouTube

Here’s the difficult one for me. I consume a fair bit of content on YouTube, which makes it very tricky to completely de-google my life.

I have taken steps to improve how I use YouTube and to limit their ability to track me. It’s not perfect though.

For one, I use YouTube signed out now. Sure, they can still track me by things like IP address and fingerprinting, but at least my viewing is no longer tied directly to my Google account.

I’ve also started using a mobile app (available on F-Droid and directly via their GitHub page) that can import all of your YouTube channel subscriptions and set up a video feed from them.

The app does not sign in to your Google account at all, and does not push any annoying “recommendations” on you like YouTube does. Even better, you don’t get any ads. Not a single one.

I’ve also started supporting alternative content platforms wherever possible. Of course they don’t match YouTube, but they offer a glimmer of hope.

I still maintain my own YouTube channel, and for now I have not done anything about that. To be honest, I’m not entirely sure what my approach here would be if I were to completely delete my Google Account.

The first thing I think I would do is set up a PeerTube instance and host videos there. I would also prefer to consume video content there.

Alternative Analytics

So as noted above, I still rely on Google Analytics. I hope to change that in the near future.

There are a few alternatives I’ve started considering, though I haven’t settled on one yet.

Going Forward

I’ll continue to try to reduce my reliance on Google Products going forward. As I mentioned before, what I have done so far is by no means perfect or complete. It is a constant concerted effort for me.

This post should not just apply to Google either. It shouldn’t just be a case of “de-google” and move on. There are many other entities out there providing ‘free’ products that effectively turn us into the ‘product’ by harvesting our data for advertising.

One of the most important questions to always ask yourself when signing up for a ‘free’ new service is: “Why is this free, and what’s in it for them if I sign up?”