Generating Music with Markov Chains and Alda


A while ago I read a fantastic article by Alex Bainter about how he used Markov chains to generate new versions of Aphex Twin’s track ‘aisatsana’. After reading it I wanted to try my hand at generating music with Markov chains too, but to mix it up by trying out alda.

‘aisatsana’ is very different to the rest of Aphex Twin’s 2014 album Syro. It’s a calm, soothing piano piece that could easily place you in a meditative state while listening to it.

Alda is a text-based programming language for music composition. If you haven’t tried it before, you’ll get a feel for how it works in this post. If you want to learn about it through some much simpler examples, this quick start guide is a good place to begin.

Generating music is made easier by the simple language that alda provides.

As an example, here are four sample ‘phrases’ generated with Markov chains (based off the aisatsana starting state) and played back with alda. I picked four random phrases out of 32 that sounded similar to me, but were different in each case. A generated track will not necessarily consist of all similar-sounding phrases, but might contain a number of these.

Markov Chains 101

On my journey, the first stop was to learn more about Markov chains.

Markov chains are mathematical “stochastic” systems that change from one “state” (a situation or set of values) to another. In addition to this, a Markov chain tells you the probability of transitioning from one state to another.

Using a worker honey bee as an example, we might say a honey bee has a bunch of different states:

  • At the hive
  • Leaving the hive
  • Collecting pollen
  • Making honey
  • Returning to the hive
  • Cleaning the hive
  • Defending the hive

After observing honey bees for a while, you might model their behaviour using a Markov chain like so:

  • When at the hive they have:
    • 50% chance to make honey
    • 40% chance to leave the hive
    • 10% chance to clean the hive
  • When leaving the hive they have:
    • 95% chance to collect pollen
    • 5% chance to defend the hive
  • When collecting pollen they have:
    • 85% chance to continue collecting pollen
    • 10% chance of returning to hive
    • 5% chance to defend the hive
  • etc…

The above illustrates what is needed to create a Markov chain: a list of states (the “state space”) and the probabilities of transitioning between them.
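To make this concrete, here is a minimal JavaScript sketch of the bee example (using the hypothetical transition table above, trimmed to three states, so some states are terminal):

// A tiny Markov chain walk over the (hypothetical) bee states above.
// Each state maps to a list of [nextState, probability] pairs.
const transitions = {
  'at-hive': [['making-honey', 0.5], ['leaving-hive', 0.4], ['cleaning-hive', 0.1]],
  'leaving-hive': [['collecting-pollen', 0.95], ['defending-hive', 0.05]],
  'collecting-pollen': [['collecting-pollen', 0.85], ['returning-to-hive', 0.1], ['defending-hive', 0.05]],
};

// Sample the next state from the current state's probability distribution
function nextState(current) {
  const outcomes = transitions[current] || [];
  let roll = Math.random();
  for (const [state, probability] of outcomes) {
    if (roll < probability) return state;
    roll -= probability;
  }
  return null; // terminal state in this trimmed-down table
}

// Walk the chain a few steps, starting at the hive
let state = 'at-hive';
for (let i = 0; i < 10 && state; i++) {
  console.log(state);
  state = nextState(state);
}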

To play around with Markov chains and simple string generation, I created a small codebase (Node.js / TypeScript). The app takes a list of ‘chat message logs’ (really, any line-separated list of strings) as input. It then uses random selection to find any lines containing the ‘seed’ string.

From the seed string, it generates new and potentially unique ‘chat messages’ based on the input seed and the ‘state’ (the list of chat messages fed in).

Using a random function and initial filtering means that the generation probability is constrained to the size of the input and filtered lists, but it still helped me understand some of the concepts.
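I won’t reproduce that codebase here, but a rough sketch of the idea (simplified to a word-level chain, with hypothetical function names) might look something like this:

// Rough sketch: filter input lines by a seed string, build a word-level
// chain from them, then walk it to produce a new 'message'.
function buildChain(lines, seed) {
  const chain = {};
  lines
    .filter(line => line.includes(seed))
    .forEach(line => {
      const words = line.split(/\s+/);
      for (let i = 0; i < words.length - 1; i++) {
        (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
      }
    });
  return chain;
}

function generate(chain, startWord, maxWords = 20) {
  const message = [startWord];
  let word = startWord;
  while (message.length < maxWords && chain[word] && chain[word].length) {
    // Picking uniformly from the observed successors approximates the
    // transition probabilities (frequent pairs appear more often)
    word = chain[word][Math.floor(Math.random() * chain[word].length)];
    message.push(word);
  }
  return message.join(' ');
}

const logs = ['the bot is down again', 'the bot is working now'];
console.log(generate(buildChain(logs, 'bot'), 'the'));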

Converting Aisatsana MIDI to Alda Format

To start, I needed a list of musical segments from the original track. These are what we refer to as ‘phrases’.

As Alex did in his implementation, I grabbed a MIDI version of Aisatsana. I then fed it into a MIDI to JSON converter, yielding a breakdown of the track into individual notes. Here is what the first two notes look like:

[
  {
    "name": "E3",
    "midi": 52,
    "time": 0,
    "velocity": 0.30708661417322836,
    "duration": 0.5882355
  },
  {
    "name": "G3",
    "midi": 55,
    "time": 0.5882355,
    "velocity": 0.31496062992125984,
    "duration": 0.5882355
  }
]

From there I wrote some JavaScript to take these notes in JSON format, parse the time values, and order them into the 32 ‘phrases’ that aisatsana is made up of.

That is, there are 32 ‘phrases’, each consisting of 32 ‘half-beats’ at 0.294117647058824 seconds per half-beat (32 × 32 × 0.294117647 ≈ 301 seconds, the length of the track).

const notes = [] // <-- MIDI to JSON notes here

// constants specific to the aisatsana track
const secPerHalfBeat = 0.294117647058824;
const phraseHalfBeats = 32;

// Array to store quantized phrases
let phrases = [];

notes.forEach(n => {
  const halfBeat = Math.round(n.time / secPerHalfBeat);
  const phraseIndex = Math.floor(halfBeat / phraseHalfBeats);
  // Note: substring(0, 1) assumes single-letter note names;
  // a name with an accidental, like 'F#3', would need extra handling
  const note = n.name.substring(0, 1).toLowerCase();
  const octave = n.name.substring(1, 2);
  const time = n.time;
  const duration = n.duration;

  // Store note in correct 'phrase'
  if (!phrases[phraseIndex]) {
    phrases[phraseIndex] = [];
  }

  phrases[phraseIndex].push({ note: note, octave: octave, time: time, duration: duration });
});

The script also gathers the note symbol, octave, and duration for each note, and stores it all in a phrases array, ordered by phrase index.

Grouping by Chord

Next, the script runs through each phrase and groups the notes by time. Notes played at the same timestamp are part of the same chord. To play them correctly with alda I need to know this, so a chords array is set up for each phrase.

// Simple helper that groups an array of objects by the given key
const groupBy = key => array =>
  array.reduce((acc, obj) => {
    (acc[obj[key]] = acc[obj[key]] || []).push(obj);
    return acc;
  }, {});

phrases.forEach(phrase => {
  const groupByTime = groupBy('time');
  const chordGrouping = groupByTime(phrase);
  phrase.chords = [];

  // Notes sharing a timestamp belong to the same chord
  for (const notes of Object.values(chordGrouping)) {
    phrase.chords.push(notes);
  }
});

Generating alda-Compatible Strings

With chord grouping done, we can now convert the track into 32 phrases that alda will understand.

phrases.forEach(phrase => {
  let aldaStr = "piano: (tempo 51) (quant 90) ";
  phrase.chords.forEach(chord => {
    if (chord.length > 1) {
      // Alda plays notes together as a chord when separated by a '/'
      // character. Generate the alda string based on whether or not
      // it needs to have multiple notes in the chord, separating with
      // '/' if so.
      chord.forEach((note, idx) => {
        const isLast = idx === chord.length - 1;
        aldaStr += `o${note.octave} ${note.note} ${note.duration}s${isLast ? ' ' : ' / '}`;
      });
    } else {
      chord.forEach(note => {
        aldaStr += `o${note.octave} ${note.note} ${note.duration}s `;
      });
    }
  });
  // Output the phrase as an alda-compatible / playable string (you can
  // also copy this directly into alda's REPL to play it)
  console.log(aldaStr);
})

Here is the full script to convert the MIDI to alda phrase strings.

Generating Music with Markov Chains

There are different entry points that I could have used to create the markov chain initial state, but I went with feeding in the alda strings directly to see what patterns would emerge.

Here are the first four phrases from aisatsana in alda-compatible format:

piano: (tempo 51) (quant 90) o3 e 0.5882355s o3 g 0.5882355s o3 c 0.5882354999999999s o4 c 7.6470615s
piano: (tempo 51) (quant 90) o3 e 0.5882354999999997s o3 g 0.5882354999999997s o3 c 0.5882354999999997s o4 c 0.5882354999999997s o3 b 2.3529420000000005s o4 e 4.705884000000001s
piano: (tempo 51) (quant 90) o3 e 0.5882354999999997s o3 g 0.5882354999999997s o3 c 0.5882354999999997s o4 c 0.5882354999999997s o3 b 7.058826s
piano: (tempo 51) (quant 90) o3 e 0.5882354999999997s o3 g 0.5882354999999997s o3 c 0.5882354999999997s o4 c 0.5882354999999997s o3 b 1.1764709999999994s o4 e 5.882354999999997s

If you like, you can drop those right into alda’s REPL to play them, or drop them into a text file and play them with:

alda play --file first-four-phrases.alda

The strings are quite ugly to look at, but it turns out they can still be used to generate new and original phrases, based off the aisatsana track phrases, using Markov chains.

Using the markov-chains npm package, I wrote a small Node.js app to generate new phrases. It takes the 32 alda-compatible phrase strings from the original MIDI of ‘aisatsana’ as a list of states, and walks the chain to create new phrases.

E.g.

// markov-chains exposes the Chain class as its default export
import Chain from 'markov-chains';

const states = [
  // [ alda phrase strings here ],
  // [ alda phrase strings here ],
  // [ alda phrase strings here ]
  // etc...
]

const chain = new Chain(states);
 
// generate new phrase(s)
const newPhrases = chain.walk();

I threw together a small function that you can run directly to generate new phrases. Give it a try here. Hitting this URL in the browser will give you new phrases from the markov generation.

If you want a text version that you can drop right into the alda REPL, or into a file for alda to play, try this:

curl -s https://solitary-mountain-114.fly.dev/ | jq -r '.phrases[]'

I’ve uploaded the code that does the Markov chain generation (using the initial alda phrase strings as input state) here.

Results and Alda Serverless

Generated Music

The results from generating music off the phrases from the original track are certainly fun and interesting to listen to. The new phrases play out in different ways to the original track, but still have the feeling of belonging to the same piece of music.

Going forward, I’ll definitely be experimenting further with Markov chains and music generation using alda.

Experimenting with alda and Serverless

Something I got side-tracked on during this experiment was hosting the alda player in a serverless function. I got pretty far along using AWS Lambda Layers, but the road was bumpy. Alda requires some fairly chunky dependencies.

Even after managing to squeeze Java and the alda binaries into Lambda layers, the audio playback engine was failing to start in a serverless function.

I managed to clear through a number of problems, but eventually my patience wore thin and I settled on writing my own serverless function to generate the strings to feed into alda directly.

My goal here was to generate unique phrases, output them to MIDI, and then convert them to audio to be played back almost instantaneously. For now, it’s easy enough to take the generated strings and drop them directly into the alda REPL, or play them from a file.

It will be nice to see alda develop further and offer an online REPL – which would mean the engine itself had become light enough to do the above too.

Using JSONPath Queries on JSON Data


JSONPath does for JSON processing what XPath (defined as a W3C standard) does for XML. JSONPath queries can be super useful, and are a great addition to any developer or ops person’s toolbox.

You may want to do a quick data query, test, or run through some JSON parsing scenarios for your code. If you have your data easily available in JSON format, then using JSONPath queries or expressions can be a great way to filter your data quickly and efficiently.

JSONPath 101

JSONPath expressions use $ to refer to the outer level object. If for example you have an array at the root, $ would refer to that array.

When writing JSONPath expressions, you can use dot notation or bracket notation. For example:

  • $.animals.land[0].weight
  • $['animals']['land'][0]['weight']

You can use filter expressions to filter out specific items in your queries. For example: ?(<bool expression>)

Here is an example that would filter our collection of land animals to show only those heavier than 50.0, returning their names:

$.animals.land[?(@.weight > 50.0)].name

The wildcard character * is used to select all objects or elements.

Note the @ symbol, which is used to select the ‘current’ item being iterated over in the boolean expression.

There are more JSONPath syntax elements to learn about, but the above are what I find most useful and commonly required.
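If you want to run queries like these programmatically, the jsonpath npm package can evaluate such expressions in Node.js. Here is a quick sketch (the animals dataset is made up to match the earlier examples):

const jp = require('jsonpath'); // npm install jsonpath

// Hypothetical dataset matching the queries above
const data = {
  animals: {
    land: [
      { name: 'elephant', weight: 5400.0 },
      { name: 'dog', weight: 30.0 },
      { name: 'horse', weight: 450.0 }
    ]
  }
};

// Dot notation query
console.log(jp.query(data, '$.animals.land[0].weight')); // [ 5400 ]

// Filter expression: names of land animals heavier than 50.0
console.log(jp.query(data, '$.animals.land[?(@.weight > 50.0)].name'));
// [ 'elephant', 'horse' ]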

JSONPath Query Example

Here is a chunk of JSON data, and some basic queries that show how you can easily filter down the dataset and select what you need.

JSONPath Queries – Example 1

Find all “Report runs” where root.id is equal to a specific value:

$.runs[?(@.root.id=="af1bcd6b-406f-43f9-86b3-9f01ee211ddc")]

JSONPath Queries – Example 2 (AND operator)

Find all “Report runs” where root.id is equal to a specific value, and shell.id is equal to a specific value:

$..runs[?(@.root.id=="af1bcd6b-406f-43f9-86b3-9f01ee211ddc" && @.shell.id=='d743537e393d')]

Useful JSONPath Resources

Use this webapp to write and test JSONPath expressions live in your browser.

Using Node.js Streams to Create a Toy Version of jq

To play around with Node.js streams, I made a simple ‘toy’ version of jq, a handy command line JSON processor.

To be clear, the real jq utility is very lightweight and has a ton of functionality. In this post I’ll be mimicking some of its functionality using Node.js.

This will of course result in a bloated tool with far more bundled in than we actually need.

That being said, it’ll be a useful exercise to go through to learn a little bit about Node.js streams.

In the github repository, you’ll see how easy it is to hack together a very simple command-line tool to process data streamed in from stdin.

Node.js stream transform example with PowerShell pipeline processing.
JSON going into the tool, being filtered, and then converted to an object in PowerShell through the pipeline.

Node.js Streams

The stream documentation describes them as:

A stream is an abstract interface for working with streaming data in Node.js. The stream module provides an API for implementing the stream interface.

For this example I’ll be jumping straight to the Transform stream (stream.Transform). This is a duplex stream, where the input is usually related to the output in some way.

Transforming Input from stdin

The basic use of a Transform stream in Node.js (to process input from stdin) looks like this:

const { Transform } = require('stream');

const upperCaseTransform = new Transform({
  transform(chunk, encoding, callback) {
    // do something with the chunk
    console.log(chunk.toString().toUpperCase());
    callback();
  }
});

process.stdin.pipe(upperCaseTransform);
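Assuming you save the snippet above as transform.js, you can check that it works by piping some text through it:

echo "hello streams" | node transform.js

HELLO STREAMS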

Toy jq Utility With Transform Streams

I created a ‘toy’ version of jq using a Node.js Transform stream. It’s a very quickly hacked-together example, so don’t expect it to do everything that jq can do. I’m also fully aware that the real jq utility is a very lightweight tool, and that doing this in the Node.js runtime adds a lot of unnecessary bloat!

This is purely for demonstration purposes.
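To give a feel for the approach (this is a simplified sketch, not the actual toyjq source), the core is a Transform stream that buffers stdin, parses the JSON, applies a simple ‘.field’ selector, and pretty-prints the result:

const { Transform } = require('stream');
const { inspect } = require('util');

const selector = process.argv[2]; // e.g. '.type'
let buffered = '';

const jsonFilter = new Transform({
  transform(chunk, encoding, callback) {
    // Buffer the raw JSON until the input stream ends
    buffered += chunk.toString();
    callback();
  },
  flush(callback) {
    try {
      let result = JSON.parse(buffered);
      if (selector && selector.startsWith('.')) {
        // Walk the object path, e.g. '.a.b' -> result.a.b
        for (const key of selector.slice(1).split('.')) {
          result = result[key];
        }
      }
      this.push(inspect(result, { depth: null }) + '\n');
      callback();
    } catch (err) {
      callback(err);
    }
  }
});

process.stdin.pipe(jsonFilter).pipe(process.stdout);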

Packaging up the Node.js app with pkg, we get a platform-specific binary called toyjq.
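The exact pkg invocation depends on your configuration, but something along these lines should produce the platform-suffixed binaries (toyjq-linux, toyjq-win.exe) used in the examples below:

npx pkg . --targets node14-linux-x64,node14-win-x64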

Examples

Using Node.js streams – toyjq usage examples

Pretty print input JSON

cat ./example.json | toyjq-linux

{
  name: 'directoryobject',
  path: '/path/to/directoryobject',
  type: 'Directory',
  children: [ { foo: 'bar' }, { foo: 'bar1' } ]
}

Output the `type` field only from input JSON:

cat ./example.json | toyjq-linux '.type'

"Directory"

Output just the `name` and `children` fields in the input JSON:

cat ./example.json | toyjq-linux '{name, children}'

{
  name: 'directoryobject',
  children: [ { foo: 'bar' }, { foo: 'bar1' } ]
}

Assuming now the Windows platform version of toyjq, and using a PowerShell cmdlet for this example…

Select the children array, convert it to an object in PowerShell and then select the last item in that object array:

(cat .\example.json | .\toyjq-win.exe '.children' | ConvertFrom-Json).foo | Select -Last 1

bar1

The above examples show how you can easily process data from one input pipeline (stdin in this case) and send it along through the pipeline using Node.js streams.

You can find the example toyjq app on my GitHub repository.

Using Sinon stub to Replace External Service Calls in Tests

When writing tests for a service you’ll often find that there are other dependent services that may need to be called in some way or another. You won’t want to actually invoke the real service during test, but rather ‘stub’ out the dependent service call function. Sinon stub provides an easy and customisable way to replace these external service calls for your tests.

Practical Example: AWS SQS sendMessage Sinon Stub

Let’s say you have an AWS Lambda function that drops a message onto an SQS queue. To test this function handler, your test should invoke the handler and verify that the message was sent.

This simple case already involves an external service call – the SQS sendMessage action that will drop the message onto the queue.

Here is a simple NodeJS module that wraps the SQS sendMessage call.

// sqs.ts

import AWS = require("aws-sdk");
import { AWSError } from "aws-sdk";
import { SendMessageRequest, SendMessageResult } from "aws-sdk/clients/sqs";
import { PromiseResult } from "aws-sdk/lib/request";

const sqs = new AWS.SQS({apiVersion: '2012-11-05'});

export function sendMessage(messageBody: string, queueUrl: string) : Promise<PromiseResult<SendMessageResult, AWSError>> {

  var params = {
    QueueUrl: queueUrl,
    MessageBody: messageBody,
  } as SendMessageRequest;

  return sqs
    .sendMessage(params)
    .promise();
}

The actual Lambda Handler code that uses the sqs.ts module above looks like this:

// index.ts

import { sendMessage } from './sqs';
import { Context } from 'aws-lambda';

export const handler = async (event: any, context?: Context) => {

    try {
        const queueUrl = process.env.SQS_QUEUE_URL || "https://sqs.eu-west-2.amazonaws.com/0123456789012/test-stub-example";
        const sendMessageResult = await sendMessage(JSON.stringify({foo: "bar"}), queueUrl);
        return `Sent message with ID: ${sendMessageResult.MessageId}`;
    } catch (err) {
        console.log("Error", err);
        throw err;
    }
}

Next you’ll create a Sinon stub to ‘stub out’ the sendMessage function of this module (the actual code that the real AWS Lambda function would call).

Set up an empty test case that calls the Lambda handler function and checks the result.

// handler.spec.ts

import * as chai from 'chai';
import * as sinon from "sinon";
import { assert } from "sinon";

import * as sqs from '../src/sqs';
import { handler } from '../src/index';
import sinonChai from "sinon-chai";
import { PromiseResult } from 'aws-sdk/lib/request';
import { SendMessageResult } from 'aws-sdk/clients/sqs';
import { Response } from 'aws-sdk';
import { AWSError } from 'aws-sdk';

const expect = chai.expect;
chai.use(sinonChai);

const event = {
  test: "test"
};

describe("lambda-example-sqs-handler", () => {
  describe("handler", () => {

    it("should send an sqs message and return the message ID", async () => {

      // WHEN

      process.env.SQS_QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/test-queue";
      const result = await handler(event);
      
      // THEN

      expect(result).to.exist;
      expect(result).to.eql(`Sent message with ID: 123`);
    });
  });
});

Right now, running this test will fail, because the test invokes the sqs.ts module, which in turn calls the real SQS service’s sendMessage.

Here is where Sinon stub will come in handy. You can replace this specific call that sqs.ts makes with a test stub.

In the describe handler section, add the following just before the ‘it’ section:

const sendMessageStub = sinon.stub(sqs, "sendMessage");

let stubResponse : PromiseResult<SendMessageResult, AWSError> = {
  $response: new Response<SendMessageResult, AWSError>(),
  MD5OfMessageBody: '828bcef8763c1bc616e25a06be4b90ff',
  MessageId: '123',
};

sendMessageStub.resolves(stubResponse);

The code above calls sinon.stub() and passes in the sqs module object, as well as a string (“sendMessage” in this case) identifying the specific method in the module that should be stubbed.

An optional promise result can be passed to resolves() to have the stub return data for the test. In this case, we’re having it return an object that matches the real SQS sendMessage result. Among other things, this contains a message ID, which the Lambda function includes in its response.

Add an assertion to verify that the stub method was called:

assert.calledOnce(sendMessageStub);

If you run the test again it should now pass. The stub replaces the real service call. Nice!
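One addition worth making (not shown above) is to restore stubbed methods between tests, so that one test’s stubs don’t leak into the next. Sinon’s default sandbox makes this a one-liner:

afterEach(() => {
  // Restore all stubs/spies created via the default sinon sandbox
  sinon.restore();
});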


Conclusion

Replacing dependent service function calls with stubs can be helpful in many ways. For example:

  • Preventing wasteful, real service calls, which could result in unwanted test data, logs, costs, etc…
  • Faster test runs that don’t rely on network calls.
  • Exercising only the relevant code you’re interested in testing.

Sinon provides a testing framework agnostic set of tools such as test spies, stubs and mocks for JavaScript. In this case, you’ve seen how it can be leveraged to make testing interconnected AWS services a breeze with a Lambda function that calls SQS sendMessage to drop a message onto a queue.

Feel free to download the source code for this post’s example.

Quick and Easy Local NPM Registry With Verdaccio and Docker


Sometimes it can be useful to be able to npm publish libraries or projects you’re working on to a local npm registry for use in other development projects.

This post is a quick how-to showing how you can get up and running with a private, local npm registry using Verdaccio and docker compose.

Verdaccio bills itself as a zero-config npm registry, and that is pretty much correct. You can have it up and running in under 5 minutes. Here’s how:

Local NPM Registry Quick Start

Clone verdaccio docker-examples and then change directory into the docker-examples/docker-local-storage-volume directory.

git clone https://github.com/verdaccio/docker-examples.git
cd docker-examples/docker-local-storage-volume

This particular sample docker-compose configuration gives you a locally run verdaccio instance along with persistence via local volume mount.

From here you can be up and running by simply issuing the following docker-compose command:

docker-compose up -d

However if you do want to make a few tweaks to the configuration, simply load up the conf/config.yaml file in your editor.

I wanted to change the max_body_size to a higher value to allow for larger npm packages to be published locally, so I added:

max_body_size: 500mb

If you haven’t yet started the local docker container, start it up with docker-compose up.

Usage

Now all you need to do is configure your local npm settings to use verdaccio on http://localhost:4873. This is the default host and port that verdaccio is configured to listen on when running locally in docker.
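For example, to point npm at the local registry globally (per-project configuration is shown further below):

npm config set registry http://localhost:4873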

Then add an npm user for local development:

npm adduser --registry http://localhost:4873

To use your new registry at a project level, you can create a .npmrc file in your local projects with the following content:

@shogan:registry=http://localhost:4873

Of course replace the scope of @shogan with the package scope of your choosing.

To publish a module / package locally:

npm publish --registry http://localhost:4873
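Then, to install the published package into another local project (the package name here is hypothetical):

npm install @shogan/example-package --registry http://localhost:4873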

Other Examples

There are lots more verdaccio samples and configurations to be found in the docker-examples repository, including Kubernetes resources, if you prefer running your local development setup there.

Also refer to the verdaccio configuration page for more examples and documentation on the possible config options.

Scaling Web API 2 and back-end SQL databases in Azure

I recently created a small Web API 2 project running with a back-end SQL database (Entity Framework code first), and had it deployed to an Azure web app, along with Azure SQL.

Naturally, I started it off using the free web app and one of the cheapest possible Azure SQL tiers (S0 – 10 DTUs).

After I finished working on the API, I wanted to see what sort of performance I could get out of it, by using Azure’s various scaling options.

To test I used Loader.io. This is a really nice and easy to use load testing service by SendGrid Labs. The free edition allows me to setup various API endpoint tests and run many concurrent connections for up to 1 minute at a time.

All my tests below were done using the same GET request test. The request always returned a collection of 5 x objects from the /Animals endpoint to keep things consistent.

My initial test was against the F1 free app tier for the Web app, with the SQL database running on S0 (10 DTUs). Here are the results of sending 500 requests per second for 1 minute.

Load test result: S0 tier (10 DTUs)

The API struggled to complete the full 60k requests over 1 minute, and only completed about 8k requests, with an average response time of 4638ms. Terrible, but then again we are running on very low performance, cheap tiers. I had a look at the database performance stats and noticed that the DTUs were capped out at 100% during the 1 minute load test. At this point it definitely seems to be the database performance holding things back.

Scaling the database up to the S1 tier (20 DTUs) gives a definite improvement in response times and number of requests able to be sent within one minute. If we look at the database performance stats in the portal, we can now see that the DTUs are still maxing out at 100% though.

Load test result: S1 tier (20 DTUs)

Database DTUs maxed out at 100%

At this point I decided I would increase database performance again, but throw more requests per second at the API (from 500/second up to 1000/second).

Scaling the database up to S2 (50 DTUs) and throwing more requests per second at the API, the total number of requests completed is higher now – up by about an extra 5k. Taking a look at the DTU performance stats, we can see they now max out at around 60%. At this point it is pretty clear that the database is no longer the bottleneck.

Database DTUs maxed out at around 60%, even after doubling the requests per second from 500 to 1000

Now I scaled the web app tier up from Free to B1 (Basic), which gives you 1 core, 1.75GB RAM, and up to 3 x instances scaled manually. I started with just the default 1 instance and ran the 1000 requests/second test for 1 minute again.

Load test failed: error rate higher than 50%, caused by timeouts

The results were pretty dismal compared to the free tier now. In fact the test failed due to an error rate of greater than 50% (all caused by timeouts). It is important to remember that we have not yet scaled out from the default 1 instance though.

Scaling up to 2 x instances on the B1 tier helped quite a bit. The test now completes, and has a much smaller timeout error rate. Many more responses were served, but the response rate was still quite slow. Taking a look at the distribution of CPU time over the two instances, we can also see that the traffic is indeed being split between the two instances we’ve scaled out to.

Scaling the B1 (Basic) tier from 1 to 2 instances

Test finished with a much smaller error rate

Processor time spread over the two instances during the load test

Taking this one step further to 3 x instances and re-running the test nets us the best result so far: no timeout errors, and a response time averaging around 3000ms. Much better, but that is still quite a high response time, and not all 60k requests are being served.

I scaled up to the B2 tier for the following run; each instance has 2 x cores and 3.5GB RAM this time. Starting at 1 x instance, these higher-specification web instances seem to handle things a lot better.

Little to no timeout errors, with about 5000ms avg response time, but using only 1 x instance this time!

Pushing things right up to 3 x instances (2 cores and 3.5GB RAM each) nets us the best result yet. The average response time is down to 1700ms and there are no timeout errors at all. The API was able to handle 49000 requests in the 1 minute test, which is the highest number of requests it has been able to handle so far.

Load test result: B2 tier with 3 x instances

I scaled up to the B3 tier from here, and tried another few runs using 3 x instances (at 4 x cores and 7GB RAM each). This didn’t help things much, netting around 200ms better response time, for a much pricier tier. It therefore looks like the sweet spot for this kind of work is to scale out with medium sized instances (2 x cores each), rather than scaling up too much.

I changed the tier to S2 (2 x cores and 3.5GB RAM each, but allowing up to 10 x instances scaled out) and this time, running the test gave very similar results to the 3 x instance run. Clearly, the instances were no longer the bottleneck. Looking back at the database performance, I saw that the DTUs were maxing out at around 90%. It was clear that some throttling must have been happening there now.

I changed the database DTUs to 100 using the S3 tier, and re-ran the test once more.

Load test result: all 60k requests served successfully

Bingo! We’re now managing to serve the test’s 1000 requests a second, and over the 1 minute test, we get all 60k requests served successfully, and have a reasonable average response time of roughly 300-400ms.

I made a quick change to the GET method for this endpoint in the API to gather items from the database asynchronously, and running the same test again now gets us all the way down to an average response time of just 100ms over the 60k requests in one minute. Excellent!

Load test result: 100ms average response time

As you can see, by running load tests like this and trying out different scaling options for the front end and back end (logically scaling each whenever you see bottlenecks in test results or performance metrics), you can, over time, determine the best specification for your database and web apps.