Scaling Web API 2 and back-end SQL databases in Azure

I recently created a small Web API 2 project with a back-end SQL database (Entity Framework code first) and deployed it to an Azure web app, along with an Azure SQL database.

Naturally, I started it off using the free web app and one of the cheapest possible Azure SQL tiers (S0 – 10 DTUs).

After I finished working on the API, I wanted to see what sort of performance I could get out of it, by using Azure’s various scaling options.

To test, I used Loader.io, a really nice and easy-to-use load testing service by SendGrid Labs. The free edition allows me to set up various API endpoint tests and run many concurrent connections for up to 1 minute at a time.

All of my tests below used the same GET request, which always returned a collection of five objects from the /Animals endpoint to keep things consistent.
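For context, the endpoint under test is just a simple Entity Framework query sitting behind a Web API 2 controller. The names below (Animal, AnimalsContext) are illustrative rather than the project's exact code:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Web.Http;

// Illustrative EF code-first entity and context
public class Animal
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AnimalsContext : DbContext
{
    public DbSet<Animal> Animals { get; set; }
}

public class AnimalsController : ApiController
{
    private readonly AnimalsContext db = new AnimalsContext();

    // GET /api/Animals - returns the small collection used in all of the load tests below
    public IEnumerable<Animal> GetAnimals()
    {
        return db.Animals.ToList();
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            db.Dispose();
        }
        base.Dispose(disposing);
    }
}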

My initial test was against the F1 free app tier for the Web app, with the SQL database running on S0 (10 DTUs). Here are the results of sending 500 requests per second for 1 minute.

[Image: load test result on S0 (10 DTUs)]

The API struggled to complete the full 60k requests over the minute, finishing only about 8k requests with an average response time of 4638ms. Terrible, but then again we are running on very cheap, low-performance tiers. I had a look at the database performance stats and noticed that the DTUs were maxed out at 100% during the 1-minute load test. At this point it definitely seemed to be the database performance holding things back.
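Besides the charts in the portal, you can also check this directly against the database: the sys.dm_db_resource_stats DMV in Azure SQL returns resource usage in 15-second slices, and the DTU percentage is effectively the highest of the CPU, data IO and log write counters. A quick sanity check looks like this:

-- Recent resource usage for the current Azure SQL database (15-second granularity)
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;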

Scaling the database up to the S1 tier (20 DTUs) gives a definite improvement in response times and in the number of requests served within the minute. Looking at the database performance stats in the portal, though, we can see that the DTUs are still maxing out at 100%.

[Image: load test result on S1 (20 DTUs)]

[Image: 20 DTUs maxed out at 100%]

At this point I decided to increase the database performance again, while also throwing more requests per second at the API (from 500/second up to 1000/second).

With the database scaled up to S2 (50 DTUs) and more requests per second thrown at the API, the total number of requests completed is higher now – up by about an extra 5k. Taking a look at the DTU performance stats, we can see that they now max out at around 60%. At this point it is pretty clear that the database is no longer the bottleneck.

[Image: load test result on S2 (50 DTUs), even with the requests per second doubled from 500 to 1000]

[Image: 50 DTUs maxing out at around 60%]

Now I scaled the web app tier up from free to the B1 (Basic) tier, which gives you 1 core, 1.75GB RAM, and up to 3 instances scaled manually. I started with just the default single instance and ran the 1000 requests/second test for 1 minute again.

[Image: test failed with an error rate higher than 50%, all due to timeouts]

The results were pretty dismal compared to the free tier now. In fact, the test failed due to an error rate of greater than 50% (all caused by timeouts). It is important to remember, though, that we have not yet scaled out from the default single instance.

Scaling out to 2 instances on the B1 tier helped quite a bit. The test now completes, and has a much smaller timeout error rate. Many more responses were served, but response times were still quite slow. Taking a look at the distribution of CPU time over the two instances, we can also see that the traffic is indeed being split between the two instances we've scaled out to.

[Image: scaling B1 (Basic) from 1 to 2 instances]

[Image: test finished with a much smaller error rate]

[Image: processor time spread over the two instances during the load test]

Taking this one step further to 3 instances and re-running the test nets us the best result so far. No timeout errors, and a response time averaging around 3000ms. Much better, but still quite a high response time, and not all 60k requests are being served.

I scaled up to the B2 tier for the following run. Each instance has 2 cores and 3.5GB RAM this time. Starting at 1 instance and running the test again, these higher-specification web instances now seem to handle things a lot better.

Little to no timeout errors, with about a 5000ms average response time, but using only 1 instance this time!

Pushing things right up to 3 instances (2 cores and 3.5GB RAM each) nets us the best result yet. The average response time is down to 1700ms and there are no timeout errors at all. The API was able to handle 49,000 requests in the 1-minute test, the highest number of requests it has managed so far.

[Image: good result on B2 (Basic) with 3 instances]

I scaled up to the B3 tier from here and tried another few runs using 3 instances (4 cores and 7GB RAM each). This didn't help things much, netting around 200ms better response time for a much pricier tier. It therefore looks like the sweet spot for this kind of work is to scale out with medium-sized instances (2 cores each), rather than scaling up too much.

I changed the web app tier to S2 (2 cores and 3.5GB RAM each, but allowing up to 10 instances scaled out), and running the test this time gave very similar results to the 3-instance B2 run. Clearly, the web instances were no longer the bottleneck. Looking back at the database performance, I saw that the DTUs were maxing out at around 90%, so there must have been some throttling happening there now.

I scaled the database up to the S3 tier (100 DTUs) and re-ran the test once more.

[Image: all 60k requests served]

Bingo! We're now managing to serve the test's 1000 requests per second, and over the 1-minute test we get all 60k requests served successfully, with a reasonable average response time of roughly 300-400ms.

I made a quick change to the GET method for this endpoint to fetch items from the database asynchronously, and running the same test again brought the average response time all the way down to just 100ms over the 60k requests in one minute. Excellent!
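The change itself is tiny; assuming the controller sketched earlier, the synchronous query becomes something like this (using EF6's ToListAsync, which needs System.Data.Entity and System.Threading.Tasks):

// Async version of the GET action - the request thread is returned to the
// pool while the database call is in flight, so more concurrent requests can
// be served with the same number of instances.
public async Task<IHttpActionResult> GetAnimals()
{
    var animals = await db.Animals.ToListAsync();
    return Ok(animals);
}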

[Image: 100ms average response time test result]

As you can see, by running load tests like this and trying out different scaling options for the front end and back end, scaling each whenever you see bottlenecks in the test results or performance metrics, you can over time determine the best specification for your database and web apps.

 

Deploying a simple linked container web app with Docker

This is a simple guide on how to deploy a multi-container ‘linked’ web app using Docker.

If you have not yet installed or set up a Docker host to run the containers on, here is my guide on setting up a basic Ubuntu 16.04 Docker host VM.

The 'web app' we'll be deploying consists of two basic components: a MySQL database for the back end, and a simple PHP script for the web front end, which connects to the MySQL container and displays some info from a database table.

[Image: diagram of the simple linked web app]

For the MySQL container we'll be using the official 'mysql-server' image from the Docker repository, and for our web front end we'll be creating our own Docker image using a custom Dockerfile we'll craft ourselves, based on an Ubuntu 15.04 image.

This means we’ll be covering the following Docker basics:

  • Running docker containers
  • Linking docker containers (more secure than exposing ports directly)
  • Creating custom docker images using a Dockerfile
  • Building a custom image

Start off by creating a new directory in your home directory called 'web01' to store the Dockerfile we'll be using to build our custom web front-end image. Then create an empty file called 'Dockerfile' in this directory and edit it using your favourite text editor. I'm using nano for this.
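In a terminal on the Docker host, that amounts to something like:

mkdir ~/web01
cd ~/web01
nano Dockerfile    # paste in the Dockerfile content shown below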

This is what your new Dockerfile should look like:
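Something along these lines, based on the steps explained below (the gist URL is just a placeholder for the raw URL of the PHP script):

FROM ubuntu:15.04

# Install Apache, PHP5 (with the MySQL extension) and curl in a single RUN
# to keep the layer count down
RUN apt-get update && \
    apt-get install -y apache2 php5 libapache2-mod-php5 php5-mysql curl && \
    apt-get clean

# Download the basic PHP script from the gist and remove the default Apache index page
RUN curl -o /var/www/html/index.php <raw-gist-url-of-php-script> && \
    rm /var/www/html/index.html

# Expose the web server port so it can be mapped to the Docker host
EXPOSE 80

# Run Apache in the foreground as the container's main process (PID 1)
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]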

 

The commands do the following:

  • FROM – tells Docker to base this image on the ubuntu:15.04 image
  • RUN – strings a few apt-get commands together to install Apache, PHP5, and a few other tools like curl. Chaining them is important, as every RUN command in a Dockerfile creates a new image layer, and we don't want our image to contain too many layers.
  • The last RUN command grabs the content of a gist I created (a basic PHP script) and places it in the /var/www/html directory in the container, then deletes the default index.html file that Apache puts there. This is the script that will connect to our MySQL container and display some basic info (our basic 'web app').
  • EXPOSE – exposes port 80 so we can map it to our Docker host and access the website from outside the container.
  • CMD – runs the apache2 service as PID 1 when the container starts.

Now you can build the Dockerfile and create your own custom image, which is what will be used to start the web container later.

Use the following command to build the new image from your custom Dockerfile (note that docker build is pointed at the directory containing the Dockerfile, which it uses as the build context):

docker build -t web01image ~/web01

Run ‘docker images’ after the build completes and you should see the new image listed:

[Image: docker images output showing the new image]

Next, you’ll run a new container using the official mysql-server image from the Docker repository. You won’t yet have this image locally, but the command will automatically download the image for you.

docker run --name db01 -e MYSQL_ROOT_PASSWORD=MyRootPassword -d mysql/mysql-server:latest

Note that I've called my container 'db01' and given it a root password of 'MyRootPassword'. The -e parameter specifies that an environment variable called MYSQL_ROOT_PASSWORD inside the container should be given the value 'MyRootPassword'. The MySQL container then uses this environment variable to set up the root user for MySQL when the container starts.

Now that the database container is up and running (verify this by running 'docker ps'), you can deploy the custom web container using the image you created above. In this docker run command, you'll also link the web container to the db01 container you previously started, using the --link parameter. This is what links the two containers together.

The web container will be given environment variables with information about the networking config of the DB container. These environment variables will then be accessed by the simple PHP web script to tell it where to find the database server, and what credentials to use to connect.

docker run --name=web01 --link=db01:mysql -d -p 80:80 web01image

Important: notice that the --link parameter specifies the name of the database/MySQL container. Make sure you use the exact name you gave your MySQL database container here – this ensures that the linking of the two containers is correct. The last 'web01image' bit specifies that the container should be based on the newly built 'web01image' image.

The -p parameter maps the exposed port 80 in the container to port 80 on the docker host, so you’ll be able to access the website by using http://dockerhost:80

Check that the new web container and previously created MySQL container are running by using the ‘docker ps’ command.

[Image: docker ps output showing both containers running]

Out of interest, this is what the PHP script looks like (it's what gets downloaded and placed in the web container by the RUN build step in the Dockerfile you created above):
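The actual script is pulled in from a gist during the image build, but a minimal equivalent looks roughly like this (using mysqli and the environment variable names that Docker's legacy linking injects):

<?php
// Connection details come from the environment variables created by --link=db01:mysql
$host = getenv('MYSQL_PORT_3306_TCP_ADDR');
$port = (int) getenv('MYSQL_PORT_3306_TCP_PORT');
$pass = getenv('MYSQL_ENV_MYSQL_ROOT_PASSWORD');

$conn = new mysqli($host, 'root', $pass, 'testdb1', $port);
if ($conn->connect_error) {
    die('Could not connect to MySQL: ' . $conn->connect_error);
}

echo 'Connected to MySQL at ' . $host . ':' . $port . '<br>';

// Show the sample rows created with the docker exec command further down
$result = $conn->query('SELECT id, name, signup_date FROM events');
if ($result) {
    while ($row = $result->fetch_assoc()) {
        echo $row['id'] . ' - ' . $row['name'] . ' - ' . $row['signup_date'] . '<br>';
    }
}

$conn->close();
?>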

You can see the environment variables that the PHP script grabs (at the top of the script) to establish the database connection from inside the container. These environment variables are created and populated by linking the web container to the db container using the --link parameter.
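You can see these link-generated variables for yourself by running env inside the web container; names such as MYSQL_PORT_3306_TCP_ADDR and MYSQL_ENV_MYSQL_ROOT_PASSWORD come from the 'mysql' alias used in the --link parameter:

docker exec web01 env | grep MYSQL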

Lastly, you may want to create a sample database, table and some data for the simple ‘web app’ to display after it connects to the database container. Issue the following ‘docker exec’ command, which will add the sample database, create a sample table, and add some sample data.

Make sure you change the ‘MyRootPassword’ bit to whatever root MySQL password you chose when you ran the MySQL container above, and ensure you run exec against the name of the MySQL container you chose (I used db01). Keep the database name and the rest of the command intact, as the PHP script relies on these staying the same.

docker exec db01 mysql -u root -pMyRootPassword -e "create database testdb1; use testdb1; CREATE TABLE events (id INT NOT NULL PRIMARY KEY AUTO_INCREMENT, name VARCHAR(20), signup_date DATE); INSERT INTO events (id,name,signup_date) VALUES (NULL, 'MySpecialEvent', '2016-06-11');"

Finally, browse to http://dockerhostnameorip and you should see the simple PHP script display some basic info, stating it was able to connect to the MySQL server and display the sample data in the database.
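You can also check from the command line with a quick curl against the Docker host (substitute your host's name or IP):

curl http://dockerhostnameorip/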

[Image: the simple PHP web app displaying the sample data]