Scaling Web API 2 and back-end SQL databases in Azure

I recently created a small Web API 2 project running with a back-end SQL database (Entity Framework code first), and had it deployed to an Azure web app, along with Azure SQL.

Naturally, I started it off using the free web app tier and one of the cheapest possible Azure SQL tiers (S0 – 10 DTUs).

After I finished working on the API, I wanted to see what sort of performance I could get out of it, by using Azure’s various scaling options.

To test I used Loader.io, a really nice and easy-to-use load testing service by SendGrid Labs. The free edition allows me to set up various API endpoint tests and run many concurrent connections for up to 1 minute at a time.

All my tests below were done using the same GET request. The request always returned a collection of five objects from the /Animals endpoint, to keep things consistent.

My initial test was against the F1 (Free) tier for the web app, with the SQL database running on S0 (10 DTUs). Here are the results of sending 500 requests per second for 1 minute.

Screenshot: load test result on S0 (10 DTUs)

The API struggled to complete the full 60k requests over the minute, finishing only about 8k requests with an average response time of 4638ms. Terrible, but then again we are running on very low performance, cheap tiers. I had a look at the database performance stats and noticed that the DTUs were capped out at 100% for the duration of the load test, so at this point it definitely seemed to be database performance holding things back.
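
As a side note, you can check this from inside the database itself rather than the portal. Azure SQL Database records recent resource usage in the sys.dm_db_resource_stats view, in 15-second intervals covering roughly the last hour. A quick sketch of the kind of query I mean (run against the user database, not master):

-- Recent resource consumption for this Azure SQL database, newest first
-- The overall DTU percentage is driven by the highest of these dimensions
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;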

Scaling the database up to the S1 tier (20 DTUs) gives a definite improvement in response times and in the number of requests served within the minute. Looking at the database performance stats in the portal, though, the DTUs are still maxing out at 100%.

Screenshot: load test result on S1 (20 DTUs)

Screenshot: 20 DTUs maxed out at 100%

At this point I decided to increase database performance again, but also to throw more requests per second at the API (up from 500/second to 1000/second).

After scaling the database up to S2 (50 DTUs) and throwing more requests per second at the API, the total number of completed requests is higher now – up by about an extra 5k. Taking a look at the DTU performance stats, we can see they now max out at around 60%. At this point it is pretty clear that the database is no longer the bottleneck.

Screenshot: 50 DTUs maxing out at around 60%, even after doubling the requests per second from 500 to 1000

Now I scaled the web app tier up from Free to the B1 (Basic) tier, which gives you 1 core, 1.75GB RAM, and up to 3 instances scaled manually. I started with just the default single instance and ran the 1000 requests/second for 1 minute test again.

Screenshot: test failed – error rate higher than 50%, caused by timeouts

The results were pretty dismal compared to the free tier. In fact, the test failed outright due to an error rate greater than 50% (all caused by timeouts). It is important to remember, though, that we had not yet scaled out from the default single instance.

Scaling out to 2 instances on the B1 tier helped quite a bit. The test now completes, with a much smaller timeout error rate. Many more responses were served, but response times were still quite slow. Looking at the distribution of CPU time over the two instances, we can also see that the traffic is indeed being split between the two instances we've scaled out to.

Screenshot: scaling the B1 (Basic) tier from 1 to 2 instances

Screenshot: test finished, with a much smaller error rate

Screenshot: processor time spread over the two instances during the load test

Taking this one step further to 3 instances and re-running the test nets us the best result so far: no timeout errors, and a response time averaging around 3000ms. Much better, but still quite a high response time, and not all 60k requests were being served.

I scaled up to the B2 tier for the following run, where each instance has 2 cores and 3.5GB RAM. Starting at 1 instance, these higher specification web instances seemed to handle things a lot better.

There were little to no timeout errors, with an average response time of about 5000ms – and that was using only 1 instance this time!

Pushing things right up to 3 instances (2 cores and 3.5GB RAM each) netted the best result yet. The average response time dropped to 1700ms with no timeout errors at all, and the API handled 49,000 requests in the 1 minute test – the highest number so far.

Screenshot: B2 (Basic) test with 3 instances – a good result

I scaled up to the B3 tier from here and tried another few runs using 3 instances (4 cores and 7GB RAM each). This didn't help much, netting only around 200ms better response times for a much pricier tier. The sweet spot for this kind of workload therefore looks to be scaling out with medium-sized instances (2 cores each), rather than scaling up too far.

I then changed the web app tier to S2 (Standard – still 2 cores and 3.5GB RAM per instance, but allowing up to 10 instances scaled out). This time, running the test gave very similar results to the 3 instance B2 run, so clearly the web instances were no longer the bottleneck. Looking back at the database performance, I saw that the DTUs were maxing out at around 90%, so some throttling must have been happening on that side again.
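
As an aside, everything in this post was scaled through the portal, but the same operations can be scripted. With today's Azure CLI, the equivalent of the setup at this point would look roughly like the following (the resource group, plan, server and database names are placeholders):

# Scale the App Service plan to the S2 tier with 3 instances
az appservice plan update --resource-group my-rg --name my-plan --sku S2 --number-of-workers 3

# Scale the Azure SQL database to a given service objective (tier)
az sql db update --resource-group my-rg --server my-sqlserver --name my-db --service-objective S2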

I scaled the database up to the S3 tier (100 DTUs) and re-ran the test once more.

Screenshot: all 60k requests served

Bingo! We're now managing to serve the test's 1000 requests per second: over the 1 minute test, all 60k requests are served successfully, with a reasonable average response time of roughly 300-400ms.

Finally, I made a quick change to this endpoint's GET method to fetch items from the database asynchronously. Running the same test again brought the average response time all the way down to just 100ms across the full 60k requests in one minute. Excellent!
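
The post doesn't show the original controller code, but the change amounts to something like the sketch below – the entity and context names here are illustrative. With Entity Framework 6, swapping a synchronous ToList() for an awaited ToListAsync() releases the request thread back to the pool while the SQL query is in flight, which is what lets the same instances push through far more concurrent requests:

using System.Collections.Generic;
using System.Data.Entity; // ToListAsync() lives in this namespace in EF6
using System.Threading.Tasks;
using System.Web.Http;

public class Animal
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical EF code-first context; the real project's context isn't shown in the post
public class AppDbContext : DbContext
{
    public DbSet<Animal> Animals { get; set; }
}

public class AnimalsController : ApiController
{
    private readonly AppDbContext db = new AppDbContext();

    // GET /Animals
    // async/await frees the request thread while the database call is pending
    public async Task<IList<Animal>> GetAnimals()
    {
        return await db.Animals.ToListAsync();
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) db.Dispose();
        base.Dispose(disposing);
    }
}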

Screenshot: 100ms average response time result

As you can see, by running load tests like this and trying out Azure's various scaling options for the front end and back end – scaling whichever side shows a bottleneck in the test results or performance metrics – you can, after a few iterations, determine the best specification for your database and web apps.

 

vSphere 6.0 performance metric limitations in the database (config.vpxd.stats.maxQueryMetrics)

A change I noticed right away between vSphere 5.5 and vSphere 6.0 is the introduction of a default limit on database queries for performance metrics.

When querying vCenter 6.0 for performance data, there is now a mechanism in place by default that limits the number of entities included in a database query. As the performance charts in the vSphere Web Client and C# client depend on this performance data, you may sometimes see an error when attempting to view overview or advanced charts because of this change.

In my case, I am using some custom code to query performance metrics using vSphere APIs and noticed the issue right away, as I was trying to gather a large amount of data.

VMware state that the reason for the change is to protect the vCenter database from intensive or large queries.

If you wish to work around this, or remove the limit, you'll need to add a new key/value pair in the Advanced Settings area of your vCenter Server instance. The key should be named config.vpxd.stats.maxQueryMetrics (without quotes); a value of -1 disables the limit, while a value of 100, for example, would limit the number of entities included in a database query to 100.
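
In other words, the end result in the Advanced Settings list is simply a key/value pair along these lines:

config.vpxd.stats.maxQueryMetrics = -1     (disable the limit entirely)
config.vpxd.stats.maxQueryMetrics = 100    (or cap queries at 100 entities)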

A further edit should also be made to the web.xml file if you need the limit lifted for the clients' performance charts. In my case I was not concerned with the limit affecting the client, as I was using the API, and making the first change alone did the trick for me.

You can read more about this setting in the official VMware KB article.

How to move your VMware vCenter Server database (SQL)

 

My vSphere 5 lab at home had been feeling a little sluggish when it came to running the vSphere client and working with vCenter. I could see that the SATA drive hosting my vCenter DB was taking a bit of a hit whenever I accessed performance graphs and the like. I had a little free space on an SSD on another drive (an OCZ Vertex 2 SATA drive) that would be perfect for the size of my vCenter DB. So, in search of better performance in my lab, and as a bit of a practice run at moving the vCenter SQL database, I set about moving just the SQL databases on the VM to a new, dedicated SSD-based VMDK. This also meant I didn't take up too much space by moving the entire VM across to SSD storage, as this kind of storage comes at a premium!

 

This post covers the process I followed for SQL Server 2005 Express. Most lab environments are likely to be running this edition, especially if the lab was upgraded from vSphere 4.1. The steps are quite similar for a SQL 2008 database, but there are differences, so make sure you follow the correct KBs if you are on a different edition of SQL Server. Here is a high-level overview of what was involved in the whole process.

 

High level overview: vCenter 5 steps on SQL Server 2005 Express

 

  • Shut down the lab and make a full clone of the vCenter VM, then power it back up afterwards – always good to have a rollback plan!
  • Add a new disk to the VM, located on the SSD-based storage.
  • Back up all the SQL databases on the vCenter VM, along with the system databases.
  • Note down all the credentials vCenter uses to connect to the SQL database, and check you are familiar with your ODBC settings, in case any of these need changing or updating.
  • Stop the vCenter and VUM services.
  • Perform the database move steps carefully, verifying everything each step of the way.
  • Start the vCenter and VUM services back up and check that everything is working as expected.
  • Note that there are some additional considerations if you are planning to move a vSphere 4.x database; refer to the VMware KB linked below for more info if you are on vSphere 4.x.

 

VMware have a fairly high-level KB article on moving your vCenter Server SQL database, which is worth a look to see if you need anything else.

 

The Process

 

  • After making a clone of my vCenter VM, backing up all the SQL databases on the vCenter server, and stopping all the VMware-specific services, I started on the Microsoft-specific steps for moving a SQL 2005 database.

 

  • First off, you need to detach the VIM_VCDB database. Execute the following SQL query in SQL Server Management Studio:

 

use master
go
sp_detach_db 'VIM_VCDB'
go

 

  • After this query completes successfully, move your VIM_VCDB.mdf and VIM_VCDB.ldf files to the new location (where you are moving the database to). Once moved, go back to Management Studio and execute the next query, which reattaches the database. You will of course need to specify the path your database files are now located in – the example below references the path I used.

 

use master
go
sp_attach_db 'VIM_VCDB','D:\VCDB_SQLDATA\VIM_VCDB.mdf','D:\VCDB_SQLDATA\VIM_VCDB.ldf'
go

 

  • After this query is successful, you can run the next stored procedure, which should return the new location of your database, provided it has been moved and reattached correctly.

 

use VIM_VCDB
go
sp_helpfile
go

 

  • Now that the database has been moved, you should be good to start your vCenter services back up again and do some testing to ensure everything is working as expected.

 

Extra steps

 

If you would also like to move your system databases, things get a little more complex. First off, make sure Management Studio is set to open only a new SQL query window on startup (Tools -> Options -> Environment -> General -> At Startup -> Open new query window). This matters because, when we start the SQL Server service in single-user mode (part of moving the system DBs), only one connection is allowed. By default, SQL Server Management Studio makes a connection for its Object Explorer as well as one for your query, so the Object Explorer connection would already occupy the single available slot and your queries would fail with an error. The Microsoft documentation does not explain this, and it took me a little while to realise that this was the problem, so don't forget to change it before starting! The rest of the steps can be followed in the MSDN article on moving SQL Server system databases – just make sure the correct edition of SQL Server is selected at the top of the page before you begin.
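
As a rough illustration of what the MSDN article walks you through: for the msdb, model and tempdb databases, the move is done by pointing each logical file at its new path with ALTER DATABASE ... MODIFY FILE, stopping SQL Server, moving the physical files, and starting it back up. The path below is just an example, and the logical file names shown are the SQL Server 2005 defaults – verify yours with sp_helpfile first:

-- Check the current logical file names and locations
use msdb
go
sp_helpfile
go

-- Point the logical files at the new location (takes effect at the next service start)
ALTER DATABASE msdb MODIFY FILE (NAME = MSDBData, FILENAME = 'D:\SQLSYSTEM\MSDBData.mdf')
ALTER DATABASE msdb MODIFY FILE (NAME = MSDBLog, FILENAME = 'D:\SQLSYSTEM\MSDBLog.ldf')
go

-- Repeat for model and tempdb (tempdb's files are simply recreated on restart),
-- then stop the SQL Server service, move the physical files, and start it again.
-- master and the Resource database are handled differently, via the service
-- startup parameters, so follow the MSDN article for those.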

 

The System Databases include:

 

  • Model
  • MSDB
  • Master
  • Resource
  • TempDB

 

Another tip when moving the system databases, which is not mentioned in the Microsoft KB article and will leave you stuck unless it is already configured, is to set the correct permissions on the new folder your SQL system databases will sit in. There is a local security group on your SQL server that needs to be assigned "Full Control" on the new system database location; if it doesn't have this permission, you will get errors when you try to start the services up again. The name of this security group depends on the name of your server and SQL service (for SQL Server 2005 it takes the form SQLServer2005MSSQLUser$ServerName$InstanceName).

 

 

That should cover the whole process, as well as provide a few tips on the areas I initially got stuck on (that were not explained in the official Microsoft KBs). If you have a different edition of SQL Server, just switch the MSDN article to the relevant edition and take it from there. Good luck!

 

Shrink a SQL Database using SQL Management Studio

Here is a quick “how to” on shrinking a SQL Database using SQL Management Studio.

1. Launch SQL Management Studio and login with your desired credentials.

2. Connect to the SQL Database engine instance and expand it by double clicking on it.
3. Expand your “Databases” node and right-click the specific database you would like to shrink.
4. Go to "Tasks" -> "Shrink" and then click "Database".

5. There are some optional settings at this point; the DBCC SHRINKDATABASE documentation covers what they do if you would like to learn more. Otherwise, continue with the defaults by clicking "OK".
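
For those who prefer T-SQL over the GUI, the same operation boils down to a single DBCC command. A minimal example, with the database name and the 10 percent free-space target as placeholders to adjust for your environment:

-- Shrink the database files, aiming to leave 10 percent free space
USE [MyDatabase]
GO
DBCC SHRINKDATABASE (N'MyDatabase', 10)
GO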

Here are the results on a DB I shrunk today (before & after):

How to increase the default Exchange 2003 SP2 database store limits

This applies to Exchange 2003 SP2.

Today I had a call from a client complaining that their e-mail would sporadically stop working every day or two. They said that restarting the server would temporarily fix the problem.

I connected up and took a look at the server's Event Viewer application logs, around the times the client said this last happened – about 07h30 in the morning. At 05h00, when the Exchange database runs its checks, I found the problem: a warning event complaining that the Exchange logical database is now over the default size allowed. Logical size being the physical size of the .edb and .stm files, less the logical free space (also known as white space). The default size limit for the entire database is 18GB (16GB + 2GB).

We need to adjust these limits now, as the combined mailboxes and public folders are over the 18GB limit, or quite close to breaching it. If they are over, your Exchange database will already have dismounted following the next check at 05h00. If they have not yet passed the 18GB limit, you are probably just getting warning events for now, and should still increase the size limits to avoid any downtime.

This is how…

Open the registry editor – go to Start – Run, type regedit and click OK.

Now navigate to the following key (note that the GUID is a unique string of numbers for each server):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\NameOfYourExchangeServer\Private-GUID

Create a new DWORD entry named:

Database Size Limit in GB

Once created, right-click the entry, select Modify, and give it a decimal value anywhere between 1 and 75, depending on the size in GB you want to limit the store to. Make sure you have enough free disk space on the partition the Exchange database resides on, and enter something higher than 18. For example, I used 60 for 60GB.

Screenshot: modifying the Exchange 2003 SP2 default database size limit

Click OK

Now navigate to the next key (this one modifies the public folder database size limit):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\NameOfYourExchangeServer\Public-GUID

Do the same as above: create the same DWORD value and give it a size limit (decimal value) higher than the current public folder database size. For example, I used 15 for 15GB.

Click OK.
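
If you would rather script these changes than click through the registry editor, the same values can be set from a command prompt with reg add. Substitute your own server name and the actual GUIDs from your registry (the names below are placeholders), along with your chosen limits:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\YourServerName\Private-GUID" /v "Database Size Limit in GB" /t REG_DWORD /d 60

reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\YourServerName\Public-GUID" /v "Database Size Limit in GB" /t REG_DWORD /d 15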

Now we need to exit the registry editor and restart the Microsoft Exchange Information Store service.

Go to Start – Run, and type: services.msc

Press enter, or click OK.

Navigate to the Microsoft Exchange Information Store service, right-click it, and select the Restart option.

Please note that this will dismount your store. If your mail store is still online, users will be temporarily disconnected while the store re-mounts itself. Once back online, the new database size limits will be in effect, and you will get some notifications in your application log informing you of the new limits.