Archive for the ‘Development’ Category

Scaling Web API 2 and back-end SQL databases in Azure

August 18th, 2016 2 comments

I recently created a small Web API 2 project with a back-end SQL database (Entity Framework code first), and deployed it to an Azure web app, along with an Azure SQL database.
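
For context, the data layer behind the API was just a simple Entity Framework code-first model and context. A minimal sketch is below; the Animal class, its properties, the context class and the connection string name are illustrative assumptions rather than the actual project code.

using System.Data.Entity;

// Hypothetical code-first entity; the real project's model will differ.
public class Animal
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Species { get; set; }
}

public class AnimalsContext : DbContext
{
    // "DefaultConnection" is assumed to point at the Azure SQL database.
    public AnimalsContext() : base("DefaultConnection") { }

    public DbSet<Animal> Animals { get; set; }
}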

Naturally, I started it off using the free web app and one of the cheapest possible Azure SQL tiers (S0 – 10 DTUs).

After I finished working on the API, I wanted to see what sort of performance I could get out of it, by using Azure’s various scaling options.

To test I used Loader.io. This is a really nice, easy-to-use load testing service by SendGrid Labs. The free edition allows me to set up various API endpoint tests and run many concurrent connections for up to 1 minute at a time.

All my tests below used the same GET request. The request always returned a collection of five objects from the /Animals endpoint, to keep things consistent.
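
For reference, the action being hit would have looked something like the minimal Web API 2 controller below. The controller name matches the /Animals endpoint, but the exact names and implementation are assumptions for illustration only.

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class AnimalsController : ApiController
{
    private readonly AnimalsContext _db = new AnimalsContext();

    // GET api/Animals - synchronous version used in the initial tests,
    // returning the full collection (five rows in the test data).
    public IEnumerable<Animal> GetAnimals()
    {
        return _db.Animals.ToList();
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) _db.Dispose();
        base.Dispose(disposing);
    }
}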

My initial test was against the F1 free app tier for the Web app, with the SQL database running on S0 (10 DTUs). Here are the results of sending 500 requests per second for 1 minute.

Load test result on S0 (10 DTUs)

The API struggled to complete the full 30k requests over 1 minute (500 per second for 60 seconds), and only completed about 8k requests, with an average response time of 4638ms. Terrible, but then again we are running on very cheap, low-performance tiers. I had a look at the database performance stats and noticed that the DTUs were capped out at 100% during the 1 minute load test. At this point it definitely seems to be the database performance holding things back.

Scaling the database up to the S1 tier (20 DTUs) gives a definite improvement in response times and in the number of requests completed within one minute. Looking at the database performance stats in the portal, though, we can see that the DTUs are still maxing out at 100%.

Load test result on S1 (20 DTUs)

20 DTUs maxed out at 100%

At this point I decided I would increase database performance again, but throw more requests per second at the API (from 500/second up to 1000/second).

Scaling the database up to S2 (50 DTUs) and throwing more requests per second at the API, the total number of requests completed is now higher – up by about an extra 5k. Taking a look at the DTU performance stats, we can see they now max out at around 60%. At this point it is pretty clear that the database is no longer the bottleneck.

50 DTUs maxing out at around 60%, even with the requests per second doubled from 500 to 1000

Now I scaled the web app tier up from Free to B1 (Basic), which gives you 1 core, 1.75GB RAM, and up to 3 x instances scaled manually. I started with just the default 1 instance and ran the 1000 requests per second for 1 minute test again.

Test failed: error rate higher than 50%, due to timeouts

The results were pretty dismal compared to the free tier. In fact, the test failed due to an error rate of greater than 50% (all caused by timeouts). It is important to remember that we have not yet scaled out from the default 1 instance though.

Scaling up to 2 x instances on the B1 tier helped quite a bit. The test now completes, and has a much smaller timeout error rate. Many more responses were served, but response times were still quite slow. Taking a look at the distribution of CPU time over the two instances, we can also see that the traffic is indeed being split between the two instances we’ve scaled out to.

Scaling B1 Basic from 1 to 2 instances

Test finished, with a much smaller error rate

Processor time spread over the two instances during the load test

Taking this one step further to 3 x instances and re-running the test nets us the best result so far. No timeout errors, and a response time averaging around 3000ms. Much better, but still quite a high response time, and not all 60k requests are being served.

I scaled up to the B2 tier for the following run. Each instance has 2 x cores and 3.5GB RAM this time. Starting at 1 x instance, these higher-specification web instances now seem to handle things a lot better.

Little to no timeout errors, with about 5000ms avg response time, but using only 1 x instance this time!

Pushing things right up to 3 x instances (2 cores and 3.5GB RAM each) nets us the best result yet. The average response time is down to 1700ms and there are no timeout errors at all. The API was able to handle 49000 requests in the 1 minute test, which is the highest number of requests it has been able to handle so far.

B2 Basic test with 3 x instances: a good result

I scaled up to the B3 tier from here, and tried another few runs using 3 x instances (at 4 x cores and 7GB RAM each). This didn’t help things much, netting around 200ms better response time for a much pricier tier. It therefore looks like the sweet spot for this kind of work is to scale out with medium-sized instances (2 x cores each), rather than scaling up too much.

I changed the web app tier to S2 (Standard), which still gives 2 x cores and 3.5GB RAM per instance, but allows scaling out to up to 10 x instances. This time, running the test gave very similar results to the 3 x instance B2 run. Clearly, the web instances were no longer the bottleneck. Looking back at the database performance, I saw that the DTUs were maxing out at around 90%, so there must have been some throttling happening there now.

I changed the database DTUs to 100 using the S3 tier, and re-ran the test once more.

All 60k requests served

Bingo! We’re now managing to serve the test’s 1000 requests a second, and over the 1 minute test, we get all 60k requests served successfully, and have a reasonable average response time of roughly 300-400ms.

I made a quick change to the GET method for this endpoint so that it fetches items from the database asynchronously. Running the same test again now gets us all the way down to an average response time of just 100ms over the 60k requests in one minute. Excellent!
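
The change itself is small. A sketch of the asynchronous version of the action is below, assuming the hypothetical controller shown earlier; ToListAsync comes from EF6’s System.Data.Entity namespace.

using System.Collections.Generic;
using System.Data.Entity;      // for ToListAsync()
using System.Threading.Tasks;

// Replaces the synchronous GetAnimals() action shown earlier. The request
// thread is released while the SQL query is in flight, which helps
// throughput under heavy concurrent load.
public async Task<IEnumerable<Animal>> GetAnimals()
{
    return await _db.Animals.ToListAsync();
}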

100ms average response time result

As you can see, by running load tests like this and trying out different scaling options for the front end and back end, scaling each whenever the test results or performance metrics show a bottleneck, you can gradually determine the best specification for your database and web apps.

 

Working with the vCenter Server Simulator 5.5 – configuring custom ESXi hosts

October 18th, 2013 2 comments

Working with VCSIM (vCenter Server Simulator)

 

William Lam has done some excellent blog posts on using the simulator included with the VCSA (vCenter Server Appliance) to set up a simulated vSphere environment. Just the other day at VMworld Europe, he presented a session for vBrownBag entitled “NotSupported Tips/Tricks for vSphere 5.5”. In this session he introduced the new simulator, which he dubs “VCSIM 2.0”, the latest iteration included with the VCSA 5.5 appliance.

I had previously had a brief look at the VCSIM included with 5.1, but after seeing its limited functionality, did not pursue its use for development testing. However, after learning about the features introduced in VCSIM “2.0”, I just had to take a further look…

To see how to set up and start VCSIM, have a read of Will’s blog post here. At a high level, however, this is what you need to do to start the simulator with defaults:

  • Deploy and fully configure the VCSA 5.5 appliance. Make sure DNS (forward and reverse) is working and the embedded database is properly configured, otherwise the vpxa service will have trouble initialising
  • Ensure you have no issues with the embedded DB being reset (i.e. don’t do this on a production VCSA!)
  • SSH in to the appliance
  • Issue the command: vmware-vcsim-start default

 

Customising the default VCSIM ESXi host model

 

Today, I needed to replicate a certain condition in our lab environment. Specifically, I needed the ESXi hosts to have 32 CPU cores. By default the ESXi hosts that are simulated have 8 cores. I did a bit of digging around in the /etc/vmware-vpx/vcsim/model folder and figured out which files were referenced when launching the simulator with the default option. By default, the host model in the ESX50 folder is used, so naturally, in order to configure custom ESXi hosts, we need to edit the files within this folder.

Initially, I found one file, “HostHardwareInfo.xml”, and changed the CPU core count value to 32. This appeared to work – starting up the sim and looking at the Web Client, I saw that the simulated hosts were now showing 32 CPU cores. I also changed the RAM up to 32GB (from the default of 16) just to test another option, and this was also showing up. However, upon loading up the MOB (Managed Object Browser) and navigating to these hosts, I saw that the properties under the host summary->config->hardware were telling another story – they were still set to 8 cores and 16GB RAM. A little more digging revealed that another file, “HostListSummary.xml”, also needed to be updated.

So in order to setup your custom ESXi host models for the default VCSIM profile, make sure you update both of these files.

The files to update your default ESXi model

 

Here is the small change I made to increase the Host core count to 32 cores.

<cpuInfo>
    <numCpuPackages>2</numCpuPackages>
    <numCpuCores>32</numCpuCores>
    <numCpuThreads>4</numCpuThreads>
    <hz>2999654793</hz>
</cpuInfo>

And the data reflected in the MOB:

ESXi Host Hardware Summary

 

Changes as seen in the vSphere Web Client:

vSphere Web Client Host Hardware Summary

Make sure you back up these files before changing them, so that you can roll back if you need to. There are other ways of creating your own profiles for the simulator, but I could not find any documentation on how to create custom hosts; the only bits I could find related to creating your own datastores. You can also use the default profile template to create your own profile in its entirety, and this is a better long-term solution; however, to get things up and running quickly with the default profile, the above works nicely.

Note that all properties and methods pertaining to each managed object found in the API appear to be set up and created when using the VCSIM, so this makes a great development/testing/lab tool. Kudos to VMware for releasing this with the VCSA, and thanks to William Lam for pointing it out and blogging about it!

vMetrics for WordPress blogs updated to version 1.1

January 20th, 2013 No comments

I spent a little bit of time updating my vMetrics plugin for WordPress blogs. To give you a brief run-down, vMetrics allows you to display information from your VMware vCenter Cluster or ESX hosts / lab on your WordPress blog. It works with vSphere 4, 5 and 5.1.

 

 

In version 1.1 I have made the following changes:

Change log for version 1.1:

  • Added new metrics section for hardware information (Model and Vendor of first host in cluster – this is editable in the PowerCLI script)
  • Added configurable widget title section for Hardware
  • Updated the PowerCLI updater script to use a DO WHILE loop, allowing you to run the script once on a management machine and have it keep updating your blog’s vMetrics every 30 minutes (the script calls the blog once every half hour). Thanks @dawoo for the idea 🙂
  • Added PowerCLI section to send the vendor and model type of the first ESX host it finds back to vMetrics so that you can display this information in the widget too
  • Cleaned up PHP in main plugin code

You can take a look at the main plugin page here or use the links below to download the latest version right away. Installation and configuration steps can be found on the main plugin page.

Latest version downloads (get the plugin and updater script):

Download vMetrics Plugin for WordPress 1.1
Download vMetrics PowerCLI Updater script 1.1

Cosmosis (iOS) – updated to version 1.3

August 9th, 2011 No comments

So here’s an update that is slightly off my usual subject matter! I have been spending a little bit of time updating the 2D space shmup game I developed for iOS (iPhone, iPod touch, iPad). I finished submitting the update to Apple on 04/08/2011, and this morning I saw it has now been approved, so it is ready to be downloaded / updated from the App Store.

 

Bonus level 2 added in version 1.3

 

Here is a list of the most significant new features in version 1.3.

 

– Unlocked all levels by default
– New (Second) Bonus Level added
– New enemy ship type added
– New enemy ship attack patterns (more challenging/interesting)
– New scrolling level select screen
– Main menu redesigned and new ambient music added for menu and in game
– OpenFeint updated
– New News menu option added for the latest news and announcements
– Game difficulty tweaked to make it slightly more challenging
– Bosses are a bit tougher to fight now
– Survival mode difficulty tweaked
– New loading & credits screen
– App rating dialog that appears after a few days now works

 

So there is some new content, as well as better difficulty and more challenging enemies to fight. I also added an interesting new feature – the News screen. This integrates with OpenFeint and pulls down any news / announcements I make on my OpenFeint developer control panel into a custom-designed (Cosmosis-themed) News screen. It also updates the app’s badge icon according to the number of new (unread) announcements, and displays a small badge icon on the News menu in game. The main reason I developed this extra bit is that I would like to be able to notify users of any future apps I may release.

 

If you have any feedback or comments about Cosmosis, feel free to leave them below, or grab a copy and leave me a review on iTunes!

Now reading: Cocos2d for iPhone 0.99 Beginner’s Guide

January 17th, 2011 No comments

I was recently offered a copy of Pablo Ruiz’s “Cocos2d for iPhone 0.99 Beginner’s Guide” eBook to read through and provide comments / feedback on – needless to say, I was quite excited to get stuck in. However, I am still on holiday in South Africa, so for now I am just downloading the eBook and will save it for when I am back in the UK.

I actually can’t wait to have a read through. cocos2d is by far the most fun framework I have programmed with, and I’m sure this book will be a valuable asset.

You can grab a copy over at PacktPub if you are interested in learning about programming with (imo) the best 2D gaming engine for iOS. At the moment it is on special for around £25.00, which is not bad at all for a guide covering a lot of what cocos2d has to offer.

My first iOS game released on the App Store – Cosmosis

November 30th, 2010 2 comments

So I finally got my first game (and app) released on the App Store the other day. It is a 2D Space shooter called Cosmosis. Here is a feature / gameplay video and a link to the official App Store page. Check it out if you are into iOS games. It is compatible with the iPhone, iPod Touch and iPad!

The Official Game Page

Here are a couple of screenshots:

Cosmosis - Gameplay screenshot

Cosmosis - Gameplay screenshot 2

http://itunes.apple.com/us/app/cosmosis/id404662019?mt=8