Cheap S3 Cloud Backup with BackBlaze B2

white and blue fiber optic cables in a FC storage switch

I’ve been constantly evolving my cloud backup strategies to find the ultimate cheap S3 cloud backup solution.

The reason for sticking to “S3” is that there are tons of cloud storage services that implement the S3 API. Sticking to this means one can generally use the same backup/restore scripts with just about any service.

The S3 client tooling available can of course be leveraged everywhere too (s3cmd, aws s3, etc…).

BackBlaze B2 gives you 10GB of storage free for a start. If you don’t have too much to back up, you could get creative with lifecycle policies and stay within the 10GB free limit.

a lifecycle policy to delete objects older than 7 days.

Current Backup Solution

This is the current solution I’ve set up.

I have a bunch of files on a FreeNAS storage server that I need to back up daily and send to the cloud.

I’ve set up a private BackBlaze B2 bucket and applied a lifecycle policy that removes any files older than 7 days. (See example screenshot above.)
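
I set the rule up through the B2 web UI, but for reference, something similar can be scripted with Backblaze's b2 command-line tool. This is only a rough sketch; the bucket name is a placeholder and you should check b2 update-bucket --help for the exact syntax of your CLI version:

# Rough sketch: hide files 7 days after upload, then delete them a day later.
# Bucket name and rule values are placeholders.
b2 authorize-account <applicationKeyId> <applicationKey>
b2 update-bucket --lifecycleRules '[{"fileNamePrefix": "", "daysFromUploadingToHiding": 7, "daysFromHidingToDeleting": 1}]' your-bucket-name allPrivate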

I leveraged a FreeBSD jail to install my S3 client tooling (s3cmd), and mounted my storage into that jail. You can follow the steps below if you would like to set up something similar:

Step-by-step setup guide

Create a new jail.

Enable VNET, DHCP, and Auto-start. Mount the FreeNAS storage path you’re interested in backing up as read-only to the jail.
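
I did this through the FreeNAS UI, but if you prefer the shell, a roughly equivalent iocage setup might look like the following sketch. The release, jail name and dataset paths here are just placeholders; adjust them for your system and check the iocage man page for the exact property names your version expects:

# Rough sketch: create a jail with VNET, DHCP and auto-start enabled.
iocage create -n backupjail -r 11.3-RELEASE vnet=on dhcp=on bpf=yes boot=on
# Mount the FreeNAS dataset into the jail read-only (iocage should prepend the jail's root to the destination).
iocage fstab -a backupjail "/mnt/tank/mydata /mnt/backup-source nullfs ro 0 0"
iocage start backupjail
iocage console backupjail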

The first step in a clean/base jail is to get s3cmd compiled and installed, as well as gpg for encryption support. You can use portsnap to get everything downloaded and ready for compilation.

portsnap fetch
portsnap extract # skip this if you've already run extract before
portsnap update

cd /usr/ports/net/py-s3cmd/
make -DBATCH install clean
# Note -DBATCH will take all the defaults for the compile process and prevent tons of pop-up dialogs asking to choose. If you don't want defaults then leave this bit off.

# make install gpg for encryption support
cd /usr/ports/security/gnupg/ && make -DBATCH install clean

The compile and install process takes a number of minutes. Once complete, you should be able to run s3cmd --configure to set up your defaults.

For BackBlaze you’ll need to configure s3cmd to use a specific endpoint for your region. Here is a page that describes the settings you’ll need in addition to your access / secret key.

Once gpg is compiled and installed, you should find it at /usr/local/bin/gpg, so you can use this path in your s3cmd configuration too.
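
Once s3cmd --configure has been run, the relevant parts of ~/.s3cfg end up looking something like the lines below. The endpoint region shown here is just an example; use the S3 endpoint listed on your own B2 bucket's details page:

# Relevant lines from ~/.s3cfg (the us-west-002 region is an example only)
access_key = <your-b2-application-key-id>
secret_key = <your-b2-application-key>
host_base = s3.us-west-002.backblazeb2.com
host_bucket = %(bucket)s.s3.us-west-002.backblazeb2.com
gpg_command = /usr/local/bin/gpg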

Double check s3cmd and gpg are installed with simple version checks.

gpg --version
s3cmd --version
quick version checks of gpg and s3cmd

A simple backup shell script

Here is a quick and easy shell script to demonstrate compressing a directory path and all of its contents, then uploading it to a bucket with s3cmd.

#!/bin/sh
DATESTAMP=$(date "+%Y-%m-%d")
TIMESTAMP=$(date "+%Y-%m-%d-%H-%M-%S")

# Compress the current directory (run this from the mounted path you want to back up).
tar --exclude='./some-optional-stuff-to-exclude' -zcvf "/root/$TIMESTAMP-backup.tgz" .
s3cmd put "/root/$TIMESTAMP-backup.tgz" "s3://your-bucket-name-goes-here/$DATESTAMP/$TIMESTAMP-backup.tgz"
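
Since gpg is installed in the jail, you can also let s3cmd encrypt the archive on upload with the -e flag (it uses the gpg command and passphrase from your s3cmd configuration), and remove the local tarball once the upload succeeds. A small variation on the upload line in the script above:

# Optional: encrypt the archive with gpg during upload (-e), then tidy up the local copy.
s3cmd put -e "/root/$TIMESTAMP-backup.tgz" "s3://your-bucket-name-goes-here/$DATESTAMP/$TIMESTAMP-backup.tgz" && rm -f "/root/$TIMESTAMP-backup.tgz"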

Scheduling the backup script is an easy task with crontab. Run crontab -e and then set up your desired schedule. For example, daily at 25 minutes past 1 in the morning:

25 1 * * * /root/backup-script.sh

My home S3 backup evolution

I’ve gone from using Amazon S3, to Digital Ocean Spaces, to where I am now with BackBlaze B2. BackBlaze is definitely the cheapest option I’ve found so far.

Amazon S3 is overkill for simple home cloud backup solutions (in my opinion). You can switch to Infrequent Access or even Glacier tiered storage to get the pricing down, but you’re still not going to beat BackBlaze on pure storage pricing.

Digital Ocean Spaces was nice for a short while, but they have an annoying minimum charge of $5 per month just to use Spaces. This rules it out for me as I was hunting for the absolute cheapest option.

BackBlaze currently has very cheap storage costs for B2. Just $0.005 per GB per month for storage, and only $0.01 per GB of download (only really needed if you want to restore some backup files, of course).
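
For completeness, restoring is just the reverse of the backup script: pull the archive back down with s3cmd and extract it. A quick sketch (the bucket name and datestamps are placeholders):

# List the backups for a given day, download one, and extract it.
s3cmd ls s3://your-bucket-name-goes-here/2020-05-01/
s3cmd get s3://your-bucket-name-goes-here/2020-05-01/2020-05-01-01-25-00-backup.tgz /tmp/restore.tgz
mkdir -p /tmp/restore && tar -xzvf /tmp/restore.tgz -C /tmp/restore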

Concluding

You can of course get more technical and coerce a willing friend/family member to host a private S3 compatible storage service for you like Minio, but I doubt many would want to go to that level of effort.

So, if you’re looking for a cheap S3 cloud backup solution with minimal maintenance overhead, definitely consider the above.

This is post #4 in my effort towards 100DaysToOffload.

Scaling Web API 2 and back-end SQL databases in Azure

I recently created a small Web API 2 project running with a back-end SQL database (Entity Framework code first), and had it deployed to an Azure web app, along with Azure SQL.

Naturally, I started it off using the free web app and one of the cheapest possible Azure SQL tiers (S0 – 10 DTUs).

After I finished working on the API, I wanted to see what sort of performance I could get out of it, by using Azure’s various scaling options.

To test I used Loader.io. This is a really nice and easy-to-use load testing service by SendGrid Labs. The free edition allows me to set up various API endpoint tests and run many concurrent connections for up to 1 minute at a time.

All my tests below were done using the same GET request test. The request always returned a collection of 5 x objects from the /Animals endpoint to keep things consistent.

My initial test was against the F1 free app tier for the Web app, with the SQL database running on S0 (10 DTUs). Here are the results of sending 500 requests per second for 1 minute.

S0-10DTU-result

The API struggled to complete the full 60k requests over 1 minute, and only completed about 8k requests, with an average response time of 4638ms. Terrible, but then again we are running on very low performance, cheap tiers. I had a look at the database performance stats and noticed that the DTUs were capped out at 100% during the 1 minute load test. At this point it definitely seems to be the database performance holding things back.

Scaling the database up to the S1 tier (20 DTUs) gives a definite improvement in response times and number of requests able to be sent within one minute. If we look at the database performance stats in the portal, we can now see that the DTUs are still maxing out at 100% though.

S1-20DTU-result

20-DTUs-maxed out
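
As an aside, I did all of these scale operations through the Azure portal, but they can be scripted too. With the current Azure CLI, scaling the database tier looks roughly like this sketch (the resource group, server and database names are placeholders):

# Rough sketch: scale the Azure SQL database to the S1 (20 DTU) tier.
az sql db update \
  --resource-group my-resource-group \
  --server my-sql-server \
  --name my-api-db \
  --service-objective S1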

At this point I decided I would increase database performance again, but throw more requests per second at the API (from 500/second up to 1000/second).

Scaling the database up to S2 (50 DTUs) and throwing more requests a second at the API, the total number of requests completed is higher now – up by about an extra 5k. Taking a look at the DTU performance stats, we can see they now max out at around 60%. At this point it is pretty clear that the database is no longer the bottleneck.

50-DTUs-maxed out at 60% - even with doubling the requests per second from 500 to 1000

50-DTUs-maxed out at 60%

Now I scaled the web app tier up from free, to the B1 (Basic) tier, which gives you 1 Core, 1.75GB RAM, and up to 3 x instances scaled manually. I started with just the default 1 instance and ran the 1000 req/second for 1 minute test again.

boo-test-failed-error-rate-higher-than-50% due to timeouts

The results were pretty dismal compared to the free tier now. In fact the test failed due to an error rate of greater than 50% (all caused by timeouts). It is important to remember that we have not yet scaled out from the default 1 instance though.

Scaling out to 2 x instances on the B1 tier helped quite a bit. The test now completes, and has a much smaller timeout error rate. Many more responses were served, but the response rate was quite slow. Taking a look at the distribution of CPU time over the two instances, we can also see that the traffic is indeed being split between the two instances we’ve scaled out with.

scale-B1-basic-from-1-to-2-instances

yay-test-finished-with much smaller error rate

processor time spread over two instances during load test
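
The web app plan changes can be scripted in the same way. Here is a hedged sketch of switching the App Service plan to B1 and scaling out to 2 instances with the Azure CLI (again, the names are placeholders and I did this via the portal at the time):

# Rough sketch: move the App Service plan to the B1 tier and scale out to 2 instances.
az appservice plan update \
  --resource-group my-resource-group \
  --name my-api-plan \
  --sku B1 \
  --number-of-workers 2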

Taking this one step further to 3 x instances and re-running the test nets us the best result so far. No timeout errors, and a response time averaging around 3000ms. Much better, but still quite a high response time, and not all 60k requests are being served.

I scaled up to the B2 tier for the following run. Each instance has 2 x cores and 3.5GB RAM this time. Starting at 1 x instance and running the test on these higher specification web instances seems to now handle things a lot better.

Little to no timeout errors, with about 5000ms avg response time, but using only 1 x instance this time!

Pushing things right up to 3 x instances (2 cores and 3.5GB RAM each) nets us the best result yet. The average response time is down to 1700ms and there are no timeout errors at all. The API was able to handle 49000 requests in the 1 minute test, which is the highest number of requests it has been able to handle so far.

B2-basic-test-with-3x-instances-good-result

I scaled up to the B3 tier from here, and tried another few runs using 3 x instances (at 4 x cores and 7GB RAM each). This didn’t help things much, netting around 200ms better response time, for a much pricier tier. It therefore looks like the sweet spot for this kind of work is to scale out with medium sized instances (2 x cores each), rather than scaling up too much.

I changed the tier to S2 (2 x cores and 3.5GB RAM each, but allowing up to 10 x instances scaled out), and this time running the test gave very similar results to the 3 x B2 instances. Clearly, the instances were now no longer the bottleneck. Looking back at the database performance, I saw that the DTUs were maxing out at around 90%, so it was clear that there must have been some throttling happening there now.

I changed the database DTUs to 100 using the S3 tier, and re-ran the test once more.

bingo-60k-requests

Bingo! We’re now managing to serve the test’s 1000 requests a second, and over the 1 minute test, we get all 60k requests served successfully, and have a reasonable average response time of roughly 300-400ms.

I made a quick change to the GET method in the API for this endpoint to gather items from the database asynchronously, and running the same test again, now gets us all the way down to an average response time of just 100ms over the 60k requests in one minute. Excellent!

100ms-test-result

As you can see, by running load tests like this, trying out different scaling options for the front end and back end, and scaling whichever side your test results and performance metrics show to be the bottleneck, you can fairly quickly determine the best specification for your database and web apps.

 

Simple Content Delivery Network (CDN) using Amazon AWS (S3 + CloudFront)

 

Content Delivery Networks

Having a content delivery network has many benefits for your users or clients. One of the most obvious reasons for having a CDN is the ability to serve content to your users from multiple (often the most optimal) locations. Users access files that originate from one source location, but the content is delivered from the closest edge location(s), usually with the lowest latency and highest possible speed.

Using Amazon CloudFront, you can share dynamic, static, or even streamed content to users (including full websites), using Amazon’s global network of edge locations. This means that content can be served to users at the highest possible speeds, with the lowest possible latencies. In this blog post, I will cover the steps you need to take to deploy a basic CDN using Amazon AWS. For this purpose, we will leverage a combination of Amazon S3 + CloudFront.

 

Setting up Amazon S3

Amazon S3 (Amazon Simple Storage Service) is essentially Amazon’s “storage for the Internet”, and as explained above, CloudFront is a content delivery network service. As such, both products sit in Amazon’s “Storage & Content Delivery” stack.

 

  • To get started you will of course need an Amazon AWS account. Go to http://aws.amazon.com/ and register. You will need to provide credit card details, but most products have some sort of free tier that you can utilise for initial testing (usually free for up to 1 year, based on certain utilisation thresholds).
  • Once you are all signed up, you’ll need to navigate to the AWS Web Console. This is the central location you can use to manage all AWS services (among other options such as the AWS SDK and Command Line).

aws-console-example
The central, AWS Web Management Console

  • To start, we’ll need to define an origin location for our content. This is the location our original files are kept. For this purpose, we will use Amazon S3. It allows us easy access to files that we place in something Amazon call a “bucket”. I like to think of it as a folder, or container. You can have as many buckets as you wish, however each one’s name needs to be completely unique across Amazon S3. Click on “S3” under the “Storage & Content Delivery” heading of your AWS Console to get started.
  • From here, you will be greeted with a welcome page and some explanation of what S3 is. Simply click “Create Bucket” to get going.

create-bucket

 

  • Provide a unique bucket name, and specify a region to use. Regions have the benefit of allowing organisations to comply with storage regulation rules – for example, if you were storing client data that you were legally bound to keep within the EU, you could specify the Ireland region.

new-bucket

 

  • Your new bucket will appear in the S3 Management Console after being created. Simply click the name of the bucket to open it. For our simple CDN, we’ll just be serving up one single file – pretend this was a really large file that needed efficient distribution to many people – for example a large media file. At the top left, you’ll see an “Upload” button. Click this, and choose a file to upload as your test file. I will be using a simple image file. (By the way, Amazon have a service called “Amazon Import/Export”, which allows you to send really large amounts of data via post on portable media to Amazon for them to upload directly to your Amazon S3 or Glacier services).
  • Click “Start Upload” once you have chosen a file to test with.
  • After the file is finished uploading, it will appear in the console under your bucket name. (I called mine “image-for-distribution.png”).

example-file-in-bucket

 

  • Right-click the file, and choose the option “Make Public” for this test. This choice would be affected by the nature of the files you would want to deliver to users in your own configuration, but for this simple example, this is what I am choosing.
  • Right-click the file again, and choose “Properties”. Here you can get the direct, public link to your file and test access to it in your web browser. This is simple, direct access, and is not the access we are aiming for, as we will utilise our CDN with CloudFront to serve the file in our final configuration. This is just to test that the direct link is working. (The same S3 steps can also be scripted with the AWS CLI; see the sketch after this list.)

aws-file-properties
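
If you'd rather script these S3 steps than click through the console, the AWS CLI can do the equivalent. A rough sketch, assuming a bucket name, region and file name of your own choosing:

# Rough equivalent of the console steps above (bucket name, region and file are placeholders).
aws s3 mb s3://my-unique-cdn-origin-bucket --region eu-west-1
aws s3 cp image-for-distribution.png s3://my-unique-cdn-origin-bucket/ --acl public-read
# Direct S3 URL to test with, before CloudFront is put in front of it:
# https://my-unique-cdn-origin-bucket.s3.amazonaws.com/image-for-distribution.png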

 

Setting up CloudFront and your Distribution

  • Now that we know our basic file is being correctly served from Amazon S3, we’ll navigate to “CloudFront” from the main AWS Console (aws.amazon.com). A quick way to get there is by clicking the orange cube icon in the top left of your AWS page – wherever you are in the console, it’ll take you back to the main AWS console. From there just click “CloudFront“.
  • In CloudFront, we’ll want to create something called a “Distribution“. Click the “Create Distribution” button to get started.

create-distribution

 

  • Make sure you select “Download” type for the “delivery method” when asked on the next page, then click “Continue“.

cloudfront-delivery-method

 

  • We’ll now select various options for our CloudFront Distribution (a rough CLI equivalent is sketched below, after the settings screenshots).
    • For “Origin Domain Name“, click the text box and you’ll see a populated list of Amazon S3 buckets. Your bucket you created earlier should feature here. Click it to select it.
    • The “Origin ID” should auto populate based on your S3 bucket name you chose.
    • If you wish to restrict users to only access your content via CloudFront URLs, and not direct by S3 URLs, then choose “Yes” for “Restrict Bucket Access“.
    • If you chose “Yes” for restricting bucket access, you’ll also need to create a “Comment” and “Grant Read Permissions” on the bucket for CloudFront’s access to the S3 bucket. Click “Yes, Update Bucket Policy” to have CloudFront get read access automatically to the S3 bucket.
    • Select “HTTP and HTTPS” for “Viewer Protocol Policy“.
    • You can customise the object caching properties if you wish, but for this example, just leave the “Default Cache Behavior Settings” on their defaults.
    • Now you can set your “Distribution Settings“. Choose “Use All Edge Locations (Best Performance)” for “Price Class“. This will ensure that all edge locations around the world are used to distribute your content in the fastest, most efficient way to your users. You could also restrict this to other groups of regions e.g. only the US and Europe for example – this would be a cheaper option, but not as efficient for all users globally.
    • Next, we can add an alternate CNAME for the distribution. This is highly recommended so that you can provide your own domain-name formatted URLs to users, instead of a long, ugly default Amazon CloudFront URL. Enter something now (for example, I will use cdn.shogan.co.uk, as I own the domain and can create this CNAME record myself in DNS). Once you have completed this distribution setup, take the domain name that CloudFront assigns to your distribution and point a new CNAME record at it.
    • Leave all other options at their defaults for now, and make sure that the last option “Distribution State” is “Enabled“, then click the “Create Distribution” button at the very bottom.

example-distribution-settings1 example-distribution-settings2
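
For reference, a distribution with an S3 origin can also be created from the AWS CLI. The simplified form below applies default settings rather than all of the options discussed above (the bucket name is a placeholder); for full control over the settings you would pass a JSON configuration via --distribution-config instead:

# Rough sketch: create a CloudFront distribution with the S3 bucket as its origin.
aws cloudfront create-distribution \
  --origin-domain-name my-unique-cdn-origin-bucket.s3.amazonaws.com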

  • Your Distribution should now be created. Use the Navigation menu on the left side of the screen and click “Distribution” to see a list of your CloudFront Distributions.

cloudfront-distributions

 

  • At first the “Status” will show “InProgress“. After a few minutes this should change to “Deployed“.
  • In the meantime, look for the “Domain Name” that this Distribution has been assigned, and create a CNAME record pointing the CNAME you specified when creating the distribution at that domain name. For example, you may have something like dxxxxxxxxxm.cloudfront.net. In my case, I specified a CNAME of cdn.shogan.co.uk, so I will create a CNAME record linking these together.

 

Testing

Once your CNAME record is created, type in your new CNAME, followed by a forward slash, and then the name of the file you originally uploaded to the S3 bucket that this CloudFront distribution is linked to. For example, my file was called “image-for-distribution.png” and the CNAME record I made is cdn.shogan.co.uk, so to utilise my CloudFront CDN I would simply access the file as “cdn.shogan.co.uk/image-for-distribution.png”. If your DNS takes a while to apply/propagate, you can simply use the CloudFront domain name assigned to your Distribution (for example dxxxxxxxxxxm.cloudfront.net/yourfilename.extension) to test out your distribution. Remember to ensure your distribution is in a deployed state before testing. You should now see your file served up in your web browser via your brand spanking new Amazon AWS powered CDN!
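
A quick way to confirm the file is really being served via CloudFront rather than directly from S3 is to inspect the response headers, since CloudFront adds headers such as X-Cache and Via. For example, using my CNAME (substitute your own):

# Look for X-Cache / Via headers in the response to confirm CloudFront is serving the file.
curl -I http://cdn.shogan.co.uk/image-for-distribution.png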

 

Conclusion

That concludes the basic setup of an Amazon S3 + CloudFront powered Content Delivery Network. I hope this was useful for some. In forthcoming blog posts I will delve into setting up custom logging and monitoring / alerting for your CDN. Please remember to like/share/tweet this post out to friends if you thought it was useful.

 

 

Cloud Credibility challenges – blogging about my team members

So there is a fun website called “CloudCred” that allows individuals or teams to participate in various tasks and challenges – everything from technical challenges to social and fun ones is covered, and it is quite a good team building exercise, quite apart from the competitive leaderboard aspect!

One of the tasks is to blog about my team members and include links to their own blogs. We have quite a few team members so I can’t cover all of them, but here goes:

Of course this task is for our team – Xtravirt Limited, so we also have a company blog you can go and visit for some excellent content around the Cloud and Virtualisation industry.

The latest trends in VMware and Cloud Computing

cloud computing

 

VMware promotes virtualization as a catalyst for cloud computing. Many cloud infrastructures are built on and powered by VMware. VMware allows IT professionals to build solutions that are specifically tailored to a client’s individual needs. Internal and external clouds may be created to handle the needs of a growing business. Hybrid clouds are growing in popularity for businesses that want the convenience of both. Here are some of the benefits of VMware cloud virtualization:

 

  • Efficient Processes. VMware makes it possible to automate processes and employ utilization to increase IT performance. When IT professionals leverage existing resources and avoid expenses related to infrastructure investment, the total cost of ownership (TCO) is reduced tremendously.
  • Agility. End-users gain a more secure environment with cloud computing. With VMware, IT professionals can be assured that they will preserve IT authority, control and security while remaining compliant. Processes are also simplified to make the job easier. An IT organization is able to respond quickly to organizations with evolving business needs.
  • More Flexibility. IT professionals can use VMware in conjunction with traditional systems for maximum flexibility. The systems may be deployed internally or externally. When configuring VMware, IT professionals are not limited to using any one vendor or technology. The solutions are portable and are capable of using a common management and security framework.
  • Better Security. VMware solutions protect end-points, the network edge and applications through virtualization. The cloud based deployments of security patches and solutions are dynamic and constantly being updated.
  • Automation and Management. With VMware, a highly efficient, self-managing infrastructure can be created. Business rules and policies can be mapped to IT resources when the tools are virtually pooled.
  • Portable and Independent. Open standard VMware solutions provide more flexibility and reduce the dependence on a particular vendor. With this security model, applications are easily portable from internal datacenters to external service provider clouds. The applications are also dynamic, optimized and deployable on public clouds with VMware cloud application platforms.
  • Saves Time. A self-service cloud-based portal is capable of reducing time spent by deploying standardized solutions that have been pre-configured to operate off-the-shelf or out-of-the-box. This method promotes efficiency through automation and standardization. Tailored services are also popular and can be achieved with VMware solutions. IT can remain in compliance and preserve control over policies with VMware.
  • Virtual Pooling and Dynamic Resource Allocation. Virtual datacenters are created by pooling IT resources through abstraction. Logical storage building blocks, server units and network are integrated into the solution to power applications. This process is completed in accordance to regulations and business rules. User demand also plays a role in how these applications are deployed and hosted.

 

How Businesses are using VMware to transition to the Cloud

Dynamic businesses need a robust and affordable IT solution. In a traditional system, most businesses spend around 70 percent of their IT resources on maintaining servers and applications. With only 30 percent of the IT budget left for innovation, companies cannot grow and provide the type of service and products their clients need and desire. IT management is searching for a better strategy, and VMware seems to be a viable solution.

VMware provides users with faster response times. Faster response times lead to lower costs over time. Self-managed virtual infrastructures are efficient and preferred by many businesses.

IT professionals can identify which cloud-based solution is best for your company. The choices typically consist of a public, private or hybrid solution. Many companies have successfully implemented these solutions.

VMware’s cloud infrastructure and management application is commonly known as vCloud Director. This application allows a company to transition to the cloud at its own pace. The application was introduced in 2011 to provide companies with greater flexibility and efficiency in the cloud.

VMware’s solution gives companies the ability to leverage their existing infrastructure. This saves business owners significant time and money, which can then be reinvested in innovation. VMware’s cost-effective solution provides an answer to the pre-existing problem of spending 70 percent of the budget on infrastructure maintenance.

NetApp has exceptional backup and recovery capabilities that are necessary for any company’s disaster recovery solution. Within minutes, VMware’s vCloud Director can recover data. The backup and recovery system is customizable, fast and accurate.

NetApp and VMware have global staff monitoring the applications and data stored in the cloud 24 hours a day, seven days a week, which ensures the data is protected. Technical support constantly works with all parties to ensure issues are addressed promptly and efficiently. Additionally, VMware ensures that resources are available to meet service level agreements.

 

Consider How VMware Can Help Your Organization

VMware is a viable solution that can be beneficial in any organization. Consider VMware for your business and witness an increase in productivity, efficiency and mobility. VMware solutions are chosen frequently because they work.

 

Author Bio:

David Malmborg works with Dell. When David is not working, he enjoys spending time with his two kids. For more information on cloud computing, David recommends clicking here.