Backblaze storage pods – excellent value-for-money storage in the datacenter

I know this is old news now, but a while back I came across a blog post by the company Backblaze. They detail how they build custom “storage pods” that get rack-mounted in their datacenter for online storage. In the post, they show how this approach saves them a huge amount of money that would otherwise have been spent on Amazon S3 storage or on EMC / Dell or Sun solutions. Each storage pod can be seen as one building block of a much larger storage solution.

I think this design is great, and if I had the space / resources I would definitely attempt one of these as a project for myself. To quote their site, the storage pods contain the following hardware:

“one pod contains one Intel Motherboard with four SATA cards plugged into it. The nine SATA cables run from the cards to nine port multiplier backplanes that each have five hard drives plugged directly into them (45 hard drives in total).”

Here is a YouTube video showing the design of one storage pod.

https://www.youtube.com/watch?v=Wm7Rp5u8Q1g&feature=player_embedded

Read more at the Backblaze blog.

My workspace and hardware zen

Everyone has their own relaxation or zen area where they like to spend time getting away from reality and de-stressing. One of mine happens to be the same place where I get a lot of work done – my main gaming platform and home office area! Since we moved into our new flat, I found there wasn’t much space to set up my PC, so last weekend I whipped out the old jigsaw and sliced a couple of inches off the side of my PC desk to get it to fit into this corner.

I then decided to neaten up and organise everything a bit to improve my working conditions when I work from home. I made a “ghetto” iPhone dock out of the packaging the phone came in, using the plastic dish the phone is cradled in. I cut out a small area at the bottom for the iPhone connector to fit in, then routed the cabling into the box itself, which sits diagonally in the lid of the box, flipped upside down. The cable then comes out the back and plugs into the power socket behind my desk. This keeps the cabling nice and neat, and I just plonk the phone down into the dock for a charge when I get home. I don’t need a USB connection to the PC as I have SSH enabled via a jailbreak – I therefore use Wi-Fi and WinSCP, or SCP from PuTTY, to transfer files between PC and phone.
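
As an aside, a one-off transfer from the Windows side looks something like this with PuTTY’s pscp tool (the phone’s IP address and the paths here are placeholder examples – a jailbroken iPhone’s SSH user is root):

    pscp -scp C:\Music\song.mp3 root@192.168.1.50:/var/mobile/Media/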

Behind this is my touch-sensitive desk lamp, and in front of the dock is my work IP phone, which connects to our VOIP server. Then we have my main PC, which consists of the following: an Asus P45 P5Q motherboard and an E8400 3.0GHz Core 2 Duo CPU, overclocked to 3.6GHz in summer and 4.0GHz in winter; 4GB of OCZ DDR800 RAM running at DDR1000 speeds; and an ATI HD 4870 graphics card with a custom-flashed BIOS that overvolts the GPU and applies a generous overclock. I used to have a nice quiet watercooling loop in the PC, but sold it recently and went back to air cooling. I plan on doing another watercooling build soon and will hopefully post the process and worklog here when I do. The other peripherals consist of a Dell 24″ LCD (1920×1200), a G15 keyboard and a Logitech MX518 mouse.

I use this PC for just about everything – all my PC gaming, web browsing, a little bit of programming, and virtualisation: on top of Windows 7 Professional it runs VMware Server 2.0 with a variety of guest VMs that I use for testing and practising various Windows and Linux server technologies.

Other hardware I have lying around includes an old Dell PowerEdge 2U server running VMware ESX 3.5, and a Dell Optiplex machine running Ubuntu 8.04 with VMware Server 2.0 for Linux; on top of that runs a guest VM with Ubuntu Server 9.04, which hosts this very website.

Anyway, here are a few photos of my nice clean new workspace.

Installing VMware ESX using a Dell DRAC card

Here is a how-to on installing VMware ESX 3.5 using a DRAC (Dell Remote Access Controller) card to access the server. I was installing a new cluster in a Dell M1000e Blade Centre for work the other day and wrote up this process so that it’s documented for anyone else doing it in the future.

Just for interest’s sake, the basic specs of the system are:

1 x Dell M1000e Blade Centre
3 x Redundant 2000w+ Power supply units
16 x Dell M600 Blades (each one has 2 x quad-core Xeon CPUs and 32GB RAM).

1. Connect to the M1000e’s chassis DRAC card.
a. Browse to the chassis DRAC card’s web interface (https://x.x.x.x) – use the IP for that particular blade centre’s DRAC card.
b. Log in with the agreed DRAC user credentials, or if this is a new setup, the defaults are username: root, password: calvin.

[Image: login_drac]

2. Select boot order for Blade and power it up
a. Choose the Blade server number that you will be working with from the Servers list on the left side.
b. Click on the Setup tab, choose Virtual CD/DVD as the first boot device, then click Apply.
c. Select the Power Management tab and choose Power on, then click Apply.

[Image: configure_boot_order_for_blade]
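
As a side note, the chassis can also be driven from the command line with Dell’s racadm tool if you have it installed. Something along the lines of the sketch below should power up a blade remotely – note that the module name (server-1) and exact flags vary between CMC firmware versions, so check the racadm documentation for your version:

    # power on blade 1 via the chassis CMC (x.x.x.x = CMC IP, default credentials shown)
    racadm -r x.x.x.x -u root -p calvin serveraction -m server-1 powerup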

3. Go to iDRAC console of the blade server
a. Click on Launch iDRAC GUI to access the iDRAC for the blade you have just powered on.
b. You will need to log in again as this is another DRAC we are connecting to (this time the DRAC is for the actual blade server, not the chassis).

[Image: launch_idrac_gui]

4. Configure Mouse
a. Click on the Console tab near the top of the screen, then click the Configuration button.
b. In the mouse mode drop-down, select Linux as the mouse type, then click Apply.

[Image: configure_mouse]

5. Launch Console viewer
a. From the console tab we can now select the Launch Viewer button.
b. An ActiveX popup might appear – allow it access, and the DRAC console should appear with the server in its boot process.

6. Mount Virtual ISO media for installation disc (ESX 3.5)
a. Click on Media, and then select Virtual Media Wizard.
b. Select ISO image and then browse to the ISO for ESX 3.5 – this could be on your local drive or a network share.
c. Click the Connect CD/DVD button to mount the ISO.
d. With the boot order configured earlier, the server should now boot off this ISO. (*Optional* You can always press F11 whilst the server is booting to choose the boot device manually.)

[Image: attach_virtual_media_iso]

7. Reboot the server if the boot from virtual CD/DVD has already passed
a. Go to Keyboard – Macros – Alt-Ctrl-Del to do this.

8. ESX install should now start.
a. Press Enter to go into graphical install mode.
b. Select Test to test the media (The ISO should generally be fine).
c. Select OK to start the install.
d. Choose the United Kingdom keyboard layout (or whatever keyboard layout you use).
e. Leave the mouse on generic 3 button USB.
f. Accept the license terms.

[Image: esx_install_start]

[Image: esx1]

9. Partitioning
a. For partition options, leave on “Recommended”. It should now show the Dell virtual disk of 69GB (in this case) or the Dell RAID virtual disk / disk configuration.
b. Say “Yes” to removing all existing partitions on the disk. (That is, if you don’t mind formatting and completely clearing out any existing data that may be on this disk.)
c. Alter the partitions to the best-practice sizes shown after this list. (See http://vmetc.com/2008/02/12/best-practices-for-esx-host-partitions/)
d. Note: it doesn’t matter if some of these sizes are 2-3MB out, as the installer deviates from them slightly. The swap partition should be 1600MB minimum though.
e. The next page is Advanced Options – leave as is (Boot from SCSI drive).
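
For reference, the best-practice sizes from that link are roughly along these lines – I’m quoting these from memory, so double-check them against the linked post before applying:

    /boot – 250MB
    swap – 1600MB
    / (root) – 5GB
    /var/log – 2GB
    vmkcore – 100MB
    remaining space – VMFS3 (local datastore)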

[Image: esx_partitions_recommended]

10. Network Configuration
a. Set up the network configuration.
b. IP address (x.x.x.x) – whatever IP you are assigning this particular ESX Host.
c. Subnet mask: 255.255.255.0 for example.
d. Gateway:  Your gateway IP address (x.x.x.x)
e. Primary DNS:  (x.x.x.x)
f. Secondary DNS: (x.x.x.x)
g. Hostname: set a proper hostname rather than the default localhost.localdomain – for example, ESXhost01.shogan.
h. VLAN ID – Leave this blank if you are not using VLANs. If you are, then specify the VLAN here.
i. Create a default network for virtual machines – Unless you have a specific network configuration in mind leave this ticked on.
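
Once the host is built, these settings can be sanity-checked from the service console using standard ESX 3.x commands (run as root):

    # list the service console interfaces and their IP settings
    esxcfg-vswif -l
    # show the service console routing table and default gateway
    route -n
    # confirm the DNS servers in use
    cat /etc/resolv.conf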

11. Time zone
a. Set Location to  your location.
b. Leave “System clock uses UTC” ticked.

12. Root password
a. Set the root password. (This is your admin password!)

13. Finish installation
a. Next page is “About to Install”
b. Check the information is all correct and click Next if all looks fine.

14. Change boot order back and restart the blade server.
a. Via the iDRAC page, change the boot order back to Hard disk for the blade so that it will reboot using the server’s RAID hard disks instead of the ISO.
b. Reboot the host by pressing the Finish button back in the console.
c. Disconnect the Virtual CD from the Media option in the console menu.
d. Watch the console while the server reboots to ensure no errors are reported on startup.

If all went well, you should now have an ESX Host booted to the console. Press Alt-F1 to access the command line (you will need to log in as root or any other user you set up).
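
A couple of quick commands at this point will confirm the install went through cleanly (again, standard ESX 3.x service console tools):

    # report the installed ESX version and build
    vmware -v
    # list the physical NICs the host has detected
    esxcfg-nics -l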

You can now access your server via a web browser (https://x.x.x.x). From here you can download the Virtual Infrastructure client to manage the ESX Host.

This host could now be configured further and added to an ESX cluster, for example. SAN storage could be assigned and vMotion set up so that HA (High Availability) and DRS (Distributed Resource Scheduling) can be put to good use!

E8400 Gaming rig build

This is an old post from my other site. I thought, as it was IT-relevant, I would clone the small write-up I did across to this blog…

I recently bought myself a new rig, consisting of a Coolermaster CM-690 and the following hardware:

Asus P5Q P45 Pro motherboard
Intel E8400 overclocked to 3.6GHz 24/7
OCZ 2GB ATI Heatspreader RAM DDR800 4-4-4-12
Sapphire ATI HD 4870 512MB GDDR5 Graphics card
OCZ GameXstream 600w Power supply
Western Digital 750GB SATAII Hard drive
Logitech G15 Keyboard (orange backlight model)
Logitech MX518 (a 5-year-old mouse that has travelled the world with me!)

For display I chose a 24″ Dell LCD with a native resolution of 1920×1200 and 6ms response time.

My ultimate goal was to build a faster, cooler and quieter PC than the previous one I had in S.A.

Right, so in my last rig I had the pre-built CM-690 L-shaped window panel. This came with the chassis when I bought it, so I was pretty lazy and didn’t change anything. I also had a Coolermaster Aquagate watercooling unit that fitted in 2 x optical drive bays, with the pump, radiator and everything incorporated, cooling the E8200 in the old rig. Temperatures were not much better than with the Zalman 9700LED I used to have on it, and it was quite messy. I also didn’t enjoy the tiny tubing that unit used, hence my choice of a custom kit with 1/2″ diameter tubing for this project. I had never built myself a custom watercooling system, so this would be my first. It would also be the first batch of modding I had done in about 10 years, barring the odd LED and minor case mods here and there. (The last mod I did was on an AMD K6-2 333MHz in an AT case many, many years ago!)

Anyway, here is an image of the final product (case cut, window installed, hardware assembled and modded to fit the watercooling gear, cables neatened, and basically everything finished barring the watercooling of the graphics card).

[Image: final-1]

[Image: night-shot]

I cut a rough pattern out of the top with my jigsaw; this is where the radiator would be fitted:

[Image: case-cut]

I cable-sleeved most of the loose / visible wiring throughout the chassis:

[Image: cable-sleeving]

Next to be cut was the side panel – I masked off the area to be cut and used the jigsaw once again:

[Image: perspex]

This is the box of goodies (watercooling hardware) I ordered from Specialtech:

[Image: goodies]

The waterblock for cooling the CPU:

[Image: cpu-block]

Shortly after finishing the water components and tubing, I started the system up for leak testing…

[Image: test-run]

A few weeks later the graphics card was ready to be added to the watercooling system. This is a Sapphire ATI HD 4870 512MB (GDDR5) card. I had to remove the stock air cooler and apply some fresh thermal compound – I used Zalman STG-1 thermal paste for this.

4870-air

Here the card is naked, with the old thermal compound still on the GPU. The card needed to be cleaned with some pure alcohol to remove the old thermal paste.

[Image: 4870-naked]

Everything installed, with Feser One non-conductive cooling fluid in the loop and the system up and running:

[Image: final-2]

A small update on this build.

Since the original work was finished, I have upgraded the RAM, adding another 2GB of OCZ RAM for a total of 4GB. I also pushed my original overclock a bit further and now run the FSB at 445MHz with a CPU multiplier of 9x, giving a total of 4.0GHz on the E8400. The RAM runs on a 2x multiplier, overclocking the four modules to an effective 890MHz with timings of 4-4-4-12. My Vcore setting for the processor is at around 1.375 volts, and my RAM is sitting at 2.2 volts, which is what I consider a safe 24/7 setting for RAM modules cooled by passive heatsinks. The FSB voltage is set to 1.16 volts for the increased FSB speed to hold stable. I also flashed the 4870’s BIOS with a custom image that sets the card’s default core speed to 795MHz (from a default of 750MHz) and the memory to 1100MHz (from a default of 900MHz). I then use Catalyst Control Centre to push the core up further to 830MHz for gaming. The PC now runs at these speeds 24/7 and has no stability issues.
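
For anyone checking the maths: 445MHz FSB x 9 CPU multiplier = 4005MHz, which rounds to the 4.0GHz figure, and 445MHz FSB x 2 RAM multiplier = 890MHz effective on the DDR2.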