Steve's blog

A blog about what's going on in Analysis UK...

Why you should do other stuff...

As a software developer it's all too easy to turn up to work, do what's required, go home in the evening and once a month pick up your pay. Just where does that get you?

Well, maybe you're happy doing that. Then 5-10 years down the line you find you've been doing the same thing, the same way, never really pushing the boundaries or questioning whether what you are doing is the best way to do it. Net result: you've failed yourself and your employer, and if you end up needing a new job, well, we all know how picky the tech industry is about having a perfect match of acronyms on your CV.



Ever since I had my first computer I've always had a personal project, be it software or hardware. These projects exist to solve problems I have; every now and then one of them makes it out the door and I share it with the rest of the internet.

Ten years ago I was a lab rat, working as a scientist testing blood gas analyzers and pH meters. In my spare time I started buying bits from the US (I'm in the UK), and figuring out the cost in pounds wasn't that easy (open calculator, work out if it's divide or multiply, press buttons, hit equals). So I started a currency conversion site, mainly for myself and the 10 other people who initially visited it every month. That's now grown a lot, but it is still true to its roots: an easy way to calculate the cost of something in your local currency.
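The divide-or-multiply confusion comes down to which way the exchange rate happens to be quoted. A minimal sketch of the arithmetic (the rates below are made-up examples, not real data):

```python
# Converting a US price into pounds: whether you multiply or divide
# depends on which way the rate is quoted. Rates here are made-up.

def usd_to_gbp(usd, gbp_per_usd):
    """Rate quoted as pounds per dollar: multiply."""
    return usd * gbp_per_usd

def usd_to_gbp_inverse(usd, usd_per_gbp):
    """Rate quoted as dollars per pound: divide."""
    return usd / usd_per_gbp

# Both quote styles describe the same rate, so the answers agree.
print(usd_to_gbp(100, 0.65))
print(usd_to_gbp_inverse(100, 1 / 0.65))
```

Trivial, but exactly the sort of thing that's easier to get right once on a website than every time on a calculator.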

If I hadn't started that site I wouldn't have touched any web programming. In 2002 I went a step further and built a beast of a site in PHP. That taught me one thing: I'm not a fan of lots of PHP. It's great to get started with, but once things get busy, oh boy, it's not so great.

The thing is, during the time I was building those sites my day job involved connecting scientific instruments to computers, one way or another. I'd never have touched web programming if I'd only done my day job. And you know what? That would have been a huge mistake. I've learnt so much from escaping the binds of the day-to-day work and trying new things.

I can't emphasise enough how eye-opening it is as a developer to release a product or service yourself. It's all too easy to sit in our cubicles wearing our Dilbert curled-up ties and laugh about marketing and sales, but until you've actually tried it and seen the results you will never understand how frustrating it can be, or how amazing it feels to see people using YOUR product, how strange the things your users do truly are, and the actual challenges your product faces on the internet, not just the perceived problems where you think you need to shave a microsecond off some routine that it turns out people don't use anyway.

Fast forward 8 years: I've recently transitioned Dollars2Pounds and the network of other exchange rate sites from a dedicated server host onto Amazon's AWS/EC2(1).

Over the years my websites have grown. More sites followed shortly after the original, as well as Yuan, Won and Rupee sites; combined, these make a network of around 23 websites.

When these sites were in PHP I was maintaining a core set of functionality plus some individual domain-specific pages. Over the years I've put a bit of work in here and there (for the most part the sites just run themselves) and moved over to ASP.NET (thanks to listening to DotNetRocks), consolidating everything into one single codebase with just a separate configuration file, database and installer for each site.

That's been the story for about a year now. My builds were taking over 25 minutes to build just the installers (3 minutes for the main app), and the web MSI installer that comes as part of VS2008 has problems: I found that installing upgrades would often not update files, so upgrading the websites involved running uninstall on 23 MSIs, installing 23 new MSIs, and then saving them away to be able to automate the uninstall next time around.

I also have a CCNET project that goes and runs some basic tests on each of the 23 websites to ensure that the correct configuration got installed, the site is performing ok and is not down.


So here's the problem:

I was kind of happy with that. It wasn't ideal: I didn't like the 30-ish minutes between checking in a change and being able to start a roll out, not to mention the trouble of updating 23 different websites, but it worked.

More and more I see company-specific sub-domains on websites (my FogBugz bug tracking, for instance) and I'm a big fan of that style. It makes me feel special and unique, and I know from Spolsky's ramblings that I have my own little database for my bugs, so a bug in the FogBugz code base isn't likely to show another company or competitor my bugs.

I had always wondered how sub-domaining a website was done. Was a new IIS/Apache website created, the appropriate files copied over and a new database created in the background (which, by the sounds of it, is how StackExchange currently works)? How about DNS updates?


The Solution:

I was investigating another project that's been on my mind, one that would greatly benefit from sub-domains like the FogBugz ones. So I did a bit of research. Ideally it would be one website that connected to a different database, or just had an additional identifier in the database based on the subdomain (the analysisuk bit).

Well, it turns out that's all fairly easy.

  • On IIS you can add bindings for each domain you want to resolve to the website (e.g. bindings for companya.example.com and companyb.example.com on a single website will send both CompanyA and CompanyB to the one site; alternatively you can have a single IP-based site and have the DNS records point to that IP address, so there's no need to set up bindings).
  • In ASP.NET you can read the Host server variable to get the full host name, possibly including the port, so you might get something like “companya.example.com” (HttpContext.Current.Request.Headers["HOST"]).
  • It's important to include VaryByHeader="host" in any page caching you do, otherwise CompanyA might see the page for CompanyB, and you really don't want that happening!
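The pattern in those bullets can be sketched in a few lines. Python here rather than ASP.NET, and the host names are placeholders, but the idea is the same: one app, with the Host header picking the tenant.

```python
# Map the incoming Host header to a per-tenant identifier.
# One web app serves every (sub)domain; the Host header tells us
# which tenant's data and config to use. Hostnames are placeholders.

SITE_LOOKUP = {
    "companya.example.com": "company_a",
    "companyb.example.com": "company_b",
}

def resolve_tenant(host_header):
    # The Host header may include a port (e.g. "companya.example.com:80"),
    # so strip it, and normalise case, before the lookup.
    host = host_header.split(":")[0].lower()
    return SITE_LOOKUP.get(host)

print(resolve_tenant("CompanyA.example.com:80"))  # company_a
```

Unknown hosts come back as None, which is the hook for a "no such tenant" page.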

Bingo: multiple sub-domains on one IIS website, with a master database providing a host lookup to get some kind of unique database or identifier.

Shortly after finding this out it occurred to me that this solution would work well for the Dollars2Pounds network of sites. The host variable gives the whole domain name, not just the subdomain; I can't believe I didn't think of that before. This was the missing link between one way of working and the other.

And so the result of a Saturday's worth of coding is a new table in the master database providing domain name resolution (e.g. dollars2pounds.com and www.dollars2pounds.com both resolve to the Dollars2Pounds website). Configuration was shifted from web.config files to a simple database table keyed by the website, and strings for each website are now keyed by the website and a string key.
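A minimal sketch of that master lookup table. SQLite stands in for the real database here, and the schema is my illustration rather than the actual one:

```python
import sqlite3

# One master table resolving any incoming domain to a website key,
# which is what lets 23 IIS sites collapse into one. Schema is illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE site_domain (domain TEXT PRIMARY KEY, site_key TEXT)")
db.executemany(
    "INSERT INTO site_domain VALUES (?, ?)",
    [
        ("dollars2pounds.com", "Dollars2Pounds"),
        ("www.dollars2pounds.com", "Dollars2Pounds"),
    ],
)

def site_for(host):
    # Normalise case, then look the host up in the master table.
    row = db.execute(
        "SELECT site_key FROM site_domain WHERE domain = ?", (host.lower(),)
    ).fetchone()
    return row[0] if row else None

print(site_for("www.dollars2pounds.com"))  # Dollars2Pounds
```

Per-site configuration and strings then hang off that site key instead of off 23 separate web.config files.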

Twenty-three separate databases merged into one database, twenty-three IIS websites merged into one website, a 30 minute build became a 3 minute build, a 30 minute deploy became a 5 minute deploy, and the web server's baseline CPU load went from 20%+ to about 10% just by removing those 22 extra websites.

And whilst the benefit of a quicker build and deploy sounds nice, what's missing from that picture is that it also means I'm more likely to add a small feature to Dollars2Pounds and roll it out quickly, which is a massive benefit. I'd tinkered with advertising changes some time ago but never released the code.

The downside? Well, there is a small downside. I used to like to roll out changes to one of the quieter websites first, to check that I hadn't missed something in testing that would stuff up the website when deployed on the production server. Now if I stuff it up, I stuff them all up!


What's my point?

Do something different, it might just pay off big time for your main work.

If your job is writing embedded code, go build a website in your spare time. It doesn't matter if it's a failure; you will learn, and that means it's not a failure: failure is not doing it. If your job is writing websites, go grab an Arduino and play. It's fun, and again, you might just learn something.

If you are an employer, take Google's advice: let your developers have 20% time to try something totally different and radical, and you might just find your main product benefits. How many big companies do we see spending a fortune to buy a small company with some technology that's different from their own? Maybe if the devs had had 20% time that technology would have grown in-house and saved a fortune.

And if you are a job seeker, avoid like the plague companies that don't like you having projects in your own time. They are welcome to the 9-5 developers who go home and watch TV, or who go down the pub in the evening and don't touch a computer until the next morning.


(1) AWS in itself backs up what I'm trying to say here. Look at Amazon, look at AWS: the bookseller who's now selling compute and bytes by the hour. Who'd have thought it? But it's massive. Again, Amazon tried something different outside of their day-to-day work. If Microsoft had done it, well, no one would be surprised; the surprise is that they didn't, and now they are playing catch-up with Azure.




Configuring IIS 7 Bindings with IIS PowerShell Provider

I searched a fair amount for information on how to add binding information to an existing website under IIS 7 and found few decent examples. The best was PowerShell Snap-in: Configuring SSL with the IIS PowerShell Snap-in, which details setting up a website for SSL but had most of the information I needed.

Naturally it would have helped if I'd remembered that PowerShell uses New- as well as Add-, and hadn't got my brain stuck thinking of the bindings as a collection/dictionary; I might have noticed New-WebBinding earlier!

If, like me, you have a lot of domain names pointing to a single web application (more on this in a future post), you will want to add binding information for each of the domain names, as well as the www subdomain.

So, here's how to do it (yes, this is more of a reminder to myself for next time!)

a) Fire up the IIS PowerShell console.

b) Check your websites and current binding information by browsing to IIS:\sites


c) Check your current bindings with Get-WebBinding -Name <WebSiteName>


d) Add a new binding with

      New-WebBinding -Name <WebSiteName> -IPAddress "*" -Port 80 -Protocol http -HostHeader <Domain/Host name>


Here's how it looks in IIS manager.


If, like me, you have a lot of different domains pointing to the same IIS site, just add some PowerShell script and you can add a load of domains in one hit. Nice and simple!

AB testing with clear results

Like me, you've probably heard various voices on the internet speaking of the importance of A/B testing. I've listened, agreed, and tried a few bits here and there, but I've never seen such an obvious result as with my latest AdWords.

You may have noticed I recently launched RememberTheBin, a reminder service so you don't forget to put the bin out. Initially I set up a Google AdWords campaign with one advert; that didn't get much interest at all, so I added a second, very similar ad.

Here are my two ads:

Initially these ads were served equally; fortunately Google kicked in, realised it wasn't making money from the second one, and stopped serving it so often.

The “Free SMS, Twitter and email reminders to put the bin out” ad was my first shot, and am I glad I decided to do an A/B test on it. Talk about a useless ad. No clicks whatsoever: zero, nothing, nada, zilch, and that's over about a month!

It's interesting to note how similar the text of the two ads is and how different the responses are; they have the same keywords, the same cost, and even mostly the same words!

My question to you is this: are you running AdWords, or even SEO, or specific landing pages? Have you tried A/B testing? If not, go, go now and try!  I'll wait for you...
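If you'd rather not just eyeball which ad is winning, a simple two-proportion z-test on the click-through rates gives a yes/no answer. The impression and click counts below are made-up illustrations, not my real AdWords stats:

```python
import math

# Compare click-through rates of two ads with a two-proportion z-test.
# Counts are made-up examples, not real campaign data.
def z_score(clicks_a, views_a, clicks_b, views_b):
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p = (clicks_a + clicks_b) / (views_a + views_b)  # pooled click rate
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Ad A: 40 clicks from 2000 views; Ad B: 0 clicks from 2000 views.
z = z_score(40, 2000, 0, 2000)
print(round(z, 2))  # anything above ~1.96 is significant at the 5% level
```

With a result like "zero clicks in a month" you hardly need the maths, but for two ads that are merely close it stops you declaring a winner on noise.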

Which brings me nicely onto SEO A/B testing. That's a lot more difficult and time consuming, because you have to wait for the search engines to update their index with what you want them to see. Instead, invest some cash in Google AdWords, play with A/B testing through that, see what people click on and what gets served more, then use that information in your SEO campaign.


Getting CCNET working with Git

Getting started with Git has to be the hardest part, but it's well worth it, as the result is a great distributed source control system.

If you've not tried it yet, go get yourself a free GitHub account and start playing; they have some excellent introductory articles to help you get started.  After that you might want to check out TekPub's Git series.

Anyway, once you're up and running with Git you'll be wanting your CI system to do its continuous integration builds from it, and this is where it gets complicated again.

I'm currently running CCNET 1.4.4, which doesn't have Git support baked in; I believe 1.5 should, but I've not tried that yet.

I had a post ready to go with details on setting up CCNET 1.4 with Git but never got around to posting it, so to save me the time, here's another great post on setting up CCNET with Git which says almost everything I was going to say.

However... after doing all that you might still find CCNET failing to build for no apparent reason.  If you look into the log file you will probably see that the Git task has timed out.

What's happening behind the scenes is one of two things:

  •   Git wants you to accept the remote host as authentic and to remember its key.
  •   Git is unable to use your private/public key pair to authenticate.

However if you try to connect from the Git Bash command line (on Windows) you will probably be fine.

Here's the hack/fix:

Your public/private key pair and the known_hosts file are stored in your user folder (e.g. on WinXP, C:\Documents and Settings\<UserName>\.ssh); when you use Git Bash, it uses these files.

When you run CCNET as either a command line process or a service it doesn't use those files; instead it uses the files from Git's Program Files folder (i.e. C:\Program Files\Git\.ssh).

Make sure you can access the remote Git repository from Bash first, as that's what updates the known_hosts file.

Backup any files you are about to overwrite in the Program Files\Git\.ssh folder.

Now copy the contents of the Documents and Settings .ssh folder over to the Program Files .ssh folder (remember to back up first!) and you're away.
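That copy step can be scripted so it's repeatable next time CCNET moves machines. A sketch (the two paths are the WinXP defaults mentioned above; adjust for your own setup):

```python
import shutil
from pathlib import Path

# Copy the user's .ssh files (keys + known_hosts) into the .ssh folder
# that the CCNET service's Git actually reads, backing up anything we
# are about to overwrite. Paths are arguments, not hard-coded.
def sync_ssh(user_ssh, git_ssh):
    user_ssh, git_ssh = Path(user_ssh), Path(git_ssh)
    git_ssh.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in user_ssh.iterdir():
        if not f.is_file():
            continue
        dest = git_ssh / f.name
        if dest.exists():
            shutil.copy2(dest, git_ssh / (f.name + ".bak"))  # back up first
        shutil.copy2(f, dest)
        copied.append(f.name)
    return sorted(copied)
```

Called with the two .ssh folders from this post, e.g. `sync_ssh(r"C:\Documents and Settings\Steve\.ssh", r"C:\Program Files\Git\.ssh")`, it copies the keys and known_hosts across and leaves .bak copies of whatever was there before.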


One nice thing I did discover in this process: your public key is the bit you give out, so you can give it to numerous hosting sites, such as GitHub and Unfuddle, and use just the one key for them all, since you sign the requests with your private key.


Powering the Arduino with Power Over Ethernet

In a previous post I mentioned harnessing the power from Power Over Ethernet (POE) systems. I've got a couple of hardware projects lined up that I want to take advantage of this in, so I've been on the lookout for POE ethernet splitters for a while.

Some time ago I accidentally purchased 2 POE switches from eBay. These are both 10/100 Mb switches and cost less than £100 a piece (gigabit POE switches are a lot more).  As luck would have it, none of my projects require gigabit ethernet, and as companies upgrade their 10/100 POE networks to gigabit, hopefully a lot more 10/100 POE switches will appear on the market.


The real problem has been at the other end of the cable: extracting the power.  A number of devices are available as "PD" (Powered Device); these include VOIP phones (the common use), small switches/hubs and WiFi access points.  Sadly that's about it so far.

For a while you have been able to purchase POE injectors and splitters; however, these tend to need to be paired together and don't use the power from the switch, instead having their own separate power supply.  It's important when looking for a splitter to ensure that it's 802.3af compliant so that it will work from a POE switch.

The Problems:

I have three projects I want POE for:

  1. Supplying power to my ADSL router
  2. Supplying power to an Arduino
  3. Supplying power to a Meridian/P running the .NET micro framework.

Neither the Arduino nor the Meridian/P has a shield that extracts power from the ethernet connection, which is a real shame, as it's fairly simple to do at that level and an ideal way to power remote devices.  As for my ADSL router, well, there's little chance of getting a PD version of that.

Why power the ADSL router from POE?

  • Firstly, my switches are all powered via a UPS, so if the power goes down the network stays up, except for the ADSL router (which also does DHCP for my network), and I wanted that kept alive during brownouts. A dedicated UPS isn't all that cheap or small, and I've found that with the cheaper ones even a brief brownout will cause the router to reboot or get itself locked up.
  • The power adaptor for the router wastes about 7W of electricity: the router uses 9W when powered from the mains, of which 7W is lost through the adaptor alone.  If the device is connected via a POE switch, the switch's usage goes up by only about 2W.  (OK, granted the switch uses 30W by itself, but once you have a few devices running from POE that is soon recovered.)
  • I'm fortunate that I have a power socket right by my telephone master outlet.  The closer you can get the router to the master socket, the more likely you are to get a better speed.  If you don't have a power outlet there, a POE system where you connect up just via a CAT 5 network cable can be useful.
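Putting numbers on that "soon recovered" claim, using the figures above (9W from the mains adaptor vs about 2W extra on the switch, with the switch idling at 30W):

```python
# Rough break-even for running devices off the POE switch, using the
# figures from this post: each mains adaptor burns ~9W total, but the
# same device only adds ~2W to the switch, which idles at ~30W.
MAINS_W = 9
POE_EXTRA_W = 2
SWITCH_IDLE_W = 30

def net_power(devices):
    poe = SWITCH_IDLE_W + devices * POE_EXTRA_W
    mains = devices * MAINS_W
    return poe - mains  # negative once POE is the cheaper option

# Each device moved to POE saves ~7W, so ~30/7, i.e. about 5 devices,
# recovers the switch's own idle draw.
for n in (1, 5):
    print(n, net_power(n))
```

Back-of-envelope stuff, but it matches the point above: one device doesn't justify the switch on power alone, a handful does.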

As for my Arduino and Meridian/P projects, well, they want data and power, so a POE solution is ideal, and their uses (internal lighting) benefit from having a UPS so that light can be supplied during a power outage.

The Solution
D-Link sell the DWL-P50, which is perfect for my ADSL router and for initial prototypes of both of my projects.  A quick check on eBay also found a POE splitter being sold from China that looked like a good match.


I was able to get the DWL-P50 from Amazon and it arrived within a couple of days of ordering.  With this device you can run power over a 100m stretch of CAT 5; it will supply 5V at 2.5 amps or 12V at 1 amp.  The Arduino and my ADSL router both work happily from the 12V supply, but you do need to switch the Arduino's power selector over to EXT rather than USB.

So here it is in action; note the light on the Arduino indicating it's powered up.

You can also connect up the Arduino with an Ethernet board.

One of the nice things about using a POE switch is you can control and measure the power from the switch.  Here my Arduino is connected to the 3COM 2226-PWR Plus switch.

With just the DWL-P50 connected it's drawing 300mW

With the Arduino and Ethernet shield connected up it's drawing 3000mW

And just the Arduino, if you're interested in power and not data, pulls 700mW (that's quite an expensive Ethernet module in terms of power usage!)

The other nice thing with the POE switch is that you can also set power limits. I could limit the Arduino to 4000mW, so any fault would be isolated automatically by the switch.

So there you have it: powering an Arduino from a Power Over Ethernet source.  It's easy, reduces power wastage through power adaptors and means you only need 1 cable to your device.

RememberTheBin is Live - Never forget to put your bin out again

Over the weekend I finished off a few last bits and released RememberTheBin, a free reminder service to help you (well, me really) put the correct bin out on the appropriate day.

In a twist of irony, whilst making the quick introduction video below I set myself a bin reminder for my black bin for the next bin day (Wednesday).  Well, when returning home Tuesday evening I knew it was bin day but couldn't remember if it was the Green+Blue bins or my black bin, and none of my neighbours had put their bins out.  I decided to wait and see in the morning; as it turns out it was the black bin, and I was able to get it out in time.

The moral of the story is that RememberTheBin is here to help me remember which bin I'm supposed to be putting out.  If you also have this problem then feel free to use it as well.  There's no joining process: just log in using your Facebook, Twitter or OpenID credentials and an account is generated automatically; then add the bins you need reminding about, set up how you want to be notified, and sit back and wait for the reminders.

You can get reminders via sms text message, email or even Tweeted to you.

The video below shows just how easy it is; 2m 25s is all it took to log in and set up the reminders, and that included me waffling.


RememberTheBin intro video