
Backup Ubuntu 8.04LTS to rsync.net using backup-manager (at linode.com)

I’m setting up a new Linode 360 VPS, based on the Ubuntu 8.04 LTS image.

For backups, I want to do weekly backups and daily incrementals of the data files, and sync these off to an external backup location.

Broadly, there are two parts to the backup, creating the backed up files, and then copying them offsite.

Creating the backups

I’m using backup-manager 0.7.6-debian1, which handles backing up sets of files and MySQL databases to tar.gz files.

sudo aptitude install backup-manager
sudo /usr/sbin/backup-manager --version

The comments in the config file make editing it quite straightforward.

sudo vi /etc/backup-manager.conf

One minor point:

  • Separate multiple backup methods with a space, eg:
    export BM_ARCHIVE_METHOD="tarball-incremental mysql"
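
For reference, the settings I touched ended up looking something like this. A sketch only, not the full file — the paths and directory list are examples, and you should check the variable names against the comments in your own /etc/backup-manager.conf:

```shell
# Sketch of key /etc/backup-manager.conf settings - paths and the
# directory list are examples; adjust to your own setup.
export BM_REPOSITORY_ROOT="/var/archives"
export BM_ARCHIVE_METHOD="tarball-incremental mysql"
export BM_TARBALL_DIRECTORIES="/etc /home /var/www"
export BM_TARBALL_FILETYPE="tar.gz"
# Full dump weekly, incrementals on the other days:
export BM_TARBALLINC_MASTERDATETYPE="weekly"
```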

To test:

sudo /usr/sbin/backup-manager --verbose

The output folder you specified (/var/archives) should now contain some .tar.gz versions of your data. Hurrah!

Getting the files offsite

Originally I intended to use Amazon’s S3 as a backup store, following Michael Zehrer’s instructions on how to rsync with S3. However, I couldn’t get this to work reliably; so I opted instead for rsync.net which offers standard scp, ftp, WebDav and sshfs access to their geographic backup locations.

Backup-manager can rsync over ssh, which is a quick and efficient way to sync changes over to the remote host.

The first step is to get your rsync.net account set up, and to configure ssh key authentication so you can log in without typing a password.
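
The key setup might look roughly like this (the key path, user id and rsync.net hostname below are placeholders for your own account details):

```shell
# Generate a dedicated, passphrase-less key so cron can run unattended.
# (Using /tmp here purely for illustration - use /root/.ssh for real.)
rm -f /tmp/backup_key_demo /tmp/backup_key_demo.pub
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/backup_key_demo -q
# Then push the public key to your rsync.net account (run once, interactively):
# ssh-copy-id -i /tmp/backup_key_demo.pub 1234@usw-s001.rsync.net
```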

Then, set the BM_UPLOAD_METHOD to rsync, and configure both the scp and the rsync settings in /etc/backup-manager.conf (pay attention not to prefix remote folders with / ).
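
The upload section of my config ended up roughly like this. The user, key path and host are placeholders for your rsync.net details, and the variable names should be checked against the comments in your version's config file:

```shell
export BM_UPLOAD_METHOD="rsync"
export BM_UPLOAD_SSH_USER="1234"                 # your rsync.net user id
export BM_UPLOAD_SSH_KEY="/root/.ssh/id_rsa"     # the passwordless key
export BM_UPLOAD_SSH_HOSTS="usw-s001.rsync.net"
export BM_UPLOAD_RSYNC_DIRECTORIES="/var/archives"
export BM_UPLOAD_SSH_DESTINATION="backups"       # no leading slash!
```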

Test with:

sudo /usr/sbin/backup-manager --verbose

Once it’s all working, set up a cron job to call backup-manager daily.

crontab -e

I run backup-manager once per day in the wee hours, and log output to /root/cronlogs/daily_backup-manager.log (create the /root/cronlogs directory first).

  0 3   *   *   *    /usr/sbin/backup-manager -v > /root/cronlogs/daily_backup-manager.log


The Correlation between Schedule Pressure & Low Quality

Research suggests that

  • 40% of all software errors are caused by pressure on developers to complete quicker (Glass 2004)
  • Under extreme schedule pressure, code defects increase by 400% (Jones 2004)
  • Projects which aim to have the lowest number of defects also have the shortest schedules (Jones 2000)

This makes sense if you consider that good engineering practices are the first to leave the building under pressure to finish; most teams will revert to quick and dirty hacks to get things implemented, without complete testing etc.

My personal opinion is that the only way to shorten development cycles is to reduce the feature set. It’s pleasing for me to see that the research seems to back this up.

When deciding which features will be dropped, I think it’s worth revisiting the business requirements that are driving a particular set of features. In many cases a simpler “design” could suffice: for example, a fancy calendar widget could be replaced with a simple textbox; a little-used settings screen could be retired in favour of manually changing config files; or overly complex but little-used workflows could be put on the back burner.

I maintain that a lot of “features” can be dropped, without actually impairing the business functionality of the system.

Just remember, whatever you do, DON’T consider dropping testing or QA in an effort to meet your deadline; unless you want to guarantee that you will continue to miss all future deadlines until the project gets cancelled!

ASP.NET MVC Beta – Setting properties on ViewControls

In ASP.NET MVC Beta, it isn’t possible to set properties on partials when calling them with Html.RenderPartial.

Rusty Zarse blogged about a useful ViewData helper class, which allows you to set properties by passing values to the partial through the ViewData.

I’ve extended this slightly. The helper builds a ViewDataDictionary from the properties of an anonymous object, which then sets matching properties on a ViewUserControl like this:

     public partial class YUIDataTable : ViewUserControl
     {
        public string ConfigNamespace { get; set; }
        public string DataTableId { get; set; }
        public bool HideFilter { get; set; }

        protected void Page_Load(object sender, EventArgs e)
        {
            // Copy any matching ViewData values onto this control's properties
            ViewDataDictionaryBuilder.SetPropertiesToViewDataValues(this);
        }
     }

Here is the full helper code.

using System;

namespace MvcHelpers
{
    /// <summary>
    /// With thanks to http://www.vitaminzproductions.com/technology-blog/index.php/2008/11/12/setting-properties-using-aspnet-mvc/
    /// </summary>
    public static class ViewDataDictionaryBuilder
    {
        public static System.Web.Mvc.ViewDataDictionary<ModelType> Create<ModelType>(object data, ModelType model) where ModelType : class
        {
            return (System.Web.Mvc.ViewDataDictionary<ModelType>)CreateInternal(new System.Web.Mvc.ViewDataDictionary<ModelType>(model), data);
        }

        public static System.Web.Mvc.ViewDataDictionary Create(object data, object model)
        {
            return CreateInternal(new System.Web.Mvc.ViewDataDictionary(model), data);
        }

        public static System.Web.Mvc.ViewDataDictionary Create(object data)
        {
            return CreateInternal(new System.Web.Mvc.ViewDataDictionary(), data);
        }

        private static System.Web.Mvc.ViewDataDictionary CreateInternal(System.Web.Mvc.ViewDataDictionary dictionary, object data)
        {
            AddPropertiesToViewData(dictionary, data);
            return dictionary;
        }

        // Copy each public property of the anonymous object into the dictionary
        private static void AddPropertiesToViewData(System.Web.Mvc.ViewDataDictionary dictionary, object data)
        {
            if (data == null) return;

            System.Reflection.PropertyInfo[] properties = data.GetType().GetProperties();

            foreach (var property in properties)
            {
                dictionary.Add(property.Name, property.GetValue(data, null));
            }
        }

        // The reverse direction: set control properties from matching ViewData keys
        public static void SetPropertiesToViewDataValues(System.Web.Mvc.ViewUserControl viewUserControl)
        {
            foreach (var property in viewUserControl.GetType().GetProperties())
            {
                if (viewUserControl.ViewData[property.Name] != null)
                {
                    property.SetValue(viewUserControl, Convert.ChangeType(viewUserControl.ViewData[property.Name], property.PropertyType), null);
                }
            }
        }
    }
}

Hope that’s useful to you!

Announcing the TDD TestHelpers opensource project

Whenever I start working on a project, I invariably find myself writing a collection of TDD test helper methods.  A quick survey of other TDDers reveals the same; and thus the birth of my latest opensource project, TestHelpers (http://code.google.com/p/testhelpers/).

The aim of the project is to centralise all those little test helper methods you end up creating into a useful assembly you can use to jumpstart your next project.  Things like:

  • Comparers
    • Generic object comparers
    • DataSet comparers
  • Test Data generators
    • Builder pattern
  • Automocking containers

For example, I’ve just added an “AssertValues” functor, which helps you check whether the values of two object instances are the same.

One area I keep using asserts like this is in integration tests; where I want to check that the objects I’m persisting to the database via my ORM actually end up in the database in a non-mangled form.  In this case, I new up entityA, persist it, reload it into entityB and then need to check that all the values in entityB are the same as those in entityA.

A standard Assert.AreEqual will fail, because entityA and entityB are different instances.  But, my helper method AssertValues.AreEqual will pass, because it checks the (serialized) string values of entityA and entityB.

Here is another, simpler example to illustrate the concept.

    public class StandardObjectsTests
    {
        public class StringContainer
        {
            public string String1 { get; set; }
            public string String2 { get; set; }
        }

        [Test]
        public void ObjectsWithSameValue_ShouldBeEqual()
        {
            var stringContainer1 = new StringContainer {String1 = "Test String1", String2 = "Test String 2"};
            var stringContainer2 = new StringContainer {String1 = "Test String1", String2 = "Test String 2"};

            // Reference equality fails: these are different instances...
            Assert.AreNotEqual(stringContainer1, stringContainer2);

            // ...but their serialized values are the same
            AssertValues.AreEqual(stringContainer1, stringContainer2);
        }
    }

I’m sure you have a bunch of similar helper methods lying about your projects.

How about contributing them to the TestHelper project?

DDD7 – Nov 21, Microsoft campus, Reading UK

Wow.  DDD, the community conference for UK MS developers, hosted by Microsoft but completely driven by the community, continues to go from strength to strength.  This year, the 400 places were filled within 4 hours of the announcement on Twitter that registration was open.

I really enjoyed Mike Hadlow‘s talk on IOC injection, with specific reference to his opensource eCommerce application, SutekiShop.  Clearly an expert on ASP.NET MVC, Onion architecture, Repositories & Services, and binding it all together with IOC, he is also a gifted presenter.   If you’re looking for a reference implementation of an ASP.NET MVC application (or indeed just a loosely coupled, TDD-driven web application), I’d strongly advise you to check out Mike’s SVN repo.

Toby Henderson gave an interesting demo of how you can run .NET apps under Linux (Ubuntu) using Mono.  Worth bearing in mind when considering your hosting & deployment options.

Sebastien Lambla gave a highly entertaining (if opinionated) presentation of a series of WPF tips and tricks.  My favourite tip (which isn’t really WPF specific):

Tired of always checking if your event delegates are null before calling them?  Just declare them with a standard empty delegate.  Then they are never null!

  public event EventHandler MyEvent = delegate { };

Recommended book: WPF Unleashed by Adam Nathan

As always it was a great event – remember, if you want to be at DDD8 (2009); sign up early!

See www.developerday.co.uk for slides & videos from all sessions

Complexity Smells

I propose that one of the principal things that gets projects into trouble is too much complexity.  My theory is that there are a number of “complexity smells” that, if identified and addressed early on, can radically improve a project’s chances of success.

To explore this theory, I recently ran a workshop where we brainstormed complexity smells and possible preventative actions.

We selected and ranked to produce the following list:

(1) Over engineered code/applications – 46% of votes
Do you really need all those features? Should you really be introducing this code abstraction or design pattern; or are you just speculating that it will be required in the future? Will a simpler solution work for now?
Prevention strategies
a) Do you really need that functionality / code? No, really.
b) Refactor mercilessly (at all levels, functionality, architecture, classes, methods, algorithms)
c) Ensure your team has good coding standards

(2) Lack of TDD & Acceptance test automation – 26% of votes
Is your unit test coverage above 80%? Can you click one button and have all your acceptance tests run automatically? Have you considered that any change made after version 1 is released (i.e. in 90% of the lifecycle of the application!) is the equivalent of someone opening a novel they have never read, changing what happens to a couple of characters, and then, without re-reading the novel, hoping it all still makes sense?
Prevention strategies
(a) Set up automation in sprint 0
(b) Project manager or tech lead needs to drive automation
(c) Ensure the team has testing experience (or at least a resource to guide & educate them)
(d) Initially test automation just seems to slow down progress; be ready to explain how automated tests are the gift that keeps on giving

(3) Poor time / priority management – 20% of votes
Does your project have a clearly prioritised backlog? When new features are introduced, is it easy to see which features should be moved down the priority list? How frequent are your feedback loops between deciding on a requirement, designing a feature, and getting feedback on whether the implemented feature fulfils the requirements?
Prevention strategies
(a) Break your project up into 3 month releases
(b) Appoint a strong product backlog owner.

(4) Ownership – 5% of votes
Who owns the project’s feature set? Who owns the code past release 1? Is there someone who can make decisions quickly and decisively?
Prevention strategies
(a) Customer should decide what is produced, and what acceptance tests validate that it works.
(b) Place emphasis on collaboration, knowledge sharing and transparent communication
(c) Ensure rapid feedback cycles built in
(d) Keep the same team on the project through the whole product lifecycle

Some other complexity smells identified


  • External dependencies – consider building “anti-corruption layers” between your application & 3rd parties. Prefer talking to humans rather than documents (re: 3rd party APIs)
  • Poor communication – collocation of team; prefer face to face rather than email conversations; keep teams < 10 ppl
  • Standards – too many or too few; standards = guidelines rather than rules


If your project is exhibiting some of these smells, perhaps it’s time to have a complexity retrospective with your team, and nip them in the bud before they spiral out of control and kill your project.

ALT.NET; London; 13 Sept 2008


Debate over what ALT.NET is; should it have a set of guiding principles like the Agile manifesto?

Continuous integration & deployment

There seemed to be 3 major areas where people encountered difficulties doing continuous integration & deployment.


  1. Configuration files
  2. DB schema migrations
  3. Data migrations.
Best practice approaches discussed were:
Config files
  1. Make sure that your config files are small, and contain only the config data that changes often (DB connection strings, file paths etc).  Put all your “static” config data into separate files (DI injection config etc).
  2. Consider templated config files; where specific values are injected during deploy process.
  3. Keep all config in simple text files in source control.
DB schema migrations
  1. Migration techniques borrowed from Ruby on Rails – generate change scripts by hand or using tools like SQL Compare; and then apply them using a versioning tool like dbdeploy.
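The dbdeploy idea can be sketched as a toy shell loop — everything below is illustrative (real dbdeploy records applied scripts in a changelog table rather than a version file):

```shell
# Toy sketch of the dbdeploy approach: numbered SQL change scripts applied
# in order, tracking the last-applied version. All paths are illustrative.
demo=$(mktemp -d)
mkdir -p "$demo/migrations"
echo "CREATE TABLE a (id INT);" > "$demo/migrations/001_create_a.sql"
echo "ALTER TABLE a ADD b INT;" > "$demo/migrations/002_add_b.sql"
echo 0 > "$demo/version"

for script in "$demo"/migrations/*.sql; do
  num=$(basename "$script" | cut -d_ -f1)
  if [ "$num" -gt "$(cat "$demo/version")" ]; then
    # In a real deploy you would run the script here, e.g. mysql mydb < "$script"
    echo "applying $(basename "$script")"
    echo "$num" > "$demo/version"
  fi
done
```

Because the current version is recorded, re-running the loop applies nothing — which is what makes repeated automated deploys safe.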
DB data migrations
  1. Take backup before data migration.
  2. Ensure the app fails quickly if there is a problem, because if data has changed since deployment then you cannot roll back.
  3. Consider apps upgrading themselves and running smoke tests upon startup – and refusing to run if there is a problem – this technique is used by established opensource projects – WordPress, Drupal, Joomla.
Mentioned tools: TFS, Subversion, CC.NET, Jetbrains TeamCity, dbdeploy, SQL compare.
Acceptance testing
It seemed to me that the majority of pain experienced in this area results from a lack of a ubiquitous domain specific language:
  • Build a DSL incrementally during short iterations.  Gives you opportunity to refine, fill in gaps, and train whole team to use same language.
  • Without a DSL, acceptance testing via the UI becomes brittle, as you end up specifying your tests at too low a level (click button A, then check for result in cell B), rather than having a translation from acceptance tests in a higher-level DSL to specific UI components.
  • Consider prioritised tests – have a set of face-saving smoke tests that always work, and ensure major things are still working (company phone number correct?  Submit order button still works?).  Acceptance tests can be thrown away once they have served their function of evolving the design / team understanding.
  • The acceptance testing trio – Developers test for success, so automated testing only covers the happy flow; you still need exploratory testing by someone with a testing mindset (what happens if you do weird stuff?), and the tester must have domain knowledge.  Business – what should happen?  Don’t force developers to make up business rules.
  • Ensure all layers of stack (tests, manuals, code, unit tests) use the same DSL language.
  • How do you get workable acceptance tests – see Requirements Workshops book
  • Short iterations – more focus, incremental specs, opportunity to discuss missing test examples.
  • Key is having a ubiquitous language encoded as a DSL (domain specific language) – it develops over time and enables automated acceptance tests.
  • Sign off against acceptance tests (Green Pepper tool – capture & approve acceptance tests)
  • Talk: The Yawning Gap of ?? doom – infoQ, Martin Fowler
  • Avoid describing these activities as “testing” – people avoid them because testing has low social status.
Mentioned tools:  White for Windows GUI testing
Domain driven design
  • Discussion around the difference between DDD, where we treat the concepts & actions as central; vs DB-centred design, where we think of the data as central; and UI-centred design, where the screens are considered central.
  • Consensus was that the domain shouldn’t be tightly bound to the DB, or the UI.
  • Ideas around passing DTO objects up to the view (UI, webservices etc), and change messages back from the view indicating how the domain should be changed (rather than passing the whole DTO back, where you don’t know what has changed).
Behaviour driven design (BDD)
  • Defined as Dan North’s Given, When, Then
  • Is it any different from acceptance testing? Only in that it is better branding: BDD doesn’t have the word “testing” in it, which prevents people switching off on hearing the word “test” when discussing specifications.
  • BDD is writing failing acceptance tests first, before writing code.
  • Unit testing is ensuring that the code is built right, but acceptance testing / BDD ensures that the right code is built.
  • Toolset is still immature.  Fitnesse .NET & Java tooling is most mature toolset.  Many BDD tools (other than Ruby’s rSpec) have been started and abandoned (nBehave, nSpec etc)
  • BDD is not about testing, it’s about communicating and automating the DSL.  Be wary of implementing BDD in developer tools (e.g. NUnit), which prevent other team members (business, customer, testers) from accessing them.
  • Refactoring can break Fitnesse tests, because they aren’t part of the code base.
  • Executable specs (via acceptance tests) are the only way to ensure documentation / test suites are up to date & trustable
  • Agile is about surfacing problems early (rather than hiding them until it’s too late to address them).  So when writing acceptance tests up front is difficult, this is good: you are surfacing the communication problems early.
  • The real value is in building a shared understanding via acceptance criteria; rather than building automated regression test suite.
  • Requirements workshops can degenerate into long boring meetings.  To mitigate this problem
Tools:  Ruby Rspec, JBehave, Twist, Green Pepper
In the post-conference feedback, everyone was overwhelmingly positive, and found the open spaces format very energising.  Fantastic sharing of real world experiences, introductions to new approaches, nuggets of information, and great corridor conversations; a format that allows human interaction.
Next ALT.NET beers on 14th Oct.
Next ALT.NET group therapy in Jan 2009, with a larger venue.

Using Acceptance Criteria & Text based stories to build a common domain language between business and developer


Besides precisely pinning down functionality, writing text based stories has another – and some would argue more important – benefit: developing a shared domain language between the business & developers.

A large part of developing a new software application is defining and codifying a business solution. To do this, both sides of the equation must be molded to fit the constraints of the other – the business process needs to be expressed in a precise manner that can be automated in software, and software must be molded to fit the use cases of its users.

The mismatch between the way the business sees the solution and the way the developers view it becomes painfully obvious about halfway into the project, when you start trying to match the data field labels on the UI to what they are called in the database / object model.

I’ve worked on what should have been simple projects, where maintenance is an exercise in hair pulling as you try to figure out what data input results in the weird output in a report.

The root problem is the lack of a shared domain language. Projects naturally evolve domain languages, and unless guided, you can guarantee that the language in the customer’s requirements spec and that in the code base will diverge rapidly.

Sitting developers, testers and the customer together to produce written text user stories following Dan North’s classic BDD story structure goes a long way towards resolving this issue.

Talking through how functionality will work, and being forced to formalize it by writing things down, helps the domain understanding and language to evolve naturally, influenced equally by the customer’s domain understanding and the constraints the developer must work within.

It’s vital that this is done before coding begins, for the following reasons:

  • All stakeholders have been indoctrinated in the same domain language
  • Names for domain concepts are at hand when the developer needs them, resulting in better named domain objects.
  • Both the developer and customer know exactly what functionality is expected; helping to keep both focused on solving the right problems.
  • It facilitates ongoing conversations as the solution evolves. Evolving a shared language is difficult, and better done at the beginning of the project whilst everyone’s enthusiasm is high. With that hurdle out of the way, ongoing conversations are easier, and the temptation to just guess, or to devolve into an us-vs-them mentality, is greatly reduced.

During release planning, the high level “As a [X] I want [Y] so that [Z]” is probably sufficient, with the “Given, When, Then” acceptance scenarios being fleshed out at the beginning of each sprint.

Specifying your functional requirements as text stories leads to some exciting opportunities:

  1. Your “unit” of work is immediately available, and understood by all. This makes prioritizing which stories to include, work on next, or drop much easier.
  2. It’s possible to turn the stories into executable specifications.

The Ruby community has made the most progress in the latter opportunity, with their rSpec story runner.

Consider the possibilities of the following development practice:

  • The team begin by specifying text stories & acceptance criteria.
  • The testers turn this into an executable spec, with each step “pending”
  • The developers then work from the executable spec, coding the functionality to make each step pass one by one
  • When the customers receive a new version, the first thing they do is execute the stories, to see exactly which functionality has been implemented, and prove that all is still working as expected.

At any stage it’s possible to see how far the team is (how many steps pass?), speculative complexity is reduced because the developers are focused on developing only what the tests require, and all the while a suite of regression tests is being built up!


Domain mapping with WordPress MU, Plesk, Apache2 & Ubuntu

Given a WordPress MU install on Plesk running on Ubuntu with Apache2, we want to configure domain mapping so that

user1 can have myblog1.com mapping to their wordpress blog (myblog1.masterwpmu.com) and
user2 can have myblog2.com mapping to their wordpress blog (myblog2.masterwpmu.com)

We need to configure quite a few moving parts:

  1. DNS for masterwpmu.com – this should be an A record, pointing to the IP of your server
  2. DNS for myblog1.com & myblog2.com – these should be CNAME records, pointing to the A record in (1) – eg. masterwpmu.com
  3. Apache2 – we need to alter the apache vhost conf created by Plesk to setup a wildcard alias
  4. WordPressMU – we need to configure it to serve the right content when receiving a request for myblog1.com or myblog2.com

When someone makes a browser request for myblog2.com, the following sequence happens:

  1. myblog2.com is resolved to masterwpmu.com, which is resolved to the IP of your server.
  2. the browser makes a request to the IP, port 80, passing the host header of myblog2.com
  3. Apache intercepts the request on port 80, checks through all its known vhost server aliases, and not finding an exact match, falls through to the wildcard alias pointing to our WPMU install
  4. WPMU gets the request, matches the host header to the correct blog content, and returns the relevant page.

So, how do we configure this?

  1. Create a new Plesk site, with its own domain name (eg. masterwpmu.com) & install WPMU.  Ensure this works.
  2. Create a new CNAME record myblog2.com which resolves to masterwpmu.com (It’s also possible to set up an A record pointing to the same IP as masterwpmu.com, although this will break if the IP of masterwpmu.com ever changes).  Google has a nice set of instructions for doing this on most major DNS providers (obviously you’ll want to point to masterwpmu.com rather than ghs.google.com ;) )
  3. Edit the Apache2 vhost conf created by Plesk at /var/www/vhosts/masterwpmu.com/conf/httpd.include, changing the ServerAlias to:
    ServerAlias *
    and the AllowOverride to:
    AllowOverride FileInfo Options
  4. restart Apache2 ( /etc/init.d/apache2 restart)
  5. Log in to the WPMU install as admin, and create a new blog.  Edit the new blog, and change the Domain & FileUpload Url to myblog2.com and http://myblog2.com/files (all the other Urls are automatically updated when you save)
  6. Browse to http://myblog2.com !
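
After the edit in step 3, the relevant part of the Plesk-generated vhost might look roughly like this. A sketch only: Plesk’s real file contains much more, and the IP and paths are placeholders:

```apache
<VirtualHost 203.0.113.10:80>
    ServerName masterwpmu.com
    # The wildcard alias catches myblog1.com, myblog2.com, and anything else
    ServerAlias *
    DocumentRoot /var/www/vhosts/masterwpmu.com/httpdocs
    <Directory /var/www/vhosts/masterwpmu.com/httpdocs>
        # FileInfo allows WPMU's .htaccess rewrite rules to take effect
        AllowOverride FileInfo Options
    </Directory>
</VirtualHost>
```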


  • Note: you can only have 1 wildcard Apache ServerAlias per IP

Hope that helps!