Build server configuration is generally a very manual process, which makes versioning, collaboration and refactoring difficult.
In this screencast I talk about how these problems can be addressed for Jenkins using the Job-DSL plugin and Docker containers.
Logsearch is the open source project I lead as part of my day job at City Index Ltd. Based on the Elasticsearch ELK stack and packaged as a BOSH release, it builds you a log processing cluster tailored to making sense of your IT environment and the apps that run on it.
Last week I gave a talk at the London PaaS Users Group showing how Logsearch can be used to analyse the logs of a Cloud Foundry cluster, and it was well received.
Below is a screencast of that Logsearch for Cloud Foundry presentation (YouTube).
The new breed of PaaS systems is converging on a common deployment model.
Each PaaS (Heroku, Cloud Foundry, flynn.io) has custom components that orchestrate everything, but there is a healthy open source community creating buildpacks for the languages and runtimes near and dear to their hearts, and these typically work (with minor modifications) on any of the PaaSes.
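That portability comes from the small contract a buildpack implements: essentially three executables that the platform invokes during staging. The layout below is a rough sketch only; the exact arguments passed to each script vary a little between platforms and versions.

```
my-buildpack/
└── bin/
    ├── detect    # inspect the pushed app; exit 0 (and print a name) if this buildpack applies
    ├── compile   # install the runtime and dependencies into the app dir, using a cache dir
    └── release   # print YAML describing how to start the app (default process types)
```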
Yours truly has now written 4 buildpacks for Cloud Foundry:
One of the major pain points in the process is debugging the staging and runtime containers because:
So, my holiday project was to try and build something to make debugging the deployment process easier.
The result is https://github.com/cloudfoundry-community/container-info-buildpack – a buildpack that exposes information about the staging and runtime containers via a web-app. See the README.md for details on how to use it.
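Typical usage is just to push an app with the buildpack's Git URL; something like the command below, where the app name is illustrative and the README is the authoritative reference.

```
cf push container-info -b https://github.com/cloudfoundry-community/container-info-buildpack.git
```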
This little experiment has been received with enthusiasm by the CF dev community; so I think I’ve identified a common pain point.
In its development I learnt about two useful things:
I’m currently experimenting with being able to wrap this “info” buildpack around another buildpack, so you can
Developing like this enables:
If you’re interested in finding out more, please join the mailing list
One of the things I love about Git is that your .gitignore file travels with the repo, so ignore rules remain consistent no matter which machine you are working on.
In the same vein, adding a .gitattributes file to your repo allows you to ensure consistent Git settings across machines. This enables the following subtle, but very useful features.
- * text=auto causes Git to autodetect text files and normalise their line endings to LF when they are checked into your repository. This means that simple diff tools (I'm looking at you, GitHub) that consider every line to have changed when someone's editor changes the line endings won't get confused. (How line endings appear in your working copy is still governed by your core.eol setting.)
- *.cs diff=csharp tells Git to be a little smarter about tailoring its diff output for a specific language. For a .cs file, Git will report the method name where the change occurred in the diff hunk header, rather than the default of the first preceding non-comment line.
- The filter= attribute instructs Git to run files through an external command when moving them to / from the repo. One use of this functionality would be to normalise tabs to spaces (or vice versa).

The more I use Git, the more I realise what a powerful tool it is. And I haven't even touched on how you can use Git hooks for advanced Git Ninja moves…
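For reference, a minimal .gitattributes combining the settings discussed above might look something like this (the patterns are illustrative; adapt them to the file types in your project):

```
# Auto-detect text files and normalise their line endings to LF in the repo
* text=auto

# Use the C#-aware diff driver for more useful hunk headers
*.cs diff=csharp

# Never touch line endings or diffs for binary files
*.png binary
*.dll binary
```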
Having tripped myself up on multiple occasions setting this up, I’m recording these config steps here for future-me.
Scenario: You have a PHP site running on a remote [Ubuntu 12.04] server, and want to connect your local IDE [Netbeans] to the Xdebug running on that server over an SSH tunnel.
On the remote server, the Xdebug configuration in php.ini looks like this:

```
zend_extension=/usr/lib/php5/20090626/xdebug.so
xdebug.remote_enable=On
xdebug.remote_host=127.0.0.1
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
```
The key is that your Netbeans IDE acts as the server in this scenario, listening for incoming connections to port 9000 from the remote server’s XDebug. Thus the tunnel must be from the remote port to your local port, not the other way around.
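So the tunnel is a remote (-R) forward; something like the following, run from your local machine (the username and hostname are placeholders):

```
# Connections to port 9000 on the remote server are forwarded back to
# port 9000 on this machine, where Netbeans is listening
ssh -R 9000:127.0.0.1:9000 user@your-remote-server
```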
Some helpful debugging techniques:

- Start ssh with -vv for verbose debugging output.
- netstat -an | grep 9000 should show something like:

```
tcp   0  0 127.0.0.1:9000   0.0.0.0:*         LISTEN
tcp   0  0 127.0.0.1:9000   127.0.0.1:59083   ESTABLISHED
tcp   0  0 127.0.0.1:59083  127.0.0.1:9000    ESTABLISHED
tcp6  0  0 ::1:9000         :::*              LISTEN
```
The AMEEConnect API gives access to a vast amount of climate related data. It also exposes standardised methodologies for performing calculations based on that data.
As part of the London Green Hackathon I created the AMEE-in-Excel add-in to tightly integrate this data and these calculations into Excel.
So, if Excel is your preferred way to work with climate data, then this should be in your toolkit.
All code is open source and hosted at . Pull-requests are welcome!
UPDATE
Hurrah! AMEE in Excel won the behaviour change prize:
We believe over 80% of the sustainability field currently use spreadsheets. As a process, this is broken, not scalable and inaccurate. AMEE in Excel Integrates spreadsheets with web-services, to create a behaviour change that could address this issue and bring more credibility to the market.
So, if you want to collaborate on some Award Winning Software :), send in those pull requests
During June 2011 I presented a session at the SPA2011 conference in London, UK.
My session was a hands-on introduction to functional programming techniques, with code samples in Javascript and F#. The focus of the session was to get people thinking about first class functions, and the techniques they enable to simplify and increase the readability of code when solving certain classes of problems.
The code samples can be found at:
An online/executable version of the Javascript code is at http://functional-javascript.davidlaing.com.
Judging by the feedback I received, the session went very well. People seemed to like the hands-on format of the session; and just being left alone for a while to learn something at their own pace.
I feel uncomfortable when I see large switch statements. I appreciate how they break the Open Closed Principle. I have enough experience to know that they seem to attract extra conditions & additional logic during maintenance, and quickly become bug hotspots.
A refactoring I use frequently to deal with this is Replace Conditional with Polymorphism; but for simple switches, it's always seemed like a rather large hammer.
Take the following simple example that performs slightly different processing logic based on the credit card type:
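(The code below is a sketch in C#; the type and member names are illustrative rather than the original listing.)

```csharp
using System;

public enum CardType { Visa, MasterCard, Amex }

public class PaymentProcessor
{
    public void ProcessPayment(CardType cardType, decimal amount)
    {
        // Slightly different processing logic per card type, all in one switch
        switch (cardType)
        {
            case CardType.Visa:
                Charge(amount, surcharge: 0.01m);   // Visa-specific handling
                break;
            case CardType.MasterCard:
                Charge(amount, surcharge: 0.02m);   // MasterCard-specific handling
                break;
            case CardType.Amex:
                Charge(amount, surcharge: 0.03m);   // Amex-specific handling
                break;
            default:
                throw new ArgumentOutOfRangeException("cardType");
        }
    }

    private void Charge(decimal amount, decimal surcharge)
    {
        // Illustrative stand-in for the real payment logic
    }
}
```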
It's highly likely that the number of credit card types will increase, and that the complexity of the processing logic for each will also grow over time. The traditional application of the Replace Conditional with Polymorphism refactoring gives the following:
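(Again a sketch, reusing the illustrative CardType enum from above: an interface plus one small class per card type.)

```csharp
using System.Collections.Generic;

public interface ICardProcessor
{
    void Process(decimal amount);
}

public class VisaProcessor : ICardProcessor
{
    public void Process(decimal amount) { /* Visa-specific handling */ }
}

public class MasterCardProcessor : ICardProcessor
{
    public void Process(decimal amount) { /* MasterCard-specific handling */ }
}

public class AmexProcessor : ICardProcessor
{
    public void Process(decimal amount) { /* Amex-specific handling */ }
}

public class PaymentProcessor
{
    // The switch is replaced by a lookup from card type to strategy instance
    private static readonly Dictionary<CardType, ICardProcessor> Processors =
        new Dictionary<CardType, ICardProcessor>
        {
            { CardType.Visa, new VisaProcessor() },
            { CardType.MasterCard, new MasterCardProcessor() },
            { CardType.Amex, new AmexProcessor() }
        };

    public void ProcessPayment(CardType cardType, decimal amount)
    {
        Processors[cardType].Process(amount);
    }
}
```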
This explosion of classes containing almost zero logic has always bothered me as quite a lot of boilerplate overhead for a relatively small reduction in complexity.
Consider, however, the functional approach to the same refactoring:
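(Sketched as before, reusing the illustrative CardType enum: the per-card-type logic lives in small methods, wired together with a dictionary of delegates instead of classes.)

```csharp
using System;
using System.Collections.Generic;

public class PaymentProcessor
{
    // One small, easily testable method per card type...
    private static void ProcessVisa(decimal amount)       { /* Visa-specific handling */ }
    private static void ProcessMasterCard(decimal amount) { /* MasterCard-specific handling */ }
    private static void ProcessAmex(decimal amount)       { /* Amex-specific handling */ }

    // ...selected by a lookup rather than a switch, with no extra classes
    private static readonly Dictionary<CardType, Action<decimal>> Processors =
        new Dictionary<CardType, Action<decimal>>
        {
            { CardType.Visa, ProcessVisa },
            { CardType.MasterCard, ProcessMasterCard },
            { CardType.Amex, ProcessAmex }
        };

    public void ProcessPayment(CardType cardType, decimal amount)
    {
        Processors[cardType](amount);
    }
}
```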
Here we have obtained the same simplification of the switch statement, but avoided the explosion of simple classes. Whilst strictly speaking we are still violating the Open Closed Principle, we do have a collection of simple methods that are easy to comprehend and test. It's worth noting that when our logic becomes very complex, converting to the OO Strategy pattern becomes a more compelling option. Consider the case when we include a collection of validation logic for each credit card:
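(One more sketch of where that might lead: each entry now bundles its processing delegate with a list of validation delegates, and the single file grows accordingly. The validation rules shown are illustrative.)

```csharp
using System;
using System.Collections.Generic;

public class CardHandler
{
    public Action<decimal> Process { get; set; }
    public List<Func<string, bool>> Validations { get; set; }
}

public class PaymentProcessor
{
    private static readonly Dictionary<CardType, CardHandler> Handlers =
        new Dictionary<CardType, CardHandler>
        {
            {
                CardType.Visa,
                new CardHandler
                {
                    Process = amount => { /* Visa-specific handling */ },
                    Validations = new List<Func<string, bool>>
                    {
                        cardNumber => cardNumber.StartsWith("4"),
                        cardNumber => cardNumber.Length == 16
                    }
                }
            },
            // ... MasterCard, Amex, etc., each with their own processing and rules
        };

    public void ProcessPayment(CardType cardType, string cardNumber, decimal amount)
    {
        var handler = Handlers[cardType];

        // Run every validation rule for this card type before processing
        foreach (var isValid in handler.Validations)
        {
            if (!isValid(cardNumber))
                throw new ArgumentException("Invalid card number", "cardNumber");
        }

        handler.Process(amount);
    }
}
```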
In this case the whole file starts to feel too complex, and having the logic partitioned into separate strategy classes / files seems more maintainable to me.
To conclude then: the fact that languages treat functions as first class constructs gives us the flexibility to use them in a "polymorphic" way, where our "interface" is the function signature.
And for some problems, like refactoring a simple switch statement, I feel this gives us a more elegant solution.