That one time I was a zombie…
Posts as part of the A-to-Z blogging challenge.
For those not in the know, I’m from Yorkshire. West, to be more precise. And whilst I’m not a religious person, I can see why it’s been nicknamed God’s Own County.
It can sometimes be surprising just how much pride I’ll take in being a Yorkshire lad. Whilst I’m up here, everyday life just goes on as normal. But if I’m away, especially on the other side of the Pennines, I become much more vocal. The old adage of “How do you know if someone’s from Yorkshire?” “Don’t worry, they’ll tell you!” rings fairly true, more so after a few ales.
And why wouldn’t I be proud to be Yorkshire Born, Yorkshire Bred (strong in t’ arm, weak in t’ head)? We were once the powerhouse of the country, with many of the huge textile mills still standing, most converted into office and commerce complexes which give the town centres a unique feel. Even our local outdoor market is held in the Piece Hall, an 18th century trading building, making it the oldest shopping centre in the UK. Add to that two pubs that are older than the USA, rumours of underground tunnels connecting various buildings from centuries gone by, and the invention of the infamous Halifax Gibbet (a forerunner to the guillotine), and it’s clear why I’m happy to spend my regular working days in a town full of rich history.
Look, stop thinking I’m just hitting the keyboard at random!
Back in the mists of time there was a text adventure, creatively named Advent (you could only have six-character file names in those days). While the format may now be familiar to a lot of people, at the time the “go north, get key” style of game was revolutionary. There was even a part where you were given a magic word to teleport you to another part of the map.
That word was xyzzy.
Some 40 years after the game’s initial release, this magic word is still a part of retro gaming and programming culture. I’ve used it for passwords, for debugging commands, even set it as a network name.
I’m also part of a proud tradition. Many years ago I entered an Interactive Fiction (the modern name for text adventure games) competition, and whilst I didn’t win anything (expected, as it was a little rushed), my game would respond if you attempted to use the magic word:
Mists pour from the ground below you. They begin to spiral around your legs. In the distance you can make out shadowy figures coming towards you through the mist. You then hear a deep, booming voice say “Oh sorry, wrong game.” The mists vanish, and everything is back to normal.
So next time you find yourself in an old computer game, give xyzzy a try, and see if there have been secrets lurking underneath all this time.
No, I didn’t just mash the keyboard… It stands for Warhammer Fantasy Roleplay.
Being a programmer and being a geek kinda go hand-in-hand. And whilst Dungeons and Dragons is by far the most popular, I tend to lean towards WFRP. Mainly because it isn’t just a hack and slash, especially the way our group play it. We get a lot more into the “roleplaying” aspect of it. If you want to find out whether the victim had any enemies by talking to the bartender, it’s not a case of rolling some dice; you’d better start talking and hope you can steer the conversation in the right direction.
The whole system has a double appeal to me too, as there’s a wealth of source material and a whole lot of random tables you can roll on. There are over 100 entries on just a simple mutations chart! And to a programmer, that’s like a red rag to a bull. I’ve already got some code in place which can generate details in a fraction of a second, as opposed to the 20 minutes it can take to roll through the tables manually.
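As a rough illustration of the idea, here’s a minimal sketch of rolling on a range-based table, the way the rulebooks lay them out: roll a d100 and find the range it lands in. The entries below are invented placeholders, not the real WFRP mutations chart.

```php
<?php
// Hypothetical sketch: roll a d100 and look the result up in a
// range-keyed table. The entries are invented; the real WFRP
// mutations chart has well over 100 of them.
function rollOnTable(array $table)
{
    $roll = mt_rand(1, 100);
    foreach ($table as $max => $entry) {
        if ($roll <= $max) {
            return $entry;
        }
    }
    return end($table); // safety net, shouldn't be reached
}

// Each key is the top of the roll range: 1-25, 26-60, 61-100.
$mutations = array(
    25  => 'Bestial appearance',
    60  => 'Extra limb',
    100 => 'Iron hide',
);

echo rollOnTable($mutations), "\n";
```

Chain a handful of those calls together and a whole character’s worth of rolls comes back in milliseconds instead of 20 minutes of page-flipping.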
Like a few other posts in the A-to-Z challenge, I hope to run a series of posts on this topic once the challenge is over, and hopefully see who else mixes swords and code.
A quick one today, something to bear in mind when using third-party software in your websites.
I was tasked with adding CMS functionality to one of our sites to allow some of our staff to edit the content directly. Did some searching and found that ModX fit the bill nicely. Installed it on a test server, played with it a bit to make sure it would do what we wanted, all was good. As part of the evaluation process I started to add one of the more complicated features that would be needed. It worked fine, so the next feature got added. Before I knew it, the site was done, ready for deploy. A rare occasion where everything went fine.
Until…
Another developer pointed out that I’d installed the latest dev branch of ModX during my initial assessment. For the uninitiated, programs generally come in two flavours, stable and dev. Stable is “everything is fine, everything works, build your website”. Dev is “we’ve added some cool new features, it should work, but don’t use it on a live site, just in case”.
Cue two days of installing the earlier stable package and converting weeks of code to the older, but stable, version.
So always make sure you’re on the right release path before going too far, or you’ll look like a bit of an idiot in front of your team…
I touched on unconferences in an earlier post, but I feel the special qualities of them warrants a little extra coverage.
The normal conferences I attend follow the regular pattern: a list of speakers announced a month or so in advance, a morning full of talks, a lunch break, then an afternoon full of talks. This is great for classroom-style learning, but most of the fun happens in the evening when you have the chance to discuss the topics with other attendees.
Unconferences (like the one I’m attending next week) have two major differences. Firstly, after every talk there’s a 15 to 30 minute coffee break. This means you can immediately discuss any ideas that were brought up during a talk. If you’re lucky you can catch the person giving the talk, both to offer thanks and expand your knowledge a bit with a quick chat. With this going on through the day, by the evening you’ve already done your extra learning and can relax. Or fire up the laptop at the bar for an impromptu coding session.
The other difference is they don’t organise any speakers in advance. This has two benefits. One, it makes the whole event a lot cheaper; you’re not paying out speakers’ fees, arranging transport and accommodation for them, etc. The other benefit is that any attendee can propose a talk. These are submitted on the morning and then voted on by the rest of the conference attendees. This is why I’m preparing a talk at the moment to propose next week. There is no chance I’d get up on stage in front of a couple of hundred people and give a talk. But to a room of twenty people who know in advance that I’m not a professional speaker (and so hopefully won’t heckle me too much), well, I no longer have any excuses.
So as well as being a great place to learn and share ideas, an unconference can be the ideal place for someone to give a talk for the first time. Hopefully…
Since I became a full-time developer a few months ago, I’ve experienced a couple of “non-forecastable critical infrastructure failures on deploy”.
That is, we thought code was ready, deployed it, and it broke. Aside from lack of caffeine, this is the greatest peril a developer faces on a daily basis. It’s all very well running your script 50 times and it returning data 50 times, but what if on the first day of being live the code returns a simple error? Does it fail gracefully? Does it politely inform the user the action couldn’t be completed and to retry? Does it make the server lock up?
I’ve known of PHPUnit, a popular testing framework for PHP, for a few years, but it’s only in the last couple of months that I’ve started to use it properly. The first problem is that you have to write testable code. If you have something like this:
Get user input
Retrieve some data from the database
Make a call to the web-service
Get the result data
Update the database
Format it
Show it to the user
then every time you run the script it’s hitting the web-service and changing data in your database. If you happen to be charged per-call to a web-service, you’re in trouble.
The first step on my path to enlightenment was The Grumpy Programmer’s Guide To Building Testable PHP Applications. Whilst it only skims the use of PHPUnit, it tells you how to get your code into a state where testing is easy. The fancy term for it is Dependency Injection. It just means your code shouldn’t depend on anything that can’t be passed in at the start. DI could have a whole series of blog posts written about it (already being considered!), so let’s drop to a simpler level.
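To sketch the idea (the class and method names here are mine, not from the book): instead of the code reaching out and creating its own web-service client, it gets handed one, which means a test can hand in a fake instead.

```php
<?php
// Hypothetical sketch of Dependency Injection: the builder is
// handed its data source rather than creating one itself.
interface NameSource
{
    public function fetch();
}

// A fake source a test can inject - no network, no database.
class FakeNameSource implements NameSource
{
    public function fetch()
    {
        return array('chris', 'armitage');
    }
}

class ReportBuilder
{
    private $source;

    public function __construct(NameSource $source)
    {
        $this->source = $source; // the dependency is injected here
    }

    public function build()
    {
        return implode(' ', array_map('ucfirst', $this->source->fetch()));
    }
}

// In production you'd inject a real web-service client instead.
$builder = new ReportBuilder(new FakeNameSource());
echo $builder->build(), "\n"; // Chris Armitage
```

The point is that nothing inside ReportBuilder knows or cares whether its data came over the wire or from a three-line fake.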
For instance, let’s focus on the formatting bit of our above script. It doesn’t need to know where the data came from, or what happens to it next. So I could break it up to be
Get some data
Format it
Pass the data to something else
The outcome would be the same, but now I’ve isolated this chunk. Instead of always using the response from the web-service, I can use PHPUnit to say ‘The data is “[chris,armitage]”, the outcome should be “Chris Armitage” with no errors’. I can run this over and over without ever racking up web-service charges or messing up the database. But where it really comes into its own is errors. Say the web-service itself goes down. The data it returned would be “[]”. Can my script handle that? Waiting to run it against the service that may break once every six months is impractical. But I can use PHPUnit to do another run at it, this time passing in the empty data. In that circumstance it should throw an error stating the web-service is down. Does it? Yes? Great, I have confidence that my code will still work fine, even in the most bizarre conditions.
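A minimal sketch of that isolated chunk (the function name and error message are mine; in practice these checks would be assertSame() calls in a class extending PHPUnit’s TestCase, but plain assert() shows the shape):

```php
<?php
// Hypothetical formatter: the isolated chunk that doesn't care
// where its data came from or where it goes next.
function formatName(array $parts)
{
    if (count($parts) === 0) {
        throw new RuntimeException('Web-service is down');
    }
    return implode(' ', array_map('ucfirst', $parts));
}

// Happy path: known input, expected output, no web-service involved.
assert(formatName(array('chris', 'armitage')) === 'Chris Armitage');

// Failure path: simulate the web-service going down with empty data.
$threw = false;
try {
    formatName(array());
} catch (RuntimeException $e) {
    $threw = true;
}
assert($threw);

echo "all checks passed\n";
```

Both paths run in milliseconds, with no database touched and no web-service charges racked up.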
The other great power of PHPUnit is that, once the tests have been written, they sit there and can be run whenever you like. Say I do something very odd for another part of the project, like turning all square brackets ([) into curly braces ({). I’ll run my tests and suddenly the formatting code that I wrote months earlier will fail; it can’t understand its input any more. I was working on a completely different part of the program, so I wouldn’t have thought of going back and checking that bit still worked. PHPUnit and testing just saved me from deploying broken code.
I’m now using PHPUnit on a daily basis, following the path of the Grumpy Programmer. 95% of the new code I’m writing is covered by tests, which gives me a huge amount of confidence that it will work first time without any 3am errors. And I like having lots of green ticks on my code…
Following on from the Graphite post, I’ve started to get my head around how to use it properly in the business. It helps that some of the metrics I’m tapping into are now producing a lot of data, so I can prod and poke them to see what they can tell me. It’s all very well having a line on the page that tells me how many searches we’ve done, but it needs to be more… understandable.
First up, I found summarize(). With data being added to the graph every 10 seconds, the lines can get very busy. Searches can fluctuate quickly, so it becomes a mess to read. The summarize() command can total up all the searches into chunks, say one hour blocks. It’s now clear to see we did 5000 searches in one hour, followed by 5800 in the next. With it now plotting 24 data points (we generally look at things on a day basis) the graph is much easier to look at and understand quickly.
Once I had data that was clearer, I wanted to do some comparisons. Whilst devs can look at script execution graphs all day, management like to see how the business is performing. Using the timeShift() command, I can adjust single lines on the graph forwards or backwards in time. I can have the last 24 hours of data overlaid by the data from 24 to 48 hours ago. We’re now at 48 data points (still very clear on the graph) but you can start to see patterns emerging. Saturday evenings are quieter than Sunday evenings. Makes perfect sense. But a sudden drop in searches on a Wednesday afternoon compared to the Tuesday at the same time? Best go checking the logs, see if something happened.
However, to really make it easy to see what’s going on, I found the diffSeries() command. It’s designed to take two graph lines and subtract one from the other. Combining this with the timeShift() technique, I have a single line telling me the difference, by the hour, in the number of searches. 16:00 and we were fairly even; 18:00 and we were doing 1000 searches an hour more than at the same time the previous day. Now we have the kind of stats that management can make use of, using the exact same system that I deployed to keep track of web-service call lengths (the kind of thing management go glassy-eyed at).
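Chaining the three together, a single Graphite target can draw that difference line (the metric name here is invented for illustration):

```
diffSeries(
    summarize(stats.searches.count, "1hour", "sum"),
    timeShift(summarize(stats.searches.count, "1hour", "sum"), "1d")
)
```

summarize() buckets the raw 10-second counts into hour blocks, timeShift() rewinds its copy of the series by a day, and diffSeries() subtracts one from the other.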
And with management happy, I get cake.
(I don’t actually get cake, but I might start charging cake for making new graphs for them)
To understand recursion, first you have to understand recursion.
Recursion is a technique in programming (and presumably further afield) that can be amazingly useful, but a little difficult to get your head around. It’s when a piece of code calls itself, performing a nested operation.
Or in English, think of a basic family tree. We have the StartingPerson, and we need their parents.
I’m going to be using some pseudo-code to help explain it. It’s not a specific language, but will hopefully be readable to non-programmers.
Mother = StartingPerson->getMother
Father = StartingPerson->getFather
But what about the next level? Well we could do
Mother = StartingPerson->getMother
MaternalGrandMother = StartingPerson->getMother->getMother
MaternalGrandFather = StartingPerson->getMother->getFather
Father = StartingPerson->getFather
PaternalGrandMother = StartingPerson->getFather->getMother
PaternalGrandFather = StartingPerson->getFather->getFather
And the next level? We’re already into very messy code. So we can change the technique. Any person will have a mother and father, so if we change the focus of the starting person, we can now do
CurrentPerson = StartingPerson
Mother = CurrentPerson->getMother
CurrentPerson = Mother
MaternalGrandMother = CurrentPerson->getMother
Now we have the same call twice, “CurrentPerson->getMother”. In programming terms, this means we can start to refactor the code. If a call is used more than once, it can be re-jigged to avoid repeating yourself.
getMotherRecursive = function(CurrentPerson)
    return CurrentPerson->getMother
end function

Mother = getMotherRecursive(StartingPerson)
This creates a new function that we can call repeatedly. StartingPerson is then passed in to the function and becomes CurrentPerson. The “return” line then sends the data back, so Mother = StartingPerson->getMother.
Now for the complicated bit: we also want the Mother’s Mother.
getMotherRecursive = function(CurrentPerson)
    return getMotherRecursive(CurrentPerson->getMother)
end function

Mothers = getMotherRecursive(StartingPerson)
Here’s how this would work: StartingPerson is passed in and becomes CurrentPerson. The function fetches their Mother, then immediately calls itself with that Mother as the new CurrentPerson, fetching the GrandMother, and so on up the family tree.
By using a recursive call, we can go as many generations deep as we want, with only four lines of code. There are a few problems to be aware of when using recursion though.
Firstly, how do you return your data? You could build a list, so “return CurrentPerson + getMotherRecursive(CurrentPerson->getMother)” would give something like “Me, Mother, GrandMother, GreatGrandMother”. That way you always get back a group of data instead of a single value.
And secondly, when does it stop? This code can go back 10 generations. 100 generations. Back to when we crawled from the primordial ooze! A common problem programmers hit when first using recursion is not putting a limit on the call. This will cause the script to never finish, closely followed by threats being issued to the computer. Maybe add a check on the Mother’s date of birth: if it’s before 1800, don’t return another mother, return the end of the list.
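As a taster of the PHP version, here’s a hedged sketch of the pseudo-code above (the Person class and method names are invented for illustration). A missing mother record doubles as the base case that stops the recursion:

```php
<?php
// Hypothetical Person: just a name and an optional mother.
class Person
{
    public $name;
    public $mother;

    public function __construct($name, Person $mother = null)
    {
        $this->name   = $name;
        $this->mother = $mother;
    }
}

// Walks up the maternal line, returning a list of names.
function getMothersRecursive(Person $person)
{
    if ($person->mother === null) {
        return array(); // base case: no recorded mother, stop recursing
    }
    return array_merge(
        array($person->mother->name),
        getMothersRecursive($person->mother) // the recursive call
    );
}

$gran = new Person('GrandMother');
$mum  = new Person('Mother', $gran);
$me   = new Person('Me', $mum);

echo implode(', ', getMothersRecursive($me)), "\n"; // Mother, GrandMother
```

The real code would use a date-of-birth check or a depth counter rather than relying on the records simply running out, but the shape is the same.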
This is only meant as an introduction to recursion; it’s a tricky subject to understand. Like previous A-to-Z challenge posts, I’m preparing a far more technical version of this for release afterwards, targeted at PHP programmers.
What?
Well I could have said “the current version of Ubuntu” but that doesn’t begin with a Q.
For the uninitiated, Ubuntu is a computer operating system based on Linux. It’s what I’m currently using to write this post. As the majority of the internet runs on Linux-based machines, it makes sense for web developers to develop in the environment they’ll be deploying to. Whilst most of the tools I use in dev work are available for Windows, almost without exception they’re better and easier to use in Linux. It also ties into the Open Source ethos, and Linux machines are regarded as far more secure than Windows machines: no viruses (virii?), exploits that get patched within hours of being found, and the freedom to hack on it if you want.
But Quantal Quetzal? As well as having proper version numbers (the current one being 12.10), Ubuntu releases are also given code names. The last few include Precise Pangolin, Oneiric Ocelot, Natty Narwhal and Maverick Meerkat. In another few years they’ll be able to take part in the A-to-Z Challenge themselves!