It’s been a while since I wrote a blog post about a project, mostly because I’ve been very busy with my day job. However, I thought I would break “radio silence” and write this one up, because it was huge fun, and I think parts of it might be of use to others.
One of the things I often get asked in my role as a STEM Ambassador is “What’s it like to be an engineer?”. I’ve thought about this a lot and honed my elevator pitch, but one phrase always comes out: “It’s a huge amount of fun, and hard work”. I certainly had a lot of fun and did a lot of hard work (at the last minute!) putting this one together. The program was finished at 11pm on Saturday night; I boxed it all up, went to bed, got up at 5.30am on Sunday and drove to Goodwood to install and run it. But it all worked in the end!
Here’s the story of how I got involved in this project, and the steps involved in putting together the world’s largest Raspberry Pi display.
The Customer Request
On 26th September, Rob, aka “Mr Timekeeper” from Greenpower emailed me thus:
Hi David, Hope you are busy enjoying plenty of pi. Can you cast your mind back to the Greenpower finals last year? Across the track from us was a truck with a big video screen. I've been asked to come up with a way of putting results on it. I think the best chance is to construct a form from data on the live results web page using suitable text. The tricky thing is that there is not a lot of resolution. As far as I can see from the detailed specification sheet (a whole page, attached) the form needs to be 1600x900 and the text should be no smaller than 24pt. Not a lot of room for masses of text. There are also issues with colours that work well. Something like the top line with number of laps completed and a clock, the next line with the race leader (car number and name) and additional scrolling lines showing the rest of the field, car number, gap from race leader and name. Once all the cars in the race are scrolled then the data can be refreshed and the display starts to scroll again. The data can be gathered from the live web screen. Is this something you can have a go at throwing together?
The race was planned for Sunday 13th October (about 2 weeks away), I had just agreed with a customer to work onsite full-time for 2 weeks, and I had clubs organised for all the other weekends. Right, no sleep then for 2 weeks!
Rob has run the race timing for Greenpower events for a few years now. He writes his own software in Delphi (a Pascal-based environment), which at intervals does an FTP file transfer of the results up to his website.
On his website, anyone can view the latest race results. A MyLaps decoder with a loop aerial across the finish line logs each car as it crosses the line, via a uniquely serial-numbered “transponder” attached to each car. It’s a pretty robust design, and he tours the country with it.
I knew I was busy with my day job and would only have a few hours in the wee small hours of the morning to put this together, so I had to come up with a solution that would be robust enough to work reliably, but wouldn’t take too long to build. Having done quite a lot of work in schools with the Raspberry Pi (and knowing that Greenpower have teamed up with the Raspberry Pi Foundation to run a competition to win one of 100 Raspberry Pis), I thought it would be a great opportunity to show the power of this tiny computer. I also knew it would give me the chance to use the headline “world’s biggest Raspberry Pi display”, which was a huge incentive to get it all working!
Like any good software project, I split it down into smaller parts:
* PLATFORM – how to configure the Raspberry Pi to be a headless display device.
* PAGE DISPLAY – how to draw and format the pages within strict pixel-size and screen layout requirements
* DATA PROCESSING – how to turn Rob’s data into a form I could process and display
* FETCHING DATA – how to get data into the Pi in the first place
* NETWORKING – how to get a reliable connection to the internet.
I’ll work “bottom up”, so you can see how I built the system from the ground up.
Platform
I needed a headless box – something with power and internet in, and HDMI display out.
Some problems I had to solve here were:
* how to auto-start X windows
* how to auto run a program
* how to turn off the screen saver
* how to centre the display screen
* how to turn off the mouse pointer
This is all about “building a kiosk computer” – a computer that can start and operate completely unattended. Many have struggled with parts of this over recent months, and there are now quite a few useful solutions out there. I chose to “build on the work of others”, and used the power of internet search to get a platform working quite quickly. Most of this is documented in the attached HOWTO files, with appropriate web links.
Page Display
This one is quite interesting, and my choice of page display technology is not an obvious one, but it had some nice surprises along the way. I was originally going to just use a web browser window, but I was worried I would not have fine enough pixel-level control over the screen size and layout to meet the strict requirements of the display.
The specs of the display are in the links section at the end: it’s the ADI ICONIC 100, the world’s largest [moveable] HD LED TV. You have to read the specs and watch the video on their website to realise how awesome this display is. Somehow, even that was not enough preparation for standing in front of the articulated lorry that supports this thing!
I also knew that I would have to develop and test the software while on the move, doing other work around the country, and taking my Pi and all its bits with me on the road would not be practical. So I wanted to develop the software on my laptop, and at the last minute move it over to the Pi to test and run it. I wasn’t convinced that browser-based technology would be consistent enough across these two platforms to guarantee it would work.
So, I chose to use pygame. I’ve done a bit of work with pygame in school Raspberry Pi clubs – some of the kids are writing old-style arcade games with it – so I knew it had pixel-level control of the screen and would be up to the job.
The challenge here was to fit up to 96 cars on a screen that specified “at least 24 point text”. I came up with a scrolling solution – the top 3 cars are always displayed on the top half of the screen, and the bottom half of the screen cycles through the rest of the field in groups of 3 on a 3-second timer. It worked pretty well on the day. The screen was visible right back to the commentary box at Goodwood, and the drivers of the cars crossing the finish line could see the information as they drove past. [There’s much we could do to improve the display layout, but it was “good enough” for its first use.]
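The cycling scheme can be sketched in a few lines. This is a simplified illustration rather than the actual program’s code – the function and variable names are made up – but it shows the idea: the top 3 rows are pinned, and a tick counter (driven by the 3-second timer) selects which group of 3 fills the bottom half.

```python
GROUP_SIZE = 3

def visible_cars(cars, tick):
    """Return the rows to display at a given timer tick:
    the top 3 cars, plus one rotating group of 3 from the rest."""
    top = cars[:GROUP_SIZE]
    rest = cars[GROUP_SIZE:]
    if not rest:
        return top
    # Split the remaining field into groups of 3 and pick one per tick.
    groups = [rest[i:i + GROUP_SIZE] for i in range(0, len(rest), GROUP_SIZE)]
    return top + groups[tick % len(groups)]
```

With a full field of 96 cars there are 31 groups below the leaders, so a complete cycle at 3 seconds per group takes roughly 93 seconds.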
Data Processing
[See TagParser.py, PageParser.py, StringShortener.py]
Rob’s email indicated that I would have to scrape his HTML pages for the data. That told me he had no spare time to make any changes at his end, so I had to make do with what was already there. So, I knew I would have to connect my Pi to the internet, fetch an HTML page on an interval, scrape through the HTML tags to extract the information I needed, and re-display it.
Here I knew I had to re-use something to get this bit working reliably and quickly, as time was short. But I didn’t like any of the HTML page scraping methods I looked at in python, they all looked a bit too risky to me (see urllib2) given that I was really reverse-engineering the HTML page format and I needed it to either just work, or be very very simple to patch on the day if I found some scenario I had missed.
Last year I wrote a back-end for an estate agent’s website in PHP that fetched XML data from the RightMove XML API, parsed through it and imported it into the database that drove the website. That script still runs today, once every hour, without fail. In it I used a “design pattern” that I first developed probably 15 years ago, in Java. So, this idea has had quite a lot of mileage!
I always prefer to use SAX based parsers because the memory requirements are pretty constant and the program will scale from small to huge data sets with no changes. But my gripe about the Java SAXP libraries is that you still have to do a lot of the “wiring up” yourself – it’s much lower level than DOM.
So, my standard design pattern is to process each XML tag as it is received: every “open tag” is pushed onto a stack, and every “close tag” pops the matching tag off again. This stack is then used to build a path to the “node presently being processed”. I can then configure a list of paths I am interested in, and a list of functions to handle those paths. The rest is just a few lines of knitting code. It’s very simple to configure and hugely reusable.
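A minimal sketch of this pattern is below. It is not the code from TagParser.py or PageParser.py – it uses Python 3’s `html.parser` for brevity (the real program ran on Python 2.7), and the tag path and handler are illustrative – but it shows the stack, the path, and the handler dispatch:

```python
from html.parser import HTMLParser

class PathParser(HTMLParser):
    """Event-driven parser: maintain a stack of open tags, build a path
    from it, and dispatch text at interesting paths to handler functions."""

    def __init__(self, handlers):
        HTMLParser.__init__(self)
        self.stack = []           # open tags, e.g. ['html', 'body', 'td']
        self.handlers = handlers  # path string -> function(text)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)    # "open tag" pushed onto the stack

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()      # "close tag" pops it off again

    def handle_data(self, data):
        path = "/".join(self.stack)   # path to the node being processed
        fn = self.handlers.get(path)
        if fn and data.strip():
            fn(data.strip())

# Configure one path of interest and collect whatever text appears there.
results = []
p = PathParser({"html/body/table/tr/td": results.append})
p.feed("<html><body><table><tr><td>42</td><td>Team A</td>"
       "</tr></table></body></html>")
```

The “knitting code” really is just the few lines in `handle_data`; adding a new extraction is one more entry in the handlers dictionary.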
So, HTML is sort of XML, if you squint a bit. Over the years this design pattern has changed from a Java implementation into a PHP implementation, and now a Python implementation.
It worked very well. On the day of the race I found a tag that I had handled incorrectly and was able to patch it live very quickly with almost no code changes.
Oh, and there’s a lovely little algorithm I wrote in StringShortener.py that tries to shorten the length of entrant names to fit into the limited available space. Take a look at the code, I was very pleased with that, it’s a lovely “computer science-y” solution using a simple string rewriter with a “least fixed point algorithm”. Love them. Use them all the time.
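The “rewriter run to a fixed point” idea can be illustrated like this. To be clear, this is not the code from StringShortener.py, and the abbreviation rules here are invented; the point is the shape of the algorithm: keep applying the rules until the string fits, or until a pass changes nothing (the fixed point), and only then truncate as a last resort.

```python
# Illustrative abbreviation rules (not the real ones).
RULES = [
    ("  ", " "),
    ("School", "Sch"),
    ("College", "Coll"),
    ("Racing", "Rcg"),
    (" and ", " & "),
]

def shorten(name, max_len):
    """Rewrite `name` using RULES until it fits in max_len characters,
    stopping early once a whole pass makes no change (a fixed point)."""
    while len(name) > max_len:
        new = name
        for old, rep in RULES:
            new = new.replace(old, rep)
        if new == name:        # fixed point reached: no rule applies
            break
        name = new
    return name[:max_len]      # last resort: hard truncate
```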
Fetching Data
Here I was completely brutal, and this one is a little bit of a hack.
However, like any good computer scientist, I abstracted the problem away into a separate module, to be tidied up later. The real problem here is that to do this properly and cope with all possible network failures, slow page loads, partial page loads, timeouts and the like, you have to run this in a thread. I’ve only just started to learn the Python threading libraries myself and wasn’t convinced I could handle all possible cases in the time I had available. So I “borrowed” a couple of lines of code from a Minecraft mashup project done with @martinOhanlon a couple of weeks ago – I just used the urllib2 page fetcher, inspired by his simple solution.
The code for this part is pretty naff, and if the network times out, it just returns with a flag saying “error”, which causes the display to show old data. But see the networking section below for how I handled this on the day so that it wasn’t a real problem.
The PageFetcher calling interface might look a bit top-heavy when you first look at it (there is not much code underneath it), but it was written so that I could add an intelligent thread that monitored how long it was taking on average for pages to load, with a simple control algorithm to fetch the page sooner if the pages were taking a long time – the thinking being that it is a tradeoff between managing “out of date data” vs “getting something just in time for the next display cycle”. None of this is implemented, perhaps I’ll add that for next year.
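The “just fetch, flag errors” approach amounts to something like the sketch below. The original used Python 2’s urllib2; this version uses Python 3’s `urllib.request`, and the function name is illustrative. On any failure it returns an error flag, so the caller keeps showing the old data rather than crashing.

```python
import urllib.request

def fetch_page(url, timeout=5):
    """Return (ok, text). On any failure ok is False, and the caller
    should keep displaying the previously fetched data."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as f:
            return True, f.read().decode("utf-8", errors="replace")
    except Exception:
        return False, None
```

A deliberately broad `except` is a hack, but it matches the design goal here: whatever goes wrong on the network, the display process must stay alive.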
Networking
No programming was required for this bit, but it’s worth a mention, because it was vital on the day that this worked reliably, and it took some time to get it all right (including a trip to the Harlow “Three” store late Saturday afternoon to buy a top-up for my dongle – thanks to the lovely guys and gals in “Three Harlow” for their help here!). I was very pleased with how this performed on the day.
Basically I needed to get an internet connection to the Pi, which was deep inside a metal van containing all the video mixing equipment. I had a second requirement: I wanted to be able to remotely log in (via ssh) to the Pi in order to monitor its operation and also to “fix bugs live” if needed (I knew there would be some bugs, as I was slotting this in around my day job).
The architecture here really paid dividends.
The Pi connected to a tiny ethernet-based wifi router configured as a wireless client, set to automatically connect to a separate wifi router over the other side of the race track (no opportunity for cables here!).
We attached a long ethernet cable and a long USB extender cable (for power) to this, wrapped it in a plastic bag and hung it out the back of the van. If you followed my @whaleygeek tweets on the day, you’ll see we had to attack this with a hair-dryer 3 times as it had filled up with water – the first time it went wrong the light on the front was making funny flashing patterns, we turned it upside down and a load of water drained out of it!!! Kudos to the guys at @ADI_LED for remembering to pack their hair-dryer!!!
Back at race timing control, I had a wifi router/switch, which was then connected to a “three” dongle via their Ethernet dongle base station. This is a fab device that allows me to plug a USB dongle into a box that then provides wired internet access.
The Pi gets a DHCP assigned IP address from the wireless router, then it can access the internet to read and scrape Rob’s web page of (regularly FTP’d) results to build and update the display.
But more importantly, my laptop was plugged into the wifi router, so I could log into its admin page, work out from “attached devices” what the IP address of the Pi was, and then SSH into it. Giving direct internet access to the Pi would not have allowed this, and I would have had no way to monitor its progress – and a long walk every time I needed to fix a bug!
This remote access was vital. I used it to run a regular ping to the device so I knew it was still alive, to run a process monitor so I could see if my Python program had crashed (and restart it if it had), and to edit the Config.py config file between races to change the screen dimensions and the URL that race data was fetched from – generally keeping an eye on things.
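On the day this check-and-restart loop was done by hand over ssh, but the idea could be automated as a small watchdog. This is a hypothetical sketch, not part of the actual package; the liveness check and restart action are injected as callables so the loop itself stays simple and testable (in practice they might shell out to `pgrep` and relaunch the display program).

```python
import time

def watchdog(is_alive, restart, checks, interval=0):
    """Run `checks` rounds; whenever is_alive() reports the process has
    died, call restart(). Returns the number of restarts performed."""
    restarts = 0
    for _ in range(checks):
        if not is_alive():
            restart()
            restarts += 1
        if interval:
            time.sleep(interval)   # e.g. 30 seconds between checks
    return restarts
```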
I had to fix one bug “live” on the day: part way through the first race, the program crashed trying to read an entry of data that was not defined. It appeared that in some circumstances Rob’s page returned null data, and I had not handled that in the program. I knew it had crashed because I was running a copy of the same program on a second Pi back at Race Timing, and that had crashed too. I quickly fixed it, tested it locally, hand-patched the program on the remote Pi using the text editor “vi” over ssh, and rebooted the Pi to get it to start everything up again. Just as I was deploying the change I got a phone call from Chris at ADI (running the vision mixing desk) saying he thought it had stopped, and I was able to tell him I was already deploying a fix!
The Code
I have included the full source code of the entire package here for others to read, pull apart, learn from, and build into other things. I had some teams on the day talk to me about perhaps building it into their cars, with a tiny reversing-camera display connected to a Pi, so that the drivers had a “heads up” display of the live race results to encourage them to work harder for that next race position!
Parts of the code are quite nice, and parts of it are a bit mucky – I’ll leave it up to you to decide which is which.
The programs are written in Python 2.7 and pygame 1.9.1, and run on both the Raspberry Pi and a PC (probably on a Mac too, but I haven’t tried). You’ll need to install the optional pygame package on the PC; the Raspberry Pi has it pre-installed.