Of Moodle and First Class Honours

Well, time for a nice hefty blog post I think, as I haven’t done one in a while.
I got my final results from university this week. I got a first. Everyone I know has been congratulating me, which is quite overwhelming. I’m happy, but I don’t seem to be as happy for myself as everyone else is for me! It’s probably because I was so worried that I wouldn’t get one that when I did, it was more a cause for relief than celebration.
Nonetheless, it’s the weekend now, so party time tonight. Hells Yeah.

I’ve also moved into a new flat with my lovely girlfriend, and started a new job. I’m working at Taunton’s College in Southampton as their in-house web developer. This involves working primarily with Moodle, the open-source Course Management System/Virtual Learning Environment. This is awesome, for 2 reasons. 1 – I get paid to code PHP, which is what I do for fun anyway. 2 – I get paid to contribute to an open source project, which is a position I’ve always wanted to be in. And it pays well enough for my nice new flat. And I get a local government pension. And I get to help people teach. Winner. Dream first job? I think so.

Moodle in itself is a pretty cool system, although it’s suffered a bit from its evolutionary development. The main problem is that when new and better solutions get introduced, the old ones remain. This is mainly a backwards-compatibility thing, which means a lot of it is being culled for version 2 (the upcoming major release), but it means at the moment there are 3 different ways of keeping track of which Javascript files a page needs, a really flexible permission system which relies on an older “roles” system for assigning the permissions, and lang files for older components all over the place.

That said, the current “best practice” provides some really nifty plug-in APIs, and the database abstraction layer makes interacting with the database a breeze. Hopefully once version 2 hits the mirrors, the cruft will have been cut back, and the new plug-in points will make it an even more versatile platform than it already is (come on, gradebook plugins!).

The Moodle community’s also brilliant, as are my Taunton’s colleagues. I look forward to working with them all to make Moodle better!

DRM-Free iPlayer download link with iplayer-dl

I’m a big fan of BBC iPlayer. However, I’m not such a big fan of DRM-encumbered downloads, and the flash player doesn’t work on my netbook. For this reason, I use iplayer-dl, a clever little Ruby app that pretends it’s an iPhone, and lets you download DRM-free versions of the videos.
The downside of iplayer-dl is that it requires you to manually copy and paste the URL or ID of the video into the command line. I thought it would be handy if there was a link on the iPlayer page to download an episode through iplayer-dl.

After about an hour of messing, I was successful. The steps I followed are below. Please note that this is a VERY hacky solution, and probably isn’t the best way to do it, nor do I recommend anyone else does it this way. One stage of it potentially leaves your system vulnerable to attack.

  1. Install iplayer-dl
    This bit’s pretty straightforward, the instructions are on the iplayer-dl site linked above.
  2. PHP Script
    I run a local web server on my netbook for web development purposes. I created a script called iplayer.php in my public_html folder with code similar to the following:
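    Something along these lines, with the Content-Type value and the exact iplayer-dl invocation as stand-ins rather than what I actually used:

```php
<?php
// Stand-in sketch: the Content-Type value here is a placeholder (I'm
// deliberately not publishing the real one), and the iplayer-dl command
// line is an assumption based on its usual usage.
header('Content-Type: application/x-example-iplayer');

// The show's ID or URL arrives on the query string
$id = $_GET['id'];

// Output a shell script that opens a terminal and runs iplayer-dl in it
echo "#!/bin/bash\n";
echo "gnome-terminal -x iplayer-dl " . escapeshellarg($id) . "\n";
```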


    This code essentially generates a shell script which will open a gnome-terminal window (it could equally use Konsole, xterm etc.), and run iplayer-dl with the specified parameters. It gets the show’s ID or URL from the query string.
    Note the Content-Type header. I haven’t actually included the Content-Type I used here, since I’ve told Firefox to open it in Bash automatically (remember the vulnerability I mentioned?). Suffice to say, I chose something fairly innocuous that I’m unlikely to ever come across on the web, so I won’t get weird things suddenly trying to execute in bash.

  3. Displaying a link to the PHP Script
    This piece of magic was done with the Greasemonkey Firefox extension, which is well worth a play with, especially if you’re good with Javascript. Long story short, it lets you define scripts to run automatically after certain pages have loaded, allowing you to edit how they look.
    I wrote a script to run on BBC iPlayer’s episode pages (the pages where the actual video player is displayed). It looks like this:

    var dlFlash = document.getElementById('download-air');
    dlFlash.innerHTML = '<a href="http://localhost/~mark/iplayer.php?id='+window.location+'" target="_blank">Download with iplayer-dl</a>';

    Obviously this exact script only works for me. If you did it yourself, you’d have to change the href to the location of your iplayer.php. Essentially all this script does is replace the normal download link on the page (which tries to download the video in BBC’s Adobe AIR-based client) with a link to my PHP script (effectively, to the shell script generated by the PHP). By a happy accident, it even keeps the icon and background of the download button, so it’s nice and aesthetically pleasing.

  4. Click the Link
    Clicking the link pops up the usual “What shall I do with this file?” dialogue. Now, I want as little faff as possible, so I set it to run with bash, and to do it every time without asking. This associates the MIME type I chose earlier with Bash, so in future all scripts produced from these links will open the download instantly. For this reason I didn’t use application/x-sh as the MIME type, as I didn’t want to accidentally click a link to a real shell script on another page and have it execute automatically! If you were happy to select bash and click “OK” each time you wanted to download from iPlayer, you could use this MIME type.

And that’s it! I’ve now got a handy DRM-free one-click download link on every iPlayer page.

Testing IE6, IE7 and IE8 on one machine

Just a quick post because I’ve found an amazingly useful application.
As a web developer, I’m constantly plagued by Internet Explorer and its non-standard behavior. Granted, IE 8 goes a long way to solve this, but all-too-often the places where I work (still!) use IE 6. This means that I build an app, test it on the “proper” browsers and IE 8 (in a Windows Virtual Machine), then take it to the client and they show me a whole load of rendering errors in IE 6. Windows won’t let you install 2 versions of IE, or downgrade from the one you’ve got, so I’m a bit stuck unless I want to run a separate VM for each version.

The solution: IETester. This is a brilliant freeware Windows app that lets you open a series of tabs for the various versions of IE (5.5, 6, 7 and 8) to test the rendering and Javascript engines side-by-side. It Just Works, and until IE 8 adoption is widespread (read: when people are forced to stop using XP), is a must for all web designers and developers!

Of Stem Cells, Twitter, and the Catholic Church

I was looking at Twitter’s trending topics today, and noticed the #stemcellresearch hashtag. A bit of research revealed that the tag referred to the newly proposed guidelines regarding state-funded stem cell research, which are currently open for comment from the public.

Here’s the rub: The NIH website has a form for people to post their views about the proposed guidelines before they’re finalised and enforced. However, as you’d expect, there’s opposition to the suggestion of stem cell research being allowed at all. The main opposition has come from the United States Conference of Catholic Bishops, who have essentially instructed their followers to flood the NIH with comments opposing the guidelines. According to this forum post, 99% of the 6000+ comments opposed the guidelines on this basis. Of course, the scientific community weren’t keen on letting this lie.

This message was published informing people of the situation, and requesting that they post comments in support of the research, and of some amendments to the guidelines.
It wasn’t long before this found its way onto Twitter, with the associated hashtag, thanks to Neil Gaiman, and became a trending topic.

Personally, I’m all for Stem Cell research. I don’t agree with the USCCB that it constitutes government funded “killing”. I don’t think an embryo is a human being any more than a strand of hair is.

The real question this poses is, who will win the debate? Obviously, the comments will have to be reviewed a bit more than “Who shouted loudest?” to assess how the guidelines should be reviewed, but I’m interested to see how much sway Twitter has. Can Twitter get more people to comment than the Catholic Church? Personally, I think that people will be more concerned with re-tweeting the message than actually commenting on the guidelines, but if a cat can get half a million followers, surely the Twitter community could make a dent on 6000 comments?

Oh, and if you’re on Twitter, you might want to check out What The Trend?, which keeps track of trending topics, and what they’re about. It’s pretty handy for explaining some of the more obscure hashtags.

Displaying MySQL enum values in PHP

I’ve always been in the habit of using MySQL’s enum data types, allowing you to define a set of values allowed in a particular field. This saves having to create an extra two-field table while still keeping a strict set of values for a field. The problem with this approach over a separate table is that you can’t just do a SELECT query to get all the available options for the field, for example to display in an HTML <select> box.

After some googling I turned up a HowTo by Bill Heaton with a reasonable solution. However, there are 2 niggles I had with his solution: it uses a load of string processing functions when it could use a single regular expression, and it’s a function, not a class.

The solution I’ve got in practice uses a “query” class I wrote, but for the example I’ll use plain old mysql_query():

class enum_values {
	public $values;
	public function __construct($table, $column){
		$sql = "SHOW COLUMNS FROM $table LIKE '$column'";
		if ($result = mysql_query($sql)) { // If the query's successful
			$enum = mysql_fetch_object($result);
			// Pull the quoted values out of the enum(...) definition
			preg_match_all("/'([\w ]*)'/", $enum->Type, $values);
			$this->values = $values[1];
		} else {
			die("Unable to fetch enum values: ".mysql_error());
		}
	}
}
Let’s have a look at that code. If you’re not familiar with OOP in PHP 5, we start off by creating a class called “enum_values” with 1 property ($values) and a constructor that accepts 2 arguments, the table and column we’re getting the values from. This constructor is called when we create an instance of the class (an object), e.g.

$example = new enum_values('table_name', 'column_name');

Right, so within the constructor we have a “SHOW COLUMNS” query. This will show us the structure of a specified column in a specified table. The fields returned include “Field” (containing the column’s name) and “Type”, which contains the column’s data type and, in the case of an enum, its possible values. This is the field we’re interested in.

To get at the values field we use mysql_fetch_object to create an object containing the fields and their values. We could equally use mysql_fetch_array here, but I like objects.
The next line is where the magic happens. The string we’ve got to work with in the Type field looks like:

enum('value1','value2','value3')
Bill’s solution suggests using substr to cut the string down to the list of values within the brackets (after finding the position with strpos), using explode to split the resulting string into an array by using the commas as delimiters, then looping through the array and removing the quotes around each value with str_replace. But wouldn’t it be a lot nicer if we could just extract the values without the quotes in the first place?

The way I solved this was with preg_match_all, which has the wonderful ability to do all this in a single line. Firstly I needed a regular expression that matched characters in single quotes. The regex '([\w ]*)' will match a single quote, followed by any number of alphanumeric characters and spaces, followed by another single quote. Note that I haven’t used a . to allow any character inside the quotes, since this would allow single quotes, meaning the entire string would be matched. The parentheses are used to “group” the characters inside the quotes, meaning we can refer back to them.
The regex is then delimited with slashes (to show PHP it’s a regex), then double quotes (since we need to pass it as a string). The resulting argument passed to the function is "/'([\w ]*)'/".

The second argument is simply the string we’re operating on, $enum->Type. The third argument is a variable that’s going to store every match found in an array. But it’s even better than that. What we actually get is a multidimensional array, with [0] containing an array of the full matched strings, and [1] containing the contents of the first “group” within each match. If there was a second group, it would be in [2], a third in [3], etc. So while $values[0][0] would contain 'value1' with the single quotes, $values[1][0] would simply contain value1, as the characters inside the quotes were grouped.
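To make that structure concrete, here’s a standalone snippet you can run against a made-up Type string (the enum values are just for illustration):

```php
<?php
// An example Type string, as SHOW COLUMNS would return for an enum column
$type = "enum('value1','value2','value3')";

// The same call as in the class above
preg_match_all("/'([\w ]*)'/", $type, $values);

print_r($values[0]); // Full matches: 'value1', 'value2', 'value3' (quotes included)
print_r($values[1]); // First group: value1, value2, value3 (quotes stripped)
```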

Now we’ve got our array of values, we store it in the object’s $values property (note that it’s referred to as $this->values; it’s not the same as the $values array I was just referring to). Now we can access an array of the values like this:

// Db connection stuff goes here
$example = new enum_values('table_name', 'column_name');
print_r($example->values); // The array of possible values for the column


Don’t thank me, thank Super GRUB Disk!

The last thing I needed when turning my desktop back on this afternoon was GRUB merrily announcing it had experienced “Error 2”. This left my system unbootable, at a crucial time when I really really need it. Fortunately for me, I had my netbook handy, so I whipped it out and Googled the error. This told me that GRUB couldn’t find my hard disk. This was interesting, since the disk it couldn’t find was the very disk it was booting from, so I took comfort in the fact that the drive at least was still there.

The solution I found resided on the ever-useful Ubuntu forums, advising me to edit the GRUB menu to change the disk it’s trying to boot from. Fine, if I had GRUB set up to display a menu. Unfortunately, no such luck. Lucky for me, some months ago I had the foresight to download and burn a handy utility called Super Grub Disk. Essentially, it’s a CD (or hard drive partition) that boots into GRUB, with a load of pre-configured menu options. These do anything from trying to boot your system as best they can to re-installing GRUB to your Master Boot Record (very handy if you’ve just installed Windows to dual boot with Linux). It really is the best system recovery tool since sliced Knoppix. In particular, I chose the option to boot Linux manually, which gave me the same error as I’d seen before, then dropped me out at the boot menu for my own GRUB installation. This let me change the disk I was trying to boot from, et voilà, one working system!

Calling PUT and DELETE on RESTful PHP services with Prototype.js

So I was asked to create a RESTful web service in PHP. No problem. I was asked to create a PHP client that connects to it through cURL. No problem. I was asked to create an AJAX interface to administer it. Problem.

The problem wasn’t the same origin policy, as the AJAX interface was to run on the same server as the service. The problem was implementing the HTTP methods.
I use Prototype.js for all of my Javascript coding. I’d recommend it to anyone, especially for AJAX as it makes your life a doddle. The basic syntax of an AJAX request using Prototype looks like this:

ajax = new Ajax.Request('test.php', {
      method: 'get',
      onSuccess: function(xmlHTTP) {
            // Do something with the response here
      }
});
There’s a host of options for making various types of request, but that’s the gist of it. The problem with this, however, is that not all browsers support the PUT and DELETE methods, which in REST are used to update and delete records, respectively. As such, Prototype’s Ajax objects don’t try to send an XmlHttpRequest using PUT or DELETE.
It turns out that these two methods are implemented using POST as a proxy. Prototype then tells the web service the method you really wanted in $_POST['_method']. This means that to implement these calls through AJAX, where your code would have looked something like this:


if ($_SERVER["REQUEST_METHOD"] == "GET") {

  echo("This looks like a GET request to me!");

} else if ($_SERVER["REQUEST_METHOD"] == "POST") {

  echo("This looks like a POST request to me!");

} else if ($_SERVER["REQUEST_METHOD"] == "PUT") {

  echo("This looks like a PUT request to me!");

} else if ($_SERVER["REQUEST_METHOD"] == "DELETE") {

  echo("This looks like a DELETE request to me!");

}


It would now need to look like this:


if ($_SERVER["REQUEST_METHOD"] == "GET") {

  echo("This looks like a GET request to me!");

} else if ($_SERVER["REQUEST_METHOD"] == "POST" && !isset($_POST['_method'])) {

  echo("This looks like a POST request to me!");

} else if ($_SERVER["REQUEST_METHOD"] == "PUT" || (isset($_POST['_method']) && $_POST['_method'] == 'put')) {

  echo("This looks like a PUT request to me!");

} else if ($_SERVER["REQUEST_METHOD"] == "DELETE" || (isset($_POST['_method']) && $_POST['_method'] == 'delete')) {

  echo("This looks like a DELETE request to me!");

}


I’m guessing that any data you’re trying to send to PUT that would normally be read in from php://input would have to be hidden in the $_POST array somewhere. More experimentation required, methinks!
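As a rough, untested sketch of what I mean (the $data handling here is my guess, not something I’ve verified against Prototype):

```php
<?php
// Hypothetical sketch: reading the update data for a "real" PUT request
// versus the POST-with-_method workaround described above.
if ($_SERVER["REQUEST_METHOD"] == "PUT") {

  // A genuine PUT: the request body arrives on php://input
  parse_str(file_get_contents("php://input"), $data);

} else if ($_SERVER["REQUEST_METHOD"] == "POST" && isset($_POST['_method']) && $_POST['_method'] == 'put') {

  // A proxied PUT: the data presumably comes through as normal POST fields
  $data = $_POST;

}
// $data should now hold the record fields either way
```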

Of Wicd and networking

I’ve got an Asus EeePC 900 sporting a funky purple design. As with all my computers, it’s running the Linux operating system. Up until about a month ago it was running Ubuntu eee, an excellent netbook-specialised version of the Ubuntu Linux distribution. When the first major update came for Ubuntu eee, they decided a name change was in order (primarily because their name was a composite of 2 different companies’ trade marks), and became Easy Peasy. A quick reinstall later (the Easy Peasy forum was awash with upgrade problems so I decided to save the hassle) I was running the new OS, based on Ubuntu Intrepid. However, as I began using it, a problem arose: I couldn’t connect to WPA networks. I use WPA at home and at uni, so obviously this was a problem, and I knew my wireless card was working, as I could connect to my girlfriend’s WEP network fine.

A poke around the Easy Peasy wiki led me to the discovery of Wicd, an alternative to the NetworkManager tool that’s currently standard with most Linux distros. I thought I’d give it a go.
Installation was pretty painless. I plugged in to the network (I was at home), added the Wicd APT repository as instructed, and installed with no problems. It even removed NetworkManager for me. The only weirdness here was the fact that I had to add the APT repository for Ubuntu Hardy, not Intrepid, but all seemed to work OK.
A quick restart and some fiddling with configuration later (the wiki gave me some incorrect info that slowed my boot right down by always trying to configure the wired interface, even when it wasn’t plugged in), I had a nicely set up Wicd installation with a pretty tray applet, much like that of NetworkManager. What’s more, it connected to my WPA network with no trouble.

Next I decided to try and install it on my desktop running KDE 4. Again, installation was a simple case of adding the APT repo and giving the order. The real test here was to see how it handled bridging – I have the wired connection bridged to allow VirtualBox to connect directly to the network. Since I set up the bridge, NetworkManager has always seen the wired connection as “unmanaged” and left it to its own devices, giving me no feedback as to whether it’s connected or not. Wicd didn’t make a lot of sense of the connection at first, but a few seconds in the preferences menu allowed me to change the default wired interface from eth0 to br0, and would you believe it, it all works. Auto connection, bridging, visual feedback, and wireless (although the desktop is a little too close to the radiator for a decent signal). Full marks for Wicd!

PS: I’ve now added a feed of my Twitter posts on the right as my “Little Blog”, with this being the “Big Blog”. Enjoy.

Of Digg and New Zealand

So through my Twitter account today (specifically as I’m following Stephen Fry) I found out about this new law coming into effect in New Zealand. Essentially, if an Internet user is accused 3 times of copyright infringement (note the word “accused”, not “found guilty of”), their ISP is obliged to sever their Internet connection. Obviously there are a lot of issues brought up by this – copyright and filesharing in general, NZ’s relationship with the US and whether this is a reaction to international pressures – but those are discussions for other places and times. The issue here is assumption of guilt and enforcing punishment based on accusation, not on trial. It’s an outrage. The “blackout” campaign has been started to raise awareness of the travesty. All it involves is changing your avatars etc. to a completely black image until the law comes into force on 23rd of Feb. Of course, the NZ government aren’t going to repeal any laws because of a few black avatars, but the more people who know this is going on, the better.

New Zealand's new Copyright Law presumes 'Guilt Upon Accusation' and will Cut Off Internet Connections without a trial. Join the black out protest against it!

Further to my modern web experimentation, I took the step to join Digg, the social bookmarking site. I don’t use non-social bookmarking so I’ve never seen the point before, but a lot of the sites I visit have a “Digg This” button so I’ll give it a go.

Of Twitter and RSS

So, I’ve decided to get myself to grips with some modern webby stuff. I’d call it “Web 2.0” but that’s a rubbish term. As such, I’ve now subscribed to a load of RSS feeds and signed up to Twitter. I’ve also actually decided to start writing stuff in my blog, rather than just having it as an empty page.
This was all prompted by a presentation at university during one of my web design lessons from the MD of a web design company called ClickFire, who recommended we get involved in as much of this stuff as we can to “keep our finger on the pulse of the industry”, so to speak.

My thoughts so far:

RSS is good. I used it many moons ago when I still used Windows and had an RSS reader plugin for Trillian. I’ve subscribed to Wired and Slashdot which were the 2 main feeds I read then, along with various other computer related stuff, Linuxy stuff and general news. It’s certainly helped me keep up-to-date with the latest and greatest from the computer industry.

Twitter, I’m not sure about yet. My girlfriend said “isn’t that a new Facebook type thing?” so I tried to explain it to her. She understood what it was but said she didn’t get why people care what you’re doing every 10 minutes. I tend to agree with her for the most part. Besides following Stephen Fry’s daily antics, there’s not a huge amount I’ve gotten out of it as of yet, and my only follower is 10 Downing Street, who only added me because I added them. All the RSS is saying that Twitter is on the tipping point of becoming mainstream, but we’ll wait and see. All I know is that I waited about as long after hearing about Twitter before joining it as I did with Facebook. When I joined Facebook, about 60% of my friends were on there. None of my friends are on Twitter.

So here’s to my Web 2.0 adventure. I’ll keep you posted.