GuruPlug – Part 2

Powering on the GuruPlug was pretty straightforward. Turn the socket on, lights flash, network connects, job done.
However, not being provided with any documentation, I was left out in the cold as to how to access it.
As I expected, it was configured to use DHCP by default, so I was able to find its IP address from my router. However, I was never furnished with the root password. Luckily for me, someone had created a GuruPlug Quickstart guide on the plugcomputer.org wiki, which contained this nugget of detail (I’ve since changed it, in case any opportunistic crackers are reading).
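If your router doesn’t show its DHCP leases, a quick scan of the local subnet from another machine turns the plug up just as well. A sketch only — adjust the address range to match your own network:
nmap -sn 192.168.1.0/24   # ping-scan the subnet and list the hosts that respond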

sheevaplug-debian:~# cat /etc/apt/sources.list
deb http://ftp.us.debian.org/debian/ lenny main contrib non-free
deb http://http.us.debian.org/debian stable main contrib non-free
deb http://security.debian.org lenny/updates main contrib non-free
deb http://www.backports.org/debian lenny-backports main contrib non-free
deb http://10.82.108.51/kedars/sheevaplug_wifi/builds/packages/ binary/

The GuruPlug ships running Debian “Lenny” 5.0 with a minimal set of packages. The enabled repositories include the standard Debian binary repos, plus backports.org and, bizarrely, a WiFi driver repo pointing to a 10.x.x.x address, presumably left over from the factory install. Disable that, then.
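Getting rid of it is a one-liner. A rough sketch, run as root on the plug:
sed -i 's|^deb http://10\.82\.108\.51|#&|' /etc/apt/sources.list   # comment out the factory repo line
apt-get update                                                     # refresh the package lists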

sheevaplug-debian:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 251M     0  251M   0% /lib/init/rw
udev                   10M  708K  9.4M   7% /dev
tmpfs                 251M  4.0K  251M   1% /dev/shm
rootfs                463M  207M  256M  45% /
tmpfs                 251M   23M  229M   9% /var/cache/apt

The base system takes up about half of the 512MB internal flash. Since I’d attached a 1TB hard drive, it only seemed sensible to expand the available space somewhat.
I won’t go into the nitty-gritty, but suffice it to say I used LVM for flexibility, and put /home, /usr and /var on their own logical volumes.
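For anyone wanting to do something similar, the rough shape of it was along these lines. Treat this as a sketch only: the device name (/dev/sda1), the volume group name and the sizes are assumptions, and you’d want to copy the existing contents of /usr and /var across and update /etc/fstab before rebooting:
apt-get install lvm2                  # install the LVM tools if they're not already there
pvcreate /dev/sda1                    # mark the USB disk's partition as an LVM physical volume
vgcreate plugvg /dev/sda1             # create a volume group on it
lvcreate -L 10G -n usr plugvg         # carve out logical volumes for /usr and /var...
lvcreate -L 10G -n var plugvg
lvcreate -l 100%FREE -n home plugvg   # ...and give /home whatever's left
mkfs.ext3 /dev/plugvg/usr             # put filesystems on them
mkfs.ext3 /dev/plugvg/var
mkfs.ext3 /dev/plugvg/home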

Now that I had somewhere to put a home directory, the next step was to create a user. The ‘Plug only comes with the root user set up, and logging in as root is poor form, so I set myself up a user with sudo privileges.
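In case it’s useful, the commands were roughly as follows (the username is just a placeholder):
apt-get install sudo   # install sudo if it isn't already there
adduser myuser         # create the new user ('myuser' is only an example)
adduser myuser sudo    # add them to the sudo group
visudo                 # check the sudo group is actually enabled in /etc/sudoers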

A quick reboot to make sure everything’s working and… Oh, wait a minute… it stopped connecting to the network. No worries, because whenever you reboot, it reverts the wireless to Access Point mode. Except… that didn’t work either. And no remote access means no logging in. Ah.

It turns out that I needed to buy one of these JTAG boards in order to access the GuruPlug’s serial port. Remember that USB cable I mentioned in part 1? Turns out the JTAG board plugs into the ‘Plug and acts as an adapter. If you clicked that link, yes, you’re reading it right: that’s £30 to get access to my own device.
Suffice it to say I wasn’t overly chuffed by the situation. The NewIT website implies that there’s some sort of Mini-USB connection to the device itself (as was the case with the SheevaPlug), and the only indication otherwise was in the documentation that I was emailed two days after I received the device.
Fortunately for me, NewIT saw my point of view on this and offered me a discount on the board, which I gratefully accepted. But please be aware that if you’re looking at getting a GuruPlug for anything other than its out-of-the-box functionality, buy a JTAG board too!

So, a day after my email discussion with NewIT’s customer services, I got my JTAG board. Here it is:
The JTAG board

I hooked the funny wiry connectors to the ‘Plug and the USB connector to my EeePC, and connected through Minicom.
The JTAG board hooked up to the GuruPlug and my EeePC
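For reference, the serial settings are nothing unusual. A sketch, assuming the board’s UART shows up as /dev/ttyUSB0 on the laptop:
minicom -D /dev/ttyUSB0 -b 115200   # 115200 8N1, the usual plug console settings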

On rebooting, I could see that the LVM devices weren’t being found. I’m not sure exactly why this prevented it from connecting to the network, but I’m guessing there’s something important in /usr or /var that it couldn’t access.
After a few posts on the HantsLUG mailing list and analysis of the boot messages, it turned out that the USB disk wasn’t being “found” by udev until after LVM had initialised and scanned the Volume Groups. This in turn meant that the logical volumes weren’t active when they needed to be mounted, leaving me without a /home, /usr or /var.
The mailing list and some Googling told me that most distributions had solved this problem with udev rules that re-scan the volume groups when a new device is discovered, but Debian doesn’t appear to have gotten there yet (at least, not in Lenny). I’m no master with udev rules, but I do know my way around an init script, so I modified /etc/init.d/mountall.sh (the script that mounts everything in /etc/fstab) to manually re-scan the volume groups before it tries to mount. It’s not pretty, but it works.

...
pre_mountall
vgscan # re-scan the Volume Groups
vgchange -a y # Activate the Logical Volumes
if [ "$VERBOSE" = no ]
then
        log_action_begin_msg "Mounting local filesystems"
        mount_all_local
        log_action_end_msg $?
else
        log_daemon_msg "Will now mount local filesystems"
        mount_all_local
        log_end_msg $?
fi
post_mountall
...

So, now I can get back to business setting up the software. Join me again in part 3!

GuruPlug – Part 1

A few months ago I pre-ordered a GuruPlug, GlobalScale’s new plug computer, and the successor to the SheevaPlug.
I’ve been wanting to run a home server for a while, to act as a web server as well as performing a few other always-on functions, but I’ve not been willing to pay the cost of running my desktop 24/7. Drawing under 5 Watts of power, the GuruPlug satisfies this need perfectly.
On Friday I finally received an email telling me my GuruPlug had been shipped, and on Saturday it arrived in the post.

Specs

The idea of plug form-factor machines is that everything is housed in the plug, that is to say, a block about the size of a plug with an integrated power transformer. Think along the lines of an Ethernet-over-mains adapter.
In the GuruPlug Standard, you get:

  • A 1.2 GHz Processor
  • 512 MB of RAM
  • 512 MB of NAND Flash
  • WiFi 802.11b/g
  • Bluetooth
  • 1 Gigabit Ethernet port
  • 2 USB 2.0 Ports
  • U-SNAP I/O for home automation devices

In the “Plus” model, you get an additional Gigabit Ethernet port, a MicroSD slot and an eSATA interface.

Packaging

GuruPlug Box
Upon opening the bag the ‘Plug shipped in, I was expecting a nondescript corrugated affair to await me inside, but instead I was presented with a rather nicely printed box, with a satisfying magnetic catch holding the lid shut.
Opened GuruPlug Box
Unpacked GuruPlug Box
Opening it up, the box contains:

  • The plug
  • A 3-pin adapter for plugging directly in to the wall
  • A 2-pin adapter for attaching a longer cable, if you want to sit the plug on a desk/shelf/etc
  • Said longer cable with a 3-pin adapter
  • A USB to Mini-USB cable (the Plug apparently has a Mini-USB interface on the side, but with a non-standard connector and no adapter provided)
  • A Cat5e Ethernet Cable

A notable omission from this contents list is any sort of setup instructions or documentation whatsoever. Not such a good start.

Size

Guruplug next to a tape measure and deck of cards
Here’s a quick indication of the size – the unit’s about 10cm long, slightly wider and longer than a deck of cards, and about as deep as three decks stacked on top of each other.

Setup

Assembled GuruPlug
Here the plug’s been assembled with the 3-pin adapter, ready to go.

GuruPlug in a wall socket with connected peripherals
I’ve hooked up the Plug to a 1TB hard drive in a USB enclosure, to allow me to use it for NAS/UPnP functionality. I’ve also plugged it into the router. As you can see, the router’s a cheap-as-free Netgear special from my ISP, and the WiFi’s a tad dodgy, so while I could just connect the Plug wirelessly, I’d rather have the security of the wired connection.

Close-up of powered on GuruPlug
Powering on, you get a nice display of LEDs on the front of the unit. Lacking any documentation, I don’t know what any of these mean, but they look pretty cool. Also note the decision to print the logo such that it’s upside down when plugged directly into the wall.
It’s notable that the Plug is completely silent when powered on. Unfortunately I can’t say the same for the hard drive.

So the hardware set-up’s all gone pretty well. Join me in part 2 for the software setup!

Getting wiipresent working on Ubuntu Lucid

I wanted to get wiipresent (a handy tool that lets you use a Wiimote as a wireless presenter) running on Ubuntu Lucid (10.04). Here’s how I did it:

First, I had to download the latest RPM from the WiiPresent site.

Next, I needed to install alien to allow me to install the RPM on my Debian-based system:
sudo apt-get install alien
sudo alien -i wiipresent-0.7.5.2-1.el5.rf.x86_64.rpm

That worked fine, but wiipresent can’t find the wiimote or bluetooth libraries it needs. So first, make sure they’re installed:
sudo apt-get install libcwiimote3 libbluetooth3

Then symlink the installed libraries to the place wiipresent is looking for them:
sudo ln -s /usr/lib/libbluetooth.so.3 /usr/lib/libbluetooth.so.2
sudo ln -s /usr/lib/libcwiimote.so.3 /usr/lib/libcwiimote.so.0

Run wiipresent, press 1+2 on your Wiimote, job done!

Software Patents Explained to my Grandfather

This article is a write up of the talk I gave at OggCamp10, and some of the discussions thereafter. It’s not a reiteration of the talk, however. There’s a video that I’ll link to once it’s uploaded.

My idea started when someone suggested to me that I should apply for a patent if I wanted to market a piece of my software. I thought this was a terrible idea – software patents are destructive to software innovation. But not having a succinct way to explain why to someone who’s not an open source programmer, I just shrugged off the issue.

Then, at the first OggCamp, I saw an excellent talk by Bruno Bord on explaining programming and open source to non-techies. He used the analogy of jam-making to show how he explained programming to his grandmother. This seemed like a good method of explaining software patents, if I could think of the right analogy.

There are two reasons I wished to give this talk at OggCamp. Firstly, to see if my analogy worked, and secondly to see if my arguments held water. Let’s face it, if I couldn’t convince a bunch of open source geeks that I was along the right lines with a criticism of software patents, I certainly couldn’t convince anyone else!

The analogy I settled on was chess. In a chess set, you have a board and pieces, which are analogous to the computer hardware. You also have a book of defined rules, which is analogous to a computer’s instruction set, the basic operations that can be performed by a processor. By combining these rules with the board and pieces you can create games of chess, just like you use the computer’s instruction set to write programs. Whatever language you write in, whatever you do with the data, it all boils down to a sequence of the same instructions executing on the processor in the same way.

There are three conditions any invention must satisfy to be granted a patent: it must be new, it must have an industrial application, and it must represent an inventive step. Now, there’s no denying that software has an industrial application, and some of it is certainly new, but does it represent an inventive step? I’d say not.
As a programmer, when I get asked to solve a problem with software, I certainly never have to invent anything to do so. I may need to write a lot of code, and I may have to rewrite it to work a bit differently to how I have before. But would other programmers in my field, given the same problem, have ended up with a solution that fundamentally does the same thing in the same way? Almost certainly. If it’s as obvious to them as it is to me, it’s not an inventive step.

One of the counter-arguments presented to my viewpoint was that if all inventions are merely the sum of their parts, then surely nothing represents an inventive step?
This put me in my place for a moment or two. Had I known when I was writing the talk that I’d be delivering it on the main stage, I’d certainly have prepared better for this sort of question. However, while it’s true that physical patented inventions do use the same processes that have come before, those processes aren’t the end product like they are with software. A Dyson vacuum cleaner may be made with conventional plastic molding techniques, but the end result is a fundamentally different way of moving air within the device to extract dirt without the need for a bag. When a piece of software is written, the end result is always that of moving data around the hardware. You can’t change how the processor manipulates electrons with code.

Other arguments were raised about how best to protect your inventions. The example of SQL’s invention, a completely new way of querying databases at the time, was cited. Had it been protected, IBM could have prevented companies like Oracle from ever existing, yet they didn’t. It was suggested that the best way to protect your interests is to “be first to market and shut up,” the very behaviour that real-world patents were created to discourage. However, in the open source world, we’ve seen that sharing ideas and technologies isn’t incompatible with commercial interest. Red Hat releases all its code as open source, yet it still makes money. There’s even CentOS, who give away for free exactly the same product that Red Hat sell, and yet Red Hat’s stock is on a recession-defying rise.

The final point raised was that of copyright. Do I think that copyright needs to be strengthened, or that the terms of open source licences should be more restrictive, to provide us with the protection we lose out on by rejecting patents? No, I don’t. Copyright takes effect immediately, and it lasts for longer than any technology created today will remain useful. As far as licencing goes, we even have licences that enforce sharing if we want to, which is what patents were for in the first place.

In conclusion, I think that my analogy works quite well. My argument as a whole needed work, but now that I’ve seen the counter-arguments, I think that the modern, open software industry has moved beyond patents, to a place where we don’t need to sue the competition into submission in order to protect our interests.

SVG: Your new graphics format for the web

This post is a write-up of the talk I gave at OggCamp10 over the weekend.

It’s been announced that IE9 will finally support SVG rendering, bringing it up to speed (in this area at least) with the other modern browsers. This has some potentially huge implications for web designers and developers, as it gives us a fundamentally different way of displaying content on the web.

What’s SVG?

SVG stands for “Scalable Vector Graphics”. It’s a W3C standard format which, surprisingly enough, defines vector graphics. What’s different about that? Well, the graphics in use on the web until now have been bitmap formats such as JPEG, PNG and GIF. A bitmap requires that the colour, position and (where supported) opacity of each pixel in the image is stored. Compression can be used, but in general this means that a lot of data is stored and transferred for each image, and the amount grows rapidly as the image becomes larger or more complex.
Vector graphics take a fundamentally different approach. Rather than storing individual pixels, they store data about shapes. Each shape in the image has properties such as its height, width, position and colours stored. When the image is displayed, these properties are used to dynamically render the image. This provides two advantages. Firstly, the file size is, generally speaking, a lot smaller for a vector image than for a bitmap image, since less data has to be stored. Secondly, it allows the image to be scaled and stretched up or down in size without losing resolution and becoming pixelated.
SVG is also an XML format, so it’s pretty easy to learn for anyone used to XHTML or other XML formats. It even supports CSS styling, including pseudo-classes like :hover!

What’s so good about that then?

One of the great advantages that vector graphics offers on the web is screen resolution-agnostic images. All too often images are a constraint on the width of a web page where they’re required for a heading graphic or an important layout element. With SVG, you can have a design which scales to fit the user’s screen, rather than the other way around. Furthermore, SVG already has solutions for many of the problems that CSS 3 has been trying to solve, such as rounded corners (all SVG rectangles can have a corner radius defined) and custom fonts (fonts are just vectors after all, so they can be easily embedded in an SVG).

Another cool thing that SVG’s good for is data visualisation. As SVG is an XML format, there are various ways of dynamically generating graphs from raw data.

For me, the real selling point of SVG is its potential to provide a realistic, open alternative to the proprietary Flash format for multimedia in the browser.
SVG can be manipulated through JavaScript just like HTML can (albeit currently with a standard XML DOM). This combination of vectors and scripting is remarkably similar to Flash’s use of vector graphics in conjunction with ActionScript. In fact, ActionScript and JavaScript are both based on the ECMAScript standard, so moving from one to the other certainly isn’t a huge paradigm shift.
And video, I hear you ask? Well, SVG also supports a tag called “foreignObject”, allowing you to easily include elements from another XML spec inside an SVG image. This can be used very effectively in conjunction with HTML5’s video tag to display video, with the added advantage that you’re not limited to a rectangular player.
There’s still work to be done in this area. While this is all *possible* at present, there’s nothing with the ease of Adobe Flash for producing SVG with animation or embedded video/audio.

How do I produce SVG?

There are three main options at the moment:

  1. Hand-Code it

    Like any other XML format, you can produce an SVG document quickly and easily with a text editor. The following line is all that’s needed to display a simple rectangle.
    <svg width="100" height="100"><rect width="100" height="100" /></svg>

  2. Dynamically Generate it

    As an XML format, it’s possible to transform any other XML data into an SVG graphic using XSLT. You can also use JavaScript libraries like Raphael and gRaphael to do the hard work for you!

  3. Draw it

    There are a few packages that support SVG, but the best by far is Inkscape. With full support for all SVG elements, layers, and even a raw XML editor if you want, you can’t do a lot better if you want to produce an SVG quickly. Inkscape’s native format is SVG with some extra metadata, but you can also save to “Plain SVG” which is web-ready.
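    Inkscape can also do that conversion from the command line, which is handy as part of a build step. A quick sketch (the file names are just examples):
    inkscape drawing.svg --export-plain-svg=drawing-plain.svg   # strip the Inkscape-specific metadata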

This material is just a preview of what’s going to be available. We are going to have to wait and see exactly how good IE9’s SVG support is and how long it takes people to switch over, but with SVG as a reliably-supported format to complement not only existing image formats, but HTML itself, I can see exciting things in the next few years of the web.

The importance of being happy

Last week I had the fortune of being at the JISC Developer Happiness Days, or Dev8D. I’d like to write a little about what went on there, and why events like it are important.

The format of the event is somewhat unconventional, and therefore quite hard to justify. A large number of developers, not necessarily with any previous affiliation, are brought together for four days to work on whatever they feel like. There’s no real schedule, just a few pre-planned events which are constantly subject to change. No-one’s obliged to take part in any of the sessions. And it’s free, which means someone other than the developers’ employers has to stump up a not unsubstantial amount of cash. And it’s during the week, so their employers are still paying them.

The event is described as “4 days of 100% pure software developer heaven,” and that’s right on the money. There’s unlimited tea, coffee, snacks, electricity and dodgy WiFi. There’s everyone from gurus to newbies, and most people are both in one respect or another. Any developer in this environment is going to be happy, but to justify its expenditure the event has to provide more than just smiles.

The first argument in support of the event can be taken straight from The Simpsons. There’s an episode where Homer gets promoted to an executive position after growing hair with a baldness cure, and he tells Mr. Burns that there’s not enough tartar sauce in the cafeterias at lunch time. After more explanation, Mr. Burns realises that a “happy worker is a busy worker,” and by the gods he’s right. While I was at Dev8D I achieved more in a day than I sometimes achieve in a week in my office (where I’m the only full time developer). We learned programming languages, we built applications, we designed algorithms, we gave talks, and all for fun! You can see that happiness can be an end in itself, because happiness provides motivation.

Another clear justification is looking at what the developer community produced during just four days. Everyone was encouraged to document their doings on the wiki, and the list is as long as the printout of MPs’ expenses receipts that was produced on the Friday. A few developers, including myself, produced a set of web widgets to integrate with VLEs, which I’ll describe more in another post. People found new uses for existing public APIs. The Arduino workshops produced a storm of ideas for new electronic devices.

Finally, one of the most powerful outcomes from Dev8D is the community it builds. Bringing together like-minded people in a situation where they aren’t under pressure to see talks and report back to their bosses, but instead have the chance to meet each other and find out what makes each other tick, promotes some of the strongest professional and social connections you’re likely to find. The whole point of Dev8D is to bring the “chat in the pub” part of the conference (which, in all honesty, is where a lot of the best ideas and connections are made) to the fore, and it truly succeeds.

Here’s to Dev8D 2011!

Government responses to Digital Economy questions

I’ve been concerned for a while about the Digital Economy bill. So concerned that with a little encouragement from the Open Rights Group, I’ve been writing to my MP, John Denham (JD from now on). He’s from Labour, and is currently Secretary of State for Communities and Local Government. While I may not agree with his party’s policies, I’d like to start this post by applauding him for doing a great job of representing his constituents. Without him, I wouldn’t have anything to write this post about.

So, this all began with me sending an email to JD about the Bill, outlining some of my concerns about it, mainly focused on the provision to disconnect users accused (not convicted) of file sharing, and the introduction of reserve powers allowing the Copyrights, Designs and Patents Act (CDPA) to be amended, in order to (in JD’s words) “allow the Government to tackle quickly any misuse of emerging technologies for copyright infringement and provide element [sic] of future proofing.”
I sent a reply to the initial response I received, and yesterday received a copy of a response to my queries from Stephen Timms. I’d like to look at some key points of that response here, as I think they make some things abundantly clear about the nature of this bill. I’ll take very good care not to misquote or take any of this out of context. I’ve added some emphasis to make the key points stand out:

The first point I would like to make is to correct Mr. Johnson’s understanding of our aim. We do want to bring about a reduction in on-line copyright infringement, but do not want to make it easier for copyright owners to prosecute alleged infringers.
Rather we want to bring about a shift in consumer behavior from the unlawful to the legal, and the deterrent to back up the educational message is a means by which the copyright owners can take targeted legal action against the most serious infringers.

I think that this pretty much covers the crux of the issue. The problem that the Government’s faced with is essentially a black market. There’s demand for a product (media available over the web, without DRM, at a reasonable price), but no supply. When you have demand but supply is illegal, you create a black market. Another example of this is the industry for recreational drugs.

The Government’s solution to this problem is to make people stop wanting the product. However, they don’t seem to have made any suggestion that they might do (in my view) the sensible thing, and encourage the entertainment industry to provide the product that’s being demanded. This would not only satisfy the consumers, but bring about a tidy profit for the industry from a sector that’s currently giving them nothing. I’m sure industry spokespeople would say that file sharers are costing the industry money, but can you seriously expect me to believe that every person who downloads a film would have paid £8 each to see it in the cinema if they couldn’t? I know people who downloaded Avatar, and yet it made $2 billion in seven weeks. Now, I bet I can find a fair few of those downloaders who’d happily pay a few quid to download the movie in decent quality, so they can watch it a couple of times on their TV. They might even buy the DVD later on so they could get the extra features. Am I missing something here?

Next, with regards to the “technical measures” (the power to disconnect or throttle bandwidth of those accused, not convicted, of filesharing):

We are all too aware of the issues surrounding wireless hi-jacking and indeed other measures such as use of proxy servers or the like to avoid detection. We have always accepted that there will be a hard core of infringers who will be hard to stop. We also realise that it is possible for these evasive activities to point towards the innocent neighbour, although in truth there is no indication that this will be anything other than the exception.

I wonder if there’s any indication that this won’t be anything other than the exception? I can’t believe that they interviewed filesharers who, when asked “Would you use someone else’s network to avoid detection so you can avoid our measures?”, said no.
This paragraph sums up quite nicely that it’s pretty futile trying to stop people filesharing altogether, just like it’s futile trying to stop people in Iran speaking out against the election results online.

…we would accompany any technical measures with a two-stage robust appeals mechanism including an appeal to a 1st Tier Tribunal, which is a judicial body. No measures will be applied without the appeals process having been exhausted.

This made me feel a bit better at first, but then I read a quote from the Open Rights Group, which I think counters that point very well:
Appeals are not the same thing as ‘due process’. They circumvent a priori requirements to test the evidence. Given that severe punishments are being suggested – and the evidence may be flawed – there is a fundamental obligation to presume innocence and test the case. Due process is more important when dealing with new fields of evidence and misdemeanour, not less.

Finally, regarding the CDPA:

The bill itself makes very clear in Clause 17 (7) that: “The power does not include the power to create or modify a criminal offense,” and is strictly limited in its scope. Furthermore, this power would only be used after consultation with stakeholders and the public, and with approval of both Houses of Parliament via the “super-affirmative” procedure which involves a very high level of scrutiny.

So, it can’t change any offenses, and it needs to go through both Houses, but it’s obviously preferable from the Government’s point of view to having to put a bill forward to amend the CDPA. I don’t know enough about the super-affirmative procedure to fully understand why, but it doesn’t help quell my fears.

In general, this letter makes it pretty clear to me that this bill is not in the public interest. It’s in the interest of an entertainment industry whose business model is stuck in the 1990s and who are very happy about it. An industry that won’t move with the times is bad for consumers, and shouldn’t be protected by legislation like this. If you think your MP will vote for this bill, let them know you won’t be voting for them.

If we can’t change the world, let’s at least get the charts right.

As I write this, music fans across the UK are rejoicing at Rage Against the Machine’s “Killing in the Name” getting to Christmas Number 1.

This all started last year when someone got fed up of X Factor dictating the charts with its particularly vile brand of manufactured pop. For three years running at the time, X Factor had its winner at Christmas number one with a pretty average cover of an otherwise good song. In response to this, a Facebook group was set up to encourage everyone to buy Jeff Buckley’s version of Hallelujah instead of the X Factor version. Unfortunately, the campaign died on its arse somewhat and X Factor got its fourth Christmas number one.

However, this year we saw something even more out there. Instead of encouraging people to buy the original version of X Factor’s cover, a campaign was set up to get people buying the classic funk metal record “Killing in the Name”, which previously reached number 25 in the singles chart in 1993, for Christmas number one. The difference this time round is that it worked.

Why do I care? It’s not like I listen to pop music, and while I think Rage are an excellent band, they’re not one of my favourites. I think the point here is that it shows the music industry that people have moved on. Killing in the Name was *only* available via download, not on CD, while the X Factor single was available in both formats, yet Killing in the Name still won. There was an expectation that Rage’s download-only lead would be crushed when the CD single hit the shops, but it wasn’t. Some people said it wasn’t worth trying anyway, but I’ve made it pretty clear by now that it was.

I hope that the rest of the entertainment industry pricks up its ears and listens to what’s going on here. There’s a huge commercial force out there who aren’t interested in manufactured crap, and don’t go into shops to buy their entertainment. They want entertainment made with passion, and they want it available online, at a reasonable price, wherever they are. Let’s see it!

P.S. Sorry if that last bit sounded like a Morrison’s advert.

Playing with Kubuntu Netbook Edition

Being the KDE fanboy that I am I was delighted to see that Kubuntu 9.10 was to include a release of Kubuntu Netbook Edition (KNE), with the new KDE Plasma Netbook desktop.
After several failed install attempts (the image on the HTTP download mirror I used appears to be corrupt) I managed to get it installed today, thanks to BitTorrent. Here’s what I think!

Firstly, let’s establish some points of reference. The previous OS I had on my eeePC was EasyPeasy 1.1, based on Ubuntu Netbook Remix (UNR) 8.10, running a few KDE apps. I’ve always used KDE on my desktop, and have run Kubuntu since its first release.

Booting

KNE booted in under 10 seconds. That’s about a quarter of the time taken by EasyPeasy 1.1 on the same machine, and a few seconds faster than the full-blown Kubuntu install on my 2 GHz desktop. The latter difference is probably down to the eeePC’s solid-state drive.
However, logging in was where it fell down. KDE is a rather heavy desktop with lots of components and config to load. The initial login took about 30 seconds, with subsequent logins taking about as long as the machine takes to boot. It’s not a problem in the grand scheme of things, but compared to EasyPeasy, it does feel sluggish.

Appearance

The “desktop” itself looks slick and shiny, benefiting from KDE 4 and all of its eye candy. The window manager even manages to provide some compositing effects. When you log in, the splash screen fades smoothly into the desktop, and switching between applications is done via the “Present Windows” feature (like Mac OS X’s Exposé). Considering the integrated graphics and the 900MHz processor in my eeePC, this is really impressive, and all the effects are delivered flawlessly.

Interface

This is the real paradigm shift in KNE. The interface introduces two new Plasma “Activities” to KDE: Search and Locate Containment, and Newspaper. The Newspaper is basically a container for two columns of plasmoids, delivering online content from various feeds and streams (such as RSS, weather, comic strips and so on) all in one page. The Search and Locate Containment is like the full-screen application launcher found in UNR. Application categories are displayed as a row of large icons, which take you to a set of icons for the applications in that category. There’s also a container for “favourite” applications, which are shown all the time. The feature here that’s supposed to differentiate it from UNR is the search, but at the moment it just falls short of the mark.
At the far right of the panel (located at the top of the screen) is the Search plasmoid. Clicking this (or pressing the assigned shortcut) summons a search box. Typing in here will search through your applications and files and display the results in the Search and Locate containment, in place of the application category icons. This is great – I use KRunner all the time on my desktop in place of the menu, which works just like this. The problem is that KRunner is better, and it’s present on KNE too. Here are the problems with the Search at the moment, which KRunner doesn’t suffer from:

  • There’s no shortcut key assigned by default
  • When the search box is summoned, it doesn’t automatically get focus
  • When you’ve found what you’re looking for, you still have to click the icon – there’s no keyboard interface for selecting the one you want

The other “elephant in the room” is ever-present in KDE, and that’s KWallet’s lack of PAM integration. On EasyPeasy, my WPA password was unlocked when I logged in, and Network Manager could connect straight away. However, if I want to connect with KNE, I get prompted for my password to unlock KWallet. I can put up with this on my desktop, because the time it takes for me to re-enter my password is negligible compared to the time the machine will be on for. However, with my netbook (and the new super-fast boot time) I’m far more likely to turn it off and on just to check my emails, look something up, or take some notes at a conference, and the dual-password requirement would start to become a hassle.

Conclusions

Now, it would be grossly unfair of me to pass judgement on KDE’s netbook offering based on this. KNE’s download page and installer both clearly state that this is a technology preview release, and that the first production release will be with Ubuntu Lucid (10.04 LTS), after KDE 4.4 is released. Looking at it as a technology preview, I’m excited. I never imagined when I bought my eeePC that a machine as low-spec as that could have such a good-looking interface running so smoothly. The Newspaper view needs a bit of spit and polish, but reflects and accommodates the way I use my netbook very well. The application launcher needs some work, though. If it doesn’t get as easy to use as KRunner, I’d remove it and use that instead.
For now, I’m going to switch to UNR 9.10. We’ll see how it goes once Lucid is released!

The line of code that could

--- a/theme/styles.php
+++ b/theme/styles.php

@@ -116,7 +116,7 @@ $files = array();
 // here can be overridden by theme CSS.
 if ($pluginsheets) {
     foreach ($THEME->pluginsheets as $plugintype) {
-        $files += get_sheets_for_plugin_type($plugintype);
+        $files = array_merge($files, get_sheets_for_plugin_type($plugintype));
     }
 }

That’s the one-line patch I submitted to the Moodle tracker last week. It fixes a bug that was preventing plugins in the upcoming Moodle 2 from being able to include their own stylesheets.

Having been on holiday for a few days, I came back in to work today to find that the patch had been accepted and committed to the core Moodle CVS. This is a first for me, so rather chuffed with myself, I tweeted about it.
This was picked up by Steve Lee from OSS Watch, who provide advice on the use of open source in education. He decided that it provided a good enough example of the benefits of the open development model to be worthy of a post on the OSS Watch blog. All because of one little line of code!

To pick up on some of the key points from Steve’s post, the open source model really does prove advantageous when developing software. I’ve been working to update our Moodle plugins to use the Moodle 2.0 APIs over the last few weeks, which is when I hit this bug. The open nature of the code not only helped me find and fix the bug quickly (Tim Hunt, who maintains this code, was on holiday), but being able to submit the patch for inclusion in Moodle’s CVS repository makes it a lot easier for me to maintain the code at my end, rather than having to re-apply the fix every time I pull updates from moodle.org.
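Without the fix upstream, keeping it applied locally would mean something like this after every update — a sketch only, assuming a CVS checkout and a saved copy of the patch (the file name is illustrative):
cvs update -dP                # pull the latest Moodle code
patch -p1 < styles-fix.patch  # re-apply the local fix on top of it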
The opportunity to interact with the Moodle community is also invaluable, as it has allowed me to discuss various methods of solving the problems I’ve come across, giving me the knowledge to help others with similar problems. I’ve also contributed to discussions on future Moodle developments, such as the formation of the User Interface guidelines. When I studied HCI at university I thought it was a pain, but now I’m in the “real world” it turns out I can put my knowledge to good use!

This is also a prime example of what Jono Bacon posted recently about validation, so thanks to Steve, Tim and Tony for the encouragement!