Migrating LastPass to pass password store

I’ve been a LastPass customer for several years, and it’s been pretty much the only service I’ve used which stores my data on someone else’s servers (albeit encrypted).  I’ve never been particularly happy with this, but haven’t found a solution that allows me to access my passwords easily from multiple devices across multiple platforms, so have stuck with it until now.

My LastPass subscription is due for renewal this month, and this week LastPass suffered a security breach.  This coincides with my discovery of pass, a unix password manager that stores your passwords locally in plain text files encrypted with GPG.  It also integrates with git to allow your password store to be easily shared between devices, and has clients for Android (which I need for my phone) and Windows (which I need for work).  I decided to have a go at migrating to see how I got on.


Setting up on Linux was straightforward.  I’m running Ubuntu 14.04, so installed with apt-get install pass.  I generated a key with gpg --gen-key and ran pass init to create a password store using the key.  I then ran pass git init to initialise the git repository.  Next, I exported my passwords from LastPass using their CSV export feature, and ran the file through this script to import them into pass.  Similar scripts are available for migration from other password stores.
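Condensed into a transcript, the whole setup looks something like this (the key ID and CSV filename are placeholders, and lastpass2pass.rb stands in for whichever import script you use):

```shell
# One-off setup on Ubuntu 14.04; key ID and file names are placeholders
sudo apt-get install pass
gpg --gen-key                              # answer the prompts, note the key ID
pass init "0xDEADBEEF"                     # create a store encrypted to that key
pass git init                              # put the store under git
ruby lastpass2pass.rb lastpass_export.csv  # import the LastPass CSV export
```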

I installed the Firefox extension, and it works like a charm, matching the current site and filling in login forms for me.

Before I could install a client on another device, I needed to push the git password store to a server.  I logged into my server that’s accessible via the Internet, created a folder and ran git init --bare, since I don’t need to have the files checked out on the server.  I then ran pass git remote add to add the server, and pass git push to sync the passwords.
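As a sketch, with the server hostname and repository path as placeholders:

```shell
# On the server: a bare repository, so nothing is checked out there
ssh myserver 'git init --bare ~/password-store.git'

# Back on the desktop: point the store at the server and push
pass git remote add origin myserver:password-store.git
pass git push -u origin master
```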


For Android, there is a client called Password Store which can be found in F-Droid or the Play Store.  First, you need to install OpenKeychain (available from the same places), and import your GPG key.  I followed this guide to export my key, copied it to my phone and used the “Import from File” option to add it to OpenKeychain.
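The export itself boils down to a couple of gpg commands (the key ID is a placeholder; keep the exported file off anything public and delete it once imported):

```shell
# Export the public and secret key into one armoured file for OpenKeychain
gpg --export --armor 0xDEADBEEF > mykey.asc
gpg --export-secret-keys --armor 0xDEADBEEF >> mykey.asc
# After importing on the phone via "Import from File", destroy the copy
shred -u mykey.asc
```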

In Password Store, I set up the Git repository and synced down my passwords.  I then set OpenKeychain as the OpenPGP provider, and I was set.  When unlocking a password, Password Store will automatically copy it to the clipboard for a defined number of seconds, then clear it.  OpenKeychain allows you to cache your key’s password for a defined number of minutes, so you don’t have to enter it repeatedly.  It then forgets it automatically.


Update: I’ve since worked out how to set up pass properly on Windows, including the Firefox extension.  See this post for a full guide.

There are several solutions for Windows, but none of them are as complete as the Linux equivalents yet (for example, there’s no Firefox plugin).  However, you can get a copy-to-clipboard-then-auto-delete workflow similar to the one on Android.

Firstly, you need to install Git and GPG.  I already had msysgit installed which includes gpg, but it’s an older version so I installed GPG4Win as well.  You then need to import your key into gpg.  I found this was easiest using the gpg CLI in git-bash (see the guide linked above again).
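Assuming GPG4Win’s gpg2.exe is on your PATH in git-bash, the import is just:

```shell
# Import the previously exported key pair and check it's visible
gpg2 --import mykey.asc
gpg2 --list-secret-keys
```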

The “Windows Client” listed on the pass website is Pass4Win, but I found this to be buggy.  Instead, I went for the “Cross-platform GUI” listed on the site, QtPass.  This gives you the option to use native pass, or to use GPG and Git directly.  I went for the latter option (be sure to select the gpg2.exe executable installed by GPG4Win, not the older one provided by msysgit).

Running QtPass prompted me to create a password store – I selected the key I’d already added to GPG and it created the empty store.  To configure the git repository, I found it easiest to use the command line (QtPass didn’t prompt me for git details).  I went to the password store directory that had just been created, ran git init, used git remote add to add the remote details to .git/config, and ran git pull.  Closing and re-opening QtPass found the git repository and I was good to go.
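For reference, the commands were along these lines (the store path and remote URL are placeholders):

```shell
# Inside the password store folder QtPass created
cd ~/password-store
git init
git remote add origin myserver:password-store.git
git pull origin master
```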


LastPass has invested a lot in the usability of its solution.  The browser plugins and Android apps take care of identifying websites and filling in the password for you.  pass is part way there, but still has a long way to go.  I’m willing to compromise on the usability for the peace of mind of holding all my own data.  However, I wouldn’t recommend it to anyone who primarily uses Windows, and I wouldn’t want anyone who’s not familiar with what GPG is to try and set it up for themselves.  Once set up with the browser extension, it’s certainly a decent alternative to LastPass on Linux, and a pretty good one on Android.

Diversity at OggCamp

Important note: This is a personal blog post on my personal blog. While I was largely responsible for the organisation of this year’s OggCamp, there is no formal organisation called “OggCamp”, and this post is intended to communicate my personal thoughts on these issues, not those of anyone else involved in past, present or future OggCamp events. At this point there are no plans regarding an OggCamp in 2015, as to where it will be, who will organise it, when or even whether it will happen.

OggCamp 14 took place this weekend in Oxford.  Shortly before the event, Twitter user @zenaynay mentioned that she would be keeping a tally of how many non-male and non-white attendees were at the event.  I was interested to see what she found, and today looked over her timeline from the weekend to find the comments posted below (with her permission), which I felt warranted a considered response.

Before I continue, I feel I should point out that I’m a middle-class white male living in the UK and working in the IT industry, which means I have no first-hand experience of what it’s like to be part of an under-represented minority in my everyday life. This means that when talking about these issues I fear that I may come across as patronising, insensitive, or otherwise offensive. However, to avoid discussing these issues on that basis would be to say that improving diversity is the sole responsibility of the under-represented, which won’t get us anywhere.

To summarise @zenaynay’s observations, she found that while there were a lot of white women (WW) at the event, there were almost no people of colour (POC) in general or women of colour (WOC) in particular, other than herself. In addition, the vast majority of the speakers at the event were men. As a result, she felt out of place, and as though she wasn’t part of the culture of the event.

This is a problem for me, as I want OggCamp to be an inclusive place for everyone. We have done a better job than other tech and open source-related conferences I’ve been to at attracting women and children, although we have made no specific effort to ensure this. To realise that we’re still excluding a group of potential attendees is disappointing, but I choose to take the criticism as an opportunity to make future events even better rather than a reason that this event was unsuccessful.

Personally, I’m more concerned with the content of the talks being diverse and interesting than with the people who give them, but I also understand that members of a diverse audience may feel out of place watching a homogeneous group of speakers to which they feel they don’t belong, and may therefore be put off attending the event in the first place.  This isn’t a situation I’m happy with.

One point of @zenaynay’s observations that I don’t agree with is the assertion that the organisers use the unconference model of the event to get us off the hook regarding speaker diversity. This isn’t the case. From my point of view, one reason why we use the unconference model is that it gives OggCamp the energy and dynamic atmosphere that makes the event unique. The second (and probably main) reason is that arranging a 3-track, 2-day conference schedule is a serious amount of work, and we simply don’t have the resources to do it.

We do have a small number of scheduled speakers each year, which is usually made up of people who I can think to ask. This is, of course, limited by the people that I know about, and then further by those who respond to me.  I don’t think this has ever resulted in us having an all-white-male schedule, but they have certainly been in the majority. If we had the capacity to manage the process, an open call for papers may be a useful device for getting a more diverse line-up of speakers.

As for diversity among unconference speakers, I’d like to hear from existing non-white-male attendees as to why they don’t tend to offer talks. It’s not necessary to indicate your gender, race, or age when submitting a talk to be voted on, so I can’t imagine that attendees use those metrics to decide which talks to watch.  However, there’s clearly something we’re missing here that’s putting people off.

Finally, we come to what I see as the most important issue, which underpins all of this: the diversity of attendees. More diverse attendees means a more diverse pool of speakers to draw on for the unconference, and a more diverse and inclusive culture to bring future attendees into, hopefully allowing them to feel more comfortable.
I don’t know for sure how people hear about and decide to come to OggCamp, but I suspect that it was initially members of the LugRadio community, plus listeners to the UUPC and Linux Outlaws podcasts, and then word of mouth spread from there. For whatever reason, this word of mouth didn’t spread to many people of colour.

Perhaps, therefore, what we need for OggCamp is more widespread marketing. The easiest way to market the event (and therefore the one I focused on this year) is to speak to previous attendees on social media, which is obviously never going to increase diversity. Knowing where and how to promote the event to make it visible to attendees who don’t necessarily fit the existing “mould” which we’ve apparently developed could be a big step in the right direction.

Another step in the right direction may be to adopt a formal code of conduct (CoC).  It’s not something we’ve ever felt the need to introduce before, but I was made aware this year of someone who was put off attending by the lack of a CoC.  Codifying and honouring our intention to make the event safe and welcoming for everyone may help encourage those who worry that they might not be welcome, to attend.

I’ve mentioned to several people this year that I’d like to increase the involvement of the community in the organisation of OggCamp by creating a permanent online discussion forum (web forum, mailing list or whatever). If we go ahead with this and you’re interested in helping OggCamp become more diverse, I’d encourage you to get involved in the discussion. Follow @OggCamp on Twitter and we’ll keep you posted as plans are developed.

My Steam Box – PVR power management

Update: Updated post-record bash script for Ubuntu 16.04, and re-wrote ruby script to use HTTP instead of Telnet.

I mentioned before that I’m using tvheadend and XBMC as a PVR on my Steam box/HTPC. This allows me to schedule recordings and do things like series-link recordings to ensure I don’t miss an episode. However, it does have the slight disadvantage that I need to leave a full-power PC on all the time, otherwise it can’t record. I needed a smarter solution for the sake of my electricity bill, so I devised a way to have the PC turn on when a recording is scheduled, do the recording, then power off again when it’s done.

Power on to record

My first discovery when hunting for a solution was the ACPI wake alarm feature. This allows you to set an alarm on your hardware clock at which point the computer will turn itself on, even if it’s completely powered off, just as though you’d pressed the power button.

There were a couple of steps needed to enable this feature, found thanks to the MythTV wiki. Firstly, it needed to be enabled in the BIOS/UEFI. The setting for my motherboard was called something like “Hardware Clock Wake Up”. Secondly, Ubuntu’s shutdown scripts overwrite the hardware clock with the current time, erasing any alarm set, so a small modification to /etc/init/hwclock-save.conf was required. This ensures that the alarm is written back to the hardware clock.
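A quick way to check the wake alarm works, using the usual sysfs location for the first real-time clock:

```shell
# Clear any existing alarm, then arm one for five minutes from now
echo 0 | sudo tee /sys/class/rtc/rtc0/wakealarm
date -d '+5 minutes' +%s | sudo tee /sys/class/rtc/rtc0/wakealarm
cat /sys/class/rtc/rtc0/wakealarm   # confirm the alarm is armed
sudo poweroff                       # the machine should turn itself back on
```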

With the feature enabled, I then needed a command to set the alarm. XBMC’s TV settings have a “Power Saving” section, with a “Set wakeup command” option. This lets you give a command which will be called with the unix timestamp of the next recording as an argument. I set this to sudo /home/xbmc/wakeup.sh. I used sudo since I needed permission to write to the wakealarm device, and added a sudoers rule to let XBMC run the command without a password:
xbmc ALL=NOPASSWD: /home/xbmc/wakeup.sh

Finally, the script itself:
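A minimal sketch of the script, assuming the usual sysfs RTC path:

```shell
#!/bin/bash
# wakeup.sh -- XBMC passes the unix timestamp of the next recording as $1
echo 0 > /sys/class/rtc/rtc0/wakealarm     # clear any previous alarm
echo "$1" > /sys/class/rtc/rtc0/wakealarm  # arm the alarm for that timestamp
```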

The business is all in the last two lines: the first clears any previous alarm, and the second sets a new one using the passed timestamp. XBMC has a “Wakeup before recording” option which lets you adjust the timestamp argument to be a few minutes ahead of the actual record time. This script is triggered whenever XBMC is shut down.

Power off after recording

Powering off was a bit of a trickier business. Tvheadend has a “Post-processor command” setting which executes after a recording completes, which is simple enough. However, just putting shutdown -h now in there isn’t enough, since it won’t cause XBMC to call its wakeup script, meaning the next recording could be missed. XBMC’s shutdown or exit command has to be called explicitly for this to happen. Furthermore, I didn’t always want the system to turn off – what if I was watching something, or playing a game at the time?

After some poking around, I found that using the shutdown button in XBMC’s web interface was sufficient to trigger the wakeup script. Furthermore, this was using XBMC’s JSONRPC interface, which could be fed commands by sending raw JSON strings over telnet. This gave me a way of triggering the shutdown and wakeup from a script, with the added bonus of giving me a way to find out if XBMC was currently playing something. This led to the creation of this ruby script and a bash script to call it:
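The gist of the pair, sketched here as a single shell script using curl against XBMC’s JSON-RPC HTTP endpoint (host, port and the exact checks are assumptions for illustration; the original used ruby):

```shell
#!/bin/bash
# post-record.sh -- run by tvheadend's "Post-processor command".
# Host and port are assumptions; adjust to your XBMC web interface settings.
XBMC="http://localhost:8080/jsonrpc"

# Leave the box on if anyone is logged in at a console or over ssh
[ -n "$(who)" ] && exit 0

# Leave it on if XBMC is currently playing something
active=$(curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"Player.GetActivePlayers","id":1}' "$XBMC")
echo "$active" | grep -q '"playerid"' && exit 0

# Otherwise ask XBMC to shut down, which also sets the next wake alarm
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"System.Shutdown","id":1}' "$XBMC"
```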

The bash script is the command actually called by tvheadend, which calls the ruby script. The ruby script checks that no other users are logged in and that no video is playing in XBMC, then calls XBMC’s shutdown routine, which in turn sets the alarm for the next recording. Job done!

My Steam Box – Amazon Instant Video on Ubuntu

While my Steam Box is running XBMC for media playback, there’s one service I use which XBMC can’t provide: Amazon Instant Video (formerly Lovefilm Instant).

AIV can be streamed through various apps or through Silverlight in a web browser. However, none of these options are supported on desktop Linux.  Of course, with the Ubuntu ecosystem being what it is, “not supported” is far from “impossible”.

The solution to the problem comes in the form of Pipelight – a browser plugin for Firefox which runs Silverlight and other Windows-only browser plugins in a special version of Wine.  This clever little hack (installed from a PPA through apt-get) allows you to watch Silverlight content within Firefox for Linux! It’s worth noting that I use the pipelight-multi package which allows you to set up Pipelight and Pipelight’s WINE installation for specific users, rather than for every user on the system.
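The install went roughly like this (PPA and package names as they were at the time):

```shell
# Install Pipelight from its PPA, then enable Silverlight per-user
sudo add-apt-repository ppa:pipelight/stable
sudo apt-get update
sudo apt-get install pipelight-multi
sudo pipelight-plugin --update
pipelight-plugin --enable silverlight   # run this as the lovefilm user
```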

With this problem solved, I wanted to make the user experience of accessing AIV a bit smoother.  To achieve this, I created a small autorun script which runs when the lovefilm user logs in to openbox.  The script contains the following commands:

firefox
switch-to-xbmc &
pkill -u lovefilm

This means that Firefox launches on login. Firefox is configured to open AIV when it starts, to run in fullscreen mode, and to have all but a few toolbar buttons removed and consolidated into a single toolbar. When we’re done watching, we close Firefox, which lets switch-to-xbmc execute to return us to the XBMC menu; pkill then kills any other processes belonging to the lovefilm user, logging it out.

The final issue with using AIV on a TV is that the web page is noisy and not designed to be used on a big screen on the other side of the room. To fix this I’ve written a GreaseMonkey userscript (very much a work-in-progress) to remove a lot of the Amazon bumf and reformat the page to make it work better on a large screen.

My next and final post in this series will look at how I’ve got Steam and some associated utilities set up.

My Steam Box – Media Playback

For media playback on my Steam Box/HTPC I’m (mostly) using XBMC.  This lets me play videos from my server and watch DVDs from my DVD drive.  On top of this basic functionality I’ve installed the BBC iPlayer and YouTube plugins to allow me to stream content from the web.


I mentioned in my hardware post that I’d purchased a DVB-T2 USB dongle to allow me to watch HD TV.  For the past several years, the standard solution for TV/PVR functionality on Linux has been MythTV.  However, these days XBMC also has a good deal of this functionality in its PVR plugins, as long as you can get a backend service installed to operate the tuner.

One of these options is, of course, MythTV Backend. However, after struggling through the “Setup Wizard” being asked every question under the sun and still not getting it working, I gave up and found TVHeadEnd.  This gives you a simple web interface which detects your hardware and scans for channels with ease.  Adding the TVHeadEnd PVR plugin to XBMC gave me live TV and PVR functionality with minimum fuss.


XBMC gives you several remote control options, including a web interface and a service for other remote control apps to connect to.  I have a remote control widget on my android phone which works well enough, but I’ve found it easiest just to use the regular keys on my Rii Touch keyboard.


I’m not a particular fan of XBMC’s default Confluence theme, in particular its menu which only shows the selected option.  After looking around and finding this guide on Lifehacker, I switched to the Transparency theme, which has a much better menu and could be customised to have just the bits I need.

Switching Users

I mentioned in my last post that I’d written scripts using dm-tool to switch between users.  To run these from XBMC I installed the Advanced Launcher addon. This addon lets you create launchers for any executable within XBMC, and add them to the main menu in themes that support it.  Using this method I created launchers for the switch-to-steam and switch-to-lovefilm scripts on the main menu.

My Steam Box – OS and Software

In my last post I went over the hardware I used for my new Steam Box/HTPC all-in-one living room PC.  In this post I’m going to go over how I’ve got the OS set up and touch on the software I’m running to provide me with gaming and media playing functions.  I’ll then go over the details of each function in separate posts.

To start with, I did a vanilla Ubuntu 12.04 LTS desktop install.  I’d considered going for SteamOS, but to be honest, Big Picture Mode isn’t quite there yet, and I know where I am when it comes to getting extra packages and cool hacks for Ubuntu.  One part of SteamOS I was really impressed with is how they’ve set up Steam and the desktop session on separate profiles, letting you switch easily between the two functions, so I chose to emulate that on my setup.

The 3 main functions I wanted were media playback, a basic desktop (mainly for administrative tasks) and a desktop session to run Steam.

For administrative functions, I created a user called “mark” during installation (as I usually do).  Mark is a sudoer, with a standard 12.04 Unity desktop.

For media playback, I installed XBMC.  I created an unprivileged user called “xbmc”, set to auto-login to the XBMC standalone session with no password, making XBMC the initial interface on boot.

For gaming, I created a second unprivileged user called “steam”, set to log in to a Unity desktop session with no password.  Steam is set to auto run on log in, and display the Library tab in Grid view (showing the artwork for each game like Big Picture Mode does).

There’s also a third unprivileged user called “lovefilm” which logs in to an openbox session with no password, but I’ll talk about that more in its own post.

To switch to each user, I’ve created scripts called “switch-to-xbmc” etc. which use the dm-tool utility.  These can be called from the appropriate interface (a menu item in XBMC, a non-Steam application launcher in Steam) to quickly switch between users.
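Each script is essentially a one-liner around dm-tool; switch-to-xbmc, for example, would look like:

```shell
#!/bin/bash
# switch-to-xbmc: ask LightDM to switch to (or start) the xbmc user's session
dm-tool switch-to-user xbmc
```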

In the next post I’ll talk about how I’ve set up XBMC for media playback in a bit more detail.

My Steam Box – Hardware

Having played with SteamOS for my last post, I decided that it would be a lot more fun if my gaming PC, rather than being in my spare room connected to a small screen, was in my living room attached to my big TV.  In addition to this, I had several devices under my PC to provide me with various media-viewing functions (streaming services, DVD playback, TV), which was a pain and took up a lot of room.  To this end, I elected to build a box which could do all these jobs in one.  I’ve now got the box in a “stable” enough state that I thought it time to write about it, starting this post with the hardware.


I started the build by cannibalising the insides of my existing gaming PC, which I’d upgraded not long ago.  This gave me a starting point of an AMD A8 APU (quad-core with integrated 3D accelerated graphics), 8GB of RAM, a motherboard along the lines of this one, and a 240GB SSD.

It also gave me a very noisy heatsink. This was a problem as a box sitting under my TV needs to be quiet. After some research I bought a Zalman CNPS8900 Quiet heatsink, which does a great job of cooling with minimal noise and a low profile, but takes up a lot of horizontal room around the processor. So much, in fact, that it leant against one of the RAM DIMMs.

To solve this problem, along with the limited screen resolution caused by my TV’s poor VGA support, I upgraded my motherboard to an ASRock FM2A88M Extreme4+ which had 3 key features: the FM2 socket for the processor, an HDMI port to ease connection to the TV, and 4 RAM slots, meaning I could move the 2 DIMMs away from the processor, allowing room for the heatsink. As an added bonus, the stock heatsink mount was screwed on rather than using plastic toggle bolts as the old one had, making mounting the Zalman much easier than it had been, as I could just screw it onto the existing back plate.

The final piece of the puzzle was a power supply. Again, I wanted something quiet so went for a Corsair CM430M which is 80+% efficient and has a 120mm fan. It’s also modular, meaning only the required cables need to be attached, so reducing cable management needs inside the case.

Photo showing the inside of the steam box from above.

Obligatory internal shot, taken from above. The big power supply in the bottom right draws air from vents in the underside of the case and straight out the back, while the big heatsink on the left draws air in through vents above and out through vents in the side and back. Note the RAM slot nearest the heatsink is obstructed. Top-right is a short-depth DVD drive, with the SSD mounted underneath.


When building a PC you can basically pick 2 qualities from powerful, small, and quiet. My main concerns for this machine were power (for gaming) and quietness, meaning I’d inevitably be building something fairly big.  I plumped for a SilverStone ML03B – a half-height MicroATX case which isn’t the most beautiful case, but is really well designed and fits everything inside nicely. I’ve written a full review here.

Photo showing the front of the Silverstone case, with a DVD drive and Gamecube USB adaptor installed.

The completed steam box viewed from the front, with Gamecube USB adapter on top.


I’ve always been a big fan of Nintendo controllers, and I’ve still got a few Gamecube controllers as well as a couple of Wii Remotes.  With the launch of Steam’s Big Picture Mode, Valve are encouraging games developers to make their games work well with gamepads.  For those games, I use a Gamecube controller via this USB adapter (found via the Dolphin Emulator site).  I’ve owned several Gamecube-USB adapters, but this one is particularly good, firstly because it has 2 inputs, and secondly because it’s the only adapter I’ve found which works with Wavebird wireless controllers.

For games designed to be used with a mouse cursor, I connect a Wii Remote using a bluetooth dongle and a USB-powered sensor bar.

I also needed a keyboard and mouse that I could use from across the room.  There’s some nice IR remotes out there, but I went for the easy option and got a Rii Touch handheld keyboard with built-in touchpad.  I initially bought a bluetooth model, but bluetooth connectivity requires pairing the device and the OS to boot before it can connect, which wasn’t terribly smooth.  I ended up with the proprietary RF version which connects as long as the USB port has power, and just appears as a regular wired keyboard and touchpad to the OS.  It’s not perfect, but I’d give it 9/10 as a solution.


While my TV has Freeview built in, I didn’t have a way to watch live HD channels.  To enable this I bought a PCTV nanoStick T2, a USB DVB-T2 (Freeview HD) dongle.  Notably, this is the only USB DVB-T2 tuner which has support in the Linux kernel at the time of writing, so it Just Works with no additional drivers needed.


That’s all for the hardware at the moment. In my next post I’ll look at how I’ve set up the OS and Steam.

SteamOS first impressions

Being the casual gamer and general geek that I am, I’m currently planning to build a living room PC, primarily for gaming. My current conundrum is, will it be running Ubuntu with Steam installed, or will it be running SteamOS? This week Valve released SteamOS for download, so I popped it on a spare hard drive to take my first steps in answering this question. Here’s how I got on…

The current version of the Debian-based SteamOS is only recommended for “intrepid Linux hackers”, mainly due to the install process, and they’re not wrong. One option uses Clonezilla to copy a pre-built system image onto your hard drive, which, while simple enough, requires a spare 1TB hard drive lying around, unless you want to completely overwrite your current system. The second option is to use the installer, which is based on Debian’s own installer, followed by some manual post-install steps. The installer gives you the option to do an “Automatic” install (use whatever hard drive it finds and default settings) or “Expert” (letting you choose custom configuration options). I went with Automatic, although I unplugged my main hard drive first as I didn’t want my Ubuntu install getting overwritten.

Once the initial install completes, you need to log in as the “steam” user (an unprivileged account that just runs Steam and your games) to get Steam set up, pre-configured to launch in Big Picture mode. You then need to switch to the “desktop” user (a regular user capable of running commands as root) to run a script which installs drivers, does some configuration and then reboots to Clonezilla, which will capture an image of your configured system to a recovery partition.

This process all went reasonably smoothly for me, except that the Steam icon in the Applications menu didn’t seem to do anything. I had to poke around in a terminal to find the Steam binary and run it manually.

Once this is all done, your system boots directly into Steam in Big Picture Mode, asks you for some configuration and lets you log in to your Steam account. From here it’s basically your standard Steam experience. I plugged in my Gamecube controller via USB, set up a mapping in Steam’s utility, and was able to download and play games with it no problem.

Far from being a locked-down appliance, SteamOS has the option to allow you to access the regular Gnome Shell desktop which is stock with Debian Wheezy. This option can be enabled in Steam’s preferences then accessed via the “Exit” menu. The way this actually works is by having 2 user sessions, one for Steam and one for the desktop, with a button for each to switch to the other. Once in the desktop, you can do whatever you’d normally do on a Linux system, with the caveat that APT is configured by default to use Steam’s own package repository, not Debian’s.

One slight niggle I had was the lack of display settings provided by Steam. I was able to access the normal Display utility on the desktop user, but these settings didn’t transfer over to the steam user. This meant that Steam ran in mirrored-screen mode on my dual-head system, and I’m not sure what resolution it was using. That said, in a more typical situation it’ll be plugged into a single TV, and PC games tend to have their own resolution settings, so this will be less of a problem.

All in all, SteamOS gives a nice experience once set up, but if you’re not an OS geek it’s probably best to wait until you can buy a box with it already set up for you. I’m no closer to deciding what my living room PC will be running, but from my first impressions SteamOS is a definite contender.

Wot no blogging?

I haven’t published any blog posts here in a while. This doesn’t mean I’ve stopped blogging, in fact I’m blogging more than I used to at the moment. The difference is that most of my blogging is done at work, where I try to blog at least once a week. The things I blog about at work are much the same as I’d blog about here, save for any personal projects, so between that and the Ubuntu Podcast, I don’t have much else to post here. If you’re interested in my work blog, the feed is embedded in the right-hand column of this site.

Dual-booting Android and Ubuntu Touch on the Nexus 7

Since I wrote a silly app in QML I’ve been keen to have a play around with Ubuntu’s developer preview for tablets (variously referred to as “Ubuntu Touch” and “phablet” (phone/tablet)). I have both a Nexus 7 tablet and a Nexus 4 phone which the images support. The trouble is, the images are designed to be run on “spare devices” – there’s no support for backing up and restoring an existing Android ROM. I’m not the kind of person who has a spare tablet or smartphone lying around, I use mine a lot, so I’ve been shying away from trying it out.

Note: The current Ubuntu Touch images are definitely a Developer Preview. Not much actually works other than the web browser, it’s just to give you a feel for the interface and let you try out apps you’re writing on a touch screen. If you’re hoping this blog post will tell you how to dual-boot 2 usable systems, I’m afraid you’ll be disappointed. Maybe in 6 months.

When we were discussing the phablet images on the Ubuntu Podcast, Alan Pope mentioned that there were people in the community who were playing around with getting devices to dual-boot. This seemed like a reasonable solution – I could boot into Ubuntu when I wanted to play around, then back in to Android when I actually needed to use my tablet.

It turns out the solution is called MultiROM. Using a modified version of the TWRP custom recovery system[1], you first flash a modified kernel into the stock Android ROM, then flash the MultiROM interface, which sits in the boot process before the init system is loaded and allows you to select a ROM to boot. The custom recovery system then has an extra option allowing you to flash alternative ROMs to your internal storage, which can then be selected in the MultiROM interface (it’s just a touch-screen menu).

I followed this How to install MultiROM video to install the custom recovery, MultiROM and modify the stock Android kernel[2]. I then followed this Ubuntu Touch Preview guide to download the Ubuntu Touch image and flash it to the device’s internal storage alongside the stock Android ROM. After this, you can choose between the Internal (Android) and quantal-preinstalled (Ubuntu) ROMs in the MultiROM menu when the device boots.

It looks like it’s possible to add other ROMs too – there’s an option for an Ubuntu Desktop image and CyanogenMod should work, but support for other ROMs is dependent on a MultiROM-compatible kernel being available.

Update: Quick video showing dual-boot in action

[1] The installation of which requires unlocking your bootloader, which will factory reset your Android system and possibly void your warranty, so back up all your data first. I used Google Nexus 7 Toolkit to do my backup/unlock/restore. It’s Windows only, but I had a hard time finding a Linux-based tool that’s as easy to use for doing the backups (I assume adb does it, but I didn’t dig around enough). Holo Backup is a good cross-platform tool for doing backup/restore, and unlock instructions can be found on the Ubuntu Wiki
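For what it’s worth, adb can take a full backup from Linux; something along these lines (flags per the adb of the time):

```shell
# Full backup of apps, app data and shared storage over USB
adb backup -apk -shared -all -f nexus7-backup.ab
# ...and to restore after unlocking or reflashing:
adb restore nexus7-backup.ab
```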

[2] Note that this video assumes your bootloader is already unlocked (see [1]). It also assumes you’re using Windows, but I just rebooted to the bootloader manually (Power Off, then hold Power Button+Volume Down) and used fastboot from the Nexus 7 Installer PPA on Ubuntu with no problem.