Blog

Introducing Piccle

I recently published Piccle, a static site generator for photographers. It's a command-line tool that uses the metadata within your photos to build a website. You can learn more & try it out or see it in action, but I want to talk about the philosophy behind it.

Genesis

Photographers have many options for online services to share their work, but none align with the conversations I have about photography. If I'm talking with somebody "about photography", that usually means a starting point like:

  • "I just bought a new camera, I really like it..."
  • "I went to Norway last year..."
  • "I take a lot of portraits..."

And I want to follow these up with "Here, let me show you some photos."

This is harder than you might think! None of the major photo sharing sites1 make this easy, even though digital photos automatically contain a rich sea of metadata. "Date taken" and "camera model" are a given, and cameras increasingly add location info too. It's not much work for a human to add a description, title, and keywords when saving an edited image. But I don't think many people bother, because there's rarely a clear benefit.

It's a real shame that more sites don't use this metadata. Even if you only provide filtering or sorting by "Date taken", this satisfices for most of the use cases above2. Sadly, the popular sites either cannot toggle between "date taken" and "date uploaded", or hide the option away. Their focus seems to be "display one image", not "explore these works"; navigating your photos by metadata is not a priority.

So what's your best option? Most sites – though not Instagram – will let you collect photos into albums. This is tedious and pointless. Why should you have to manually add photos to albums like "Taken in 2019" or "Trip to Norway"? It's trivial for computers to do it – processing and filtering data is kind of their whole thing – but no: the work is on you. It angers me when computers needlessly burden humans. I have filled out countless forms like this:

A screenshot of a credit card form, with an error message about spaces.

The computer knows what's up, but I still need to fix it? Just remove the non-numbers for me! Let me enter "6011000990139424" or "6011 0009 9013 9424", and you can strip the spaces. You already detected this specific case to display the error message. Detect this specific case, fix it, and let's move on with our lives.
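
The fix is a one-liner in most languages. A minimal sketch in Ruby – normalize_card_number is my invented name, not any real library's API:

def normalize_card_number(input)
  input.gsub(/\D/, "") # strip everything that isn't a digit
end

normalize_card_number("6011 0009 9013 9424") #=> "6011000990139424"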

Another drawback to building albums is that your labour goes into the destination – where your photos are displayed – rather than the source (the photos themselves). Storing metadata in your files makes it available everywhere – any site you upload to, any app you open, and the operating system itself. Cataloguing your photos on a third-party site isn't portable – if you spend years building up a well-organised Flickr profile and want to move elsewhere, you'll have to start from scratch3.

What I wanted

I was unhappy with how existing web services presented my photography. They would show individual images nicely, but didn't let people explore my body of work. I didn't enjoy posting to them, and it never felt like I had a good place to link people when talking about photography. I also wanted to host the portfolio myself; you never know when an external service will wither away, be acquired, or shut down abruptly. And if your main "online home" is a third-party service, your audience is not your own4.

When a web developer starts thinking along these lines – a self-hosted, explorable photography portfolio – the big temptation is to build a "proper" web app. Something database-backed, with dynamic pages. It's doable, but there are multiple tar pits:

  • There's a strong temptation to build an admin interface to manage and edit your photos' metadata. This is a classic example of something which sounds easy5, but isn't. The reason third-party sites do a bad job is that this is hard, not that they haven't thought of it.
  • It suggests a lot of customisability, and now you are hurtling towards "content management system" rather than "show off my photos".
  • It's hard for people to try it on their own photos. They must download your software, set up a database, set up a web server, add their photos, and run it. (You can make this a little easier with something like Docker – but now you're presuming they're familiar with Docker).
  • It's hard for people to host it. If they want to publish their site – not just try it out – they need the technical skills to configure it on the external host.
    • As well as needing a dynamic web server and a database, they must also deal with photo files. This means they either need to configure their server to allow uploads, or set up something like an S3 bucket too.
    • How do people preview something before publishing? You have two tricky options: export photos from your local instance to a remote instance, or include the concept of "preview" vs. "published". Now we're back in "content management system" territory.
    • How do you handle backups? An import/export system might help with that local instance → hosted instance problem, but it's yet another thing to code. And part of the problem is that your data comes in two forms: database records and image files. So either you're crafting some kind of specialised binary blob or creating a ZIP file with a manifest – both of which seem somewhat fragile.
    • Once they've got it working, it must stay working. Your code must persevere through language, system, and database upgrades. You need to think about cross-version compatibility and dealing with updates.

In short: your photographer is now a sysadmin.

"Stay working" is important. It's not enough for your software to work; it must keep working. If any of it breaks – the database, the web server, your web app – the user doesn't have a portfolio any more. And this content rarely changes – how often are you publishing photos? The keenest photographer posts a few times a day at most6. Why not generate a static site instead?

A database-backed website is software. A static site is a document. The former's alive – it only works when the software runs – and the latter is dead (it works as long as some other software is alive to read it). This neatly sidesteps a lot of the issues above: hosting static files is the most basic form of web hosting, both for a provider7 and an individual. There's no performance tuning or system administration. It's easy to try out and preview (view the generated site on your own computer) and easy to publish (copy the files to your webhost). And your portfolio is frozen in time – but still works as-is – if the generator breaks.

Mostly dead

Permit me a tautology: a static site is static. The site doesn't change unless someone regenerates it from new data. You might think this sounds dull, flat, and lifeless – but it doesn't have to be. Books are also static, as are movies and albums. Harry Potter, Die Hard, and Rumours are the same every time but feel vividly alive. Your static site can be the same.

Even though we're generating static files, we can include features that make it feel more alive. An Atom feed lets users subscribe for updates; including OpenGraph tags gives informative previews when people share links. CSS transitions & animations can give visual pizazz, and you can use a smattering of JavaScript for features like slide shows.

You could go further, and build a JavaScript-driven single page app. When people think of SPAs, they think of JavaScript talking to an HTTP API. But the API is optional: our JavaScript can read a static JSON file instead. After the initial page load we can render everything in the browser, returning only to the server for images. This gives you the best of both worlds: the speed and reliability of a server-rendered site, combined with the slickness of client-side rendering. All the contemporary JavaScript options are available to you, if you want them.

I still don't know if I want them, though I've designed Piccle with the presumption I do. The idea of switching to client-side rendering after the initial page load is attractive; it's the progressive enhancement dream scenario. The question is: is it worth loading a megabyte of JSON for that8? Piccle already feels snappy when changing pages, and the HTML is very light. I'll experiment, but it might not be an improvement.

What I built

A screenshot of my photography portfolio, showing a grid of images and navigation.

Piccle is a command-line utility built in Ruby. Given a directory of images, it generates a complete site. There's still a database under the hood, but it acts as a cache rather than powering the website directly.

One command generates your site, but it's a multi-step process internally:

  1. New and updated metadata is extracted from the photos to the database.
  2. The photos are faceted based on aspects of the metadata (eg. "camera model", "date", "location"). Each facet is implemented as its own "stream", which groups photos according to its facet. Streams are registered upfront; then, as each photo is added, each stream records its own data (see the sketch after this list).
  3. The website is generated. This uses a NodeJS helper utility for performance reasons. (There's a pure Ruby renderer available too, but it's impractically slow for more than a handful of photos.)
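
To make the "stream" idea concrete, here's a minimal Ruby sketch of the faceting step. It illustrates the shape of the design, not Piccle's actual code – the class and field names are invented:

photos = [
  { camera_model: "X100F", taken: "2019-06-17" },
  { camera_model: "X-T3",  taken: "2019-07-28" },
]

# One "stream" per facet; each groups photos by the metadata it cares about.
class CameraStream
  def initialize
    @groups = Hash.new { |hash, key| hash[key] = [] }
  end

  def add_photo(photo)
    @groups[photo[:camera_model]] << photo
  end
end

streams = [CameraStream.new] # registered upfront; real facets include date and location
photos.each { |photo| streams.each { |stream| stream.add_photo(photo) } }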

Earlier versions supported serving a website from the database, but some queries – and supporting both static and dynamic rendering – proved too unwieldy. It was simplest to abandon that approach.

I describe Piccle as a zero-configuration tool, but that's a lie. There are some customisations available. "Events" are the most notable – named date ranges, handy for themed shoots or trips. I don't know a good way to store these in EXIF data, so events are defined in YAML instead. Generating a gallery from one command makes it easier to slot Piccle into your existing workflow – you can set an Automator action to watch a directory for new files and then run Piccle. Add an rsync command too: now your photos are automatically published to the web the instant you export them from your image editor.

Overall, there's nothing technically revolutionary in Piccle. The value is in how lots of small elements come together to be pleasant to use. Good help text and sensible defaults in the interface; responsive output so galleries look great on phones, tablets, and desktops. Quilt images for each subsection, so shared links look good.

A screenshot from iMessage, showing the quilt.

But at a higher level, I want to hide all of this from users. Piccle should feel obvious; of course social media shares look good. Of course you run one command and get a complete site that's easy to deploy. Of course you can browse by date, location, and tags. Photographers shouldn't need to know anything about web development, or databases, or programming. They should feel like Piccle is a small tool doing one thing well – even if, underneath, it's working hard to do ten things well.

Built for everybody, and yet not

A static site is a good technical fit for a photo gallery. It's also simpler for users: "run once and put the output on the web" is easier than "keep this database-backed system running." Simplicity is one of my goals for Piccle; I want to bring better tools to more people, to make it easier for people to host their own photo galleries. From one perspective, I achieved this: Piccle takes a directory of images and generates a nice-looking website with one command. Publishing HTML is easier than hosting a CMS or a Docker container. If you have some photos in a folder you can try it easily. The generated gallery is OK even if your photos have limited metadata.

But I mentioned Piccle to a friend a couple of months ago, and she said “Let me know when it’s ready & I’ll get the people in my camera club to test it.” This should have been delightful, but I was deeply reluctant.

A camera club is a group of people who own complicated equipment, operate it enthusiastically, and are technical enough to transfer photos onto a computer & edit them using specialist software. Yet there's a huge distance between them and someone who finds Piccle easy to use. How many camera club members already use a command line? How many will have Ruby and NodeJS installed? For me, Piccle is straightforward9. But if you’re not like me, it’s arcane.

I built Piccle for my own use, but I'm bothered by this gulf between my goal – break down publishing obstacles – and the result. Imagine you're a member of that camera club: you’re into photography, you have your photos on your computer, and you want to make a website for them. Your side quests include:

  • Learn enough about the CLI to navigate directories and run commands.
  • Install Ruby and NodeJS (or check your OS includes them).
  • Install Piccle. Hopefully all of its dependencies install cleanly – otherwise you’ll need to chase down things like ImageMagick and SQLite libraries.
  • Find a suitable web host, set up an account, and figure out how to upload all your files in one go.
  • Figure out how to integrate all of this into your workflow. To be useful this can’t be a one-time process; it’s got to be repeated, and reliable, and feel natural.

I don’t have a solution for this. Packaging Piccle up into a GUI and shipping static versions of Ruby/Node/etc is one possibility10. It’s not how I want to use Piccle, but it would open it up to a wider audience. This is an open question, and one I’m still mulling over.

Next steps

Now that I’ve launched Piccle I’ve swung back towards photography rather than coding: editing past images and adding metadata to existing shots. It's satisfying to see it edge closer to a comprehensive archive of my photography, and it’s taught me more about my own work than I expected. This is an ideal outcome: it's nice when programming sparks an interest in more programming, but better when it sparks an interest in creativity. Nothing's better than when a computer inspires you to create.


  1. Flickr is the exception here; they have superb filtering tools. But it's not a perfect solution: you have to pay if you want more than 200 photos visible, the filters aren't easy to find, and you can't host it yourself.  ↩

  2. "I just bought a new camera" implies "show me the most recent photos". You can probably recall the rough month/year of a trip, so jumping to a given date works for travel.  ↩

  3. Or hope that your new home has written a special importer, just for Flickr.  ↩

  4. There's a philosophy called POSSE – "Publish (on your) Own Site, Syndicate Elsewhere." It boils down to "Keep your creative stuff on your own domain, and federate it out." Your blog should live on your own domain. Publish your articles on Medium, Substack, Wordpress, etc. if you like – but keep the canonical version on your own site. Ideally, you get the best of both worlds: the increased audience from the third-party sites, and the control inherent in first-party hosting. It's not as popular as it was, but it still has a lot of appeal for me.  ↩

  5. To a programmer.  ↩

  6. Editing takes time, and if people do produce a lot of work in one go they're likely to publish it as one batch.  ↩

  7. Any webhost can host static files – as can Amazon S3, Github Pages, or Geocities.  ↩

  8. You can split the JSON and load chunks on-demand, but why? You could load the actual page instead.  ↩

  9. I am a programmer, spend a lot of time on the command line, and generally wish for a simpler time on the internet.  ↩

  10. This sounds tricky to implement across Windows, Mac, and Linux.  ↩

Sudbury's Eiffel Tower

I recently passed through Sudbury, Ontario. We visited the Big Nickel.

The Big Nickel in Sudbury

On our way out of town I spotted this structure. Was it... the Tiny Eiffel Tower?

A tower curiously reminiscent of the Eiffel Tower

Alas no:

Actually, it’s a custom-built antenna tower to communicate with the cement trucks of Wavy Industries by VHF FM radio. Commissioned by philanthropic millionaire Owner Clifford Fielding in the mid 1970s. Designed & built by Toronto’s Trylon Mfg. Ltd.

Still, maybe the greatest thing about the internet is its ability to answer questions like these.

Two small photography-related scripts

I just added a couple of small photography-related scripts to Github – one for photo organisation, and one for generating preview galleries. Every photographer has their own workflow, so these are more useful as starting points for your own tools than as something to use out of the box. They're easier to adapt with some backstory.

How I sort my photos

When I finish a photoshoot1, I copy the photos into a directory with a datestamp and a title. For instance:

  • 20190617 - Raptors victory parade
  • 20190728 - fire spinners
  • 201911 - Thailand
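
If you want to script that convention, it's a couple of lines of Ruby (shoot_directory is just an illustrative helper, not part of my scripts):

require 'date'

# Build a "YYYYMMDD - title" directory name for a shoot.
def shoot_directory(title, date = Date.today)
  "#{date.strftime('%Y%m%d')} - #{title}"
end

shoot_directory("fire spinners", Date.new(2019, 7, 28)) #=> "20190728 - fire spinners"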

Next, I pick the photos I want to edit. I use macOS' filesystem tags to organise these into two categories – "definitely" (for my best shots) and "maybe" (for shots with potential). Tags are great:

  • They're part of the OS – so you can use them in searches, in file → open dialogs, and so on. One click in the Finder sidebar shows me my edit queue.
  • You're not locked into one app for organisation, making it easier to change apps or use multiple editors.
  • Your tags get backed up as part of your regular Time Machine backups.
  • Tags are attached to individual files, so if you copy photos to an external drive the tags are copied too.
  • You can interact with tags programmatically, so you can build your own tooling on top of them.

The Finder's built-in tools are enough to sort through my photos from small photoshoots. Open a Finder window, switch it to thumbnail mode, maximise the zoom, and resize to get two photos per row. Then select a file, and press space to open Quick Look to check out the photo. If you like it, right-click → click a tag to add it.

A screenshot of a Finder window, showing the maximum zoom for filtering images

This doesn't work so well for larger photoshoots. It isn't easy to winnow down your selections2, and you move between the keyboard and the mouse a lot.

I use Affinity Photo as an image editor, and I really like it. But I recently picked up a copy of Exposure X5, mostly for its many film presets/colour grades (including some passable imitations of my camera's built-in presets). Affinity Photo's similar to Photoshop, whereas Exposure's more like Lightroom – which means it's also capable of organising your images and faster winnowing. Load a directory of images, sort through them applying flags/ratings, then filter down to the flagged/rated images and upgrade/remove the ratings to suit. There are keyboard shortcuts for these, so it's potentially faster – though Exposure seems to do a poor job of pregenerating RAW previews, so I've found some frustrating delays.

If you did this in Lightroom, those ratings would be stored in Lightroom's central catalog. Exposure, however, stores all its metadata alongside your files. This makes it much easier to work with programmatically! I wrote a small script, tagsfromexposure, which reads Exposure data and creates filesystem tags. Flagging an image as "pick" in Exposure translates to "definitely"; 2 stars or above translates to "maybe". This uses the tag command line utility for tagging, as manipulating tags directly from Ruby looked complicated.
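
The heart of the script looks something like this. It's a simplified sketch rather than the real code – in particular, exposure_rating is a hypothetical stand-in for the part that reads Exposure's sidecar data:

def macos_tag_for(flag, stars)
  return "definitely" if flag == :pick
  return "maybe" if stars >= 2
  nil
end

Dir.glob("*.{jpg,raf}") do |photo|
  flag, stars = exposure_rating(photo) # hypothetical: parse Exposure's sidecar data
  tag = macos_tag_for(flag, stars)
  next unless tag
  system("tag", "--add", tag, photo) # shell out to the `tag` utility to write the filesystem tag
end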

Generating preview galleries

Once I've picked my photos, I generate a preview gallery for online sharing. If it was a "proper" photoshoot, this lets my model pick their favourites for me to edit; if it was a day out or a trip abroad, I can share the good shots with family & friends. I dislike image editing and I'm a slow editor, so most of my photos would never be seen if I didn't publish these previews.

albumfromtags uses Sigal to generate these galleries based on macOS tags. Anything with a "maybe" or "definitely" tag gets included. This script also uses the tag utility to read filesystem tags. It copies the images to a new directory, uses Sigal to generate the gallery, then removes the temporary directory. You'll need to customise the sigal.conf.py.example file to add your name for the gallery's copyright notice, save it somewhere permanent, and edit the albumfromtags file to point to the config.
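
Its skeleton is roughly this – again a sketch of the approach rather than the script itself, and it assumes the tag utility and Sigal are installed (the real script also points Sigal at your saved config file):

require 'fileutils'
require 'tmpdir'

# Ask `tag` which photos in this directory carry our tags.
picks = `tag --match 'maybe' *.jpg`.split("\n") +
        `tag --match 'definitely' *.jpg`.split("\n")

Dir.mktmpdir do |dir|
  FileUtils.cp(picks, dir)                 # stage the picks in a temporary directory
  system("sigal", "build", dir, "gallery") # generate the gallery; the tmpdir is removed afterwards
end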

----------

One thing I like about these scripts & this workflow is how modular they are. I've taken several unrelated tools, and used a small amount of code to glue them together so they work for me. They're easy to extend and change. There are no custom data formats involved – just the operating system's features. These are all unixy approaches to building tools – an approach as useful in 2019 as it was in 1989.


  1. This can be an actual photoshoot, or a day out, or a trip abroad, or whatever. I also keep monthly "miscellaneous Toronto" directories for my handful of day-to-day shots around the city.  ↩

  2. For instance, if you have 3 good portraits of someone in a particular pose, there's a lot of clicks and filters needed to tag all 3 as "maybe" and then later upgrade one to "definitely".  ↩

Four Cool URLs

One of the things I've always found elegant about programming is how small, simple elements can be combined to accomplish complicated tasks. Sometimes this is a compound elegance (like stringing unrelated UNIX commands together to achieve your goal); sometimes it's an atomic elegance (like someone using something simple in a way you never imagined).

URLs have been around for more than 20 years. They're a method of "identifying a resource", which means "unambiguously pointing to something on the internet." That's how they're normally used: URLs point to websites, articles, images, songs, videos, downloads – everything. Most people don't give them much thought, in part because web browsers increasingly hide URLs away. But some URLs have special powers. Here are some that knock my socks off.

Goodhertz – Vulf Compressor preset

https://goodhertz.co/vulf-comp/2.0.1/?cm:90/wf:0/out:-7.5/cat:8/cre:1.3/csl:0/drl:0

Goodhertz is a company that makes audio processing plugins1. Presets let people save their favourite sounds, share them with others, and keep them safe so they can revisit the settings later. They're also a great way to learn: if you want to learn how someone got a great sound, they can show you the settings.

This URL is cool for a couple of reasons: first up, it's a preset in itself. Pass it to the audio plugin, and the plugin uses the settings from the URL. But it's also a valid web page2 that shows the settings used, in a more human-readable format. This is great for learning, and if you find yourself in a niche situation3 it might make your life easier.

Reddit - /r/redditlaqueristas

https://reddit.com/r/redditlaqueristas

There's nothing specific about the Laqueristas subreddit that made me select it, beyond it being a great example of an online interest-specific community. I could have picked any Reddit URL for this demo – and indeed, that's what makes this URL cool. Take any Reddit URL, append .json to it, and you'll get back a JSON version:

https://reddit.com/r/redditlaqueristas.json

This is a really user-friendly way of exposing an API! It makes it easy for programmers to get started – no need to register an API key, set up OAuth, or anything like that. Those barriers are often counterproductive (as people will just scrape the web pages rather than jump through those hurdles). It's not just subreddits – if you want to retrieve user details, or the contents of a post, it works there as well. Removing a .json can be useful too: if you're trying to debug some code that interacts with Reddit, you can take the relevant link and turn it back into a form designed for human consumption rather than trying to read through the JSON.
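
Here's how little code it takes to use – a Ruby sketch with nothing but the standard library (the User-Agent string is arbitrary; Reddit throttles requests from default agents):

require 'open-uri'
require 'json'

url = 'https://www.reddit.com/r/redditlaqueristas.json'
listing = JSON.parse(URI.open(url, 'User-Agent' => 'cool-urls-demo/1.0').read)

# A subreddit listing nests its posts under data/children.
listing['data']['children'].each do |post|
  puts post['data']['title']
end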

The concept of content negotiation – requesting a different format of the same resource via the HTTP Accept header – has been around for a while, but I'd really like to see more of it. Wouldn't it be neat to get a PDF version of your receipt by appending .pdf to the URL? Websites can use <link rel="alternate"> to expose these other versions to the world, though sadly most browsers don't communicate those alternate versions to users.

Combine.fm

https://combine.fm/spotify/album/4PXQAD8t4hfKeAlQiEc7mM

It seems the idea of an open music web is pretty much dead. It's all closed silos these days: there's no interoperability between Spotify, YouTube, Apple Music, Soundcloud, and so on. If you're sharing a track with a friend, what do you do? Generally, the most practical solution is to send a YouTube link. That tends to work OK, but isn't ideal4. You could send them a link to your preferred service, but they might use a different one. And you probably don't keep track of which friend uses which service.

Enter Combine.fm. You hand it a link to something on a music service:

https://open.spotify.com/album/4PXQAD8t4hfKeAlQiEc7mM

And it gives you links to that music on all the services it can:

https://combine.fm/spotify/album/4PXQAD8t4hfKeAlQiEc7mM

Now your friend can click through to their music provider and play what you shared. That's a cool service, but look at the URL: it's pretty similar to the Spotify one.

This means you don't need to use Combine.fm to construct a Combine.fm URL – you can do it yourself, or write a script to handle it. If the Combine.fm service disappears or changes, you haven't lost a link to your music – the original ID is intact.
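
For instance, a few lines of Ruby can build the Combine.fm link from the Spotify one (a sketch; the same path reuse works for the other services Combine.fm understands):

require 'uri'

def combine_fm_url(spotify_url)
  path = URI(spotify_url).path # "/album/4PXQAD8t4hfKeAlQiEc7mM"
  "https://combine.fm/spotify#{path}"
end

combine_fm_url('https://open.spotify.com/album/4PXQAD8t4hfKeAlQiEc7mM')
#=> "https://combine.fm/spotify/album/4PXQAD8t4hfKeAlQiEc7mM"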

Traintimes.org.uk

Traintimes.org.uk is a site that provides "accessible UK train timetables". It's an unofficial site, but I vastly prefer it to National Rail Enquiries. It's got a simple visual design and works great without JavaScript – which means it's a tiny web page. Tiny web pages are quick to load, and work well even on poor 3G coverage. Just what you need if you're in the middle of nowhere trying to figure out a route home.

The URL is cool because it's designed to be human editable. Let's look at trains from London to Brighton:

https://traintimes.org.uk/london/brighton/11:30/today

Do you happen to know the official 3 letter code for London Bridge, and want to travel from that station specifically? You can drop that in:

https://traintimes.org.uk/LBG/brighton/11:30/today

Do you want to leave a couple of hours later? That's also an easy change:

https://traintimes.org.uk/LBG/brighton/13:30/today

Are you actually planning for tomorrow? Change the last part.

https://traintimes.org.uk/LBG/brighton/13:30/tomorrow

This is all documented on the front page of the site, but you don't need to know these details to use the service. There's a friendly form to start your search, and the results page lets you navigate to earlier or later trains (along with the return trip). But a user interface that tried to cater for every use case would be cluttered, and supporting URL editing doesn't cost anything. If you use the site often enough to be familiar with the URLs, it's often quicker to edit one than to perform a fresh search.
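
That predictability means you can generate searches without visiting the site first. A trivial Ruby sketch:

def traintimes_url(from, to, time, day = 'today')
  "https://traintimes.org.uk/#{from}/#{to}/#{time}/#{day}"
end

traintimes_url('LBG', 'brighton', '13:30', 'tomorrow')
#=> "https://traintimes.org.uk/LBG/brighton/13:30/tomorrow"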


We can see some general principles at work in these URLs:

  • A URL points to a thing, but it can also be the thing itself. In the Goodhertz case, the preset was the URL (and the URL was the preset). A page of search results is a more prosaic example: the results displayed match the parameters for the search in the URL.
  • URLs can be for both human and machine consumption. The Goodhertz URL is primarily for consumption by software – the plugin that uses the preset. The Traintimes URL is designed to be easy for a human to manipulate. The Reddit page is a hybrid: easy for a human to transform into a computer-friendly format by adding .json to the end.
  • URLs can be robust. Even if the Combine.fm service fails or dies, you can easily return to the original links.
  • URLs can be predictable. You don't need API documentation to get started with Reddit's API. You don't need to involve Combine.fm to generate a valid Combine.fm link. If you learn the format of the Traintimes links, you could type one in by hand.
  • Let power users edit your URLs. Most Reddit or Traintimes users aren't going to retrieve JSON or explore times by editing the URL, but power users can do so. There's no extra cost to that apart from the initial design, and your users benefit from having the option.
  • Good URLs are descriptive. The Traintimes URL describes the results you'll see. Even if you've never visited the site, you can make a reasonable guess about what https://traintimes.org.uk/london/brighton/11:30/today would show you.

These principles aren't applicable to every scenario. Sometimes a link is just a pointer to a particular document online. A news website would find it hard to let users edit their links in a meaningful way. And sometimes you want your URLs to be hard to predict (for instance, you don't want people to guess the links to your Google docs).

URLs are consumed by machines, but they should be designed for humans. If your URL thinking stops at "uniquely identifies a page" and "good for SEO", you're missing out.


  1. Software that changes sound. Want to add an echo effect? Make your vocals sound like a robot? Make your audio sound like it's underwater? These plugins are one way of accomplishing that.  ↩

  2. Some software constructs strings that look like URLs to store info, and the software can use the URL, but it doesn't lead to a working web page. Those always make me sad.  ↩

  3. "My laptop doesn't have an internet connection in the studio, but I only have this preset on my phone. Do I have to type it all out by hand? Oh cool, I can see the settings on my phone & use the interface to set them all."  ↩

  4. The quality can be iffy, it might get taken down due to copyright claims, it might not be available in their territory. They can't add it to a playlist, save it to their music library, or listen in the background when on a mobile device. Plus it's video rather than audio, so it's not mobile-data-friendly.  ↩

A review of Beyerdynamic's Byron headphones

I bought a pair of Beyerdynamic Byrons1 this year, and I hate them. But before I explain why, I'm afraid we must explore my history of headphones.

I've used and loved a pair of Sennheiser HD280s at home & work for over a decade. Two pairs, in fact; one pair broke after a respectable 8 years of daily usage. They're great: amazing sound, solid build quality, spare parts are available, people nearby can't hear your music, and they smother enough ambient noise to make open-plan offices more peaceful. But they are indoor headphones: they look ridiculous if worn outside, and they're too bulky for that anyway. So the Sennheisers live on my desk, and I use earbud headphones when out in the world.

The particular choice of earbuds has varied through the years. I used some Sennheisers for a while, and they were fine. When I got an iPhone the convenience of the remote/microphone made me switch to the included EarPods. And I liked them a lot! The sound was acceptable, the remote/mic was useful2, and the build quality was good. Well, the build quality was OK. Kind of OK. Not... not that OK.

The earpieces themselves are solid, and the remote's always worked fine. But I've owned several pairs of EarPods, and they all end up like this:

The EarPods, with heatshrink

Designers generally include strain relief when a cable meets a plug. This prevents the wire from being pulled into sharp angles and getting damaged. The EarPods have this too, but it's just a small piece of rubber. Over time, this gets broken and splits off. Then the headphones start to cut out in one ear, and the remote goes wild when you're not touching it. It's easy enough to add some heatshrink to compensate for this if you have a nearby Hackspace, and this fixes things for a while, but you're now on borrowed time. The cable will fail, and your headphones will crackle or be silent.

I've gone through at least 3 sets of EarPods since 2012, and when my latest pair started to fail I decided enough was enough. Surely I could spend a little more, and get something that doesn't consistently fail in the same way? I knew I didn't want Apple's AirPods3, so I looked around, read some reviews, and settled on Beyerdynamic's Byron headphones. Around $65 from Amazon, compared to $35 for the EarPods.

When looking around, here's what I thought my priorities were:

  1. They've got to sound good. They don't have to be amazing4, but it's got to be good. At least as good as the EarPods.
  2. They've got to have a remote.
  3. They've got to have decent build quality. If I'm spending a bit more money, they should last longer than the EarPods.
  4. They've got to be wired. I don't need another gadget to charge in my life, and can deal with trailing cables.

Beyerdynamic's Byron headphones

Let's start with their good points. They've got a slightly longer cable, which was more convenient than I expected. The build quality seems stronger, particularly around the plug. Just look at that strain relief! It's going to last way longer than the EarPods. They cut down a lot of outside noise – great for plane/coach trips – and they're more comfortable to sleep with. Finally, the sound quality can be great. There have been a couple of times when I've listened to music I knew well, and heard something new – something I'd missed even on my big Sennheisers. That's strong praise.

But despite these upsides, I cannot recommend them. More than that: I feel that people must be warned. It turns out I had a greater priority, one I never realised. One I never thought to enumerate:

  1. They've got to stay in your ears.

And they don't! They just don't. Whenever I go for a run they fall out constantly, and I look like a newscaster who thinks he's lost contact with the studio. These headphones are pushed into the ear canal, yet they still fall out. The EarPods just sat on your ear-shelf, but never fell out. It's genuinely baffling, and it fills me with anger. The Byrons' earpieces are made from grippy rubber, fit tightly, and have more surface area in contact with the ear. Yet they're not as secure as the EarPods? How is that possible? It's not just running, either: it happens when walking around the city too. Not as much, but it's still a notable flaw.

But let's put that to one side, and return to the sound quality. I said before it can be great – but it can also be terrible. The Byrons will sound great when sitting at exactly the right point in your ears. Too deep, and they're thuddy, dull, and overbearing. And if they're loosening – which they will be – they're just empty, with nothing but high frequencies. But that's not all: if the cable moves at all you'll get a constant sub-bass rumble. This is maddening on a treadmill, and a bit irritating at other times. I'm not convinced this is a problem specific to the Byrons – I suspect it affects all in-canal headphones – but it's still a reason not to buy them.

Beyerdynamic themselves suggest buying alternative ear pieces for sports. I briefly considered this, before realising it was completely bananas to spend another $20 in the hope that these headphones were not a lost cause. You get 3 sizes of ear pieces in the box, and I have tried them all. None of them seems to make any difference.

The remote is another problem – though this one's my fault. The Byrons are designed for Android phones. In practice this means a couple of the connections in the TRRS jack are transposed, so the remote/mic won't work with Apple devices. I should have read the description more closely before buying, and I could live with a non-functional remote. But the remote works! To a different degree on every device!

  • On my iPhone, the remote acts completely normally. Yaaaaay.
  • On my iPad, the volume buttons don't work but the play/pause button does.
  • On my laptop, the volume buttons move the volume in the correct direction, but don't stop moving it. So I have a "turn sound off" button and a "destroy my ears" button.

As I say: my fault. But if it's going to work weirdly, could it at least be consistent? Or maybe include an adapter in the box, if Android/Apple compatibility is just a case of transposing two connections?

Finally, let's take a closer look at the headphones after a month or so of use.

Byron headphones, looking into the drivers

Look closely at the driver on the top. No mesh! Somehow, the mesh has fallen out. I don't know how I'll clean it when it gets gunky. I've also managed to lose two of the rubber cups somewhere – so now both ears fit poorly and differently. An asymmetrical annoyance.

I bought a pair of Beyerdynamic Byrons this year, and I hate them.


  1. This is an affiliate link, as are the other Amazon links in this article. If you follow it to the Byrons, I recommend you buy something else.  ↩

  2. Extremely useful after I switched to a MacBook Pro. I always had the EarPods on me, so I always had an acceptable Skype headset. Perfect for surprise calls wherever I was.  ↩

  3. Which discombobulates me, as they have astonishingly good reviews, but I don't think they'd work for me. I worry about them falling out when running/cycling, I would miss the remote, and I really don't want to own something that needs to be charged several times a day.  ↩

  4. Sound quality is pretty closely tied to the size of the driver, so in-ear headphones will almost always sound worse than over-ear headphones.  ↩

Exploring weird maths with code

Sometimes, while reading an innocuous-seeming article, I stumble across an aside that makes me sit bolt upright and mutter something incredulous. Asides like this one:

A counterintuitive property of coin-tossing: If Alice tosses a coin until she sees a head followed by a tail, and Bob tosses a coin until he sees two heads in a row, then on average, Alice will require four tosses while Bob will require six tosses (try this at home!), even though head-tail and head-head have an equal chance of appearing after two coin tosses.

Wired

This was a surprise! The four possible outcomes of two tosses are equally likely, so it seems weird that a heads-tails outcome would take longer to reach than a heads-heads. Weird enough to try it at home – at least by programming. Let's write some Ruby and see if we get the same result. (I recommend opening irb and exploring these examples for yourself if you want to fully understand them.)

Checking some assumptions

First of all, let's agree to toss a coin by picking a random symbol from an array1:

def coin_toss
  %i(heads tails).sample #=> :heads or :tails. 
end

And let's confirm that this is close enough to 50/50, by counting the result of tossing a coin 100,000 times:

results = {heads: 0, tails: 0}
100000.times { results[coin_toss] += 1 }

puts "After 100000 tosses we saw #{results[:heads]} heads and #{results[:tails]} tails."

.sample chooses an element at random, so the result will be a little different each time. I ran this program 10 times, and got these results:

After 100000 tosses we saw 50131 heads and 49869 tails.
After 100000 tosses we saw 49845 heads and 50155 tails.
After 100000 tosses we saw 50094 heads and 49906 tails.
After 100000 tosses we saw 49672 heads and 50328 tails.
After 100000 tosses we saw 50062 heads and 49938 tails.
After 100000 tosses we saw 50046 heads and 49954 tails.
After 100000 tosses we saw 50003 heads and 49997 tails.
After 100000 tosses we saw 50094 heads and 49906 tails.
After 100000 tosses we saw 50124 heads and 49876 tails.
After 100000 tosses we saw 49838 heads and 50162 tails.

I think these results look OK, but the next thing I tried was busting out some statistics and checking the standard deviation. You can think of it as a measure of how closely-clustered our results are – we'd expect to get a low standard deviation if .sample is fair. Calculating the standard deviation is a little bit complicated, so I used the descriptive_statistics gem to make it easier. Let's calculate the standard deviation of the number of heads in each run:

require 'descriptive_statistics'
[50131, 49845, 50094, 49672, 50062, 50046, 50003, 50094, 50124, 49838].standard_deviation #=> 146.014

But is 146.014 low or not? I have no idea! This is where my statistics knowledge runs out. For now, let's presume our eyeballs are correct and our coin tosses are fair.

Back to the question

If we can toss a coin fairly, we can return to our original question: how many tosses, on average, does it take to reach a given combination?

We'll need a target combination, we need to toss at least twice, and we want to toss until we hit the target:

target = [:heads, :heads]
tosses = [coin_toss, coin_toss]

until tosses.last(2) == target
  tosses << coin_toss
end

I ran this in irb and I got [:tails, :tails, :heads, :tails, :heads, :heads]. It works! Let's turn this into a method so we can reuse it:

def tosses_until(target)
  tosses = [coin_toss, coin_toss]
  until tosses.last(2) == target
    tosses << coin_toss
  end
  tosses
end

Running the experiment repeatedly will make our result more reliable. If something weird happens once it could be a fluke, but you can't fluke something thousands of times. We could use the .times method again, and build up an array of results like we built the array of tosses:

experiments = []
100000.times { experiments << tosses_until([:heads, :heads]) }

Or we can make this shorter by using Ruby's .map method. .map applies a block to every element in a list. It's normally used to build a new list from an existing one:

["cat", "dog", "avocado"].map { |t| t.upcase } #=> ["CAT", "DOG", "AVOCADO"]
(1..4).map { |n| n * 3 } #=> [3, 6, 9, 12]

But it doesn't matter if we throw the original elements away instead. You can try this in the console, but beware! It's going to print out all 100,000 results.

experiments = (1..100000).map { tosses_until([:heads, :heads]) }

It's not really relevant to our experiment, but I wondered what the shortest and longest sequences until our target were. You might expect that we can use experiments.min and experiments.max to find out:

experiments.min #=> [:heads, :heads]
experiments.min.length #=> 2
experiments.max #=> [:tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :heads, :tails, :heads, :tails, :heads, :heads]
experiments.max.length #=> 21

But that's not quite right2 for the maximum case. It looks right, though – a handy reminder that verifying data by eye can lead you astray. Instead, we need to use .max_by to explicitly look at the length of the array:

experiments.max_by { |e| e.length }

This pattern – calling a method on the value passed into the block – is common, so Ruby provides a shorthand for this:

experiments.max_by(&:length) #=> [:heads, :tails, :tails, :tails, :heads, :tails, :tails, :tails, :heads, :tails, :tails, :tails, :tails, :tails, :heads, :tails, :tails, :heads, :tails, :heads, :tails, :tails, :tails, :heads, :tails, :tails, :heads, :tails, :tails, :heads, :tails, :tails, :heads, :tails, :tails, :heads, :tails, :heads, :tails, :tails, :heads, :tails, :tails, :heads, :tails, :tails, :heads, :tails, :tails, :heads, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :tails, :heads, :heads]
experiments.max_by(&:length).length #=> 61

Let's put all this together in one place, and add some output about our results:

def coin_toss
  %i(heads tails).sample #=> :heads or :tails. 
end

def tosses_until(target)
  tosses = [coin_toss, coin_toss]
  until tosses.last(2) == target
    tosses << coin_toss
  end
  tosses
end

experiments = (1..100000).map { tosses_until([:heads, :heads]) }
average_toss_count = experiments.reduce(0) { |sum, n| sum + n.length } / experiments.length.to_f # We'll talk about this line below.

puts "Our shortest sequence was #{experiments.min_by(&:length)}"
puts "Our longest sequence was #{experiments.max_by(&:length)}"
puts "On average, we had to toss #{average_toss_count} times before (heads, heads) came up."

.reduce is a close cousin of .map. .map does something to every element in a list; .reduce takes two elements from a list and boils them down into one. It does that repeatedly to produce a final value:

[1, 2].reduce { |a, b| a + b } #=> 3
[1, 2, 3].reduce { |a, b| a + b } #=> 6: [1, 2, 3] → [3, 3] → 6.
[1, 2, 3, 4].reduce { |a, b| a + b } #=> 10: [1, 2, 3, 4] → [3, 3, 4] → [6, 4] → 10.

You can also give .reduce a starting value, which is what we did in our program:

[1, 2].reduce(10) { |sum, a| sum + a } #=> 13: 10 + 1 = 11 then 11 + 2 = 13.
[1, 2, 3].reduce(10) { |total, a| total + (a * 2) } #=> 22.

We started our toss count at 0, then added the length of each run to that total. Finally, we divided it by the total number of runs to get an average. The .to_f on the end converts the length to a floating point number, because we'd like to see the decimal places in the result.

9 / 2 #=> 4; really "4 remainder 1", but Ruby throws the remainder away
9 / 2.to_f #=> 4.5

Simplifying our code

This works, but is more complicated than it needs to be. Our goal was to find out how many tosses, on average, it takes to hit our target – we don't care about the sequence of tosses to get there. Let's change our tosses_until method to return the number of tosses instead of the sequence itself:

def tosses_until(target)
  tosses = [coin_toss, coin_toss]
  until tosses.last(2) == target
    tosses << coin_toss
  end
  tosses.length
end

This lets us make our trial run code simpler. We could build an array of the sequence counts, then add it up:

experiments = (1..100000).map { tosses_until([:heads, :heads]) }
average_toss_count = experiments.reduce(&:+) / experiments.length.to_f

We could skip the array entirely, and just maintain a total:

total_experiments = 100000
total_tosses = 0
total_experiments.times { total_tosses += tosses_until([:heads, :heads]) }
average_toss_count = total_tosses / total_experiments.to_f

Or we could use reduce again:

total_experiments = 100000
total_tosses = (1..total_experiments).reduce(0) { |sum, _| sum + tosses_until([:heads, :heads]) }
average_toss_count = total_tosses / total_experiments.to_f

The "best" version is a matter of taste, but personally I prefer the first version. It uses more memory, but that doesn't matter in experiments like these. It's the shortest code, we can find the longest run of tosses, and it's reasonably clear how it works once you get your head around .reduce.

Let's put the first version into a method that runs the experiment and reports the outcome for a given target:

def coin_toss
  %i(heads tails).sample
end

def tosses_until(target)
  tosses = [coin_toss, coin_toss]
  until tosses.last(2) == target
    tosses << coin_toss
  end
  tosses.length
end

def average_toss_count(target, num_experiments)
  experiments = (1..num_experiments).map { tosses_until(target) }
  average_toss_count = experiments.reduce(&:+) / experiments.length.to_f

  # sprintf formats the average so it prints to two decimal places only.
  puts "On average, we had to toss #{sprintf('%.2f', average_toss_count)} times before #{target.inspect} came up. Our longest run was #{experiments.max} tosses."
end

The other cases

Now we have all the building blocks to run the experiment for each of the four possible outcomes:

targets = [[:heads, :heads], [:heads, :tails], [:tails, :heads], [:tails, :tails]]
targets.each { |target| average_toss_count(target, 100000) }

Which produces:

On average, we had to toss 5.98 times before [:heads, :heads] came up. Our longest run was 52 tosses.
On average, we had to toss 4.00 times before [:heads, :tails] came up. Our longest run was 22 tosses.
On average, we had to toss 3.99 times before [:tails, :heads] came up. Our longest run was 20 tosses.
On average, we had to toss 6.00 times before [:tails, :tails] came up. Our longest run was 55 tosses.

Sure enough, it takes longer on average to hit [:heads, :heads] or [:tails, :tails] than [:heads, :tails] or [:tails, :heads], even though each outcome has an equal probability. It's still weird, but now I'm satisfied it's true.

Why does this happen?

Let's go back to Alice and Bob, who are targeting [:heads, :tails] and [:heads, :heads] respectively:

Player  Target
Alice   H T
Bob     H H

Let's presume they both win their first toss – they both get a result they're looking for:

Player  Target  Result 1
Alice   H T     H
Bob     H H     H

Then, presume they lose their second toss:

Player  Target  Result 1  Result 2
Alice   H T     H         H
Bob     H H     H         T

There's now a major difference between the two players: Alice can hit her target on toss 3, but Bob can't until toss 4. Bob must start over after losing on toss 2; Alice's loss can be part of a win if she gets a tails on turn 3.
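
We can make this intuition exact with a little expected-value algebra (a quick derivation to back up the simulation; you don't need it to follow the rest of the post). Write E_0 for the expected number of tosses starting from scratch, and E_1 for the expectation once the previous toss was a head. For Bob (targeting HH), a tail after a head sends him back to the start:

E_0 = 1 + \tfrac{1}{2}E_1 + \tfrac{1}{2}E_0, \qquad E_1 = 1 + \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}E_0 \;\Rightarrow\; E_1 = 4,\; E_0 = 6

For Alice (targeting HT), a head after a head leaves her still just one tail away from winning:

E_0 = 1 + \tfrac{1}{2}E_1 + \tfrac{1}{2}E_0, \qquad E_1 = 1 + \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}E_1 \;\Rightarrow\; E_1 = 2,\; E_0 = 4

Those are exactly the averages of 6 and 4 that the simulation found.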

Exercises

If you'd like to explore this some more, here are some suggestions for things to try:

  1. Change the program so it runs the experiment a million times instead of 100,000.
  2. If we toss three coins, there are eight possible outcomes. How long does it take, on average, to hit each combination? Are there some sequences that take longer than others?
  3. We left our proof of a fair coin toss at "Yeah, that looks OK." Can you do better? How would you satisfy yourself that it's producing fair results?

  1. %i() is Ruby shorthand that generates an array of symbols. %i(foo bar baz) means the same as [:foo, :bar, :baz].  ↩

  2. But why doesn't this work? When we call .min, Ruby uses the <=> comparison operator to find the smallest value in the list. experiments is an array of arrays; <=> for arrays calls <=> on each of the elements of the list in turn until it finds a difference. In this case, our list elements are symbols. Symbols get converted to strings before comparison, and "heads" < "tails" because "h" < "t". So the upshot of this is that experiments.max returns the result with the longest initial streak of tails.

    Yes, I had to look this up in the documentation.  ↩

The EU Referendum: A Retrospective

I have tried for days to write about the referendum, but I keep getting overwhelmed by the immensity of it. Will we actually leave, or prevaricate forever? Can we negotiate reasonable trade deals, or will the EU make an example out of us? Will companies still open offices in the UK now we're no longer a gateway to Europe? Will Scotland become independent? What happens next in Ireland? Will our most deprived regions keep their funding? Will workers' rights be protected? Any one of these would be Pandora's box; we have opened many at once.

All of these issues are important, and all of these are beyond my control. They're also beyond my foresight: I have no idea what happens next. The stock market is suffering and the pound is at a thirty year low. These falls came from the decision to leave, but the fluctuation comes from the uncertainty. Uncertainty is the UK's greatest national resource now. We can certainly export that to the world.

A collage of anti-EU, anti-migrant front pages from UK newspapers.
Nothing says tolerance, compassion, and decency like calling people "Ethnics". Collage via @gameoldgirl.

Everyone promptly found out that the "leave" campaign was a Potemkin village, but its shoddy foundations were laid over the previous decades. The tabloid press constantly pumped out anti-EU & anti-immigrant froth, and nobody found a way to combat it effectively. Politicians found they could use these fears to their advantage, so why try to dispel them? Besides, it would invite the wrath of the press.

Without this backdrop – a nation flooded by freeloaders, powerless to prevent pointless meddling from Brussels – the UK would never vote to leave. It would have sounded preposterous. It was our government alone that failed to invest in the NHS, to build houses and schools, to make sure our post-industrial regions weren't dependent on grants, and allowed employment to become more precarious. Nothing to do with Europe. But it's no surprise that the people on the losing end of rising inequality would vote against the status quo.

As a user, given that I have a time machine...

I've grappled with two questions since Thursday night: "What should I1 have done differently?" and "What should I1 do now?". I'd kept my own counsel in previous elections but I spoke up a little this time. Some of that was amongst friends, but I also made a small website that laid out the benefits of European co-operation. I tried to back up all my claims, but my goal was to change people's feelings – not their minds. I wanted undecided people to see this long list and think "Wow, I never realised that the EU had a part in all this". People in the UK think of the EU as faceless, ineffectual, meddling bureaucrats who force legislation upon us; I hoped to replace that with some affection.

The site was a small success. It reached a couple of thousand people, and sparked some discussion showing it reached folk who weren't voting "remain" already. But I can't shake the feeling that my aim was off. Older people are more likely to vote, and more likely to vote "leave" – but they're harder to reach through the internet, and I don't have a voice in traditional media. People outside of large cities were more likely to vote "leave" – but they're harder to reach as my social circle is very urban. What could I have done differently to reach those groups? What medium should I have used? Would a different message have resonated more?

This campaign seriously impressed me. It's so simple, but appeals directly to the viewer's sense of identity. Just three words and a picture of Churchill speak volumes about persevering through tough times and standing with our neighbours.

Or is this the wrong question? Instead of asking how to reach a different audience, perhaps it's better to convince my audience they need to vote. My gut says that's a harder problem – people have been trying, unsuccessfully, to motivate the younger generation to participate in politics for years. Transforming online activism into real-world action is Herculean. I don't know what I can do as an individual, but Facebook's "I voted" feature is the strongest encouragement I've seen online.

What do we do now?

I doubt we'll see a second referendum. We'd need to negotiate a new deal with the EU – one different enough to merit putting it to the vote again. But Europe wants us out and doesn't need to negotiate with a gun to its head. We already had many exceptions to EU rules but voted to leave anyway. So Europe has no motivation to offer us a deal, and no pro-EU politician will want to risk a second "leave" outcome. We might hope for a stalemate – the UK never invoking Article 50, the EU not finding a way to force us out – but I expect some combination of economic uncertainty & European resentment will result in Britain leaving the EU.

Journalists and politicians will try to identify the effects of leaving, but conclusive evidence will be scarce. You can't see the corporate headquarters that gets built in France instead, nor can you see the uncreated jobs from a lack of economic growth. Businesses don't fail for one reason alone. Infrastructure takes at least a decade to become obviously dated; too slow to recognise and attribute.

Individuals can't change the UK's situation, but we can make our communities better. I have four concrete suggestions:

  • Stand up for others when you see abuse and prejudice.
  • Talk with your friends and neighbours about your beliefs. Don't proselytise; just listen to what they say, and gently try to move their opinions a little. Be compassionate and polite. You're trying to show people that there's a huge range of perspectives in the world, and to dispel myths & fears.
  • Lobby your MP to focus on their constituency instead of party politics. MPs need to support job security and job creation. They need to protect workers' rights and the social safety net. Let them know you expect this of them.
  • Hold the people who got us into this mess to account. They convinced us to leave, but don't want the responsibility of figuring out the details or standing by their pledges. And don't forget the disgusting parts either.

I also have an idea for another project. Something that makes it easier for people to engage with the politics that affects them, not the Westminster soap opera. I don't know if it will see the light of day, but I'm trying to use my anxiety about the future to propel it forward. It might not help after all, but anything's better than just looking on in horror.


  1. "I" really means "we": "what should an individual citizen, acting in their country's best interest, have done differently?"  ↩

How to Expand an OSX FileVault partition on El Capitan

I switched to OSX as my primary operating system around a year ago, after a lifetime of running Linux on the desktop. Using Ubuntu on a Macbook Pro is surprisingly straightforward and didn't require any low-level finagling, but it did come with some annoyances; annoyances that led me to try OSX as my primary OS. I kept the dual boot, but over time I wanted more disk space available to OSX. I had to piece the process together every time; here's what worked for me.

Step 0: take a good backup.

Editing partitions carries a risk of losing all your data. Back up everything! In both operating systems! These steps worked for me, but might not work for you. And while we're talking precautions: choose a time when you don't have important deadlines, meetings, or other computer-centric tasks.

Step 1: make space on the drive.

You need free space for your OSX partition to expand into. The disk utility in OSX is limited and can't resize Linux partitions, but GParted can. I downloaded Ubuntu and made a bootable USB key. Plug the key in, then reboot while holding down the option ⌥ key; you can then choose a device to boot from.
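
If you haven't made a bootable key from OSX before, here's roughly how it goes from the terminal. Treat this as a sketch: the /dev/disk2 identifier is an assumption, so double-check the output of diskutil list first – dd will happily overwrite the wrong disk.

diskutil list                                # Find the USB key's identifier (assumed below to be /dev/disk2)
diskutil unmountDisk /dev/disk2              # Unmount the key so dd can write to it
sudo dd if=ubuntu.iso of=/dev/rdisk2 bs=1m   # Write the image; the raw rdisk device is faster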

Once in Linux, my process was:

  1. Clear some space on the Linux partition beforehand, then shrink it in GParted.
  2. Move the now-smaller partition to the end of the drive.
  3. Move any other partitions (eg. OS X recovery partitions) towards the end of the drive, so there's unallocated space after the partition you want to expand. Something like this1:
A screenshot of gparted, showing some unallocated space.

GParted will let you queue up these changes and try to apply them all in one go, but that gave me some (apparently harmless) error messages. I'd recommend making the changes one at a time.

Step 2: reboot into OSX and turn off CoreStorage.

OSX uses a volume manager called CoreStorage that acts as an intermediary between the operating system and the hardware. It's a requirement for FileVault encryption, but we can't expand partitions while it's enabled. First, let's see all the CoreStorage volumes by running diskutil cs list in the terminal:

CoreStorage logical volume groups (2 found)
|
+-- Logical Volume Group UUID 9559695B-73C6-40ED-B6EB-F3DE8767058A
|   =========================================================
|   Name:         Macintosh HD
|   Status:       Online
|   Size:         249222377472 B (249.2 GB)
|   Free Space:   0 B (0 B)
|   |
|   +-< Physical Volume UUID A76BF102-C0CF-41C4-9D88-27F8BB9A180E
|   |   ----------------------------------------------------
|   |   Index:    0
|   |   Disk:     disk0s2
|   |   Status:   Online
|   |   Size:     249222377472 B (249.2 GB)
|   |
|   +-> Logical Volume Family UUID EDA455C5-3FD0-444E-B00C-F9F8F2EF88EC
|       ----------------------------------------------------------
|       Encryption Type:         AES-XTS
|       Encryption Status:       Unlocked
|       Conversion Status:       Complete
|       High Level Queries:      Fully Secure
|       |                        Passphrase Required
|       |                        Accepts New Users
|       |                        Has Visible Users
|       |                        Has Volume Key
|       |
|       +-> Logical Volume UUID 4F3C168A-F0BB-40B6-B3FF-CE94D38506AD
|           ---------------------------------------------------
|           Disk:                  disk1
|           Status:                Online
|           Size (Total):          248873222144 B (248.9 GB)
|           Revertible:            Yes (unlock and decryption required)
|           Revert Status:         Reboot required
|           LV Name:               Macintosh HD
|           Volume Name:           Macintosh HD
|           Content Hint:          Apple_HFS

The most nested entry is the logical volume with a UUID of 4F3C168A-F0BB-40B6-B3FF-CE94D38506AD. Copy that UUID and use it in diskutil cs revert <UUID>:

diskutil cs revert 4F3C168A-F0BB-40B6-B3FF-CE94D38506AD

Reverting to a regular volume takes some time, but you can check on the progress by running diskutil cs list until it shows as complete.
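
If you'd rather not re-run that by hand, a small shell loop can poll it for you. One assumption baked in here: that the Conversion Status line reads "Converting" while the revert is still running – check the wording in your own diskutil output before trusting it.

# Poll once a minute until no volume reports a conversion in progress.
while diskutil cs list | grep -q "Converting"; do sleep 60; done
echo "Revert complete"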

Step 3: Reboot, then expand the partition.

Your drive is no longer encrypted and doesn't use CoreStorage any more, so you can use Apple's Disk Utility to expand it. The 'Partition' section has a pie chart with handles you can drag. Something like this:

A screenshot of Apple's disk utility, showing the pie chart with handles.

Step 4: Reboot again, then convert your drive back to a CoreStorage partition.

Run diskutil list to see all the partitions on your Mac. The one you want to convert is probably called "Macintosh HD". Let's re-enable CoreStorage:

diskutil cs convert "Macintosh HD"

Step 5: Reboot, then re-enable FileVault.

You can re-enable this from System Preferences → Security & Privacy → FileVault. This will also prompt you to reboot, for the last time.

Getting out of trouble

Everything broke with this final reboot. On startup, the Apple logo & a progress bar appeared before being replaced with a "no entry" logo (🚫) around ⅔ of the way through. This is how a Mac says "I found something that looked like OSX, but didn't contain a valid system folder."

The short-term fix was to reboot while holding the option ⌥ key. There was only one option ("Macintosh HD") in the list, which booted fine. The permanent fix was to use "System Preferences" → "Startup disks" and ensure that "Macintosh HD" was selected.


  1. This screenshot isn't from my system, so don't worry about the lack of OSX partitions here. It's just to show the unallocated space after the first partition on the drive.  ↩

Using structs in Arduino projects

I had some trouble getting structs to work in my Arduino project. This is how I fixed my code.

My project's ultimate goal is to replace the innards of a fibre optic lamp with a custom lightshow, but it's also a chance to play around with low-level circuitry & coding1. So far, I've designed and prototyped a hardware LED controller that's driven by an Arduino. The Arduino pumps out binary to the controller; this determines which LEDs light up.

A close-up of the circuit board

Each of the 4 LEDs you see on the board is an RGB LED, meaning it's actually a package of individual red, green, and blue LEDs. My old code used numbers to choose which colour to display, so it had function signatures like these:

void turnOnLED(int colour, int led);
void bounceColour(int colour); // A 'chase' pattern across all 4 LEDs.
void fadeBetween(int startingColour, int endingColour, int duration); // Fade between two colours in `duration` milliseconds

That's fine for pure colours, but it doesn't allow for compound colours (mixes of red, green, and blue) because the controller can only turn LEDs on and off. If you want purple, for instance, you turn on the red LED for a few milliseconds, then turn it off and turn the blue one on for a few milliseconds. Repeat this over & over and persistence of vision does the rest.

I could have used a separate integer for each colour component, but a struct keeps all the information in a single variable. I also created some constants for common colours using my new struct:

struct Colour {
    byte red;
    byte green;
    byte blue;
};

const Colour C_RED = {1, 0, 0};
const Colour C_BLUE = {0, 0, 1};
const Colour C_PURPLE = {1, 0, 2}; // 2 parts blue to 1 part red.
const Colour C_COLOURS[] = {C_RED, C_BLUE, C_PURPLE};

Next, I updated my functions to take a Colour parameter instead of an int. I also changed my parameters so they were pointers to the Colour – I couldn't get my code to compile without this – and made them const, because the colour constants above are const and the functions only read from them.

void turnOnLED(const Colour* colour, int led);
void bounceColour(const Colour* colour);
void fadeBetween(const Colour* start, const Colour* ending, int duration);

The -> operator is used to access the values from the pointer arguments2:

void turnOnLED(const Colour* colour, int led) {
    float total_time = 3500;
    float total_colours = colour->red + colour->green + colour->blue;
    float timeRed = total_time * (colour->red / total_colours);
    float timeGreen = total_time * (colour->green / total_colours);
    float timeBlue = total_time * (colour->blue / total_colours);

    // Rest of function that rapidly changes between red, green, and blue removed for brevity.
}
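
For illustration, here's roughly what that removed part could look like. This is a hypothetical sketch, not my actual code: the Channel enum, the setChannel helper, the pin mapping, and the cycle length are all stand-ins for however your own hardware switches an individual colour on and off.

enum Channel { CH_RED, CH_GREEN, CH_BLUE };

// Hypothetical pin mapping: one Arduino pin per LED/colour pair.
// Assumes pinMode(pin, OUTPUT) was called for each pin in setup().
const int CHANNEL_PINS[4][3] = {
    {2, 3, 4}, {5, 6, 7}, {8, 9, 10}, {11, 12, 13}
};

// Switch one colour of one LED on or off.
void setChannel(int led, Channel channel, bool on) {
    digitalWrite(CHANNEL_PINS[led][channel], on ? HIGH : LOW);
}

// Cycle through red, green, and blue quickly enough that the eye
// blends them into a single compound colour.
void showColour(const Colour* colour, int led) {
    const unsigned long TOTAL_MS = 3500; // How long to display the colour overall
    const unsigned long CYCLE_MS = 12;   // One red/green/blue cycle; short enough for persistence of vision
    float total_colours = colour->red + colour->green + colour->blue;
    unsigned long msRed   = CYCLE_MS * (colour->red   / total_colours);
    unsigned long msGreen = CYCLE_MS * (colour->green / total_colours);
    unsigned long msBlue  = CYCLE_MS * (colour->blue  / total_colours);

    for (unsigned long elapsed = 0; elapsed < TOTAL_MS; elapsed += CYCLE_MS) {
        setChannel(led, CH_RED, true);   delay(msRed);   setChannel(led, CH_RED, false);
        setChannel(led, CH_GREEN, true); delay(msGreen); setChannel(led, CH_GREEN, false);
        setChannel(led, CH_BLUE, true);  delay(msBlue);  setChannel(led, CH_BLUE, false);
    }
}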

Now that the functions take a pointer to a Colour, I can change the calls to them to pass the address of a colour (using the & operator):

void loop() {
    bounceColour(&C_RED);
    bounceColour(&C_COLOURS[random(0, 3)]);
}

The final piece of the puzzle is to work around some limitations in the Arduino IDE. The IDE preprocesses your code before passing it to the compiler. One of its transformations is to generate function prototypes for your code – but it doesn't get it right for functions that use custom types, so you have to define them yourself. The docs recommend you add them to a header file, but if you've only got a few then you can add them directly to your sketch. I added my function signatures below the struct Colour definition.
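
Concretely, the top of the sketch ends up in this order:

struct Colour {
    byte red;
    byte green;
    byte blue;
};

// Hand-written prototypes for every function that takes or returns a Colour,
// placed immediately after the struct definition:
void turnOnLED(const Colour* colour, int led);
void bounceColour(const Colour* colour);
void fadeBetween(const Colour* start, const Colour* ending, int duration);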

To summarise:

  1. Declare your struct at the top of your file.
  2. Update your functions (or add overloads) so they take a pointer to your new struct. Remember to use the & operator when calling the new functions!
  3. Add function prototypes immediately after your struct definition for functions that take your struct as a parameter (or return the struct).
  4. Use the -> operator to access the properties in your struct.

An aside on understanding pointers

Pointers are a straightforward concept – "a variable that holds the address of a value, rather than holding the value directly" – but they're really challenging to fully understand and use.

One trick that helped me was pronouncing the * in variable declarations as 'a pointer to'3, and pronouncing & as "the address of". So turnOnLED(const Colour* colour, int led) is read aloud as "a function turn-on-LED that takes a pointer to a constant Colour and an integer". Or, consider the call to bounceColour here:

void loop() {
    bounceColour(&C_RED);
}

I say this as "call bounce-colour, and pass it the address of C_RED."

Another thing that helped was grasping that * means different things in declarations & usage. In declarations, * means "this is a pointer":

int* avocado; // Define a variable 'avocado' containing a pointer to an integer

But when using a variable, * means dereference: follow this pointer and use the thing it's pointing at.

// Define a couple of numbers, and a pointer to an integer
int x = 3;
int y = 8;
int* num;
num = &x; // "num equals the address of x"; it's now a pointer to x.
*num = *num + 4; // "Follow num and take the value it points to, add 4 to it, then store the result back in the slot num points to."

After this code, x is equal to 7, and *num is equal to 7 – they're both ways to access the same section of memory. std::cout << num will print the memory address (eg. "0x1234ABCD"); std::cout << *num will print the value "7".

This difference is why I write declarations as int* foo (and not int *foo). Keeping the asterisk next to the type emphasises that it's part of the type, not part of the name. foo is a pointer to an integer; there's no variable named *foo in this program. I find it a useful reminder (as does Bjarne Stroustrup), but it can trip you up if you declare multiple variables on one line:

int* bell, book, candle; // Declares one pointer to an int, and 2 ints
int *blinky, inky, clyde; // Also declares one pointer to an int, and 2 ints

int* bell, *book, *candle; // 3 pointers, but looks messy
int *blinky, *inky, *clyde; // Also 3 pointers, with a consistent style

Personally, I think declaring multiple variables on one line is best avoided. It's hard to tell if it's clever code or a subtle bug with declarations like that. But like all code formatting choices, it's down to individual taste.


  1. There are much better ways to make your own lighting projects if you don't have an interest in the low-level details. Neopixels, for instance, are reasonably priced, easier to extend, and easier to code for.  ↩

  2. -> is used because the argument is a pointer. If we were passing a Colour directly, we'd use the . operator instead (eg. colour.red).  ↩

  3. I preferred the word 'reference' instead of 'pointer' when I was learning C, but C++ has a references feature that makes that confusing now.  ↩

The Friend Multiplier

The second in an occasional series about product design heuristics. The first article was about mutual benefits.

Another heuristic that's useful in product design is something I call "The Friend Multiplier". There are two parts to it, and both are requirements:

  1. Your product must be useful on its own.
  2. Your product must be n times more useful if a user's friends use it too.

It has a powerful effect when your product is inordinately more useful (or more compelling, fun, deep, etc) when a user's friends also use it. Done right, it feels like an entirely new dimension opening up: a new world of possibilities, not a couple of new features1. It's important that the benefits are genuine (there's nothing magical about an artificial barrier of "Wait 4 hours, or invite 5 friends to continue"), and that the friends are a user's real friends2 – people who would have a coffee with your user, given the chance.

But don't overlook the first part: few products can get away with being useless for solo users. Users are more likely to stick around if it's easy for them to start using your product3, and everybody wants to try something before recommending it to their friends. And what if they're not your ideal user? What if they don't live in a big city, or lack early-adopter friends? What if they've got a 4-year-old phone or a 10-year-old laptop? Will your product still work well for them? Millions of people fall into this demographic and they're often neglected, especially by tech startups. They might not be your primary audience, but if you improve their experience then your product will be better for everybody.

But the multiplier is the magic part: it's the signal that your product is on track, and that word-of-mouth growth is plausible.

Some examples of friend multiplier magic

I really like FourSquare as an example of the friend multiplier, despite their recent disoriented product decisions. You use FourSquare to log where you've been4 and it recommends new places to you. That's mildly useful & fun on its own, but the day FourSquare really clicked for me is burnt into my brain. I was in a café that I'd never visited before and I checked in on FourSquare. FourSquare popped up a 2-year-old note from a friend, giving me the wifi password. Magical! My friend hadn't left it for me; it was a generic tip. But FourSquare knew we were friends, and knew that my friend had left a note here, so it showed me the note when I checked in. There's no wizardry in the technology powering it, but the effect was profound. I was now aware of hidden traces left by my friends; traces I could stumble upon, seek out, or leave for others.

LiveJournal is a simpler example. LiveJournal users post diary entries to the web, either publicly or privately. Which is useful in itself, if you want to keep a diary. But when your friends are on LiveJournal too, you get an intimate window into their lives. LJ's fine-grained privacy settings mean your friends have control over what they're sharing – but you'll know more about how they feel, what's on their mind, and what they're doing. I never had a LiveJournal, but a lot of my university friends did. I always had a sense that there was an inner circle – a deeper, mutually supportive, more emotionally bonded group – that I was outside of. Again, there was no groundbreaking technology involved. But the social impact on individuals was huge.

It won't last forever

It's important to have a target number in mind for your ideal user's friends. The magical feeling normally ebbs away past a (usually low) threshold. Twitter's a rare example with a high threshold: it's better when you follow a few hundred people. Most products have more in common with a shared calendar: once more than a handful of people use one, the deluge of events makes it hard to find the ones relevant to you, and the calendar starts to feel like a mess. So pick a low number & design with that number in mind.

Some counterexamples

This heuristic isn't a law. Lots of products thrive without any kind of social features, and (as before) there are lots of products where it's best avoided. Sometimes that's immediately apparent: I don't want my friends involved in my banking or my to-do list. I think it's best to keep them away from dating & relationships, too; Tinder's "friends you have in common" feature always felt like a warning about potential future awkwardness, rather than an endorsement.

There are also fuzzier cases. Features that seem like a good idea can fall apart like wet cake once the complications of real life get involved. Windows 10 has a feature where your friends can log on automatically to your wifi. Which sounds great – your computer already knows who your friends are, so why not let them onto your wifi without having to ask for the password? Ah, but the details: your computer doesn't know about your friends. It knows about your contacts. Contacts include friends, family, colleagues, ex-lovers, stalkers, businesses, abusers, toxic people you've cut out of your life – a whole lot of people you'd want kept away from your wifi network.


The friend multiplier is a simple but effective guideline. It's not a requirement – if it doesn't make sense for your product, don't worry about it. But in an increasingly connected world, it's worth thinking about how your users' friendships could make your product much more compelling.


  1. A "solo" vs "with friends" feature comparison would be missing the point, anyway: product design is about what your product lets users accomplish, not the bullet points you can list.  ↩

  2. "Real" isn't the same as "real life"; internet-only friends count for this too. It's about genuine, two-way friendships – not people followed because a user likes their jokes, or stuff they make.  ↩

  3. Or service – especially services. If it's easy for someone to start using your service it's more likely they'll find value in it, so it's less likely they'll cancel. Fewer cancellations mean a lower churn rate, and a lower churn rate makes it easier to build a viable business.  ↩

  4. Which is interesting in itself; "Wow, have I really not been here for a year and a half? Really?"  ↩