Recently we have been working on Guardian ‘Glassware’ for Google Glass in time for the UK release of the device. It’s been an incredibly exciting challenge to develop for this platform, and to explore how Guardian content might best be served on wearable technology.
Most of the user experience for Glass is controlled by the timeline, a collection of all activity and notifications. Items in the timeline are called cards, and these are the main user interface components for Glass. Third-party applications can populate the timeline with cards, which appear in the user's timeline in the order they were sent. Cards also have menu items, which allow users to carry out actions on the cards. For example, 'share' gives users the ability to share the card with their contacts.
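As a rough illustration, a card is just a JSON timeline item; the sketch below builds one with built-in 'share' and 'read aloud' menu items using the Mirror API's field names (`html`, `speakableText`, `menuItems`). The headline and standfirst here are invented examples.

```python
def build_news_card(headline, standfirst):
    """Return a Mirror API timeline item dict for a single news card."""
    return {
        # Glass renders `html` cards with large text; keep the copy short.
        "html": ("<article>"
                 f"<h1>{headline}</h1>"
                 f"<p>{standfirst}</p>"
                 "</article>"),
        "speakableText": standfirst,           # spoken for 'read aloud'
        "menuItems": [
            {"action": "SHARE"},               # built-in share action
            {"action": "READ_ALOUD"},
        ],
        "notification": {"level": "DEFAULT"},  # chime when the card arrives
    }

card = build_news_card("Glass arrives in the UK",
                       "A short synopsis Glass can read aloud.")
```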

One of the challenges of developing for Glass was thinking of ways to best present Guardian content on this type of device. The UI for Glass best suits large text, images or videos – anything much longer than a headline is usually uncomfortable to read on Glass. The Glassware we developed sends a selection of cards containing a summary of the latest news to the user's timeline at regular intervals. It also sends notifications of breaking news. To give users the ability to access the contents of the article we implemented the 'read later' action, which stores the URL of the article to the user's Guardian Glass homepage. Users also have the option of sharing an item and having Glass read a short synopsis of the story.
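In the Mirror API's format, an action like 'read later' is declared as a CUSTOM menu item on the card. The sketch below shows one way that might look; the `"read-later"` id and the icon URL are invented for illustration, and the id we choose is echoed back to us when the user selects the item.

```python
def with_read_later(item):
    """Attach a hypothetical custom 'Read later' menu item to a card."""
    item.setdefault("menuItems", []).append({
        "action": "CUSTOM",
        "id": "read-later",  # our own id, echoed back in the callback
        "values": [{
            "displayName": "Read later",
            # Placeholder icon URL for illustration only.
            "iconUrl": "https://example.com/icons/read-later.png",
        }],
    })
    return item

item = with_read_later({"text": "Example headline"})
```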
There are two ways to develop 'Glassware' - Google's term for software running on Glass. One is using the Glass Development Kit (GDK), which lets you build Glassware that runs directly on the Glass device. The other is with the Mirror API, which allows you to build web-based service applications that interact with Glass without running code on the device. This works by first writing an authentication flow which obtains permission to write to a user's Glass timeline when the user installs the Glassware. Then any content you post to this API is synced with the user's Glass device. This is the approach we took because it meant we could reuse code from existing applications and it did not require any previous experience of developing for Android. This sample application provides examples of how you might interact with the Mirror API, and using this in combination with the Guardian's Content API, we were able to rapidly produce the first iteration of Guardian Glassware.
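At the HTTP level, posting a card is a single authorised POST of the timeline item to the Mirror API's timeline endpoint. The sketch below builds (but does not send) that request with the standard library; the access token string is a made-up placeholder standing in for the OAuth token obtained during installation.

```python
import json
import urllib.request

# The Mirror API endpoint that inserts items into a user's timeline.
MIRROR_TIMELINE = "https://www.googleapis.com/mirror/v1/timeline"

def timeline_insert_request(access_token, item):
    """Build (but do not send) the HTTP request that posts a card."""
    return urllib.request.Request(
        MIRROR_TIMELINE,
        data=json.dumps(item).encode("utf-8"),
        headers={
            # Token obtained via the OAuth install flow (placeholder here).
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = timeline_insert_request("placeholder-token",
                              {"text": "Breaking news headline"})
```

Once sent, Mirror takes care of syncing the inserted item to the user's device; your service never talks to the hardware directly.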

Google offers the ability for developers to define custom actions on cards, which we used for the 'read later' functionality. Custom actions can be implemented by subscribing to the user's timeline and listening to events. As a user interacts with the Glassware, Mirror sends information about that interaction back to the Glassware, which can then perform logic depending on which interaction was chosen, and possibly update the contents of the card. This opens up many possibilities for two-way interactions between the Glassware and the user.
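A subscription callback receives a JSON notification whose `userActions` list identifies which menu item was tapped; for CUSTOM actions the payload carries the id the Glassware declared. Below is a minimal sketch of dispatching on that payload, assuming a custom item with the invented id `"read-later"` and a caller-supplied `save_for_later` function:

```python
def handle_timeline_notification(notification, save_for_later):
    """Dispatch a Mirror subscription ping for our custom menu item."""
    for action in notification.get("userActions", []):
        # CUSTOM actions echo the menu item id we declared on the card.
        if action.get("type") == "CUSTOM" and action.get("payload") == "read-later":
            save_for_later(notification["userToken"], notification["itemId"])
            return True
    return False

# Example notification shaped like a Mirror subscription callback body.
saved = []
handled = handle_timeline_notification(
    {"collection": "timeline",
     "itemId": "item-123",
     "userToken": "user-42",
     "userActions": [{"type": "CUSTOM", "payload": "read-later"}]},
    lambda user, item: saved.append((user, item)),
)
```

The same handler is also where you might update the card's contents in response, completing the two-way interaction.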
Most of our development work needed to be tested on Glass itself. Google has developed a Mirror API Playground which lets you test out how your cards would be displayed on Glass and provides example templates, but this isn't sufficient to use as an emulator. Wearing the device around the office sparked a few conversations and demo requests. At first it felt quite strange to use, but it was surprising how quickly the swiping mechanism for accessing the timeline became second nature.
It’s been a fantastic experience to work with Google Glass and discover new ways of presenting Guardian content on this new platform. The scope for wearable technology is massive, and it will be interesting to further develop news consumption on these devices.