I remember the day Google invited me to their Glass Explorer Program. Wincing a bit as I clicked the submit order button on the web portal that streamed a sizable sum of money from my bank account to Google, I wondered what I had gotten myself into. I’d heard the stories of people using Glass at inappropriate times. I’d seen the videos on YouTube of pointless things captured from Glass. I’d heard the promises of augmented reality and real-time data displayed for everyday tasks. To be honest, I really didn’t know what to expect.
Expecting an agonizing wait for my device, I was pleasantly surprised when Google overnighted my headset. With all the subtlety of an 8-year-old on Christmas morning, I tore into the packaging and had the headset powered on, linked to my Google account, and running within minutes. After some adjustments, gestures, and voice commands, I hit the first major question every Glass Explorer has – now what?
The answer must be apps. With cool technology the answer is always apps, right? I opened up the Glassware store and started enabling every app that sounded even remotely interesting; it was time to see what Glass could do. After a short time, I hit the next series of questions that Glass Explorers run into: Are all the apps for Glass the same as apps for my mobile phone? Where is the benefit? What did I get myself into?
I am by nature a tinkerer and a technologist. I had the various dev kits downloaded for Glass within a few hours of its arrival; I needed to know why the apps for Glass work the way they do. It was during this exploration that I discovered what so many Glass developers, pundits, and detractors commonly miss: Glass is not a smartphone – in fact, it is a pretty poor replacement for a smartphone. It has a separate set of features and abilities that, when leveraged, can create incredibly innovative applications. Glass can deliver relevant, timely, and focused information when and where you need it. You can interact with Glass hands-free. Glass can be your heads-up display to the world.
Glass isn’t the first technology to deliver this ability; the aviation world has been using something similar for a very long time, and the parallels are pretty obvious. Let’s look at the cockpit of an F/A-18 E/F Super Hornet:
All of the buttons, switches, dials, and gauges in the lower part of the cockpit allow for complete control of the aircraft. There are literally thousands of options and abilities represented here. Think of this portion of the cockpit as your smartphone: a superset of options and features. You can fly the aircraft using just the controls shown here. Now look up to the top portion of the image, the Heads-Up Display (HUD). When a pilot flies, the HUD displays a very small but very relevant set of information. The pilot doesn’t have to take their eyes off the task at hand to get the info they need to do their job. The information displayed in the HUD changes depending on the pilot’s situation: takeoff, landing, combat, etc. Google Glass is a HUD: a subset of data and controls presented at the right time and in the right context.
Once I understood the context in which to use Glass, I saw all kinds of possibilities for killer apps. I probably drove the team at ReelDx crazy going over the ways we could leverage Glass in our workflow and with our clients. It didn’t take long before we started development of our own prototype Glass software. But how and where does Glass fit into the ReelDx workflow? How do we make sure we build a HUD and not a whole cockpit?