I, Glasshole: My Year With Google Glass

Where can you wear wearables?

It is pretty great when you are on the road — as long as you are not around other people, or do not care when they think you’re a knob.

When I wear it at work, co-workers sometimes call me an asshole. My co-workers at WIRED, where we’re bravely facing the future, find it weird. People stop by and cyber-bully me at my standing treadmill desk.

Do you know what it takes to get a professional nerd to call you a nerd? I do. (Hint: It’s Glass.)

I, Glasshole: My Year With Google Glass | Wired.com.

Wolfram Language Demo

Stephen Wolfram introduces the Wolfram Language – a “general-purpose knowledge-based language”. It looks very impressive, but then again, most demos do until you finally get your hands on the actual product. Nevertheless, I would have loved this as a kid learning to program. Tight and responsive feedback loops, lots of capabilities out of the box, immediate results – the very things so often missing from contemporary programming environments. In a way it reminds me of what Bret Victor described in his essay on Learnable Programming.

I don’t think we want our bodies to be UIs. 

The assumption driving these kinds of design speculations is that if you embed the interface–the control surface for a technology–into our own bodily envelope, that interface will “disappear”: the technology will cease to be a separate “thing” and simply become part of that envelope. The trouble is that unlike technology, your body isn’t something you “interface” with in the first place. You’re not a little homunculus “in” your body, “driving” it around, looking out Terminator-style “through” your eyes. Your body isn’t a tool for delivering your experience: it is your experience. Merging the body with a technological control surface doesn’t magically transform the act of manipulating that surface into bodily experience.

Your Body Does Not Want to Be an Interface | MIT Technology Review.

Social Web of Things

LG introduced a line of new smart appliances at CES that you can text and chat with. The idea is to allow people to communicate with their home appliances in natural language through well-established and understood communication channels.

While my initial gut reaction was to dismiss the idea as silly internet fridges for the social media age, the more I think about it, the harder it is to shake the feeling that there might just be something there. Lots of people seem to really like text messaging as a means of communication, and I don't see why that predilection wouldn't carry over from communicating with people to communicating with machines.

While natural language interaction is still a bit of a novelty, it has gained traction in recent years (in no small part thanks to Siri). We still don't have a very good idea of how we're supposed to control and interact with a growing number of smart, connected devices in our homes. For a long time it seemed to me that both researchers and developers favored the intelligent, autonomous agent model, in which smart devices adapt to their owners' needs on their own, as if they could somehow magically read their minds. I never really bought into this particular vision, because autonomous software agents tend to cause more trouble than they're worth as soon as anything goes wrong, and things inevitably do go wrong from time to time.

In addition, most existing approaches are limited in interoperability, each isolated in its manufacturer's service silo (what Jean-Louis Gassée recently called the "basket of remotes" problem). Using a reasonably open and widespread communication channel such as text messages, with natural language interaction substituting for rigid, proprietary, and undocumented protocols, could solve this problem.

The lowly text message of yesteryear as the glue of tomorrow’s Internet of Things, quite a thought.
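To make the idea concrete, here is a toy sketch of how free-form text messages might be mapped onto structured appliance commands. Everything in it is hypothetical: the appliance names, the command identifiers, and the crude pattern matching are illustrative stand-ins, nowhere near the natural-language processing an actual service like LG's would need.

```python
import re

# Hypothetical registry mapping text-message patterns to canonical
# appliance commands. The patterns and command names are made up for
# illustration; a real system would use proper NLP, not regexes.
COMMANDS = [
    (re.compile(r"\bset .*fridge.* to (\d+)\b"), "fridge.set_temperature"),
    (re.compile(r"\bstart .*wash\b"), "washer.start_cycle"),
    (re.compile(r"\bis the oven (on|off)\b"), "oven.query_state"),
]

def parse_message(text: str):
    """Map a free-form text message to a (command, args) pair."""
    normalized = text.lower().strip()
    for pattern, command in COMMANDS:
        match = pattern.search(normalized)
        if match:
            return command, match.groups()
    # No match: the appliance could reply with a clarifying question,
    # which is exactly the kind of back-and-forth texting affords.
    return None, ()

command, args = parse_message("Please set the fridge to 4 degrees")
# command == "fridge.set_temperature", args == ("4",)
```

The interesting property is that the "protocol" layer here is just plain text over an existing channel, so any device that can receive a message can participate, regardless of manufacturer.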

Anyway, I would be remiss not to mention that Ericsson has been doing some interesting research in this area for the past few years, on what they call a social web of things.

The Future Mundane

As a counter to the fantasy-laden future worlds generated by our industry, I’d like to propose a design approach which I call ‘The Future Mundane.’ The approach consists of three major elements, which I will outline below.

  1. The Future Mundane is filled with background talent.
  2. The Future Mundane is an accretive space.
  3. The Future Mundane is a partly broken space.

Source: The Future Mundane by Nick Foster.

Learning to See

A useful discussion of functional and formal design dimensions, which are often conflated when talking about design:

There is plenty of good design that is ugly, and of course there’s good design that both works well and looks pretty. But a design that doesn’t work can never be substantially good — ugly and broken is just worthless crap, and pretty and broken is phony or kitsch.

Source: Learning to See by Oliver Reichenstein.

We’re surrounded by objects and systems that are too big or too opaque to understand — everything from the global banking system, to the Edgerank algorithm Facebook uses to order your newsfeed [...] And the effect of this alienation is felt subtly: I believe it means we can never build a good mental model of the technologies we use. We’re constantly having our expectations slightly violated, we feel a little itchy, like we don’t fit comfortably in our own world.

Matt Webb in The Virtual Haircut That Could Change the World | Wired Design.

