Research Spotlight: LumiWatch and SmartMat

Two cool research projects I want to mention before I forget, lumped together here:

The LumiWatch is a new projector smartwatch prototype from Carnegie Mellon University. Unsurprisingly, Chris Harrison is involved; he's been working on similar concepts (cf. Skin Buttons, SkinTrack, OmniTouch) for quite some time now. In a sense, the LumiWatch brings a lot of these earlier concepts together.

Then there’s Microsoft’s Project Zanzibar smart mat, a new hardware device enabling tangible interaction through a combination of capacitive sensing for touch input and NFC for object detection and recognition. I’m reminded of Microsoft’s tremendous earlier output in multitouch and tangible interaction research within its Surface research group, before the Surface name was appropriated for its lineup of consumer products.

Saving lives, one second at a time

This is an interesting example of how small efficiency gains, multiplied across a massive scale, reap huge results:

[If each iOS device] would be unlocked using a 4-digit PIN, the time to bring them into use would be about 2 seconds. Expanding to a 6-digit PIN would probably increase that to perhaps 2.5 seconds (accounting also for failures due to input errors.)

[…]

It turns out that, based on the installed base numbers, moving to the more secure 6-digit code would add 2.8 billion hours to the total time to unlock the world’s iPhones and iPads. That’s 321,000 years of waiting added for every year of use.

Fortunately we got Touch ID to replace PIN entry and the time to unlock the iPhone/iPad has decreased to perhaps 1 second, saving 5.6 billion hours of unlock time vs. 4-digit PIN.
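
Just to make the scale tangible, here's that back-of-the-envelope arithmetic as a quick Python sketch. The installed base and unlock frequency are my own rough assumptions, chosen to land near the quoted figures:

```python
# Rough reconstruction of the unlock-time arithmetic above.
# Installed base and unlock frequency are assumptions, not data.
DEVICES = 1_000_000_000      # assumed active iPhones/iPads worldwide
UNLOCKS_PER_DAY = 55         # assumed unlocks per device per day
EXTRA_SECONDS = 0.5          # 6-digit PIN (~2.5 s) minus 4-digit PIN (~2 s)

extra_seconds = DEVICES * UNLOCKS_PER_DAY * EXTRA_SECONDS * 365
extra_hours = extra_seconds / 3600
extra_years = extra_hours / (24 * 365)

print(f"{extra_hours / 1e9:.1f} billion hours per year")  # ~2.8
print(f"{extra_years:,.0f} years of waiting per year")    # ~320,000
```

Whatever the exact inputs, the shape of the result is the same: half a second per unlock, multiplied across a billion devices, adds up to hundreds of thousands of years.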

Reminds me of an interview with Larry Tesler in Dan Saffer’s Designing for Interaction, where he said the following:

If a million users each waste a minute a day dealing with complexity that an engineer could have eliminated in a week by making the software a little more complex, you are penalizing the user to make the engineer’s job easier.

I guess it’s no coincidence that Larry Tesler was an early Apple employee, back when Steve Jobs argued that decreasing Macintosh boot times would save lives.

The Hawaii Missile Alert Incident

A few weeks ago, there was a false state-wide missile alert broadcast in Hawaii. You’ve probably heard about it, because it was all over the news and the event even has its own Wikipedia page by now.

While the incident was certainly unpleasant for Hawaiians, and particularly for the poor man responsible for the broadcast, it is a stroke of luck for me. I’m always on the look-out for poor user interface design leading to catastrophe, and this event is about as good an example as it gets. In the days following the incident there was ample discussion of the user interface for sending out the broadcast (which turned out to be surprisingly difficult to reliably pin down) and how to prevent such mistakes in the future, e.g. from The Verge, Ars Technica, Jason Kottke and Nick Heer, all of which are worth a read.

As I said, I’m always looking for examples of poor user interface design, and finding them is surprisingly difficult; it seems to have become even more difficult over time. By and large, the importance of good usability and user interface design is now so commonly accepted that nearly any application with a sizeable audience is well designed. While that’s obviously a very good thing for most people, it makes it quite difficult for me to find examples illustrating bad design and its consequences.

The Terrible State of USB-C

Marco Arment recently posted about the myriad frustrations surrounding USB-C:

I love the idea of USB-C: one port and one cable that can replace all other ports and cables. It sounds so simple, straightforward, and unified.

In practice, it’s not even close.

As he points out, USB-C ports can vary drastically in their capabilities, and those capabilities are impossible to tell apart by sight. For example, USB-C can carry data via both the USB and Thunderbolt protocols, but nothing about a port or cable tells you which protocols it supports. You can sometimes use it to charge your laptop, but there are different power delivery standards and, again, it’s hard to tell which of those are supported. Microsoft cited this confusing mess of supported standards and capabilities as one of the reasons for not including any USB-C ports on the Surface Laptop:

Kyriacou points out many of the issues anybody who’s used USB-C has run into. “What happened with USB-C is the cables look identical, but they start to have vastly different capabilities. So even someone in the know, confusion starts to set in,” he argues. Some cables support 3 amps, some 5, some Thunderbolt, some not.

Microsoft has since gone on to include a USB-C port in their new Surface Book 2 (one that doesn’t support Thunderbolt, in case you were wondering), but the problems surrounding USB-C haven’t been solved.
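
To make concrete just how many independent dimensions hide behind that single connector shape, here's a small illustrative sketch. The capability axes and example values are my own simplification, not an exhaustive model of the spec:

```python
from dataclasses import dataclass

# A few of the independent capability axes that can differ between
# two identical-looking USB-C ports (simplified for illustration).
@dataclass
class UsbCPort:
    data_protocol: str       # e.g. "USB 2.0", "USB 3.1 Gen 2", "Thunderbolt 3"
    max_power_watts: int     # power delivery ceiling, e.g. 15, 60, 100
    displayport_alt_mode: bool
    charges_host: bool       # can this port charge the laptop itself?

# Two ports that look exactly the same from the outside:
laptop_port = UsbCPort("Thunderbolt 3", 100, True, True)
phone_port = UsbCPort("USB 2.0", 15, False, False)

# Nothing on the connector itself tells you which one you're holding.
for port in (laptop_port, phone_port):
    print(port)
```

Every one of those fields can vary independently, and none of the variation is visible from the outside.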

This mess reminds me of an old desktop PC I had in the 90s: it featured two PS/2 ports on the back, one for plugging in a mouse, the other for a keyboard. Problem was, those two ports looked completely identical, so you never knew which cable to plug into which port without resorting to trial and error. Some PCs back then solved this problem with iconography or color coding, but mine didn’t.

This was bad design 20 years ago, and it’s still bad design today. The design principle of consistency states that things that are the same should look the same, but the inverse is also true: things that are different should look different. USB-C is clearly in violation of this principle. The only solution to this problem that comes to mind is for USB-C to support a uniform and consistent set of features. Unfortunately this will certainly take some time to come to fruition. In the meantime I’m sticking with USB-A.

Deliberately Horrible UI Design

A few months ago, after Stelian Firez shared a horrible web form for entering phone numbers, Twitter users responded with a barrage of deliberately horrible and obtuse design alternatives, seemingly in search of the worst option possible. Many of these are collected and archived here.

Then, more recently, Reddit took up the task of designing the worst possible volume sliders.

Hilarity aside, there are certainly lessons to be learned in trying to design the worst possible UI, as well as in studying these fine examples.

How an edge-to-edge screen could change the iPhone’s UI

With new iPhones almost upon us, it’s that time of the year when iPhone rumors and speculation are everywhere. It is pretty much accepted as fact that we will see three new iPhones this year: two based on the familiar iPhone 7 design, and one sporting a completely new design with minimal bezels and an edge-to-edge display.

Allen Pike had some interesting ideas about how such a display could shake up the default screen layout of iPhone apps: he thinks that a lot more functionality, as well as basic navigation, will move to the bottom of the screen, maybe like this.

Max Rudberg also picked up on the idea and created a few more mockups to illustrate the possibilities:

[Image: iphone-pro-ui.png, Max Rudberg's edge-to-edge iPhone UI mockups]

As an aside: It’s kinda weird that I still care about this now that I no longer personally use an iPhone. I guess it’s hard to escape the pervasive excitement surrounding a new iPhone design.

Remembering the First iPhone

I don’t remember much about the original iPhone announcement, back in January 2007. I’m sure it was a momentous keynote and I was thoroughly impressed at the time, but as I said, I don’t remember much of it today.

I do, however, remember the first time I held an iPhone in my hands and experienced it in the flesh. I was visiting a mobile technology research group in Vienna, and they had two new touchscreen devices on hand to try out: the original, first-generation Apple iPhone and the LG Prada phone. Superficially the two devices were similar, just as today every modern smartphone is similar to every other: a huge, high-quality capacitive touchscreen and no physical hardware keyboard (then still a common fixture on phones).

At first glance the LG Prada phone almost seemed preferable to me, with its elegant, consistent and more restrained visual design language. But when I picked up both phones and started playing around with them, the superiority of the iPhone became immediately obvious: the way it reacted to touches, the immediacy and fluidity of interaction, was staggering and unlike anything I had ever experienced in a phone before. At that moment it was obvious to me that Apple had created something in a league of its own, something entirely new, something that defied superficial comparison with other phones on the market. This was the future of smartphones.

I never bought the original iPhone, owing to its limited distribution here in Austria and its lack of 3G connectivity, but I picked up its successor, the iPhone 3G, as soon as it became available.

Lightform: Augmenting Reality Through Projection

Lightform is an interesting little device: it automates the scanning required for full-room projection mapping, so you can hook it up to a projector and project interfaces anywhere in the room.
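
As a toy illustration of what projection mapping means computationally: for a flat surface, the core operation is warping projector pixels through a 3×3 homography so content lands where it should on the wall. The matrix values below are made up; a device like Lightform derives the real ones automatically from its camera:

```python
import numpy as np

# Made-up example homography mapping projector pixels to surface
# coordinates; in practice it comes from automatic calibration.
H = np.array([
    [ 1.02, 0.05, 12.0],
    [-0.03, 0.98,  7.0],
    [ 1e-4, 2e-4,  1.0],
])

def map_point(H, x, y):
    """Warp a projector pixel (x, y) into surface coordinates."""
    px, py, pw = H @ np.array([x, y, 1.0])
    return px / pw, py / pw  # perspective divide

print(map_point(H, 100, 200))
```

Non-planar surfaces need a per-pixel mapping rather than a single matrix, which is why the device scans the scene first.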

The device itself looks a little bit like a Kinect, and the whole concept is reminiscent of Microsoft’s RoomAlive, IllumiRoom and LightSpace research projects, which isn’t entirely surprising considering that Lightform was developed by former Microsoft researcher Brett Jones, who worked on the IllumiRoom project.

I rather like the idea of augmenting the real world around us with projections, because in a way it turns the traditional idea of augmented reality through head-worn displays on its head. Instead of individual, private augmented realities you get a shared, public, consensus augmented reality. Kinda like the difference between smartphones and large TVs, I suppose.

You can read more about Lightform at Wired and The Verge, and for another take on automated projection mapping see Razer’s Ariana.

JSON Feed

Brent Simmons (creator of NetNewsWire, my favorite feed reader for many years) and Manton Reece recently introduced JSON Feed, a JSON-based alternative format to RSS and Atom. Dave Winer, the inventor of RSS (some people might argue about this claim, but not me), also shared his reaction. I think it’s safe to assume that he’s not a big fan of new, additional formats in general, and there are certainly good reasons for that. Of note, Dave Winer already proposed a JSON-based RSS format back in 2012, but it never took off.

Nevertheless, I’m happy this exists. I went looking for a JSON-based alternative to RSS a few years ago and was surprised that there weren’t any. JSON has replaced XML as developers’ choice for APIs and data exchange on the web, and in my personal experience it is much nicer to work with than XML. Let’s just hope it gains some traction, but early signs are looking good.
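
For a sense of how simple the format is, here's a minimal feed built with nothing but Python's standard library. The required field names follow the JSON Feed version 1 spec; the titles and URLs are placeholders:

```python
import json

# A minimal JSON Feed (version 1). "version", "title" and "items"
# are required at the top level; each item needs an "id" plus
# either "content_text" or "content_html".
feed = {
    "version": "https://jsonfeed.org/version/1",
    "title": "Example Blog",
    "home_page_url": "https://example.com/",
    "feed_url": "https://example.com/feed.json",
    "items": [
        {
            "id": "https://example.com/posts/1",       # must be unique
            "url": "https://example.com/posts/1",
            "title": "Hello, JSON Feed",
            "content_text": "The first post.",
            "date_published": "2017-05-17T12:00:00Z",  # RFC 3339
        }
    ],
}

print(json.dumps(feed, indent=2))
```

Compare that to the ceremony of a minimal RSS or Atom document and the appeal is obvious.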

Opinions on the MacBook Touch Bar are divided

Following up on that previous post about buttons: Steven Troughton-Smith’s Twitter poll, with nearly three thousand votes, shows opinions on the new MacBook Pro Touch Bar to be divided.

Michael Lopp probably isn’t among its fans:

In week #3 of actively using the 15” MacBook Pro, I am delighted by its build quality. I love its weight. Last night, I found myself admiring the machining of the aluminum notch that allows me to open the computer. I type deftly on this hardware.

I am also equally deft at randomly muting my music, unintentionally changing my brightness or volume level, and jarringly engaging Siri.