Blocking JavaScript

Brent Simmons recently wrote about the two features he wants most in a web browser: the ability to block cookies and JavaScript by default and to whitelist them on a case-by-case basis.

To which Nick Heer added the following observation:

When you think about it, it’s pretty nuts that we allow the automatic execution of whatever code a web developer wrote. We don’t do that for anything else, really — certainly not to the same extent of possibly hundreds of webpages visited daily, each carrying a dozen or more scripts.

Which, if you put it like that, makes a lot of sense. So I’ve been trying (and struggling) to approximate the desired behavior of a JavaScript whitelist recently – there are browser add-ons and extensions for that, but most of them are not very good (or not what I had in mind). The one I liked best after trying several, large and small alike, is the generically named Javascript Control for Firefox by Erwan Ameil. Maybe it works for you, too. Or maybe I’ll give up on this little experiment within a week, once I realize that today’s web doesn’t function without client-side JavaScript.

❤️ Palm

The Palm Pre launched ten years ago today (as I was helpfully reminded by The Verge’s eminent Palm enthusiast Dieter Bohn), which gives me the opportunity to dig out another article by Dieter Bohn that I’ve always meant to post here but never got around to: What the iPhone X borrowed from the Palm Pre.

I have a soft spot in my heart for Palm – not just the new Palm webOS, but the original Palm OS as well. Back in 2007, when I first laid hands on the iPhone, I was struck by its similarity to Palm OS (especially the application launcher), and I do believe that our current smartphone platforms owe some debt to the work of Palm.

50 years on from the Mother of All Demos

Fifty years ago today, on December 9th, 1968, Douglas Engelbart presented the Mother of All Demos, introducing the oN-Line System (NLS) to the world and, with it, then-novel concepts such as the computer mouse, hypertext, and remote collaborative document editing. In my humble opinion, it is probably the most important moment in 20th-century HCI history.

Watching it today, it is astonishing how seemingly simple and (nowadays) familiar technologies such as the computer mouse had to be explained from the ground up back then. If you have an hour and a half to spare, spend some time (re-)watching the event (now available on YouTube, but still also hosted at Stanford), or read about its importance and legacy at Wired or Ars Technica. Wired also shared a fascinating look at how they pulled it all off.

(Previously)

Research Spotlight: LumiWatch and SmartMat

Two cool research projects I wanted to mention here before I forget, lumped together in a single post:

The LumiWatch is a new projector smartwatch prototype from Carnegie Mellon University. Unsurprisingly, Chris Harrison is involved; he has been working on similar concepts (cf. Skin Buttons, SkinTrack, OmniTouch) for quite some time now. In some sense, the LumiWatch brings a lot of these earlier concepts together.

Then there’s Microsoft’s Project Zanzibar smart mat, a new hardware device that enables tangible interaction by combining capacitive sensing for touch input with NFC for object detection and recognition. I’m reminded of Microsoft’s earlier tremendous output in multitouch and tangible interaction research within its Surface research group, before the Surface name was appropriated for its lineup of consumer products.

Saving lives, one second at a time

This is an interesting example of how small gains in efficiency, happening at massive scale, reap huge results:

[If each iOS device] would be unlocked using a 4-digit PIN, the time to bring them into use would be about 2 seconds. Expanding to a 6-digit PIN would probably increase that to perhaps 2.5 seconds (accounting also for failures due to input errors.)

[…]

It turns out that, based on the installed base numbers, moving to the more secure 6-digit code would add 2.8 billion hours to the total time to unlock the world’s iPhones and iPads. That’s 321,000 years of waiting added for every year of use.

Fortunately we got Touch ID to replace PIN entry and the time to unlock the iPhone/iPad has decreased to perhaps 1 second, saving 5.6 billion hours of unlock time vs. 4-digit PIN.
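The quoted figures hang together arithmetically, which a quick sketch can verify. Only the 2.8 billion hours and the 0.5 s per-unlock difference are taken from the quote; everything else here is derived from them (the source’s 321,000 years differs slightly from the derived figure due to rounding):

```python
# Sanity check of the quoted unlock-time arithmetic.
# Inputs from the quote; everything else is derived.

HOURS_6DIGIT_EXTRA = 2.8e9       # quoted: extra hours/year from 6-digit PINs
EXTRA_SECONDS_PER_UNLOCK = 0.5   # quoted: 2.5 s (6-digit) - 2.0 s (4-digit)

# Implied total number of unlocks per year across the installed base
unlocks_per_year = HOURS_6DIGIT_EXTRA * 3600 / EXTRA_SECONDS_PER_UNLOCK

# Convert the extra hours into "years of waiting" (quote says 321,000)
years_of_waiting = HOURS_6DIGIT_EXTRA / (24 * 365)

# Touch ID at ~1 s saves 1 s per unlock versus a 4-digit PIN at ~2 s
touch_id_savings_hours = unlocks_per_year * 1.0 / 3600

print(f"{unlocks_per_year:.3g} unlocks/year")
print(f"{years_of_waiting:,.0f} years of waiting")
print(f"{touch_id_savings_hours:.2g} hours saved by Touch ID")
```

The key point is the linearity: the Touch ID savings (1 s per unlock) are exactly double the 6-digit penalty (0.5 s per unlock), hence 5.6 billion hours versus 2.8 billion.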

This reminds me of an interview with Larry Tesler in Dan Saffer’s Designing for Interaction, in which he said the following:

If a million users each waste a minute a day dealing with complexity that an engineer could have eliminated in a week by making the software a little more complex, you are penalizing the user to make the engineer’s job easier.

I guess it’s no coincidence that Larry Tesler was an early Apple employee, back when Steve Jobs argued that decreasing Macintosh boot times would save lives.

The Hawaii Missile Alert Incident

A few weeks ago, there was a false state-wide missile alert broadcast in Hawaii. You’ve probably heard about it, because it was all over the news and the event even has its own Wikipedia page by now.

While the incident was certainly unpleasant for Hawaiians, and particularly for the poor man responsible for the broadcast, it is a stroke of luck for me. I’m always on the lookout for poor user interface design leading to catastrophe, and this event is about as good an example as they come. In the days following the incident, there was ample discussion of the user interface used to send out the broadcast (which turned out to be surprisingly difficult to reliably pin down) and of how to prevent such mistakes in the future – e.g. from The Verge, Ars Technica, Jason Kottke, and Nick Heer – all of which are worth a read.

As I said, I’m always looking for examples of poor user interface design, which is harder than one might expect and seems to have become more difficult over time. By and large, the importance and necessity of good usability and user interface design seem so pervasive and commonly accepted nowadays that almost any application with a sizeable audience is well designed. While that’s obviously a very good thing for most people, it makes it quite difficult for me to find examples illustrating bad design and its consequences.

The Terrible State of USB-C

Marco Arment recently posted about the myriad frustrations surrounding USB-C:

I love the idea of USB-C: one port and one cable that can replace all other ports and cables. It sounds so simple, straightforward, and unified.

In practice, it’s not even close.

As he points out, USB-C ports can vary drastically in their capabilities, and it’s impossible to tell those capabilities apart just by looking. For example, a USB-C port may carry data via the USB or Thunderbolt protocols, but nothing about a port or cable tells you which of them are supported. You can sometimes use it to charge your laptop, but there are different power delivery standards and, again, it’s hard to tell which of those are supported. Microsoft cited this confusing mess of supported standards and capabilities as one of the reasons it didn’t include any USB-C ports on the Surface Laptop:

Kyriacou points out many of the issues anybody who’s used USB-C has run into. “What happened with USB-C is the cables look identical, but they start to have vastly different capabilities. So even someone in the know, confusion starts to set in,” he argues. Some cables support 3 amps, some 5, some Thunderbolt, some not.

Microsoft has since gone on to include a USB-C port in their new Surface Book 2 (one that doesn’t support Thunderbolt, in case you were wondering), but the problems surrounding USB-C haven’t been solved.

This mess reminds me of an old desktop PC I had in the 90s: it featured two PS/2 ports on the back, one for the mouse and one for the keyboard. The problem was that the two ports looked completely identical, so you never knew which cable went into which port without resorting to trial and error. Some PCs back then solved this with iconography or color coding, but mine didn’t.

This was bad design 20 years ago, and it’s still bad design today. The design principle of consistency states that things that are the same should look the same, and the inverse holds as well: things that are different should look different. USB-C clearly violates this principle. The only solution that comes to mind is for USB-C to mandate a uniform and consistent set of features, but that will certainly take time to come to fruition. In the meantime, I’m sticking with USB-A.

Deliberately Horrible UI Design

A few months ago, in response to a horrible web form for entering phone numbers shared by Stelian Firez, Twitter users responded with a barrage of deliberately horrible and obtuse design alternatives, seemingly in search of the worst option possible. Many of them are collected and archived here.

More recently, Reddit took up the task of designing the worst possible volume sliders.

Hilarity aside, there are certainly lessons to be learned from trying to design the worst possible UI, as well as from studying these fine examples.

How an edge-to-edge screen could change the iPhone’s UI

With new iPhones almost upon us, it’s that time of year when iPhone rumors and speculation are everywhere. It is pretty much accepted as fact that we will see three new iPhones this year: two based on the familiar iPhone 7 design, and one with a completely new design featuring minimal bezels and an edge-to-edge display.

Allen Pike had some interesting ideas about how such a display could shake up the default screen layout of iPhone apps: he thinks that a lot more functionality, as well as basic navigation, will move to the bottom of the screen – maybe like this.

Max Rudberg also picked up on the idea and created a few more mockups to illustrate the possibilities:

(Image: iphone-pro-ui.png – Max Rudberg’s iPhone UI mockups)

As an aside: It’s kinda weird that I still care about this now that I no longer personally use an iPhone. I guess it’s hard to escape the pervasive excitement surrounding a new iPhone design.