Steven Sinofsky recently wrote about his experience of switching to an iPad Pro as a replacement for his laptop. This in itself isn’t particularly noteworthy, but before describing his own switch he spends some time discussing how broad shifts across computing platforms happen and why some people react to these shifts with enthusiasm while others resist them, depending on their needs, preferences and circumstances:
By far, the biggest obstacle to change is most people have jobs to do and with those jobs they have bosses, co-workers, customers and others that have little empathy for work not happening because you’re too busy or unable to do something you committed to, the way someone wanted you to do it.
[C]hange, especially if you personally need to change, requires you to rewire your brain and change the way you do things. That’s very real and very hard and why some get uncomfortable or defensive.
Now it’s worth keeping in mind that Steven Sinofsky was president of the Windows division at Microsoft and a highly visible public face for the development of the rather controversial Windows 8. One of the criticisms leveled against Windows 8 was that it focused too narrowly on tablet usage and in doing so compromised the usability of the typical keyboard-and-mouse desktop model, which remains by far the predominant way Windows is used to this day. Jakob Nielsen concluded in his usability assessment that “Windows 8 UX [is] weak on tablets, terrible for PCs”.
In that light I find the following observation (a variation of Amara’s Law) quite interesting, because it might explain not only some of the reasons behind the Windows 8 design direction, but also why it ultimately backfired and failed:
As difficult as they are, we more often than not over-estimate platform shifts in the short term but under-estimate them in the long term.
In the introduction to his somewhat pessimistic article titled The Tragedy of Pokémon Go (which I don’t completely agree with, btw), Ian Bogost provides a short timeline of earlier alternate reality and location-based games.
This reminded me that location-based games aren’t really new. Paul Baron was already collecting them in a list back in 2004 (his website has since moved, with a newer list from 2005 available here) and Justin Hall was reporting on a new location-based item hunt game named Mogi for TheFeature:
Mogi is a collecting game – “item hunt”. The game provides a data-layer over the city of Tokyo. As you move through the city, if you check a map on your mobile phone screen, you’ll see nearby items you can pick up and nearby players you can meet or trade with.
This is what it looked like (photo by Paul Baron):
Looking at Mogi, it’s not difficult to draw many parallels to Pokémon Go. The graphics aren’t as advanced and sophisticated, but the basic game mechanics are pretty much identical. I think it’s remarkable that all the technology to realize a game like Pokémon Go was already available 12 years ago (at least in Japan), just as it is remarkable that it took more than a decade for this kind of game concept to succeed. I’m reminded of William Gibson’s famous quote “the future is already here, it’s just not very evenly distributed”.
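The core loop Hall describes – check your position on a map, see which virtual items are within pickup range – amounts to little more than a proximity query over geotagged records. A minimal sketch of that mechanic (all item names and coordinates are hypothetical, not taken from Mogi):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby_items(player, items, radius_m=100):
    """Return the virtual items within pickup range of the player."""
    lat, lon = player
    return [i for i in items
            if haversine_m(lat, lon, i["lat"], i["lon"]) <= radius_m]

# A player in Tokyo and two virtual items placed on the data layer:
items = [
    {"name": "coin", "lat": 35.6595, "lon": 139.7005},
    {"name": "gem",  "lat": 35.6700, "lon": 139.7700},  # several km away
]
print(nearby_items((35.6594, 139.7006), items))  # only the coin is in range
```

Everything Pokémon Go layers on top – trading, spawning, the augmented-reality view – is elaboration; the 2004-era phones running Mogi could already answer this one query.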
It also once again shows that it can take many years, if not decades, to move from invention to mass market success, a phenomenon that Bill Buxton has called the Long Nose of Innovation [pdf].
I’m not really surprised that Pokémon finally brought this mass market success, not just because of the franchise’s popularity, but also because it’s such a perfect fit for this kind of game: The Pokémon Company and Niantic have taken the core game experience of earlier Pokémon titles and successfully transplanted it on top of the real world: where in the past players roamed a virtual world collecting Pokémon, they now get to roam the real world collecting Pokémon. I can’t think of any other franchise that would be a better fit for this kind of game. That’s why I don’t see the game’s success as a tragedy (in contrast to the aforelinked article) but rather as well deserved.
Last week, a Tesla driver was killed in a crash while using Autopilot. Neither the driver nor Autopilot noticed a tractor trailer crossing lanes, and Tesla explained that the vehicle’s sensors had failed to recognize “the white side of the tractor trailer against a brightly lit sky”. This will undoubtedly raise many questions about autonomous vehicles and driving, which are necessary and important, but aside from that it should also raise questions about semi-automated driving.
As Tesla points out, Autopilot isn’t a fully autonomous driving system, but rather “an assist feature that requires you to keep your hands on the steering wheel at all times,” and “you need to maintain control and responsibility for your vehicle” while using it. Because of this, Tesla (and many media outlets covering the accident) were quick to shift the blame from Autopilot to the driver, who didn’t comply with instructions and placed too much trust in the Autopilot feature. Reportedly, the driver was even watching a Harry Potter movie while being chauffeured by his car.
While it is true that the driver apparently didn’t follow the instructions for Autopilot properly, it would be too simple to absolve the technology of any fault, because semi-autonomous driving features blatantly disregard how humans function: We don’t have unlimited attention spans, we get tired and we easily lose focus and concentration when we’re bored. Advanced-but-imperfect partial automation lulls humans into a false sense of safety, yet requires human intervention at the most critical moments – when imminent danger or catastrophe looms. Don Norman has been arguing for many years that the transition from partial to full automation poses the greatest danger when it is almost complete, and in May 2016 he published a paper on the challenges of partially automated driving, which I would highly recommend to anyone interested in the topic.
Zhang, Y., Zhou, J., Laput, G., and Harrison, C. SkinTrack: Using the Body as an Electrical Waveguide for Continuous Finger Tracking on the Skin. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York.
Previously: Chris Harrison’s Skinput & Omnitouch
Justin O’Beirne explores some design changes to Google Maps over the past few years:
Browsing Google Maps over the past year or so, I’ve often thought that there are fewer labels than there used to be. Google’s cartography was revamped three years ago – but surely this didn’t include a reduction in labels? Rather, the sparser maps appear to be a recent development.
Coming across this article I was immediately reminded of an article from 2010 that provided in-depth analysis and discussion of Google Maps in comparison to Bing Maps and Yahoo Maps – turns out that article was by Justin O’Beirne as well!
Desktop Neo is a user interface concept developed by Lennart Ziburski, a design student from Berlin.
While it’s impossible to properly assess the day-to-day practicality and long-term usability of a desktop environment without actually using it, there are a number of interesting ideas packed into this concept. The window management is reminiscent of the new iPad multitasking features or the tiles in Windows 8, but more thoroughly thought out. Other parts such as App Control and Tags seem like mild iterations on concepts already present in Mac OS X, but I was particularly impressed by the ideas surrounding gaze detection and how it could be seamlessly incorporated into a desktop computing environment.
If you look at an app and see only a bunch of code, then the only kind of improvement you can see is to the code itself.
Illustration by Helen Green.
“Left to his own devices he couldn’t build a toaster. He could just about make a sandwich and that was it.”
– Mostly Harmless, Douglas Adams, 1992
Andy George spent six months of his life and $1,500 to make a sandwich “from scratch”, growing a garden, producing salt from ocean water, making cheese and killing a chicken:
This project immediately reminded me of The Toaster Project, which I encountered at Ars Electronica 2010 (I can’t believe that was five years ago, it seems as if it was yesterday and a lifetime away at the same time): Thomas Thwaites spent nine months and £1,187.54 to make his own toaster, “a product that Argos sells for only £3.99”. The Ars Electronica curators put it rather well: a project that “exposes the fallacy in a return to some romantic ideal of a pre-industrialized time”.