The perfect music video for this super-disastrous election year.
Steven Sinofsky recently wrote about his experience in switching to an iPad Pro as a replacement for his laptop. The switch itself isn’t particularly noteworthy, but before describing his own experience he spends some time discussing how broad shifts across computing platforms happen, and why some people react with enthusiasm to these shifts while others resist them, depending on their needs, preferences and circumstances:
By far, the biggest obstacle to change is most people have jobs to do and with those jobs they have bosses, co-workers, customers and others that have little empathy for work not happening because you’re too busy or unable to do something you committed to, the way someone wanted you to do it.
[C]hange, especially if you personally need to change, requires you to rewire your brain and change the way you do things. That’s very real and very hard and why some get uncomfortable or defensive.
Now it’s worth keeping in mind that Steven Sinofsky was president of the Windows division at Microsoft and a highly visible public face for the development of the rather controversial Windows 8. One of the criticisms leveled against Windows 8 was that it focused too narrowly on tablet usage and in doing so compromised the usability of the typical keyboard-and-mouse desktop model, which remains the predominant way Windows is used to this day. Jakob Nielsen concluded in his usability assessment that “Windows 8 UX [is] weak on tablets, terrible for PCs”.
In that light I find the following observation (a variation of Amara’s Law) quite interesting, because it might just explain some of the reasons behind the Windows 8 design direction, but also why it ultimately backfired and failed:
As difficult as they are, we more often than not over-estimate platform shifts in the short term but under-estimate them in the long term.
In the introduction to his somewhat pessimistic article titled The Tragedy of Pokémon Go (which I don’t completely agree with, btw), Ian Bogost provides a short timeline of earlier alternate reality and location-based games.
This reminded me that location-based games aren’t really new. Paul Baron was already collecting them in a list back in 2004 (his website has since moved, with a newer list from 2005 available here) and Justin Hall was reporting on a new location-based item hunt game named Mogi for TheFeature:
Mogi is a collecting game – “item hunt”. The game provides a data-layer over the city of Tokyo. As you move through the city, if you check a map on your mobile phone screen, you’ll see nearby items you can pick up and nearby players you can meet or trade with.
This is what it looked like (photo by Paul Baron):
Looking at Mogi, it’s not difficult to draw many parallels to Pokémon Go. The graphics aren’t as advanced and sophisticated, but the basic game mechanics are pretty much identical. I think it’s remarkable that all the technology to realize a game like Pokémon Go was already available 12 years ago (at least in Japan), just as it is remarkable that it took more than a decade for this kind of game concept to succeed. I’m reminded of William Gibson’s famous quote “the future is already here, it’s just not very evenly distributed”.
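The core mechanic Hall describes – a data layer of items over the city that become available when a player is physically close – boils down to a simple proximity query against the player’s position. Here’s a minimal sketch in Python of how such a check might work (the item names and Tokyo coordinates are made up for illustration, not taken from Mogi):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in meters."""
    R = 6_371_000  # mean Earth radius in meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical item layer over Tokyo: (name, latitude, longitude)
ITEMS = [
    ("coin",   35.6595, 139.7005),  # near Shibuya
    ("potion", 35.6586, 139.7454),  # near Tokyo Tower
]

def nearby_items(player_lat, player_lon, radius_m=500):
    """Items the player is close enough to 'pick up' right now."""
    return [name for name, lat, lon in ITEMS
            if haversine_m(player_lat, player_lon, lat, lon) <= radius_m]

print(nearby_items(35.6590, 139.7010))  # standing in Shibuya → ['coin']
```

The same query, run against other players’ positions instead of items, would give you the “nearby players you can meet or trade with” part of the mechanic.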
It also once again shows that it can take many years, if not decades, to move from invention to mass market success, a phenomenon that Bill Buxton has called the Long Nose of Innovation [pdf].
I’m not really surprised that Pokémon finally brought this mass market success, not just because of the franchise’s popularity, but also because it’s such a perfect fit for this kind of game: The Pokémon Company and Niantic have taken the core game experience of earlier Pokémon titles and successfully transplanted it on top of the real world: where in the past players roamed a virtual world collecting Pokémon, they now get to roam the real world collecting Pokémon. I can’t think of any other franchise that would be a better fit for this kind of game. That’s why I don’t see the game’s success as a tragedy (in contrast to the aforelinked article) but rather as well deserved.
I just read and enjoyed this:
To spice up our monster essay on icons, we created an icon monster shooter arcade game. Planned as a one week hackathon, it turned into an amazing one year adventure. Here is what UX designers learned creating an arcade game.
Last week, a Tesla driver was killed in a car crash while using Autopilot. Neither the driver nor the Autopilot noticed a tractor trailer crossing lanes, and Tesla explained that the vehicle’s sensors had failed to recognize “the white side of the tractor trailer against a brightly lit sky”. This will undoubtedly raise many questions about autonomous vehicles and driving, which are necessary and important, but aside from that it should also raise questions about semi-automated driving.
As Tesla points out, the Autopilot isn’t quite a fully autonomous driving assistant, but rather “an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Because of this, Tesla (and many media outlets covering the accident) were quick to shift the blame from Autopilot to the driver, who didn’t comply with instructions and placed too much trust in the Autopilot feature. Reportedly the driver was even watching a Harry Potter movie while being chauffeured by his car.
While it is true that the driver apparently didn’t follow the instructions for Autopilot properly, it would be too simple to absolve the technology from any fault, because semi-autonomous driving features blatantly disregard how humans function: We don’t have unlimited attention spans, we can get tired and we can easily lose focus and concentration when we’re bored. Advanced-but-imperfect partial automation lulls humans into a false sense of safety, yet requires human intervention at the most critical moments – when imminent danger or catastrophe looms. Don Norman has been arguing for many years that the transition from partial to full automation poses the greatest danger when it is almost complete, and in May 2016 he published a paper on the challenges of partially automated driving, which I would highly recommend to anyone interested in the topic.
I just read and enjoyed this:
In a time when consumers routinely replace gadgets with new models after just two or three years, some products stand out for being built to last. Witness the Linksys WRT54GL, the famous wireless router that came out in 2005 and is still for sale.
I still have one of these stashed away somewhere, sadly unused. It was a great little router back in its day, but I wouldn’t buy one today. Nevertheless, it’s probably the only piece of networking equipment ever to achieve true cult status.
I just read and enjoyed this:
So you got a lot done this week? Good for you. But what exactly did you get done? Was it work you’ll remember next month? Was it work that’ll matter next year? Did you learn anything that’ll help you tomorrow?
Zhang, Y., Zhou, J., Laput, G., and Harrison, C. SkinTrack: Using the Body as an Electrical Waveguide for Continuous Finger Tracking on the Skin. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York.
I just read and enjoyed this:
Indeed, cannibalizing a federated application-layer protocol into a centralized service is almost a sure recipe for a successful consumer product today. It’s what Slack did with IRC, what Facebook did with email, and what WhatsApp has done with XMPP. In each case, the federated service is stuck in time, while the centralized service is able to iterate into the modern world and beyond.
So while it’s nice that I’m able to host my own email, that’s also the reason why my email isn’t end to end encrypted, and probably never will be. By contrast, WhatsApp was able to introduce end to end encryption to over a billion users with a single software update.
Blake Ross, co-founder of Firefox and former director of product at Facebook, recently shared a personal revelation of his: That he can’t visualize things in his mind:
I just learned something about you and it is blowing my goddamned mind.
This is not a joke. It is not “blowing my mind” a la BuzzFeed’s “8 Things You Won’t Believe About Tarantulas.” It is, I think, as close to an honest-to-goodness revelation as I will ever live in the flesh.
Here it is: You can visualize things in your mind.
Upon reading the article, I immediately panicked and started wondering whether I could lack that ability myself without ever having noticed, but upon trial and reflection I’ve concluded that I am capable of visualizing things in my mind; I’m just not very good at it 🙂
It’s remarkable to imagine how different the experience of the world and memories must be when you can’t visualize anything in your mind, and I find it even more remarkable to go through life for more than 30 years without noticing that your mind works differently than the minds of most people around you.