N9

To me there’s something tantalizing, almost irresistible, about the new Nokia N9. Its polished exterior and unique look in cheery colors are unmistakably Nokia and so very distinct from all the Korean iPhone lookalikes.

Even more tantalizing than the quality of its hardware engineering is its unique position as MeeGo’s swan song, the first and last device to demonstrate the culmination of years of Nokia’s platform development. While I have my doubts about the actual day-to-day utility and usability of the Harmattan UI, it looks like it could have been a solid foundation for future iteration and refinement.

Back when Stephen Elop announced Nokia’s switch to Windows Phone 7 in February, many in the tech press lauded that move. MeeGo had been in development hell for years with little to show for it. A recent Businessweek profile of Stephen Elop quotes Nokia Chief Development Officer Kai Öistämö saying, “MeeGo had been the collective hope of the company [...] and we’d come to the conclusion that the emperor had no clothes. It’s not a nice thing.” Back then only Nokia insiders could assess whether MeeGo had any legs, but looking at the N9 as it was introduced this week, it’s hard to imagine that the platform made such remarkable strides, from burning platform to what Nokia is showing now, in four short months.

Unfortunately, we will in all likelihood never find out what MeeGo could have been, as the N9 is destined for failure. Nokia made it abundantly clear in recent months that they’re betting their future on Windows Phone 7. There won’t be a thriving ecosystem of software, services, updates and accessories around the N9. It’s hard to imagine that consumers, operators and, above all, developers will flock to a dying platform.

Parsing Nokia’s marketing speak (from this Wired UK article) strongly suggests that the N9 does indeed mark the end of the road:

As to how many future devices will run on the MeeGo platform, unfortunately, we do not comment on unannounced devices or our planned product roadmap. We had previously stated that we would bring a MeeGo operated device to market during 2011 and that is exactly what we achieved with yesterday’s announcement of the Nokia N9.

It seems the N9’s raison d’être lies somewhere between making good on old promises and the pride of Nokia’s old guard, keen to prove to the world that years of development resulted in something good enough to bring to market. But still… between Nokia’s marketing push and the overwhelmingly positive press reception, it’s hard to fathom the N9 as stillborn. Just look at the developer UX guidelines – all this effort and all these resources for a hopeless platform, to be ignored by most developers for lack of traction?

Despite all its potential shortcomings, the N9 is almost worth buying just for being such a singular technological dead end.

Further reading: This is my next…, Engadget.

GROUP:

GROUP is a collective sound work that starts on individual mobile devices and ends with participants coming together for a large-scale gathering at 12:45 PM on June 21, 2011, near the corner of Wall and Broad streets. Anyone with an iPhone or iPod Touch can download the GROUP app from the Apple App Store and be a part of this experience. Participants will start the app on the morning of June 21st and keep it running until 12:45 PM, when they congregate in front of the Stock Exchange. The piece begins with a dense drone that sheds layers throughout the day and then transforms into a monumental sound as hundreds of participants come together with their sounding devices to activate the hallowed downtown area. GROUP is a collaborative project between Aaron Siegel and Larry Legend.

Twitter Digest for Week Ending 2011-06-19

In Praise of Not Knowing:

I hope kids are still finding some way, despite Google and Wikipedia, of not knowing things. Learning how to transform mere ignorance into mystery, simple not knowing into wonder, is a useful skill. Because it turns out that the most important things in this life — why the universe is here instead of not, what happens to us when we die, how the people we love really feel about us — are things we’re never going to know.

About That Kevin Slavin AR Talk

You know, this one. Three responses you might be interested in, by…

Julian Oliver:

Expressing discontent at the ocular focus of Visual AR is like giving Painting a hard time because it denies the rich plethora of experiential possibilities afforded by audio, and that all paintings should ship with a small orchestra. All the while paintings have been ‘augmenting reality’ for centuries, changing the way people see the world, even their own faith.

Usman Haque:

there was something missing that i think would add even more weight to your argument and i wanted to throw it out there for future conversation. i think it’s problematic, for the construction of your thesis, to counter the “it’s all in the eyes” perspective with its polar opposite “it’s all in the brain” because most visual research of the past few decades shows that it’s actually a little of both.

Greg Smith:

I think the key takeaway point is in Slavin’s suggestion that “reality is augmented when it feels different, not looks different” – which basically echoes Marcel Duchamp’s (almost) century-old contempt for the ‘retinal bias’ of the art market. If AR development (thus far) is lacking imagination, perhaps the problem is that we’re very much tethering the medium to our antiquated VR pipe dreams and the web browser metaphor.

Invoked Computing

Direct interaction with everyday objects augmented with artificial affordances may be an approach to HCI capable of leveraging natural human capabilities. Rich Gold once described ubiquitous computing as an “enchanted village” in which people discover hidden affordances in everyday objects that act as human interface “prompt[s]” (R. Gold, “This is not a pipe.” Commun. ACM 36, July 1993.). In this project we explore the reverse scenario: a ubiquitous intelligence capable of discovering and instantiating affordances suggested by human beings (as mimicked actions and scenarios involving objects and drawings). Miming will prompt the ubiquitous computing environment to “condense” on the real object, supplementing it with artificial affordances through common AR techniques. An example: taking a banana and bringing it closer to the ear. The gesture is clear enough: directional microphones and parametric speakers hidden in the room would make the banana function as a real handset on the spot.

In other words, the aim of the “invoked computing” project is to develop a multi-modal AR system able to turn everyday objects into computer interfaces / communication devices on the spot. To “invoke” an application, the user just needs to mimic a specific scenario. The system will try to recognize the suggested affordance and instantiate the represented function through AR techniques (another example: to invoke a laptop computer, the user could take a pizza box, open it and “type” on its surface). We are interested here in developing a multi-modal AR system able to augment objects with video as well as sound using this interaction paradigm.

Invoked Computing: spatial audio and video AR invoked through miming. (via)

“[A] ubiquitous intelligence capable of discovering and instantiating affordances suggested by human beings” – I had to read that twice before I could wrap my head around it…
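
If it helps to picture the idea, here is a minimal, purely hypothetical sketch in Python. None of these classes, names or mappings come from the actual project; they only illustrate the pipeline the researchers describe: a mimed scenario gets recognized, matched against known affordances, and the corresponding function is instantiated on the real object.

# Hypothetical sketch of the "invoked computing" loop described above.
# Nothing here is from the real system; it only illustrates the idea of
# mapping a recognized mime (object + gesture) to an AR-instantiated function.

from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass(frozen=True)
class MimedScenario:
    """A recognized mime: an everyday object plus the gesture performed with it."""
    object_label: str   # e.g. "banana", "pizza_box"
    gesture: str        # e.g. "held_to_ear", "opened_and_tapped"


# Hypothetical registry mapping mimed scenarios to the function they suggest.
AFFORDANCES: Dict[Tuple[str, str], Callable[[], None]] = {
    ("banana", "held_to_ear"): lambda: print(
        "Handset invoked: route call audio via directional mics and parametric speakers"),
    ("pizza_box", "opened_and_tapped"): lambda: print(
        "Laptop invoked: project a screen and keyboard onto the open box"),
}


def invoke(scenario: MimedScenario) -> None:
    """Instantiate the affordance a mimed scenario suggests, if the system knows one."""
    action = AFFORDANCES.get((scenario.object_label, scenario.gesture))
    if action is None:
        print(f"No affordance recognized for {scenario}")
    else:
        action()  # the environment "condenses" onto the real object


invoke(MimedScenario("banana", "held_to_ear"))
invoke(MimedScenario("pizza_box", "opened_and_tapped"))

The real system would of course replace the lookup table with actual object and gesture recognition, and the print calls with projection and directional-audio output; the sketch only shows the scenario-to-affordance mapping at the heart of the concept.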