Headed to FutureEverything 15 by Scott Smith

I'm very excited to be making it to FutureEverything 15 for my third year in a row next week in Manchester, along with Susan Cox-Smith, to meet up with fellow attendees. The excitement is not simply because it marks the 20th year of this fantastic festival and conference—I'm also privileged to chair the first session of the event, What Now For...Memory and Identity, running from 10AM to 1PM on Thursday the 26th in the main event hall. This session will feature great talks from Jer Thorp, Gemma Galdon-Clavell and Moritz Stefaner, as well as friends Matt Locke, Alexis Lloyd and Matt Boggie, and will close with a panel discussion with Jer, Gemma and Moritz. Don't miss it, and do stop and say hello if you're coming along.

And I'd be remiss if I didn't mention the excellent microconference happening within FE, Haunted Machines, put on by Natalie Kane and Tobias Revell, which will explore where technology and perceived magic bleed into each other.  

If you're in Manchester, we look forward to seeing you in the sessions, hallways, or nighttime events. If not, we'll catch you on Twitter.

** If you have questions for the morning panel, please let me know via Twitter.

Talking to CBC Spark About Emotion and Technology by Scott Smith

British Deputy PM Nick Clegg looking algorithmically sad. Image: Dan Williams. http://nickclegglookingalgosad.tumblr.com/

Last week's edition of Spark, the technology and culture radio show and podcast from Canadian broadcaster CBC, featured a segment on the "Internet of Emotions," on which I was a guest. In my chat with host Nora Young (play below), spurred by my Thingscon talk in Amsterdam last November, I talked about what I find interesting about the use of personal technology to monitor and "interpret" our emotional state, and about the feasibility and desirability of doing so. We discussed the role of ethics, the slipperiness of "feelings" in a digital construct, and the conditions under which monitoring emotions through something like a smartphone or home listening device could be useful, or problematic.

The full segment can be found here, and the specific clip is available here.

This isn't an academic discussion. Businesses are champing at the bit to apply sentiment-reading technology as a means of "knowing" their customers more intimately—computational empathy, if you will. Hospitals and insurers are curious not only about your emotional state during treatment, but even about whether you've sounded stressed just trying to get an appointment to see a doctor. Your bank might want to check how you feel while you're applying for a credit card, or a soda company might send you a treat to "cheer you up". Fold what sounds like futuristic technology into the arms race already underway around data analytics, and we land in "emotional credit score" territory quite quickly. Picture Amazon offering you products "purchased by people who feel like you do now".

We're only scratching the surface of this topic, and I plan to stay with it. Your views and feedback, as always, are welcome. Tell us how you feel.  

Language, Power and the Future by Scott Smith

Are you here to help me or kill me? Image: Flickr/David Rodriguez Martin

I wrote a piece for Quartz this past week about an issue that's been on my mind for a while: our growing inexactitude with language at a time when complexity and ambiguity are also increasing. The result is an ever-denser fog of confusion just when we need clarity of vision(s).

I picked on public use of the words "hacker," "drone," "algorithm," and "robot," but the list could have gone on and on. Phenomena such as search-engine optimization, the mad scramble for attention by an ever-wider array of media, and an overall authority vacuum make it harder and harder to find, take, or use the time needed to properly define words and topics that are of major consequence.

From the piece:

But “hackers,” “algorithms,” and to some extent “robots,” sit behind metaphorical — or actual — closed doors, where obscurity can benefit those who would like to use these terms, or exercise the realities behind them to their own benefit, though perhaps not to ours. We need better definitions, and more exact words, to talk about these things because, frankly, these particular examples are part of a larger landscape of “actors” which will define how we live in coming years, alongside other ambiguous terms like “terrorist,” or “immigrant,” about which clear discourse will only become more important.
— http://qz.com/318214/in-2015-well-need-different-words-to-talk-about-the-future/

As Rafael Osio Cabrices pointed out, this problem is compounded by the role English plays as a central, global language of technology and business, leaving other languages and cultures to deal with the semantic messes we create. The result? We stumble toward uncertain futures without a clear voice.

Thanks for the feedback and kind comments on this piece. I'm sure I'll come back to this theme again—it doesn't appear to be going anywhere soon.