Eliza, an early emulator of emotional interaction.

Last month I was invited to speak at Thingscon's inaugural Dutch event in Amsterdam while on a swing through the country (I'll be speaking at the main Thingscon in Berlin in April; stay tuned for details). My topic of choice was the emerging use of "emotion" in the Internet of Things—a topic that's been on my mind, and one that an increasing array of startups are building their pitches around. The prospect of technologies that sense and report on our emotional states, present both on our bodies and in our immediate environments, is at once exciting and fraught with problems. In 20 minutes I tried to scratch the surface of some of these issues, illustrated by the slides embedded above.

Thingscon Amsterdam, November 7, 2014.

I've been tracking so-called emotional technology for some time, but a recent product introduction from French company Withings—a wireless camera for the home that features, among other selling points, something it calls "cry recognition"—prompted me to think about how we got here, why companies are pitching products they claim can detect mood, stress, sentiment and distress and act on them, and what might come next. 

Assessing a user's state of mind isn't a new thing in human-computer interaction. Joseph Weizenbaum's ELIZA program, designed at MIT in the mid-1960s as a test of natural language processing, mimicked a psychotherapist, asking a sequence of questions to assess a user's state of mind. While it simply matched patterns in the user's responses to generate the next question in a fairly flat, mechanical fashion, some users were taken in and felt they were engaging with something that understood them.

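For flavor, here's a toy Python rendering of the kind of keyword-and-reflection trick ELIZA relied on. The rules below are invented for illustration; Weizenbaum's actual DOCTOR script used a more elaborate keyword-ranking and transformation scheme.

```python
import random
import re

# Minimal ELIZA-style responder: match a keyword pattern in the user's
# input, then reflect it back as a question. Hypothetical rules, not the
# original DOCTOR script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How does being {0} make you feel?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "Can you say more about that?"]

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I feel anxious about work"))  # e.g. "Why do you feel anxious about work?"
```
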
Flash forward almost 50 years, and technologists and researchers are trying to do the same thing, but with more sophisticated kit and better pattern matching. Google's Glass has been used by groups like the Fraunhofer Institute to match facial geometry against geometry cataloged as exhibiting particular emotional states. Even some entry-level camera tech now detects the geometry of a smile as a trigger to take a picture—though what it detects is the shape of a smile, not happiness as a state. So we can fake a machine into thinking we're happy for the camera, just as ELIZA faked interviewees into believing it was concerned with their wellbeing.

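To make the smile-as-shutter point concrete, here's a rough sketch using OpenCV's stock Haar cascades (assuming the opencv-python package and a webcam). The thresholds are illustrative, and the detector sees smile-shaped geometry, nothing more.

```python
import cv2  # assumes the opencv-python package

# Stock Haar cascades shipped with OpenCV: one for frontal faces, one for smiles.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_region = gray[y:y + h, x:x + w]
        # A high minNeighbors value keeps the smile detector from firing on noise.
        smiles = smile_cascade.detectMultiScale(face_region, 1.7, 22)
        if len(smiles) > 0:
            cv2.imwrite("smile_shot.jpg", frame)  # "take the picture"
            cap.release()
            raise SystemExit  # smile-shaped geometry found; happiness not verified
```
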
The kind of multi-sensor devices that make up the IoT are a boon to companies seeking to identify emotional states. Companies like Neumitra and Affectiva correlate physiological responses, such as a change in skin temperature coinciding with increased motion, to flag states of rising stress, which in cases of PTSD or certain forms of autism can be helpful for the individual, the caregiver, or both. As with earthquake detection, small signals detected early can make someone with PTSD, or someone prone to emotional outbursts, aware of what's coming, perhaps as a precursor to behavioral change. These are, however, conditions studied mostly in clinical or therapeutic settings, based on peer-reviewed research. And yet such skin conductance or motion detection can be built fairly cheaply into a consumer device, where ongoing testing doesn't happen past the R&D stage. A racing pulse, sweat, and increased motion can mean many things, not just a stress attack.

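By way of illustration, here's a toy version of that multi-sensor correlation: flag "stress" only when two channels rise well above their recent baselines at the same time. The signals, window size, and thresholds are all invented, which is rather the point; the same spikes could just as easily be a sprint for the bus.

```python
from statistics import mean, stdev

def flag_stress(skin_signal, motion_signal, window=30, z_thresh=2.0):
    """Return sample indices where BOTH signals sit z_thresh standard
    deviations above the mean of the preceding `window` samples."""
    flags = []
    for i in range(window, min(len(skin_signal), len(motion_signal))):
        skin_base = skin_signal[i - window:i]
        motion_base = motion_signal[i - window:i]
        skin_z = (skin_signal[i] - mean(skin_base)) / (stdev(skin_base) or 1.0)
        motion_z = (motion_signal[i] - mean(motion_base)) / (stdev(motion_base) or 1.0)
        if skin_z > z_thresh and motion_z > z_thresh:
            flags.append(i)
    return flags

# Note the caveat from the text: a simultaneous spike in both channels
# could be a panic attack, or just a run up the stairs. Context isn't in the data.
```
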
Voice is another hot area for the emotional IoT, as evidenced by the cry detection from Withings and tech like Beyond Verbal's, which reads patterns in the sound of a person's voice and matches them against patterns said to indicate certain emotions. Beyond Verbal has developed what it calls a "wellness API" that makes this capability available to other developers' services, making it easier to build emotion detection into a range of hardware and software—both clinical and consumer. Think about a car refusing to start because you sound stressed, or your mobile flagging what sounds like excitement to an app or, say, an advertiser. As Anthony Townsend recently wrote, the Internet is a two-way street here, and we've already seen emotional manipulation happen in social networks, so why not the IoT?

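To show how low the barrier is, here's a hypothetical sketch of wiring a hosted voice-"emotion" API into a device. The endpoint, parameters, and response fields are invented for illustration and are not Beyond Verbal's (or anyone's) actual API.

```python
import requests

API_URL = "https://api.example.com/v1/voice-emotion"  # placeholder endpoint
API_KEY = "YOUR_KEY_HERE"                             # placeholder credential

def analyze_clip(wav_path: str) -> dict:
    """Upload an audio clip and return the service's (hypothetical) verdict."""
    with open(wav_path, "rb") as audio:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": audio},
            timeout=10,
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"mood": "stressed", "confidence": 0.72}

# The uncomfortable part is how cheaply a downstream decision follows:
result = analyze_clip("driver_voice.wav")
if result.get("mood") == "stressed" and result.get("confidence", 0) > 0.6:
    print("Driver flagged as stressed. Who gets told, and what do they do with it?")
```
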
Given that we are pushing into the era of always-listening devices (or, in the case of in-home cameras, always-watching lenses), this creates an interesting situation where your personal devices are monitoring your "emotions" more or less constantly, looking for behavioral and physical patterns that match preset assumptions about state of mind. And given how many consumer digital services fund themselves by selling data collected about you to advertisers or other third parties, this sets up some interesting and undoubtedly problematic near futures. Similar APIs are being developed for facial expression recognition, such as one from a company called Emotient, so faces are fair game too.

These are early days, and we can learn some interesting things in the consumer sphere from collective "emotion" as extrapolated from sensor data. Recent data from Jawbone wearers near the Napa earthquake, suggesting that some people couldn't get back to sleep after the early-morning quake, hints at something interesting about group responses to such disturbing events. What, we don't yet know. We certainly wouldn't want anti-anxiety meds sprayed on the community based on it—sample sizes are too small, and, more importantly, we know only that some people kept moving and didn't settle back to sleep right away. That broadly matches a pattern of anxiety, but it doesn't prove much of anything. Likewise, we can see some interesting correlations between movement, maybe stress levels, and group activity at big social events, as shown in this recent look at wearable data from a Burning Man attendee. But we don't see much more than time and motion—not states of happiness or depression.

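For a sense of how thin that signal is, here's a toy version of the aggregate question the Jawbone data answers (what share of jolted-awake wearers were still moving an hour later?), with made-up per-minute motion counts and thresholds.

```python
def share_still_awake(users_motion, minutes_after=60, move_threshold=5):
    """users_motion: dict of user_id -> per-minute motion counts starting
    at the moment of the quake. Returns the share of users who kept
    registering motion in most of the following minutes."""
    still_awake = 0
    for counts in users_motion.values():
        window = counts[:minutes_after]
        active_minutes = sum(1 for c in window if c > move_threshold)
        if active_minutes > minutes_after // 2:
            still_awake += 1
    return still_awake / max(len(users_motion), 1)

# All this measures is movement over time, not anxiety, fear, or any other
# emotional state. That gap is the whole argument.
```
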
Calibrating what we expect technology to do is important in any discussion, but many times more so in discussions about emotion, so critical are emotions to human social interaction and wellbeing. The distance between calling something sadness and calling it depression, for example, is as wide as the distance between an app store and a hospital. And too often technology developers and marketers ignore the difference—sometimes with innocent intentions, and sometimes for expedience. The recent uproar over British charity Samaritans' creation of Samaritans Radar, a service it hoped would identify people on Twitter at risk of suicide based on key words in their tweets, shows how problematic good intentions can be.

I suppose my point here is that the trivialization of emotion that can occur in the rush to commercialize new technologies, and to market them on their appeal to wellness, happiness, or safety, is something those involved in creating these products and services have to be extremely careful to avoid. Many emotional issues require proper research, clinical experience, testing, approval, and serious support services. Hoping to "move fast and break things" in the rush to create "disruptions" and ship product doesn't work here. In a world of indiscriminate data collection, sharing, and leakage, promiscuous APIs, and the falling cost of developing and deploying technology, using emotion as an appeal requires a careful, thoughtful approach.

Hopefully, as the IoT matures, its proponents and participants will be willing to slow down and trade a quick hit now for long-term consumer trust. Otherwise, we're stepping into a world where emotions, not just data, are manipulated by code and coders alike.
