Actin’ Squirrely
Emotion Recognition and Categories
In considering potential speakers for next year, the name of Rosalind Picard comes to mind. My campus director of Spiritual Development & I have brought up her name many times in the past, as Picard is an “out” Christian who’s “nevertheless” a professor at MIT, and an entrepreneur as well, a co-founder of the emotion-recognition AI startup Affectiva.
And Kate Crawford, whom I respect and whose work I consider significant, recently published an article in The Atlantic slamming emotion-recognition AI applications, likening them to modern invocations of physiognomy.
Emotion recognition is structured explicitly as a classification problem: various psychologists, notably Paul Ekman (influenced by Silvan Tomkins), set up a finite taxonomy of emotions (happiness, surprise, disgust, etc.) and found, supposedly, that people all over the world produce the same facial expressions for each one. (Go ahead, show me your “surprised face”, then compare it to… [some example].) Crawford:
“Emotion-recognition systems share a similar set of blueprints and founding assumptions: that there is a small number of distinct and universal emotional categories, that we involuntarily reveal these emotions on our faces, and that they can be detected by machines. These articles of faith are so accepted in some fields that it can seem strange even to notice them, let alone question them. But if we look at how emotions came to be taxonomized—neatly ordered and labeled—we see that questions lie in wait at every corner.”
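To make the “classification problem” framing concrete, here’s a minimal sketch of the closed-world assumption Crawford is describing, using an Ekman-style label set (the exact taxonomy varies by system; the list and names below are my illustrative assumptions, not any vendor’s actual API):

```python
from enum import Enum

class Emotion(Enum):
    """An Ekman-style "basic emotions" taxonomy. The exact label set
    varies by system; this particular list is an illustrative assumption."""
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    ANGER = "anger"
    FEAR = "fear"
    SURPRISE = "surprise"
    DISGUST = "disgust"

def classify(face_image) -> Emotion:
    # Stand-in for the actual model. The crucial assumption lives in the
    # return type: whatever the camera sees, the answer must be one of
    # the six labels above. "None of these" is not an option.
    return Emotion.SURPRISE  # go ahead, show me your "surprised face"
```

The contested part isn’t the code; it’s the type signature. That finite enum is exactly the taxonomy Crawford is questioning.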
One of Crawford’s complaints is that there’s no way to reliably infer someone’s emotional state from their facial expression, and that this could lead to all kinds of problems. Humans make such mistakes too, all the time. People misread people. Police do this. When I was a teenager, some fellow geeks & I were pulled over by a policeman in Oakton, VA for “actin’ squirrely.”
So, the inability to inerrantly infer emotional states is not the issue per se.
The issue is that surveillance systems can attribute these states to us without our being aware of it, make decisions that affect us based on those classifications, and leave us with hardly any recourse to correct them.
When a human guesses your internal state incorrectly, you often have an opportunity to correct them.
So, let’s take AI and statistical models out of the picture entirely. Say we instead employ an army of gig-economy workers to continually watch video feeds and press one of, say, eight buttons describing the emotional state of the person on screen. Maybe even have an ensemble of three workers per surveilled person, just to reduce the noise. When at least two of them press the button for “ACTIN’ SQUIRRELY”, a drone near your location deploys and flies up to your face with an “ATTENTION CITIZEN…” message, or maybe even a real live police officer shows up… or maybe a real live police officer who’s piloting a drone remotely from a distant office. ;-)
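For what it’s worth, the “at least two of three” trigger is ordinary majority voting. A minimal sketch of the whole grim control loop, with the button labels being my own invention:

```python
from collections import Counter

# The hypothetical 8 buttons our gig workers can press:
BUTTONS = ["HAPPY", "SAD", "ANGRY", "AFRAID", "SURPRISED",
           "DISGUSTED", "NEUTRAL", "ACTIN' SQUIRRELY"]

def ensemble_label(votes):
    """Return a label once at least two of the three watchers agree,
    else None."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= 2 else None

# Three workers watching the same person's feed:
votes = ["ACTIN' SQUIRRELY", "NEUTRAL", "ACTIN' SQUIRRELY"]
if ensemble_label(votes) == "ACTIN' SQUIRRELY":
    print("Deploying drone: ATTENTION CITIZEN...")
```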
This sounds like a police state, right? Probably not somewhere many of us would like to live. “But that’s absurd,” you say. “No one could employ that many people to surveil everyone else.” Ahh, but that’s where the AI part comes in: with an AI system, this becomes trivial to implement, even cheap.
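To see why it’s trivial: swap the three humans for a model. In this sketch, classify_emotion is a stand-in for any pretrained facial-expression classifier; nothing here is a real library’s API:

```python
import random

EMOTIONS = ["HAPPY", "SAD", "ANGRY", "AFRAID", "SURPRISED",
            "DISGUSTED", "NEUTRAL", "ACTIN' SQUIRRELY"]

def classify_emotion(frame):
    # Stand-in for a pretrained facial-expression classifier. A real
    # system would run a neural net here; this stub just guesses,
    # which is arguably on-theme.
    return random.choice(EMOTIONS)

def surveil(frames):
    # One cheap model replaces three salaried humans per feed.
    for i, frame in enumerate(frames):
        if classify_emotion(frame) == "ACTIN' SQUIRRELY":
            print(f"frame {i}: deploying drone. ATTENTION CITIZEN...")

surveil(range(10_000))  # ten thousand dummy "frames", no payroll required
```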
Let’s continue the “AI aside” thought experiment. With the “ATTENTION CITIZEN” notification, you could theoretically have an opportunity to refute the claim that you’re actin’ squirrely. It would probably take the form of “If you would like to refute this claim, please go to this website or call this number…” After going to the website and finding it unnavigable, you could try calling the phone number, where you’ll be presented with an even more unnavigable phone menu (a classification system), in which pressing “0” or “*” to speak to a human being gets you “Sorry, that is not a valid option.” If you’re lucky enough to reach a human being anyway, you’ll meet someone who is just using the same website you were and has no more access than you do. Please allow 3 to 5 business days to have your decision reversed. Meanwhile, you’re stuck in a TSA holding facility and the flight you were kept from boarding is non-refundable.
Classification! :-)
But you might not get a notification. It might just be that this data (“So-and-So was ACTIN’ SQUIRRELY at 20:04 on Tue Aug 2, 2022 at 38.4326713°N, 79.8409707°W”) is permanently added to some data broker’s file on you, which then gets sold to insurance companies, prospective employers, dating sites, you name it. In the EU there are (probably) laws against this, but elsewhere it’s a free-for-all. Suppose you’re a person with an unusual gait (maybe you wear orthotic shoes?) and this triggers the ACTIN’ SQUIRRELY classification most of the time. Would anyone want to hire you? Would they tell you that the reason they don’t want to hire you is that records show you to be a frequent squirrely-actor? Probably not, because that might expose them to legal liability. But internally…?
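For concreteness, the record being brokered might look something like this (a sketch; every field name is an assumption, not any broker’s actual schema):

```python
# A sketch of the kind of record a data broker might accumulate and
# resell; every field here is an assumption, not a real schema.
record = {
    "subject_id": "So-and-So",
    "label": "ACTIN' SQUIRRELY",
    "timestamp": "2022-08-02T20:04:00Z",
    "lat": 38.4326713,
    "lon": -79.8409707,
    "source": "ensemble-cam-417",  # hypothetical camera/feed ID
    "confidence": 0.67,            # two of three voters agreed
}
```

Note what’s missing: any field for your side of the story.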