Harry Ven

The problem with facial emotion recognition technology

Recent research by psychologist and author Lisa Feldman Barrett found that people are not consistent about what an individual’s facial expression means.

The study says that “people not only use different facial movements to communicate different instances of the same emotion category (someone might scowl, frown, or even laugh when they’re portraying anger), they also employ similar facial configurations to communicate a range of instances from different emotion categories (a scowl might sometimes express concentration, for example).”

Emotions are co-created experiences.

We cannot observe emotions independent of the context that triggers them. For example, is being angry with yourself the same as being angry at your spouse? What happens when the anger is moral anger versus when it’s based on a past trauma?



So if you assess my emotion to be anger, the relevance of that emotion is lost without the whom and the why of the anger. Depending on these variables, the quality of the anger and the actions that follow from it change.

The intention behind identifying emotion is to predict how a person is feeling and, from that, what range of actions they might take in that state.

Now, consider cultural differences. An angry person in India might do something totally different from an angry person in Finland. One of my friends paints when she is angry, and her “anger” paintings are deeply evocative. I know other people who become violent when they are angry. How would we know what any given person-emotion combination is capable of?

By treating emotions as “universal” and building technology that interprets what a person’s emotions might imply, we are playing a game of “psychological roulette” that could be very dangerous for the people being assessed.
