A panel of senior scientists with backgrounds in neuroscience, psychology, computer science, electrical engineering, biology, anthropology, psychiatry, paediatrics, and public affairs spent two years reviewing over 1,000 research papers on the topic.
The resulting paper is the most comprehensive analysis of its kind to date, and it concludes: "It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown."
Most current emotion-reading tech rests on the assumption that people express anger, disgust, fear, happiness, sadness, and surprise on their faces in consistent, recognisable ways.
Neuroscientist Lisa Feldman Barrett, author of the book How Emotions Are Made and a co-author of the paper, added: "People scowl when angry, on average, approximately 25 percent of the time, but they move their faces in other meaningful ways when angry. They might cry, or smile, or widen their eyes and gasp. And they also scowl when not angry, such as when they are concentrating or when they have a stomach ache. Similarly, most smiles don't imply that a person is happy, and most of the time, happy people do something other than smile."
The report has attracted the attention of the American Civil Liberties Union, which said: "This paper is significant because an entire industry of automated purported emotion-reading technologies is quickly emerging."
The ACLU said that the market for emotion recognition software is forecast to reach at least $3.8 billion by 2025. Emotion recognition (aka 'affect recognition' or 'affective computing') is already being incorporated into products for purposes such as marketing, robotics, driver safety, and (as we recently wrote about) audio 'aggression' detectors. The tech is even being mooted as a way to help police decide whether you are being aggressive, and potentially whether to shoot you.