When will we finally learn we cannot predict people’s character from their faces?

Researchers recently learned that US Immigration and Customs Enforcement used facial recognition on millions of driver’s licence photographs without the licence holders’ knowledge, the latest revelation about governments employing the technology in ways that threaten civil liberties.

But the surveillance potential of facial recognition — its ability to create a “perpetual lineup” — isn’t the only cause for concern. The technological frontiers being explored by questionable researchers and unscrupulous start-ups recall the discredited pseudosciences of physiognomy and phrenology, which purport to use facial structure and head shape to assess character and mental capacity.

Artificial intelligence and modern computing are giving new life and a veneer of objectivity to these debunked theories, which were once used to legitimise slavery and perpetuate Nazi race “science”. Those who wish to spread essentialist theories of racial hierarchy are paying attention. In one blog, for example, a contemporary white nationalist claimed that “physiognomy is real” and “needs to come back as a legitimate field of scientific inquiry”.

New applications of facial recognition — not just in academic research, but also in commercial products that try to guess emotions from facial expressions — echo the same biological essentialism behind physiognomy.

Composite photographs, new and old

One of the pioneers of 19th-century facial analysis, Francis Galton, was a prominent British eugenicist. He superimposed images of men convicted of crimes, trying to find through “pictorial statistics” the essence of the criminal face. Galton was disappointed with the results: He was unable to discern a criminal “type” from his composite photographs. This is because physiognomy is junk science — criminality is written neither in one’s genes nor on one’s face. He also tried to use composite portraits to determine the ideal “type” of each race, and his research was cited by Hans F K Günther, a Nazi eugenicist who wrote a book that was required reading in German schools during the Third Reich.

Galton’s tools and ideas have proved surprisingly durable, and modern researchers are again contemplating whether criminality can be read from one’s face. In a much-contested 2016 paper, researchers at a Chinese university claimed they had trained an algorithm to distinguish criminal from non-criminal portraits, and that “lip curvature, eye inner corner distance, and the so-called nose-mouth angle” could help tell them apart. The paper includes “average faces” of criminals and non-criminals reminiscent of Galton’s composite portraits.

The paper echoes many of the fallacies in Galton’s research: that people convicted of crimes are representative of those who commit them (the justice system exhibits profound bias), that the concept of inborn “criminality” is sound (life circumstances drastically shape one’s likelihood of committing a crime) and that facial appearance is a reliable predictor of character.

It’s true that humans tend to agree on what a threatening face looks like. But Alexander Todorov, a psychologist at Princeton, writes in his book Face Value that the relationship between a face and our sense that it is threatening (or friendly) is “between appearance and impressions, not between appearance and character”. The temptation to think we can read something deeper from these visual stereotypes is misguided — but persistent.

In 2017, Stanford professor Michal Kosinski was an author of a study claiming to have invented an AI “gaydar” that could, when presented with pictures of gay and straight men, determine which were gay with 81 per cent accuracy. (He told The Guardian that facial recognition might be used in the future to predict IQ as well.)

The paper speculates about whether differences in facial structure between gay and straight men might result from underexposure to male hormones, but neglects a simpler explanation, wrote Blaise Agüera y Arcas and Margaret Mitchell, AI researchers at Google, and Todorov in a Medium article. The research relied on images from dating websites. It’s likely that gay and straight people present themselves differently on these sites, from hairstyle to how tanned they are to the angle at which they take their selfies, the critics said. But the paper focuses on ideas reminiscent of the discredited theory of sexual inversion, which maintains that homosexuality is an inborn “reversal” of gender characteristics — gay men with female qualities, for example.

Echoes of the past

Parallels between the modern technology and historical applications abound. A 1902 phrenology book showed how to distinguish a “genuine husband” from an “unreliable” one based on the shape of his head; today, an Israeli start-up called Faception uses machine learning to score facial images using personality types like “academic researcher,” “brand promoter,” “terrorist” and “paedophile”.

Faception’s marketing materials are almost comical in their reduction of personalities to eight stereotypes, but the company appears to have customers, indicating an interest in “legitimising this type of AI system”, said Clare Garvie, a facial recognition researcher at Georgetown Law.

In the early 20th century, Katherine M H Blackford advocated using physical appearance to select among job applicants. She favoured analysing photographs over interviews to reveal character, Todorov writes. Today, the company HireVue sells technology that uses AI to analyse videos of job applicants; the platform scores them on measures like “personal stability” and “conscientiousness and responsibility”.

Facial recognition programs are being piloted at American universities and Chinese schools to monitor students’ emotions. This is problematic for myriad reasons: Studies have shown no correlation between student engagement and actual learning, and teachers are more likely to see black students’ faces as angry, a bias that might creep into an automated system.

Classification and surveillance

The similarities between modern, AI-driven facial analysis and its earlier, analog iteration are eerie. Both, for example, originated as attempts to track criminals and security targets.

Alphonse Bertillon, a French policeman and facial analysis pioneer, wanted to identify repeat offenders. He invented the mug shot and noted specific body measurements like head length on his “Bertillon cards”. With records of more than 100,000 prisoners collected between 1883 and 1893, he identified 4,564 recidivists.

Bertillon’s classification scheme was superseded by the fingerprinting system, but the basic idea — using bodily measurements to identify people in the service of an intelligence apparatus — was reborn with modern facial recognition. Progress in computer-driven facial recognition has been spurred by military investment and government competitions.

Emotional ‘intelligence’

Facial analysis services are commercially available from providers like Amazon and Microsoft. Anyone can use them at a nominal price — Amazon charges one-tenth of a cent to process a picture — to guess a person’s identity, gender, age and emotional state. Other platforms like Face++ guess race, too. But these algorithms have documented problems with non-white, non-male faces. And the idea that AI can detect the presence of emotions — most commonly happiness, sadness, anger, disgust and surprise — is especially fraught. Customers have used “affect recognition” for everything from measuring how people react to ads to helping children with autism develop social and emotional skills, but a report from the AI Now Institute argues that the technology is being “applied in unethical and irresponsible ways”.
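To illustrate how little effort such guesses require, here is a minimal sketch (not code from the article, and not a vendor endorsement) that sends a photo to Amazon Rekognition’s DetectFaces API via the boto3 library and reads back the service’s age, gender and emotion estimates. The file name, the helper function and the need for configured AWS credentials are assumptions for the example; the “emotion” it prints is exactly the kind of inference the article argues is scientifically fraught.

```python
# A sketch of a facial-analysis API call, assuming boto3 is installed and
# AWS credentials are configured. "portrait.jpg" is a placeholder path.
import boto3


def guess_attributes(image_path: str) -> list[dict]:
    """Send a local photo to Rekognition and return its per-face guesses."""
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        image_bytes = f.read()

    # Attributes=["ALL"] asks the service for age range, gender and emotion estimates.
    response = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])

    results = []
    for face in response["FaceDetails"]:
        # Rekognition returns several candidate emotions; keep the highest-confidence one.
        top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
        results.append(
            {
                "age_range": (face["AgeRange"]["Low"], face["AgeRange"]["High"]),
                "gender_guess": face["Gender"]["Value"],
                "emotion_guess": top_emotion["Type"],
                "emotion_confidence": round(top_emotion["Confidence"], 1),
            }
        )
    return results


if __name__ == "__main__":
    for guess in guess_attributes("portrait.jpg"):
        print(guess)
```

A few lines like these, priced at fractions of a cent per image, are all that stands between a photograph and an automated judgment about the person in it.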

Affect recognition draws from the work of Paul Ekman, a modern psychologist who argued that facial expressions are an objective way to determine someone’s inner emotional state, and that there exists a limited set of basic emotional categories that are fixed across cultures. His work suggests that we can’t help revealing these emotions. Ekman’s work has been criticised by scholars who say emotions cannot be reduced to such easily interpretable — and computationally convenient — categories.

Much as the 19th-century technologies of photography and composite portraiture lent “objectivity” to pseudoscientific physiognomy, computers and artificial intelligence today supposedly distance facial analysis from human judgment and prejudice. In reality, algorithms that rely on a flawed understanding of expressions and emotions can just make prejudice more difficult to spot.

© The New York Times 2019

First Published: Fri, July 12 2019. 23:34 IST
