Two weeks ago, a pair of researchers from Stanford University made a startling claim. Using hundreds of thousands of images taken from a dating website, they said they had trained a facial recognition system that could identify whether someone was straight or gay just by looking at them. The work was first covered by The Economist, and other publications soon followed suit, with headlines like “New AI can guess whether you’re gay or straight from a photograph” and “AI Can Tell If You’re Gay From a Photo, and It’s Terrifying.”

As you might have guessed, it’s not as straightforward as that. (And to be clear, based on this work alone, AI can’t tell whether someone is gay or straight from a photo.) But the research captures common fears about artificial intelligence: that it will open up new avenues for surveillance and control, and could be particularly harmful for marginalized people. One of the paper’s authors, Dr Michal Kosinski, says his intent is to sound the alarm about the dangers of AI, and warns that facial recognition will soon be able to identify not only someone’s sexual orientation, but their political views, criminality, and even their IQ.

With statements like these, some worry we’re reviving an old belief with a bad history: that you can intuit character from appearance. This pseudoscience, physiognomy, was fuel for the scientific racism of the 19th and 20th centuries, and gave moral cover to some of humanity’s worst impulses: to demonize, condemn, and exterminate fellow humans. Critics of Kosinski’s work accuse him of replacing the calipers of the 19th century with the neural networks of the 21st, while the professor himself says he is horrified by his findings and happy to be proved wrong. “It’s a controversial and upsetting subject, and it’s also upsetting to us,” he tells The Verge.

But is it possible that pseudoscience is sneaking back into the world, disguised in new garb thanks to AI? Some people say machines are simply able to read more about us than we can ourselves, but what if we’re training them to carry out our prejudices and, in doing so, giving new life to old ideas we rightly dismissed? How are we going to know the difference? Can AI really spot sexual orientation?

First, we need to look at the study at the heart of the recent debate, written by Kosinski and his co-author Yilun Wang. Its results have been poorly reported, with much of the hype coming from misrepresentations of the system’s accuracy. The paper states: “Given a single facial image, [the classifier] could correctly distinguish between gay and heterosexual men in 81 percent of cases, and in 71 percent of cases for women.” These rates increase when the system is given five pictures of an individual: up to 91 percent for men and 83 percent for women.

On the face of it, this sounds like “AI can tell if a man is gay or straight 81 percent of the time by looking at his photo.” (Thus the headlines.) But that’s not what the figures mean. The AI wasn’t 81 percent correct when shown random photos: it was tested on a pair of photos, one of a gay person and one of a straight person, and then asked which individual was more likely to be gay.
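To see why those are two different measurements, here is a minimal Python sketch. It is entirely synthetic (not the study’s code, data, or model): an invented scoring function stands in for the classifier, and the same scores are then evaluated both ways, once by picking the likelier person from a gay–straight pair, and once by labelling single photos against a fixed threshold.

```python
import random

random.seed(0)

# Synthetic stand-in for a classifier: each "photo" is just a label, and
# its score is a made-up base value plus noise. A higher score means the
# model leans toward "gay". All numbers here are invented for illustration.
def score(label):
    base = 0.7 if label == "gay" else 0.3
    return base + random.uniform(-0.35, 0.35)

labels = ["gay"] * 500 + ["straight"] * 500
scored = [(score(lbl), lbl) for lbl in labels]
gay_scores = [s for s, lbl in scored if lbl == "gay"]
straight_scores = [s for s, lbl in scored if lbl == "straight"]

# Protocol 1 -- the pairwise test described above: shown one gay and one
# straight photo, pick the one with the higher score.
pairs = [(g, s) for g in gay_scores for s in straight_scores]
pairwise_acc = sum(g > s for g, s in pairs) / len(pairs)

# Protocol 2 -- what the headlines implied: label each single photo "gay"
# whenever its score crosses a fixed threshold.
single_acc = sum((s > 0.5) == (lbl == "gay") for s, lbl in scored) / len(scored)

print(f"pairwise (pick the likelier of two photos): {pairwise_acc:.2f}")
print(f"single photos at a fixed threshold:         {single_acc:.2f}")
```

Run it and the pairwise figure comes out noticeably higher than the single-photo figure, even though the underlying scores are identical. That gap is why quoting the pairwise number as if it were accuracy on random photos inflates the system’s apparent power.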