“Training Humans,” a photography exhibition unveiled this week at the Fondazione Prada museum in Milan, shows how artificial-intelligence systems have been trained to “see” and categorize the world. Image courtesy of Fondazione Prada; Marco Cappelletti © Trevor Paglen


Trevor Paglen's "ImageNet Roulette" Examines How A.I. Might Label You

The New York Times reviews the digital art project and viral selfie app

Written by Cade Metz for The New York Times

Facial recognition and other A.I. technologies learn their skills by analyzing vast amounts of digital data. Drawn from old websites and academic projects, this data often contains subtle biases and other flaws that have gone unnoticed for years. ImageNet Roulette, designed by the American artist Trevor Paglen and a Microsoft researcher named Kate Crawford, aims to show the depth of this problem.

“We want to show how layers of bias and racism and misogyny move from one system to the next,” Mr. Paglen said in a phone interview from Paris. “The point is to let people see the work that is being done behind the scenes, to see how we are being processed and categorized all the time.”

Unveiled this week as part of an exhibition at the Fondazione Prada museum in Milan, the site focuses attention on a massive database of photos called ImageNet. First compiled more than a decade ago by a group of researchers at Stanford University in Silicon Valley, ImageNet played a vital role in the rise of “deep learning,” the mathematical technique that allows machines to recognize images, including faces. Packed with more than 14 million photos pulled from all over the internet, ImageNet was a way of training A.I. systems and judging their accuracy. By analyzing various kinds of images — such as flowers, dogs and cars — these systems learned to identify them.
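
The mechanism described above — a system that can only ever repeat the labels its training set gave it — can be illustrated with a toy sketch. Here the “images” are invented three-number feature vectors, a simple nearest-centroid rule stands in for deep learning, and the category names are examples from the article, not actual ImageNet classes:

```python
# Minimal sketch: a classifier's vocabulary is exactly the set of labels
# its creators put into the training data, nothing more.

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Toy training set: whatever labels the dataset's makers chose become
# the only labels the system can ever assign.
training_set = {
    "flower": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "dog":    [[0.2, 0.8, 0.3], [0.3, 0.9, 0.2]],
    "car":    [[0.1, 0.2, 0.9], [0.2, 0.1, 0.8]],
}
centroids = {label: centroid(vs) for label, vs in training_set.items()}

def classify(image):
    """Return the training label whose centroid is nearest the input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], image))

print(classify([0.85, 0.15, 0.15]))  # prints "flower"
```

The point of the sketch is that the prediction is entirely downstream of the labeling: swap a neutral category name for a charged one and the system will apply it just as confidently, which is the dynamic ImageNet Roulette makes visible.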

What was rarely discussed, even among people familiar with A.I., is that ImageNet also contained photos of thousands of people, each sorted into a category of their own. These included straightforward tags like “cheerleaders,” “welders” and “Boy Scouts,” as well as highly charged labels like “failure, loser, non-starter, unsuccessful person” and “slattern, slut, slovenly woman, trollop.” By creating a project that applies such labels, seemingly innocuous or not, Mr. Paglen and Ms. Crawford are showing how opinion, bias and sometimes offensive points of view can drive the creation of artificial intelligence.

Read the full review, written by Cade Metz, in The New York Times.
  • Press — Trevor Paglen's "ImageNet Roulette" featured in The New York Times, Sep 22, 2019