Who’s watching? Carnegie Museum exhibition explores dark side of AI


In 1984, Rockwell sang a song called “Somebody’s Watching Me,” a paean to paranoia that included the line, “I always feel like somebody’s watching me.”
Maybe it was just a timely shout-out to George Orwell’s dystopian novel “1984” — in which somebody always was watching — or maybe Rockwell really was onto something.
Photography actually has been used since the late 19th century to keep an eye on people and to keep them in line, according to the brochure for “Trevor Paglen: Opposing Geometries,” a new exhibition at the Carnegie Museum of Art in Pittsburgh’s Oakland section.
In this exhibition, Paglen examines how images are weaponized against humans and the environment through the use of artificial intelligence. The American-born artist, who splits his time between New York City and Berlin, Germany, is known for work tackling the issues of mass surveillance and data collection — and the dangers inherent in them.
“When we first started thinking about this kind of an exhibition, his was the first name that came to mind,” said Dan Leers, the Carnegie’s curator of photography. “He’s been thinking about big data longer than anyone else.”
As visitors enter the lobby of the museum’s Forbes Avenue entrance, they are greeted by “CLOUD #902,” a 16-by-32-foot site-specific photographic rendering of a cloud overlaid with the output of an algorithm trained to identify circles.
The random way in which the circles are applied “illustrates the rigidity of how machines see the world,” Leers said. The title of the piece refers to both the cloud as a natural phenomenon and to the nebulous database to which we entrust our most important information.
Landscapes and faces
Upstairs, Gallery 1 hosts a series of Paglen’s photographs of landscapes and human faces, some of which are making their museum debut, that also explore machines making interpretations through the use of algorithms, Leers said.
Photos of landscapes from the American West call attention to how the use of AI and automated image-making can accelerate the exploitation of natural resources. The portraits were originally used by private institutes and the United States government in early facial recognition research dating back to the mid-20th century — without the knowledge or permission of the subjects.
“It raises questions about how that is still happening today,” Leers said. “There was a need for faces to train those early computers. These images raise awareness of how faces are used today, the issue of convenience versus what we are sacrificing.”
To access the final installation of the Paglen exhibition, visitors must proceed through the galleries to Gallery 16, which is empty except for a pedestal holding a small sculpture titled “Autonomy Cube.”
The sculpture actually provides a Wi-Fi hot spot from which visitors can log on to Tor, the open-source anonymity network, where their digital movements cannot be tracked and data cannot be gathered. The Plexiglas box surrounding it is “a metaphor for the transparency the artist seeks to bring to machine operations,” according to the exhibition brochure.
“This is a first for the museum,” Leers said. “It was important that we create a digital safe space, to give visitors a space where they are not subjected to surveillance.”
Special programs
The exhibition, running through March 14, was set up deliberately in separate spaces “to take visitors on a tour of the entire museum,” Leers said. It was organized by Leers with Taylor Fisch, project curatorial assistant, as part of the museum’s Hillman Photography Initiative.
Special programming associated with “Opposing Geometries” includes:
• In Conversation Online: Trevor Paglen and Dan Leers, 7:30-8:30 p.m. Sept. 17 — Virtual tour of the exhibition.
• Algorithms & Social Spaces Workshop, noon-1:30 p.m. Oct. 14 — By giving participants the opportunity to create their own social algorithms and reflect on the impacts those algorithms have, this workshop shows the ways in which algorithms used in social spaces can exclude and prioritize certain people.
• AI & Speculative Fictions: Online Workshop, noon-1:30 p.m. Oct. 28 — Participants will get a crash course in the basics of AI and then create a collaborative, speculative story depicting a future of technology that is equitable and inspiring.
• Machines That Learn: Online Workshop, noon-1:30 p.m. Nov. 11 — Session will unpack the concepts and functions of machine learning within AI, looking at technology that tries to predict what we want based on our previous decisions and preferences.
Participation in these programs is free; for registration information, visit cmoa.org.
A podcast series will be released over successive weeks beginning in October, with each episode spotlighting a different facet of the conversation around artificial intelligence, from biometrics to racial bias to navigating contact in a post-COVID-19 world.