Artist Stephanie Dinkins has long been a pioneer in combining art and technology in her Brooklyn-based practice. In May, she received $100,000 from the Guggenheim Museum for her groundbreaking innovations, including an ongoing series of interviews with Bina48, a humanoid robot.
For the past seven years, she has experimented with the ability of AI to realistically depict Black women, smiling and crying, using a variety of verbal cues. Her first results were mediocre, if not alarming: her algorithm produced a pink-colored humanoid cloaked in black.
“I was expecting something with a little more of a Black female look to it,” she said. And while the technology has improved since her early experiments, Dinkins found herself using indirect terms in text prompts to help AI image generators achieve the desired image, “to give the machine a chance to give me what I wanted.” But whether she uses the term “African-American woman” or “Black woman,” machine distortions that mangle facial features and hair textures occur at a high rate.
“The improvements hide some of the deeper questions we should be asking about discrimination,” Dinkins said. The artist, who is Black, added: “Bias is deeply embedded in these systems, so it becomes entrenched and automatic. If I’m working within a system that uses algorithmic ecosystems, then I want that system to know who Black people are in a nuanced way, so we can feel better supported.”
She’s not the only one asking tough questions about the troubling relationship between AI and race. Many Black artists are finding evidence of racial bias in artificial intelligence, both in the large data sets that teach machines how to generate images and in the underlying programs that run the algorithms. In some cases, AI technologies appear to ignore or distort artists’ text prompts, affecting how Black people are portrayed in images; in others, they appear to stereotype or censor Black history and culture.
Discussion of racial bias within artificial intelligence has increased in recent years, with studies showing that facial recognition technologies and digital assistants have trouble identifying the images and speech patterns of non-white people, raising broader issues of fairness and bias.
The major companies behind the AI image generators, including OpenAI, Stability AI, and Midjourney, have all committed to improving their tools. “Bias is a major issue across the industry,” Alex Beck, a spokesperson for OpenAI, said in an email interview, adding that the company continually tries to “improve performance, reduce bias, and mitigate harmful outcomes.” Beck declined to say how many employees were working on racial bias or how much money the company had allocated to the problem.
“Black people are used to not being seen,” the Senegalese artist Linda Dounia Rebeiz wrote in an introduction to her exhibition “In/Visible” on Feral File, an NFT marketplace. “When we are seen, we are used to being misrepresented.”
To prove her point during an interview with a reporter, Rebeiz, 28, asked OpenAI’s image generator, DALL-E 2, to imagine buildings in her hometown, Dakar. The algorithm produced barren desert landscapes and dilapidated buildings that, according to Rebeiz, looked nothing like the coastal houses of the Senegalese capital.
“It’s demoralizing,” Rebeiz said. “The algorithm leans towards a cultural image of Africa that the West has created. It builds on the worst stereotypes that already exist on the internet.”
Last year, OpenAI said it was establishing new techniques to diversify the images produced by DALL-E 2, so that the tool “generates images of people that more accurately reflect the diversity of the world’s population.”
An artist featured in Rebeiz’s exhibition, Minne Atairu, is a doctoral candidate at Teachers College, Columbia University, who planned to use image generators with young students of color in the South Bronx. But now she worries that the technology “could cause students to generate offensive images,” Atairu explained.
Included in the Feral File exhibition are images from her “Blonde Braid Studies,” which explore the limitations of Midjourney’s algorithm in producing images of Black women with naturally blonde hair. When the artist asked for an image of identical Black twins with blonde hair, the program instead produced a brother with lighter skin.
“That tells us where the algorithm is pulling images from,” Atairu said. “It’s not necessarily a corpus of Black people, but one geared toward white people.”
She said she was concerned that young Black children might try to generate images of themselves and see children whose skin had been lightened. Atairu recalled some of her earlier experiments with Midjourney, before recent upgrades improved its abilities. “It would generate images that were like blackface,” she said. “You would see a nose, but it wasn’t a human’s nose. It looked like a dog’s nose.”
In response to a request for comment, David Holz, founder of Midjourney, said in an email: “If anyone finds a problem with our systems, we ask that you send us specific examples so we can investigate.”
Stability AI, which provides image generator services, said it planned to collaborate with the AI industry to improve bias evaluation techniques across a greater diversity of countries and cultures. Bias, the AI company said, is caused by “overrepresentation” in its general data sets, though it did not specify whether overrepresentation of white people was the issue here.
Earlier this month, Bloomberg analyzed more than 5,000 images generated by Stability AI and found that its program amplified stereotypes about race and gender, typically depicting people with lighter skin tones as holding high-paying jobs, while labeling subjects with darker skin tones “dishwasher” and “housekeeper.”
These problems have not stopped an investment frenzy in the tech industry. A recent bullish report from the consulting firm McKinsey predicted that generative AI would add $4.4 trillion to the global economy annually. Last year, nearly 3,200 start-ups received $52.1 billion in funding, according to the GlobalData deals database.
Technology companies have fought accusations of bias in portrayals of dark skin since the early days of color photography in the 1950s, when companies like Kodak used white models in their color development. Eight years ago, Google disabled its artificial intelligence program’s ability to let people search for gorillas and monkeys through its Photos app because the algorithm was incorrectly classifying Black people into those categories. As recently as May of this year, the issue had still not been fixed. Two former employees who worked on the technology told The New York Times that Google had not trained the artificial intelligence system on enough images of Black people.
Other experts who study artificial intelligence said the bias goes deeper than the data sets, pointing to the early development of the technology in the 1960s.
“The problem is more complicated than data bias,” said James E. Dobson, a cultural historian at Dartmouth College and author of a recent book on the birth of computer vision. There was very little discussion of race during the early days of machine learning, according to his research, and most of the scientists working on the technology were white men.
“It’s hard to separate today’s algorithms from that history, because engineers are building on top of those earlier versions,” Dobson said.
To lessen the appearance of racial bias and hateful imagery, some companies have banned certain words from the text prompts users submit to generators, such as “slave” and “fascist.”
But Dobson said that companies hoping for a simple solution, such as censoring the type of prompts users can send, were avoiding the more fundamental problems of bias in the underlying technology.
“This is a worrying time as these algorithms become more complicated. And when you see garbage coming out, you have to ask yourself what kind of garbage process is still inside the model,” added the professor.
Auriea Harvey, an artist included in the Whitney Museum’s recent exhibition “Refigured,” about digital identities, ran into these bans for a recent project using Midjourney. “I wanted to question the database about what it knew about slave ships,” she said. “I got a message that Midjourney would suspend my account if I continued.”
Dinkins ran into similar problems with the NFTs she created and sold, showing how enslaved people and settlers brought okra to North America. She was censored when she tried to use a generative program, Replicate, to make drawings of slave ships. She eventually learned to outwit the censors by using the term “pirate ship.” The image she received was an approximation of what she wanted, but it also raised troubling questions for the artist.
“What is this technology doing to history?” Dinkins asked. “You can see that someone is trying to correct for bias, but at the same time that erases a part of the story. I find those deletions as dangerous as any bias, because we’re just going to forget how we got here.”
Naomi Beckwith, chief curator of the Guggenheim Museum, credited Dinkins’ nuanced approach to issues of representation and technology as one reason the artist received the museum’s first prize for Art and Technology.
“Stephanie has become part of a tradition of artists and cultural workers breaking through these general, all-encompassing theories of how things work,” Beckwith said. The curator added that her own initial paranoia about AI programs replacing human creativity was greatly reduced when she realized that these algorithms knew next to nothing about black culture.
But Dinkins isn’t quite ready to give up on the technology. She continues to use it for her art projects, with skepticism. “Once the system can generate a high-fidelity image of a Black woman crying or smiling, can we rest?”