One day in 2020, Abeba Birhane found herself on Wikipedia, scouring a list of slurs. At the time, Birhane was pursuing a PhD in cognitive science at University College Dublin and was trying to see how many of those slurs appeared in the image descriptions for a massive data set that's often used to train AI systems.
She had already turned up plenty of matches for the obvious filth, but Birhane was running out of ideas for what to search next. "The reason I went to Wikipedia is because I couldn't think of enough slur words," she says.
As the list of terms grew, so did Birhane's findings, until she had amassed enough evidence to co-author a paper detailing just how rampant derogatory terms were within this important bit of technological infrastructure. That paper prompted the Massachusetts Institute of Technology, which housed the data set, to take it offline, and cemented Birhane's position as a leading auditor of the data sets that feed the world's increasingly sophisticated AI models. Now Birhane is continuing that work at a newly launched independent research lab of her own, called the AI Accountability Lab.
Birhane's research focuses on the fact that AI models are trained on massive quantities of unfiltered data scraped from the open internet, much of it drawn from hateful 4chan boards and misogynistic porn sites. Without proper safeguards in place, those AI models can end up replicating the same hate and misogyny when people prompt them for answers later on. In one recent paper, Birhane and her co-authors found that the bigger data sets get, the more likely the AI models trained on them are to produce biased results, like classifying Black people as criminals.