Timnit Gebru was one of Google’s most prominent Black female employees until she was fired last year from her position as co-lead of the company’s Ethical AI team. In the months since her dramatic departure, Google has faced a barrage of criticism, particularly on Twitter. Gebru says she was fired via email after refusing to retract her paper on large language models. Now she has founded a new research institute, DAIR, devoted to the issues she believes were being overlooked at Google.
According to its press release, the Distributed Artificial Intelligence Research (DAIR) Institute is an independent, community-rooted organization established to counter Big Tech’s pervasive influence on AI research, development, and deployment. The MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundations, and Rockefeller Foundation collectively contributed $3.7 million to DAIR.
DAIR intends to document AI’s harms while also developing a vision for AI applications that can benefit the communities affected. Gebru made her name in AI by co-authoring research on facial recognition software’s bias against people with darker skin, which prompted major tech companies such as Amazon to change their practices. She was fired from Google a year ago after writing a research paper criticizing the company’s profitable AI work on large language models, which can assist with conversational search queries.
The backlash at Google brought to light the inherent conflicts that arise when tech corporations fund or employ academics to investigate the ramifications of technology they intend to profit from. DAIR, according to Gebru, will be able to question AI’s potential drawbacks because it will be free of both the academic politics and the publication pressure that can stifle university research. In other words, the emphasis will still be on publishing academic papers, but without the constant pressure of traditional academia or the paternalistic oversight of multinational companies restricting the researchers.
That stands in contrast to Google’s move, following the Gebru incident, to impose additional restrictions on the topics its researchers are allowed to investigate.
DAIR will also demonstrate AI applications that are unlikely to be built elsewhere, Gebru says, with the goal of inspiring others to push the technology in new directions. One such effort is building a public data set of aerial images of South Africa to investigate how apartheid’s legacy remains inscribed in land use. A preliminary analysis of the images found that most vacant land developed between 2011 and 2017 was turned into wealthy residential districts in densely populated areas historically restricted to non-white people, where many poor people still live.
Later this month, DAIR will make its formal debut in academic AI research with a paper on that project at NeurIPS, the world’s most prestigious AI conference. Raesetje Sefala, DAIR’s first research fellow, is the lead author of the paper, which also includes contributions from outside researchers.
Gebru said she hopes the funding will let DAIR break free of Big Tech’s constraints, such as the exclusion of outspoken researchers and the habit of evaluating an AI system’s potential harms only after it is in use. Accordingly, DAIR will prioritize scrutiny of commercial AI projects, such as large language models, whose negative impacts are typically investigated only after they have been deployed in the real world. Gebru’s ill-fated paper likewise examined the known problems of large language models such as GPT-3, which was taking the tech world by storm last year. She said that ideas such as AI applications that do not rely on massive data sets, or that pursue less profit-oriented goals such as language revitalization, have received little attention.