“I would give a talk at the UN, and then the next day they would wonder if I could even be a manager.”

Timnit Gebru to Vice

You may have heard of Timnit Gebru. If you are interested in ethics and AI, you may know her as a high-profile researcher and outspoken advocate for ethics in AI. Maybe you have heard of the ground-breaking Gender Shades study she co-authored, which highlighted racial and gender biases in facial recognition software. She is also the co-founder of Black in AI – a project that seeks to address the “dire lack of black professionals in AI”. Or you might have heard her name when Google unceremoniously fired her in late 2020.

The Firing

I first stumbled across Gebru’s name while researching a previous blogpost, in relation to ethics in the field of computer vision. Digging deeper, I found that she had been a researcher and co-lead of an Ethical AI team at Google. In her time there she built a team that included many people from minority groups in the field – as a black woman, Gebru belongs to a very small minority in AI herself.

However, according to Gebru and her team co-lead, Margaret Mitchell, it was always a battle to get any kind of internal recognition at Google. Gebru had previously been called “difficult” by higher-ups when pushing for her team to be consulted on relevant matters. The situation finally escalated when she submitted a research paper to a conference. In the paper, she and her co-authors critically discuss the potential risks and harms of large language models – a technology that has generated enormous interest and that many companies, Google among them, are looking to profit from. Unsurprisingly, she was asked to either withdraw the paper or remove her name from it.

Gebru wrote back asking for clarification, and simply received another email informing her that her resignation had been accepted. She maintains that she never submitted a letter of resignation and had instead been fired and gaslit about it. Her team and many others at Google protested, signing a petition to reinstate and promote her. Jeffrey Dean, Google’s Head of AI, wrote an email in which he sought to address the “concerns over Gebru’s firing”. In it he wrote that the paper did not meet Google’s standards for publication, in part because it “ignored too much relevant research” – which seems like a short-sighted criticism of a paper citing 128 publications.

An Uphill Battle

My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review.

Nicolas Le Roux on Twitter

Given that Google has considerable interests in the growing AI field, it seems like a stretch to argue that its censorship comes from a place of pure academic concern. Nicolas Le Roux, another AI researcher at Google, commented that his submissions were “always checked for the disclosure of sensitive material, never for the quality of the literature review”. Employees of Google Research also voiced concerns that researchers praising large language models did not receive the same scrutiny as those writing about them critically. In this context, the episode looks less like quality control and more like censorship – a failure to accept, explore and incorporate criticism of the company’s research.

The whole situation looks bad for Google. Beyond the company’s failure to self-police and accept critical scholarship, it looks like Gebru had to fight structural racism and misogyny on top of her uphill battle against Google’s commercial interests. In interviews, Gebru spoke about how she had to fight for her team’s inclusion in relevant initiatives, and how her work had to gain traction externally before it received any recognition within the company. She also described a concerning level of cognitive dissonance in how she was treated: respected outside the company, yet constantly having her competence questioned inside it. In a Vice interview she summed this up neatly: “I would give a talk at the UN, and then the next day they would wonder if I could even be a manager”.

When asked whether she would rejoin the company, Gebru said no. The company was still the same: no steps had been taken to change the power structures she had criticised, or the decision-making processes that led to her firing.

And What’s Next?

But Gebru has not been idle. Since her firing from Google, she has founded a new organisation: the Distributed Artificial Intelligence Research Institute (DAIR). The goal of the institute is to produce interdisciplinary research free from the pressures of academia and big business – to let researchers develop their work over time, think through its impacts holistically, and consider how to communicate their findings in accessible ways.

Knowing Gebru’s background in research, her continuing advocacy for underrepresented groups, and her desire to improve ethical frameworks for AI, this is not a surprising turn of events. It is refreshing to see someone with such high visibility stick to their convictions and receive the support and means to continue their research – especially as that research has the potential for such a high impact on how we keep developing AI technology.


A second Google A.I. researcher says the company fired her. – The New York Times

About Google’s approach to research publication – Google Docs

AI Ethics Researcher Timnit Gebru’s Firing Doesn’t Look Good For Google – YouTube

Black in AI

Gender Shades

The DAIR Institute

The two-year fight to stop Amazon from selling face recognition to the police | MIT Technology Review

Timnit Gebru Is Building a Slow AI Movement – IEEE Spectrum

Timnit Gebru was critical of Google’s approach to ethical AI – The Washington Post

We read the paper that forced Timnit Gebru out of Google. Here’s what it says. | MIT Technology Review