How far can we trust science? – A meditation on the 2023 Hubert Butler Essay Prize

“But our trust in science has to be bounded by the fact that science is not a thing that is, but rather it is a thing that people do together.”

Shane Conneely, 2023 Hubert Butler Prize

Hubert Butler (1900-1991) is primarily remembered for his work as an essayist, writing extensively about history, politics, and religion – always situating his native Ireland in a European context. Much of his work was likely inspired by his travels through central and eastern Europe, as well as his efforts to save Austrian Jews from Nazi persecution during World War II.

It’s this broad intellectual and historical legacy that the Hubert Butler Essay Prize seeks to recognise and commemorate. Established as an annual writing contest in 2018, each year’s prompt harkens back to Butler’s work, covering topical themes such as the abuse of political power, global citizenship, and individual freedom within the community. This year’s theme was “how far can we trust science?”.

In Science we trust?

There are a thousand ways to approach that question. An epistemological approach might examine the scientific process and how it informs what we know about our universe. A more anthropological perspective might question the cultural assumptions behind the scientific method, and the people performing it. Practically, and most pressingly in a world swept up by predictive algorithms and artificial “intelligence”, we can also ask whether we can really trust technology (built on “science”) to make decisions for us.

This year’s winning essay, authored by Shane Conneely, provides a sweeping overview of some of these approaches. Anyone interested in reading the essay themselves can find it here. One of his primary observations is located at the most mundane interface of science with daily life – technology. Recalling Arthur C. Clarke[1], he writes about how we blindly trust the technology our daily life is suffused with, even if we lack the knowledge to explain it. Understanding the basic principle behind how a fridge or a microwave works would not qualify anyone to build one themselves.

Scientists are only human

Conneely’s entire essay is insightful, but it feels most lucid when he writes about the need to recontextualise science as a process rather than a thing. He asks not only how far we can trust science, but how far we can trust the scientists too. After all, he points out, science is but a tool, and any tool can become a weapon if mishandled.

While scientists might be “visited by Genius”, they are only human, and as such not infallible. The list of scientists who have held beliefs or worked on projects that ultimately harmed other humans is long indeed.

Conneely mentions James Watson’s ostracization from the scientific community for his racist beliefs, and Francis Galton, who was both a scientific pioneer and the father of eugenics. In fact, in 2020 my alma mater renamed a building and two lecture theatres which were formerly named after Galton and his successor, Karl Pearson[2].

There are also those who misinterpret scientific findings to fit within their beliefs, or to exploit others for their own gain. Examples of this might be Richard Lynn, prominent conspiracy theorist and racist, and Andrew Wakefield, the father of the anti-vaccination movement.

This summer’s release of Christopher Nolan’s film about J. Robert Oppenheimer throws another category into stark relief: scientists who work to develop a technology that can be (mis)used to hurt and kill others. The atomic bomb is the starkest example of this kind of science, which is deeply entangled in world politics. Did Oppenheimer really know the harm that his work could cause? Did he care?

A Thing that we do

Beyond the fallibility of humans, we finally have to consider what science even is. As Conneely points out, it’s not some perfect, immutable thing that simply is – it’s a thing we do. It’s a process, a language, a mode of thinking and behaving. And in that sense, it is always vulnerable to human desires, urges, and errors.

When technology appears to be discriminatory or racist, this is very rarely an intended or inherent quality. Such systems simply carry an imprint of the (unconscious) biases their human developers held. This is why I keep mentioning, in these posts, the need for more ethics oversight and training within STEM, especially in the AI and machine learning industry. In short, Conneely is right:

“Science is only one of those things that we do, and we can only trust it as far as we can trust each other.”



Hubert Butler Essay Prize: What happened to Europe without frontiers? | LSE BREXIT

Hubert Butler: Ireland’s George Orwell – The Irish Times

UCL denames buildings named after eugenicists | UCL News – University College London

We Need to Talk about Ethics and Computer Vision | Hibernian Recruitment

[1] Clarke famously wrote that “any sufficiently advanced technology is indistinguishable from magic”.

[2] Pearson was the first Chair of Eugenics at UCL, a post endowed by a bequest from Galton. The renaming is part of a more recent effort by the university to investigate and disavow its eugenicist history.