We Need to Talk about Ethics and Computer Vision

Computer Vision is a subfield of artificial intelligence and machine learning that deals with the analysis of images. Just as AI teaches machines to “think”, computer vision teaches them to “see”. The potential applications for this technology are countless. It is already used in retail, the automotive industry, robotics, the healthcare sector and many more places. Because there are so many possible uses for this technology, there is considerable interest and growth in this relatively young field. In 2020 the global market for computer vision was valued at approximately 11.32 billion USD, a number that is projected to increase dramatically in the coming years.

It’s not all sunshine and daisies, though. Given the impact these technologies will inevitably have on the world, there are significant ethical concerns about their development and use.

Bias in the Machine

Bias in machine learning can be conceptualised in two ways. From a technical point of view, bias means that a model is systematically wrong. From a social point of view, it means that a model reflects stereotypes or preconceptions. In practice the two are intertwined: a model usually becomes biased because the data used to train it is incomplete or skewed in ways that mirror those stereotypes. This kind of bias may be introduced accidentally, but it will make the model inaccurate or lead it to incorrect conclusions, which can have sweeping real-world consequences.

Computer vision technologies are used in many contexts where inaccuracy or errors can impact lives. Research shows that some models are less accurate at identifying people with darker skin tones than people with lighter ones. There is also a general colourism bias: models will often associate lighter skin with better qualities and traits than darker skin. This is obviously harmful, but the treatment of skin colour as a simple light-to-dark scale also flattens the complexities of ethnicity and race.
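The kind of accuracy gap found by audits like Gender Shades can be illustrated with a minimal sketch. The data and group labels below are entirely invented for illustration; a real audit would disaggregate a classifier’s error rates across carefully collected demographic subgroups.

```python
# Minimal sketch of a per-group accuracy audit (hypothetical data).
# Audits such as Gender Shades compare error rates across demographic
# subgroups; here the predictions and group labels are made up.

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} for predictions split by group label."""
    totals, correct = {}, {}
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (t == p)
    return {g: correct[g] / totals[g] for g in totals}

# Toy example: the classifier is right 3 times out of 4 for group "A",
# but only 1 time out of 2 for group "B".
y_true = ["face", "face", "face", "face", "face", "face"]
y_pred = ["face", "face", "face", "none", "face", "none"]
groups = ["A", "A", "A", "A", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'A': 0.75, 'B': 0.5}
```

An aggregate accuracy of about 67% would hide the disparity entirely; only disaggregating by group makes the gap visible, which is exactly the point such audits make.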

Chart showing the accuracy of different companies gender classification AIs. Source: gendershades.org

It’s not just skin colour. Studies have also shown that models drawing on image databases make associations between race, gender and occupation that reflect misogynistic and racist stereotypes. Pictures of women are matched with “feminine” terms and men with “masculine” terms. And if a hand is holding a power tool, a model is more likely to identify the tool as a gun if that hand has dark skin.

It takes no genius to see how this kind of biased technology can be incredibly harmful when applied in the real world. While some of it might seem harmless – what does it matter that a social media filter helps perpetuate racist beauty standards? – some of these models may be used in surveillance, border control and medical care: contexts where incorrect identification of individuals and objects has the potential to ruin lives. This is why we need to have conversations about how to recognise and avoid such bias, and how to mitigate the harms it might cause.

Chart showing the potential harms of algorithmic decision-making. Source: gendershades.org

Beginning the Conversation

On some level it is not surprising that there is little conversation, and even fewer frameworks, about ethics in the computer vision field. The field is still young, and so far there has been little time or urgency to develop safeguarding or ethical procedures to limit or guide the rapid expansion of research and development.

But as computer vision technologies see wider implementation, we need to develop these conversations and frameworks. We have to begin talking about the social consequences this technology may have – whether through bias, abuse or exploitation.

The Computer Vision and Pattern Recognition Conference (CVPR) is an annual event that attracts big-name companies looking to scout AI talent. Ahead of the event, the organisers published a set of Ethics Guidelines in which they “strongly encourage authors to discuss ethical and societal consequences of their work”. While this is only a suggestion to the researchers submitting papers, some of them bristled against it.

One researcher argued that it was simply not their job to think about this kind of thing; another argued that it was not the technology they were researching that was harmful, but how it was applied. Some of the pushback might simply be an unwillingness to do extra work that feels to have no concrete connection to their research. Another contributing factor is that, especially in corporate environments, developers are pushed to build systems and monetise them as fast as possible, leaving little space for ethical considerations.

A Need for Further Education

Coming from an anthropological background, this kind of thinking baffles me. I was taught to always consider how our research might impact the people and communities we worked with. Any fieldwork had to be assessed and approved by an ethics committee. This kind of ethical consideration was an integral part of my education, baked into it with all the other components of my degree. Perhaps this is what the computer vision field needs too. Not just nebulous top-down suggestions to “discuss ethical and societal consequences (…) in a concrete manner”, but an active effort to educate people on how to consider and think about these consequences.

As the pushback the CVPR organisers received shows, it may be quite a while yet before these conversations on ethics become mainstream. There needs to be an active effort to incorporate ethics into the education of people entering the field, as well as into academic research and corporate development. And there remains a need for ethical frameworks that can guide researchers in thinking about the impacts of their work. To establish ethics as a cornerstone of computer vision, and of AI in general, there needs to be a concerted effort at all levels of the industry.


Sources

Computer Vision Market Size & Share Report, 2021-2028

Ethical Issues in Computer Vision and Strategies for Success — Innodata

Ethics Guidelines | CVPR 2022

Many computer vision interns lack AI ethics training – Protocol

How computer vision works — and why it’s plagued by bias | VentureBeat

How to avoid bias in computer vision models

Researchers show that computer vision algorithms pretrained on ImageNet exhibit multiple, distressing biases | VentureBeat

The Bias Problem in Computer Vision and Artificial Intelligence

What is Computer Vision? | IBM