What do business leaders need to know about AI Ethics and Risk?

Artificial intelligence is currently the hottest technology on the market. It has the potential to dramatically reshape how we work (as developers and users of technology) and play (as consumers and creators).

As AI is becoming more widely used, the risks associated with AI are also being laid bare. But how do we go about understanding those risks? And perhaps more importantly, how do we safeguard implementation and future use for both businesses and consumers?

Understanding Risk

“No risk, no reward” is an oft-repeated piece of received wisdom. Putting yourself on the line a little and accepting risk is seen as the key to success. But does that hold true for AI implementation?

The potential prize for harnessing AI can be great – from optimisation of supply chains to improvements in automation and productivity. But no new technology is ever risk-free. While AI ethics used to be a niche topic of academia, it is rapidly emerging into the mainstream.

Major corporations like Google and Microsoft have established dedicated groups to deal with ethical concerns. The United States Department of Defense (DOD) issued recommendations on ethical AI use in 2019, and the US National Institute of Standards and Technology (NIST) published its AI Risk Management Framework in 2023. UNESCO has similarly published guidelines on the ethical use of AI, and the European Parliament is actively working on a regulatory AI law, the EU AI Act.

AI risk categories, from PwC’s “Responsible AI – Maturing from theory to practice” report (see Sources).

Many of the risks associated with AI can seem opaque, especially to those who are not working with AI applications first-hand. Some of these concerns relate to privacy and data use, the potential for accidental bias, and the lack of transparency and human agency (see image above). But there are also risks at the business level, especially as the conversation around ethical use and AI regulation grows. Misuse of AI can harm a business’s reputation or draw the ire of regulatory and legal bodies.

The consequences can range from wasted resources (e.g., scrapping a project over mounting ethical concerns) to legal repercussions and fines. Given that these risks can arise at any level of a business, the only robust solution is a comprehensive protocol for raising and addressing ethical issues.

Building AI Infrastructures

In a way, artificial intelligence is not unlike other new technologies: rapid implementation and proliferation at all levels of society have revealed many potential flaws and risks. What makes AI unique is that it has long occupied the human imagination. While discussions about the risks of social media only began after we were already feeling its effects, the field of AI ethics emerged long before AI tangibly entered mainstream consciousness.

As a result, we already have a considerable body of literature to fall back on when establishing guiding principles and concrete protocols. Many such sets of guidelines are now available, published by national and international bodies, from government organisations to NGOs.

Ethical AI principles, from PwC’s “Responsible AI – Maturing from theory to practice” report (see Sources).

A 2019 research paper titled “The global landscape of AI ethics guidelines” set out to dissect 84 such documents. The authors found significant overlap between many of them, with convergence on five primary principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy.

Lofty principles alone are not enough to build a robust framework for AI ethics. Implementing them means hammering out concrete programmes to address these concerns and risks: programmes that not only offer possible solutions, but also raise awareness and educate employees about them.

For anyone interested in having a look at existing frameworks, toolkits, and playbooks for ethical AI, I recommend the AI Ethicist website (see Sources), which is a treasure trove of resources.

Moving Forwards

So, what is the takeaway about AI Ethics and Risk?

As with every new technology, there are still many issues to hammer out and problems we do not yet fully understand. There are risks at the technological level and at the business and national levels, all of which can have tangible consequences.

While the field of AI ethics is still in its infancy, it has already produced a substantial body of literature: in academic circles, in government contexts, and from business perspectives. And while many ethical guidelines can seem lofty and hard to translate into concrete action, there are also plenty of resources offering practical frameworks and protocols.

Artificial intelligence will likely suffuse every part of our future, whether we like it or not, and whether we are aware of it or not. The potential benefits of harnessing AI are great, but to reap those benefits we need a robust understanding of, and protocols for dealing with, all its pitfalls.


Sources

A Practical Guide to Building Ethical AI – Harvard Business Review

AI Ethics Welcomes The Prospects Of A Standardized AI Risk Management Framework, Which Could Bolster Autonomous Self-Driving Car Efforts Too

AI Resource Center – NIST

DOD Adopts Ethical Principles for Artificial Intelligence

Ethical AI Frameworks, Guidelines, Toolkits | AI Ethicist

Ethics of Artificial Intelligence | UNESCO

EU AI Act: first regulation on artificial intelligence | News | European Parliament

How can we integrate generative AI into the future? | Hibernian Recruitment

How to use generative AI in business | Hibernian Recruitment

Responsible AI – Maturing from theory to practice – PwC

The global landscape of AI ethics guidelines | Nature

Why addressing ethical questions in AI will benefit organizations | Research & insight | Capgemini