Management Course

3 hours | Online or in-person

This three-hour interactive presentation from Gradient Institute’s experts introduces organisational leaders to the key challenges and opportunities in building ethical AI systems.

“Excellent session, good use of practical examples.”

SENIOR CORPORATE EXECUTIVE

Participants will understand the key challenges in building ethical AI systems from the perspective of an organisation’s leadership.

They will come to see the responsibilities leaders have for determining trade-offs between ethical and other organisational objectives, and learn how to create a culture of rigour and accountability around the design, use and supervision of AI decision-making systems in their organisation.

Introduction

An introduction to the core concepts of ethical AI, exploring some of the key considerations for designing, implementing, maintaining and governing ethical AI systems.

Quantifying intent

AI decision-making systems require precise mathematical objectives, specified in measurable quantities. We examine the challenge of defining these objectives, and review some of the strategies that help ensure the operation of the resulting system is aligned with its designers’ intent.
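As a simple illustration of this idea (a hypothetical sketch, not part of the course materials), consider translating the intent “approve loans that will be repaid” into a measurable objective. The function name, scenario and weights below are all assumptions chosen for the example; the weights encode a value judgement about how bad a default is relative to the profit from a repayment.

```python
def objective(decisions, outcomes, cost_of_default=5.0, profit_per_repayment=1.0):
    """Value of a set of loan decisions, in measurable units.

    decisions: list of booleans (True = approve the applicant)
    outcomes:  list of booleans (True = applicant would repay)
    The weights are a value judgement, not a technical fact: how bad
    is a default relative to the profit from a repayment?
    """
    value = 0.0
    for approved, repaid in zip(decisions, outcomes):
        if approved:
            value += profit_per_repayment if repaid else -cost_of_default
    return value

# Two candidate policies over the same four applicants:
outcomes = [True, True, False, True]
cautious = [True, False, False, True]
lenient  = [True, True, True, True]

print(objective(cautious, outcomes))  # 2.0
print(objective(lenient, outcomes))   # -2.0
```

Changing the weights can reverse which policy the objective prefers; that choice is exactly the kind of designer intent the session examines.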

Modelling impact

AI systems are adept at discovering useful correlations in an organisation’s historical data and using them for prediction. However, such data – used to ‘train’ a system – may not account for the consequences of the system’s interaction with the real world: a key question when understanding the system’s ethical impact. We discuss the causal approach to modelling this requires, how designers might recognise when it is necessary, and when a simpler correlational approach is sufficient.
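A toy simulation (an illustrative assumption, not an example used in the course) shows how a correlation learned from historical data can mislead once the system intervenes in the world. Here, a past policy gave discounts only to loyal customers, so “discount” correlates with low churn even though loyalty, not the discount, drives retention:

```python
import random

random.seed(0)

def churns(loyal):
    # Churn is driven by (lack of) loyalty, not by the discount.
    return (not loyal) and random.random() < 0.8

# Historical data: the old policy gave discounts only to loyal customers,
# so discount == loyal in every record.
history = []
for _ in range(10_000):
    loyal = random.random() < 0.5
    history.append((loyal, churns(loyal)))

with_discount = [c for loyal, c in history if loyal]
without_discount = [c for loyal, c in history if not loyal]
print(sum(with_discount) / len(with_discount))        # 0.0 -- "discounts prevent churn?"
print(sum(without_discount) / len(without_discount))  # ~0.8

# Intervention: give *everyone* a discount. Churn is unchanged, because
# the discount never caused retention in the first place.
new_churn = [churns(random.random() < 0.5) for _ in range(10_000)]
print(sum(new_churn) / len(new_churn))  # ~0.4, not 0.0
```

A causal model of this setting would identify loyalty as the common cause and correctly predict that the intervention changes nothing; a correlational model would not.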

Balancing objectives

It is rare for an AI system to satisfy all of its objectives simultaneously. We analyse the processes that those responsible for a system must therefore undertake to make the fundamentally subjective – but nevertheless consequential – choices about how far each objective is satisfied. We also discuss the trade-offs that unavoidably arise between different notions of fairness and a system’s accuracy.

Testing and iterating

Building a trustworthy AI system requires careful testing and iteration of the mathematical, computational and organisational aspects of the system’s design and implementation. We explore the challenges of attaining this assurance, and some of the approaches that can be used.
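As a hypothetical example of one such approach (the function names, toy model and tolerance below are assumptions for illustration only), an ethical requirement can be encoded as an automated test so that it is re-checked on every iteration of the system:

```python
def approval_rates(model, applicants):
    """Approval rate per group for a candidate model."""
    rates = {}
    for group in {a["group"] for a in applicants}:
        members = [a for a in applicants if a["group"] == group]
        rates[group] = sum(model(a) for a in members) / len(members)
    return rates

def check_demographic_parity(model, applicants, tolerance=0.05):
    """Raise if approval rates differ between groups by more than tolerance."""
    rates = approval_rates(model, applicants).values()
    assert max(rates) - min(rates) <= tolerance, "parity requirement violated"

# A toy model and dataset (assumed for illustration):
applicants = [
    {"group": "x", "score": 0.9}, {"group": "x", "score": 0.2},
    {"group": "y", "score": 0.8}, {"group": "y", "score": 0.3},
]
model = lambda a: a["score"] > 0.5
check_demographic_parity(model, applicants)  # passes: both groups at 0.5
```

Run automatically alongside conventional accuracy tests, a check like this turns an organisational commitment into something the system cannot silently drift away from.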

Responsibility and governance

Systems that make decisions automatically must, by law, still have a human accountable for their actions, and for the design choices that govern those decisions. We discuss empowering the right people in an organisation to be responsible for an AI system, and the need to ensure they have the necessary tools and capability for that responsibility to be meaningful.

Interested in this course?

Register for this course, or speak to us about tailoring this course for your organisation.