Technical introduction to Ethical AI

1 day | Online or in-person

A one-day course for professionals who use or interact closely with models, or with the technical teams building them. This includes domain experts, data custodians and policy staff using (or planning to use) models as part of their work, as well as managers of technical teams and people responsible for the oversight of AI systems. The course content is similar to that of the Data Scientists course, but is taught at a more conceptual level.

“Well executed, huge wealth of knowledge from the speakers and fact-based through their experience. This wasn’t just textbook, theory-led, and I greatly appreciated the practical applications.”

CORPORATE CHIEF TECHNOLOGY OFFICER


Outcomes: Participants will gain a conceptual understanding of some of the theoretical, technical and organisational challenges in creating ethical AI systems, as well as the approaches that are beginning to address those challenges.

Prerequisites: This course is for people with a technical background who have had some exposure to machine learning or statistical modelling, and who are comfortable interpreting data from a graph and discussing concepts such as averages across different groups and population averages.

Format: This interactive workshop is run in classes of up to fifteen students with two instructors from our team of machine learning practitioners. The day combines working through interactive models and visualisations with presentations and discussion.

Automated decision making

We explore the core concepts underlying how machine learning systems operate, with an emphasis on conceptual understanding and the implications for ethics. Other topics include the supervised machine learning paradigm and what it means to “learn from data”; how to specify objectives or intent within a machine learning system; understanding uncertainty in predictions from automated decision-making systems; and classification and regression.
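
For a flavour of the style of example we work through, the short sketch below is purely illustrative: it assumes Python with numpy and scikit-learn (not part of the course materials) and uses synthetic data to show a system “learning from data” and expressing uncertainty as a predicted probability rather than a hard yes/no answer.

    # Illustrative sketch only: a supervised classifier that "learns from data"
    # and reports its uncertainty as a predicted probability.
    # Assumes Python with numpy and scikit-learn; the data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))              # two input features per example
    y = (X[:, 0] + X[:, 1] > 0).astype(int)    # the label the system must learn to predict

    model = LogisticRegression().fit(X, y)     # "learning from data"
    proba = model.predict_proba([[0.2, -0.1]])[0, 1]
    print(f"Predicted probability of the positive class: {proba:.2f}")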

AI system governance

AI systems require precise, mathematical objectives specified in terms of measurable quantities. We explore in depth how objectives and intent are encoded within data-driven decision-making systems and the impact these choices have on outcomes. Other topics include encoding values in loss functions; the role of data in specifying objectives; and predicting and making decisions based on probabilities.
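
As a purely illustrative sketch (the cost numbers and the function name are invented for illustration, and Python is assumed), the snippet below shows how values encoded as costs in an objective turn the same predicted probability into different decisions.

    # Illustrative sketch only: values are encoded as costs, and the decision
    # follows mechanically from the predicted probability and those costs.
    # The cost numbers here are invented for illustration.
    def decide(p_positive, cost_false_positive=1.0, cost_false_negative=5.0):
        """Act if the expected cost of acting is lower than the cost of waiting."""
        expected_cost_act = (1 - p_positive) * cost_false_positive
        expected_cost_wait = p_positive * cost_false_negative
        return "act" if expected_cost_act < expected_cost_wait else "wait"

    print(decide(0.3))                           # missing a case is costly: "act"
    print(decide(0.3, cost_false_negative=1.0))  # symmetric costs: "wait"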

Algorithmic bias and fair machine learning

There are many examples of purportedly ‘neutral’ algorithms that nevertheless systematically disadvantage certain groups in society. We explore how bias can arise in automated decision-making systems and some approaches to detecting and mitigating it. Other topics include what ‘fair or unbiased’ can mean, and what it ought to mean; sources of bias in automated systems; and detecting and mitigating bias. 
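
As one purely illustrative way of making bias visible (assuming Python with numpy, and using synthetic decisions with hypothetical group labels), the sketch below compares a system's favourable-decision rate across two groups; a large gap is one of several competing signals that the groups may be treated differently.

    # Illustrative sketch only: compare the favourable-decision rate across groups.
    # The decisions and group labels are synthetic and hypothetical.
    import numpy as np

    decisions = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 1])  # 1 = favourable decision
    group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    for g in ["A", "B"]:
        rate = decisions[group == g].mean()
        print(f"Group {g}: favourable-decision rate = {rate:.0%}")
    # Group A: 80%, Group B: 40%; this gap reflects only one of several (often
    # conflicting) ways of defining what "fair or unbiased" should mean.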

Interpretability, transparency and accountability

These are widely recognised as critical ingredients of an ethical AI system. However, it is not always clear what it means for a model to be ‘transparent’, nor how improving interpretability drives more ethical results. We explore some of the tools and techniques for making AI systems more transparent and interpretable and discuss the extent to which these tools may satisfy the underlying rationale of transparency. Other topics include motivation and audience for transparency and interpretability; understanding what information a machine learning system is leveraging; and the limitations of interpretability.
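
For a flavour of the tools discussed, the sketch below is purely illustrative: assuming Python with numpy and scikit-learn and synthetic data, it uses permutation importance, one common interpretability technique, to ask which input features a trained model is actually leveraging.

    # Illustrative sketch only: permutation importance measures how much the
    # model's score drops when each feature is shuffled, revealing which
    # information the model actually leverages. Data is synthetic.
    import numpy as np
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3))      # three candidate features
    y = (X[:, 0] > 0).astype(int)      # only the first feature carries signal

    model = LogisticRegression().fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in zip(["feature_0", "feature_1", "feature_2"],
                           result.importances_mean):
        print(f"{name}: importance = {score:.3f}")
    # Scores like these are only one narrow slice of "transparency", which is
    # part of what we discuss in this module.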

Interested in this course?

Register for this course, or speak to us about tailoring it for your organisation.