Training

Gradient Institute has developed introductory training courses in ethical AI targeting different levels of an organisation. These courses aim to start closing the gap between high-level principles such as 'be fair' and working AI systems in production. Gradient currently offers four distinct introductory courses spanning all levels of an organisation, from the board and the executive, through leaders, to data scientists and users of AI systems.

Board & Executive's Course

This two-hour course introduces the key risks (including ethical considerations) in building, procuring and operating AI systems, and provides strategies for managing these risks. The course is informed by the Institute’s technical work and helps board members and executives understand enough about the technical risks to identify and ask the key questions of their staff. It has been designed for boards, senior executive teams and ethics committees, and can be customised for the organisation. If two hours is not available, a one-hour version can be delivered (though we recommend the two-hour version, as there is a lot of material of relevance to boards and executives).

“It highlighted issues that I never thought about, and now know that I should.”

Outcomes: Participants will understand the key risks (including ethical risks) in building, procuring and operating AI systems and the key questions they should be asking staff. They will also be more prepared for the responsibilities that the Board and executive have for determining trade-offs between ethical and other objectives, and for creating a culture of rigour and accountability around the design, procurement, use and monitoring of AI decision-making systems.

Select Topics: After an introduction to the concepts of ethical AI, participants will explore some of the key risks and challenges in making AI systems ethical and how to manage these risks. Topics include:

  • Risks in operating an AI system: We identify the key risks in data-driven automated decision systems, including examples of where such systems have led to unintended consequences at massive scale.

  • AI system governance: Systems that make decisions automatically must still have humans accountable for their actions, and for the design decisions that govern those actions. An aspect of creating ethical AI systems is empowering the right people within the organisation to be responsible for an AI system, and ensuring that they have the tools and ability for their responsibility to be meaningful.

  • Building an ethical AI system: We outline some key stages in the development of an ethical AI system. Each stage involves a different mix of designers, decision makers, stakeholders and engineers. We outline how leaders and decision makers are involved in eliciting and ultimately setting the different objectives (ethical and organisational) of the system and how these objectives are balanced.

  • Managing risks of AI systems: We discuss strategies that can be used to manage the risks when building, procuring, operating and monitoring AI systems. The course materials include samples that can be used at Board level for AI system risk management.


Leader's Course

This is a three-hour interactive presentation introducing leaders to the key challenges in building ethical AI systems.

“Excellent session, stimulated thought across the teams, good use of practical examples.”

Outcomes: Participants will understand the key challenges in building ethical AI systems from the perspective of leadership. They will see the responsibilities that leaders have for determining trade-offs between ethical and other objectives, and for creating a culture of rigour and accountability around the design, use and monitoring of AI decision-making systems.

Outline of Topics: After an introduction to the concepts of ethical AI, participants will explore some of the key considerations for designing, implementing, maintaining and governing ethical AI systems. These include:

  • Quantifying intent: AI systems require precise, mathematical objectives specified in terms of measurable quantities. We examine the challenge of defining these objectives, and some strategies to help ensure that the operation of the resulting system is aligned with its designers’ intent (a small illustrative sketch follows this list).

  • Modelling impact: Typical AI and machine learning systems discover correlations in previously acquired training data that are useful for prediction. However, this data may not account for the consequences of the AI system itself interacting with the world, a key question when understanding the system’s ethical impact. We discuss the causal approach to modelling needed in this situation, how designers might recognise when it is necessary, and when a simpler correlational approach is sufficient.

  • Balancing objectives: It is unlikely that an AI system will be able to satisfy all of its objectives simultaneously. We explore the process by which those responsible for the system make fundamentally subjective but consequential choices about the degree to which each objective is satisfied. We also discuss the unavoidable trade-offs between different notions of fairness, and between fairness and the system’s accuracy.

  • Testing and iterating: Building a trustworthy AI system requires careful testing and iteration of the mathematical, computational and organisational aspects of the design and implementation. We explore this assurance challenge and some approaches with which to address it.

  • Responsibility and governance: Systems that make decisions automatically must still have a human accountable for their actions, and for the design decisions that govern those actions. We discuss empowering the right people to be responsible for an AI system, and the need to ensure they have the tools and ability for that responsibility to be meaningful.
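
To make “quantifying intent” concrete, here is a minimal, hypothetical Python sketch (not course material) of how a value judgement, that wrongly denying a good applicant is worse than wrongly approving a bad one, can be encoded as costs and used to choose a decision threshold. All numbers and names are invented for illustration.

    import numpy as np

    # Illustrative only: predicted probabilities that each applicant repays a loan.
    p_repay = np.array([0.95, 0.80, 0.55, 0.30, 0.10])

    # Encoding values: the relative cost of each kind of error is a design
    # choice, not a fact in the data. Here a wrongful denial is judged three
    # times as costly as a wrongful approval.
    COST_WRONG_APPROVAL = 1.0   # approving someone who defaults
    COST_WRONG_DENIAL = 3.0     # denying someone who would have repaid

    def expected_cost(threshold):
        approve = p_repay >= threshold
        cost_approvals = np.sum(approve * (1 - p_repay)) * COST_WRONG_APPROVAL
        cost_denials = np.sum(~approve * p_repay) * COST_WRONG_DENIAL
        return cost_approvals + cost_denials

    # The threshold that best reflects the stated values follows from the costs.
    thresholds = np.linspace(0, 1, 101)
    best = min(thresholds, key=expected_cost)
    print(f"chosen approval threshold: {best:.2f}")

Changing the two cost constants changes the system’s behaviour: it is in this sense that leaders’ value judgements, not the data alone, determine what the system does.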


Data Scientist's Course

This is a two-day course for people who can (or do) build data-driven models professionally. The course aims to develop the technical skills necessary for building systems that use machine learning to make automated decisions whilst accounting for ethical objectives.

“The biggest appreciation I have is the emphasis of what the responsibility of a data scientist is. i.e. not to choose a set of metrics and the approach to get the best model, but to explain all the risks/choices to decision makers so they can make the most informed decisions. This means being competent technically but also able to consider all the major ethical risks in advance of project creation.”
“Seriously – well done, as mentioned the calibre was high, it felt rather authentic and each of the presenters were passionate in their field. I truly loathe online sessions where it’s text book module approach and really don’t care who’s listening/how engaged. Your team truly did a great job and should be commended for being so engaging in the sessions.”

Outcomes: By the end of the course, participants will have explored simple example systems that use machine learning to make automated decisions whilst accounting for ethical objectives. Participants will understand and explore some of the technical pitfalls that prevent machine learning systems from behaving ethically, and how to identify and correct for them.

Note: While many of the concepts discussed in this course are applicable to a wide range of AI systems, the content primarily focuses on models built using structured and labelled training data.

Prerequisites: This course is for people who have experience building data-driven models, interpreting graphs, and discussing terms such as “parameter optimisation”, “overfitting” and “model validation”. Exercises and activities are based on interactive models and visualisation (no coding is needed during the course, with the exception of the optional advanced module noted in the outline below).

Format: This course is run in classes of up to fifteen students with two instructors from our team of machine learning practitioners. At the beginning of each topic, there will be a short presentation introducing key concepts followed by class discussions. Participants' learning is assisted by working through exercises and examples in Jupyter Notebooks. The notebook solutions and the presentation material will be provided after the course as a reference.

Outline of topics:

  • Automated decision making: We review the foundations of machine learning and model validation, with an emphasis on building a strong conceptual understanding and on the ethical implications of algorithmic decision making.

    • Core concepts underlying supervised learning
    • Overfitting & underfitting
    • Model uncertainty
    • Classification & regression
  • Automated decision making under uncertainty (coding required): This is an alternative to the automated decision making module, intended for advanced students. We review the foundations of machine learning, discuss the importance of quantifying uncertainty for ethical decision making and explore some approaches for estimating uncertainty with machine learning models (a minimal bootstrap sketch appears after this outline).

    • Machine learning as optimisation
    • Estimating parameterized uncertainty via maximum likelihood methods
    • Quantifying model uncertainty with Bayesian modelling and MCMC
    • Bootstrapping models
  • Loss functions and robust modelling: Building a data-driven automated decision system requires explicitly specifying its objectives, often in the form of a loss function and constraints. The particular choice of loss, including which considerations to include and which to omit, is the primary mechanism of control designers have over the ethical operation of the system. In data-driven systems, losses are specified with respect to data and rely on assumptions about that data. We examine the design choices and assumptions made when translating a real-world problem into an algorithmic decision-making system, and the ethical issues that can arise from this process.

    • Encoding values in loss functions
    • Cost sensitive classification
    • Calibration and decision making based on predicted probabilities
    • Dataset shift
  • Causal versus predictive models: ML models leverage correlations in data to predict outcomes, on the assumption that the data-generating process is fixed. Where models are used to drive decisions and interventions, failure to consider causality can lead to poor consequences despite good intentions. We clarify the distinction between causal and predictive models and how each can be used and interpreted (a worked example of Simpson’s paradox appears after this outline).

    • Identifying when a causal model is required
    • Understanding Simpson’s Paradox
  • Fair machine learning: ML systems can perform well on average and still systematically err or discriminate against individuals or groups in the wider population. We examine some of the common notions of algorithmic fairness that attempt to measure and correct for such disparate treatment or outcomes in ML systems (two common metrics are illustrated in a sketch after this outline).

    • Sources of unfairness in machine learning models
    • Fairness metrics
    • Approaches to removing bias
  • Interpretability, transparency & accountability: These approaches help us identify when models might break down, when they are lacking vital context, and whether they have been designed and motivated in an acceptable way. We provide an introduction to some of the tools and techniques available for making models more interpretable and transparent and discuss how to communicate key information about model behaviour and ethical risks to those ultimately accountable for the system.

    • Motivations & audience for interpretability
    • Feature importance, partial dependence and causality
    • Local interpretability & LIME
    • Global interpretability & surrogate models
  • Project: The final component of the course is an applied project that challenges students to put the concepts learned into practice. The group will work in teams to analyse an algorithmic system, identify potential ethical issues, propose solutions and present the results to the group at the end of the day.
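
As a taste of the exercises, here is a minimal, hypothetical sketch of bootstrapped model uncertainty (invented data and assumed model choices; not the course notebooks): refit a model on resampled data and inspect the spread of its predictions.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Synthetic data: y is roughly 2x plus noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(200, 1))
    y = 2.0 * X[:, 0] + rng.normal(0, 3, size=200)

    # Bootstrap: refit on resampled datasets and collect predictions at x = 5.
    preds = []
    for _ in range(500):
        idx = rng.integers(0, len(X), size=len(X))  # resample with replacement
        model = LinearRegression().fit(X[idx], y[idx])
        preds.append(model.predict([[5.0]])[0])

    print(f"prediction at x=5: {np.mean(preds):.2f} ± {np.std(preds):.2f}")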
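The Simpson’s-paradox discussion can be made concrete with the well-known kidney-stone case study: a treatment that looks better within every subgroup can look worse overall when group sizes differ. A minimal sketch:

    # (treatment_success, treatment_total, control_success, control_total)
    groups = {
        "small stones": (81, 87, 234, 270),
        "large stones": (192, 263, 55, 80),
    }

    totals = [0, 0, 0, 0]
    for name, counts in groups.items():
        ts, tn, cs, cn = counts
        print(f"{name}: treatment {ts / tn:.0%} vs control {cs / cn:.0%}")
        totals = [t + c for t, c in zip(totals, counts)]

    ts, tn, cs, cn = totals
    # The treatment wins in both subgroups but loses overall.
    print(f"overall: treatment {ts / tn:.0%} vs control {cs / cn:.0%}")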
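To illustrate the fairness metrics topic, here is a minimal sketch (invented data) comparing two common metrics, demographic parity (selection rates) and equal opportunity (true positive rates), across two groups for a binary classifier.

    import numpy as np

    # Invented labels, predictions and group membership for ten individuals.
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 0, 0])
    group = np.array(list("aaaaabbbbb"))

    for g in ("a", "b"):
        mask = group == g
        selection_rate = y_pred[mask].mean()        # demographic parity
        tpr = y_pred[mask & (y_true == 1)].mean()   # equal opportunity
        print(f"group {g}: selection rate {selection_rate:.2f}, TPR {tpr:.2f}")

Here the selection rates match but the true positive rates do not; in general the different fairness metrics cannot all be satisfied at once, which is exactly the trade-off the course explores.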

Register to attend this two-day interactive workshop starting Tuesday 8th December.


Technical Introduction to Ethical AI

This is a one day course for people who use or interact closely with models and the technical teams building them. This includes domain experts, data custodians and policy staff using or planning to use models as part of their work, as well as managers of technical teams and people responsible for oversight of AI systems.

Note: this course covers similar content to the Data Scientist's Course, but at a more conceptual level.

“The team at Gradient did an excellent job... I really enjoyed the session, took an increased level of appreciation for the role our AI specialists fulfill and warmly will encourage others to consider these sessions – well done Team and thank you! ... well executed, huge wealth of knowledge from the speakers and well fact based through their experience – this wasn’t just text book theory led, which I greatly appreciated the practical applications.”
“It was amazing. I haven't even heard of anything else like it. It is like applied philosophy of science. And the instructors were knowledgeable and patient and really insightful... Amazing job! You guys are incredible.”

Outcomes: Participants will gain a conceptual understanding of some of the theoretical, technical and organisational challenges in creating ethical AI systems, as well as approaches to begin addressing those challenges.

Prerequisites: This course is for people with a technical background who have had exposure to machine learning or statistical modelling, and are comfortable interpreting data from a graph and discussing concepts such as averages across different groups and population averages.

Format: This interactive workshop is run in classes of up to fifteen students with two instructors from our team of machine learning practitioners. It combines working through interactive models and visualisations with presentations and discussion.

Outline of Topics:

  • Automated decision making: We explore the core concepts underlying how machine learning systems operate with an emphasis on conceptual understanding and implications for ethics.

    • The supervised machine learning paradigm and what it means to “learn from data”
    • How we specify our objectives or intent within a machine learning system
    • Understanding uncertainty in predictions from automated decision making systems
    • Classification and regression
  • Quantifying intent: AI systems require precise, mathematical objectives specified in terms of measurable quantities. We explore in depth how objectives and intent are encoded within data-driven decision making systems and the impact these choices have on outcomes.

    • Encoding values in loss functions
    • The role of data in specifying objectives
    • Predicting and making decisions based on probabilities
  • Algorithmic bias and fair machine learning: There are many examples of supposedly neutral algorithms systematically disadvantaging certain groups in society. We explore how bias can arise in automated decision making systems and some approaches to detect and mitigate it.

    • What it means to be fair or unbiased
    • Sources of bias in automated systems
    • Detecting and mitigating bias
  • Interpretability, transparency & accountability: These are widely recognised as critical components of ethical AI. However, it is not always clear what it means for a model to be transparent or how improving interpretability drives more ethical outcomes. We explore some of the tools and techniques for making AI systems more transparent and interpretable and discuss the extent to which these tools may satisfy the underlying rationale of transparency (one such technique is sketched after this outline).

    • Motivation and audience for transparency and interpretability
    • Understanding what information a machine learning system is leveraging
    • The limitations of interpretability
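
As a taste of the interpretability material, here is a minimal, hypothetical sketch of a global surrogate model: a small decision tree trained to mimic a “black box” model, with fidelity measuring how faithfully it reproduces the black box's predictions. The dataset and model choices are assumptions for illustration only.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # A synthetic classification task and an opaque "black box" model.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)
    bb_pred = black_box.predict(X)

    # Train the surrogate on the black box's outputs, not on the true labels:
    # the goal is to explain the model, not the world.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

    fidelity = accuracy_score(bb_pred, surrogate.predict(X))
    print(f"surrogate fidelity to the black box: {fidelity:.0%}")

A high-fidelity shallow tree can then be read directly, though, as the course discusses, fidelity on one dataset does not guarantee the explanation holds everywhere.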

Register to attend this interactive workshop on Monday 30th November.


More Information

For more information on our courses, please don’t hesitate to contact us.