Gradient Institute has developed introductory training courses in ethical AI targeting different levels of an organisation. These courses aim to start closing the gap between high-level principles such as 'be fair' and working AI systems in production. Gradient currently offers four introductory courses, each tailored to a different level of an organisation: from the board and the executive, through leaders, to data scientists and users of AI systems.
This is a two-hour course introducing the key risks (including ethical considerations) in building, procuring and operating AI systems, and it provides strategies for managing these risks. The course is informed by the Institute's technical work and helps board members and executives understand enough about the technical risks to identify and ask the key questions of their staff. The course has been designed for boards, senior executive teams and ethics committees and can be customised for the organisation. If two hours is not available, a one-hour version can be delivered (though we recommend the two-hour version, as there is a lot of material of relevance to boards and executives).
It highlighted issues that I never thought about, and now know that I should.
Outcomes: Participants will understand the key risks (including ethical risks) in building, procuring and operating AI systems and the key questions they should be asking staff. They will also be more prepared for the responsibilities that the Board and executive have for determining trade-offs between ethical and other objectives, and for creating a culture of rigour and accountability around the design, procurement, use and monitoring of AI decision-making systems.
Select Topics: After an introduction to the concepts of ethical AI, participants will explore some of the key risks and challenges in making AI systems ethical and how to manage these risks. Topics include:
Risks in operating an AI system: We identify the key risks in data-driven automated decision systems (including examples of where such systems have led to unintended consequences at a massive scale).
AI system governance: Systems that make decisions automatically must still have humans accountable for their actions, and for the design decisions that govern those actions. An aspect of creating ethical AI systems is empowering the right people within the organisation to be responsible for an AI system, and ensuring that they have the tools and ability for their responsibility to be meaningful.
Building an ethical AI system: We outline some key stages in the development of an ethical AI system. Each stage involves a different mix of designers, decision makers, stakeholders and engineers. We outline how leaders and decision makers are involved in eliciting and ultimately setting the different objectives (ethical and organisational) of the system and how these objectives are balanced.
Managing risks of AI systems: We discuss strategies that can be used to manage the risks when building, procuring, operating and monitoring AI systems. The course materials include samples that can be used at Board level for AI system risk management.
This is a three-hour interactive presentation introducing leaders to the key challenges in building ethical AI systems.
Excellent session, stimulated thought across the teams, good use of practical examples.
Outcomes: Participants will understand the key challenges in building ethical AI systems from the perspective of leadership. They will see the responsibilities that leaders have for determining trade-offs between ethical and other objectives, and for creating a culture of rigour and accountability around the design, use and monitoring of AI decision-making systems.
Outline of Topics: After an introduction to the concepts of ethical AI, participants will explore some of the key considerations for designing, implementing, maintaining and governing ethical AI systems. These include:
Quantifying intent: AI systems require precise, mathematical objectives specified in terms of measurable quantities. We examine the challenge of defining these objectives, and some strategies to help ensure that the operation of the resulting system is aligned with its designers’ intent.
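The gap between a stated intent and a measurable objective can be sketched with a toy example. Here, the labels, predictions and the 0.75 requirement are all hypothetical, chosen only to show how a vague goal must be pinned down as a quantity a system can be tested against:

```python
import numpy as np

# Hypothetical labels and model decisions (illustrative only).
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = high-risk case
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # system's decisions

# A vague goal such as "catch most high-risk cases" must be made
# precise, e.g. as recall on the positive class:
recall = np.sum((y_pred == 1) & (y_true == 1)) / np.sum(y_true == 1)

# The designers' intent then becomes an explicit, testable requirement
# (the 0.75 target is an assumed, illustrative choice):
meets_intent = bool(recall >= 0.75)
```

The substantive design work lies in choosing which measurable quantity stands in for the intent, since that choice silently decides what the system will and will not be held to.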
Modelling impact: Typical AI and machine learning systems are concerned with discovering useful correlations for prediction in previously acquired training data. However, this training data may not account for the consequences of the AI system interacting with the world: a key question when understanding the system’s ethical impact. We discuss the causal approach to modelling needed in this situation, and how designers might recognise when it is necessary, and when a simpler correlational approach is sufficient.
Balancing objectives: It is unlikely that an AI system will be able to satisfy all of its objectives simultaneously. We explore the process that those responsible for the system must undertake of making fundamentally subjective but consequential choices about the degree to which each objective is satisfied. We also discuss the unavoidable trade-offs between different notions of fairness, and between fairness and the system's accuracy.
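The subjective character of these trade-offs can be shown with a minimal sketch (all configurations, scores and weights below are hypothetical): neither candidate system dominates the other, so a human judgement about relative importance is what ultimately selects between them.

```python
# Two hypothetical system configurations, each scored on accuracy and
# on a fairness measure (higher is better for both). Neither dominates.
configs = {
    "A": {"accuracy": 0.92, "fairness": 0.70},
    "B": {"accuracy": 0.85, "fairness": 0.95},
}

def score(cfg, weight):
    # 'weight' encodes a subjective judgement about how much fairness
    # matters relative to accuracy; it cannot be derived from the data.
    return (1 - weight) * cfg["accuracy"] + weight * cfg["fairness"]

# Different, equally defensible weightings select different systems:
choice_low = max(configs, key=lambda k: score(configs[k], weight=0.2))
choice_high = max(configs, key=lambda k: score(configs[k], weight=0.8))
```

The point of the sketch is that no amount of further data collection resolves the choice of `weight`; it is a value judgement that belongs with the people accountable for the system.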
Testing and iterating: Building a trustworthy AI system requires careful testing and iteration of the mathematical, computational and organisational aspects of the design and implementation. We explore this assurance challenge and some approaches with which to address it.
Responsibility and governance: Systems that make decisions automatically must still have a human accountable for their actions, and for the design decisions that govern those actions. We discuss empowering the right people to be responsible for an AI system, and the need to ensure they have the tools and ability for that responsibility to be meaningful.
This is a two-day course for people who can (or do) build data-driven models professionally. The course aims to develop the technical skills necessary for building systems that use machine learning to make automated decisions whilst accounting for ethical objectives. There is a 'coding' version of this course that involves Python exercises and activities, and a 'non-coding' version whose exercises and activities are based on interactive models and visualisations.
It was very relevant to the work we are doing, and the tools/methodologies learned can improve our AI systems.
Outcomes: By the end of the course, participants will have explored simple example systems that use machine learning to make automated decisions whilst accounting for ethical objectives. Participants will understand and explore some of the technical pitfalls that prevent machine learning systems from behaving ethically, and how to identify and correct for them.
Note: The coding version of this course involves substantial hands-on programming activities using Python, NumPy and scikit-learn. It is intended to provide participants who are currently building AI systems with the capacity to write code that captures ethical constraints and objectives. The non-coding version covers the same concepts without the coding requirement and is highly recommended for participants without experience using Python for data science work.
Format: Participants work through Jupyter Notebooks with exercises and examples under the tutelage of Gradient Institute data scientists. The tutors will provide a short presentation at the beginning of each section and will lead class discussions. Gradient Institute provides the notebooks to participants for their reference after the course. This course is run in groups of up to fifteen students with two Gradient Institute tutors.
Outline of topics:
Automated decision making: We provide a review of the foundations of machine learning and model validation, with an emphasis on building a strong conceptual understanding and on the ethical implications of algorithmic decision making.
Automated decision making under uncertainty (coding only): This is an alternative to the automated decision making module that is intended for advanced students. We review the foundations of machine learning, discuss the importance of quantifying uncertainty for ethical decision making and explore some approaches for estimating uncertainty with machine learning models.
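One simple approach of the kind explored in this module is the bootstrap: resampling the data to see how much a quantity of interest varies. This is a minimal sketch with simulated data (the distribution and sample size are illustrative assumptions, not course material):

```python
import numpy as np

# Simulated observations (illustrative): 100 draws from a normal
# distribution with mean 10 and standard deviation 2.
rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=100)

# Bootstrap: re-estimate the mean on many resamples of the data
# to quantify how uncertain the estimate is.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])

# An approximate 95% interval for the mean:
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

A decision rule can then condition on the width of `[lo, hi]` rather than on a single point estimate, for example deferring to a human when the interval is too wide to act on safely.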
Loss functions and robust modelling: Building a data-driven automated decision system requires explicitly specifying its objectives, often in the form of a loss function and constraints. The particular choice of loss, including which considerations to include and which to omit, is the primary mechanism of control designers have over the ethical operation of the system. In data-driven systems, losses are specified with respect to data and rely on assumptions about that data. We examine the design choices and assumptions that are made when translating a real-world problem into an algorithmic decision making system and the ethical issues that can arise from this process.
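A small sketch makes the point concrete (the scores, outcomes and the 5x cost ratio below are all hypothetical): the loss encodes a value judgement about which errors matter most, and changing that judgement changes which operating point looks best.

```python
import numpy as np

# Hypothetical risk scores and true outcomes (1 = person in need).
y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.2, 0.5, 0.55, 0.45, 0.7, 0.9])

def cost(threshold, fn_cost, fp_cost=1.0):
    """Total cost of acting on scores above `threshold`."""
    decide = scores >= threshold
    fn = int(np.sum(~decide & (y_true == 1)))   # missed cases
    fp = int(np.sum(decide & (y_true == 0)))    # unnecessary reviews
    return fn_cost * fn + fp_cost * fp

# Under a symmetric loss, the stricter threshold wins (cost 1 vs 2)...
sym_prefers_strict = cost(0.6, fn_cost=1) < cost(0.4, fn_cost=1)
# ...but if a missed case is judged 5x worse, the ranking flips (5 vs 2).
asym_prefers_lenient = cost(0.4, fn_cost=5) < cost(0.6, fn_cost=5)
```

Nothing in the data dictates the `fn_cost=5` weighting; it is exactly the kind of explicit, contestable design choice the module examines.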
Causal versus predictive models: ML models leverage correlations in data to predict outcomes, on the assumption that the data generating process is fixed. Where models are used to drive decisions and interventions, failure to consider causality can lead to poor consequences despite good intentions. We clarify the distinction between causal and predictive models and how they can be used and interpreted.
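The distinction can be demonstrated in a few lines of simulation (the variables and coefficients are illustrative): a hidden common cause makes a feature an excellent predictor even though intervening on it has no effect at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative structural model: a confounder z drives both x and y,
# while x has NO causal effect on y.
z = rng.normal(size=n)                  # hidden common cause
x_obs = z + 0.5 * rng.normal(size=n)    # observed feature
y_obs = z + 0.5 * rng.normal(size=n)    # outcome

# Observational (predictive) view: x correlates strongly with y,
# so a predictive model based on x works well.
r_obs = np.corrcoef(x_obs, y_obs)[0, 1]

# Interventional view: if we set x ourselves, the link to z is
# broken, and changing x does nothing to y.
x_do = rng.normal(size=n)               # do(x): chosen independently
y_do = z + 0.5 * rng.normal(size=n)     # y is unaffected
r_do = np.corrcoef(x_do, y_do)[0, 1]
```

A system that used the predictive model to justify intervening on `x` would be acting on a correlation that its own intervention destroys; this is the failure mode the causal framing is designed to expose.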
Fair machine learning: ML systems can perform well on average and still systematically err or discriminate against individuals or groups in the wider population. We examine some of the common notions of algorithmic fairness that attempt to measure and correct for such disparate treatment or outcome in ML systems.
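One of the simplest such notions, demographic parity, can be computed directly from a system's decisions. The group memberships and decisions below are hypothetical numbers chosen to show a disparity:

```python
import numpy as np

# Hypothetical decisions for two groups of ten people each.
group = np.array([0] * 10 + [1] * 10)
decision = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0]     # group 0
                    + [1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # group 1

# Demographic parity compares selection rates across groups:
rate_0 = decision[group == 0].mean()   # 0.6
rate_1 = decision[group == 1].mean()   # 0.3
parity_gap = abs(rate_0 - rate_1)      # ~0.3: a substantial disparity
```

Note that this is only one of several mutually incompatible fairness notions; the course also covers alternatives (such as comparing error rates across groups) and the trade-offs between them.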
Interpretability, transparency & accountability: These approaches help us identify when models might break down, when they are lacking vital context, and whether they have been designed and motivated in an acceptable way. We provide an introduction to some of the tools and techniques available for making models more interpretable and transparent and discuss how to communicate key information about model behaviour and ethical risks to those ultimately accountable for the system.
Project: The final component of the course is an applied project that challenges students to put the concepts learned into practice.
For the coding version of this course, Gradient Institute also makes refresher material available online for participants to study before the course:
This is a one-day course for people who use or interact closely with models and the technical teams building them. This includes domain experts, data custodians and policy staff using or planning to use models as part of their work, as well as managers of technical teams and people responsible for oversight of AI systems. The Introduction to Ethical AI course covers similar content to the data scientists' course, but at a more conceptual level.
Outcomes: Participants will gain a conceptual understanding of some of the theoretical, technical and organisational challenges in creating ethical AI systems, as well as approaches to begin addressing those challenges.
Prerequisites: Participants should have had exposure to machine learning or statistical modelling.
Format: Interactive workshop. A combination of working through interactive models and visualisations, as well as presentations and discussion. This course is run in groups of up to fifteen students with two Gradient Institute tutors.
Outline of Topics:
Automated decision making: We explore the core concepts underlying how machine learning systems operate with an emphasis on conceptual understanding and implications for ethics.
Quantifying intent: AI systems require precise, mathematical objectives specified in terms of measurable quantities. We explore in depth how objectives and intent are encoded within data-driven decision making systems and the impact these choices have on outcomes.
Algorithmic bias and fair machine learning: There are many examples of supposedly neutral algorithms systematically disadvantaging certain groups in society. We explore how bias can arise in automated decision making systems and some approaches to detect and mitigate it.
Interpretability, transparency & accountability: These are widely recognised as critical components of ethical AI. However, it is not always clear what it means for a model to be transparent or how improving interpretability drives more ethical outcomes. We explore some of the tools and techniques for making AI systems more transparent and interpretable and discuss the extent to which these tools may satisfy the underlying rationale of transparency.
For more information on our courses, please don’t hesitate to contact us.