Training

Courses, training and guidance

Organisations have recognised the challenge of creating ethical AI systems and are developing frameworks for data ethics and the ethical use of AI. However, a gap remains between defining high-level principles such as ‘minimise harm’ or ‘be fair’, and building an AI system that embodies them.

Moving from a set of ethical principles to a working system requires the combined effort of data scientists, their leaders, the senior decision-makers in the organisation and the people affected by the system. It requires eliciting, in precise detail, legal constraints, the trade-offs between different goals of the system, and different ethical concerns. It requires understanding the cause and effect relationships between the system's actions and their impacts in the world, in both the short and long term. And it requires implementing a regime of constant and ongoing monitoring and validation of these systems to ensure they continue to perform as intended.

Gradient Institute believes these are the skills organisations using AI must acquire next. We have designed training courses to begin that process: technical, hands-on courses for data scientists and data engineers working directly with data and tools, and conceptual courses for leaders and executives to understand the issues and their responsibilities in defining the proper intent and trade-offs of a system.

Board & Executive's Course

This two-hour course introduces the key risks (including ethical considerations) in building, procuring and operating AI systems and provides strategies for managing them. The course is informed by the technical work of the Institute and helps board members and executives understand enough about the technical risks to identify and ask the key questions of their staff. It has been designed for boards, senior executive teams and ethics committees, and can be customised for the organisation. If two hours are not available, a one-hour version can be delivered (though we recommend the two-hour version, as there is a lot of material relevant to boards and executives).

“It highlighted issues that I never thought about, and now know that I should.”

Outcomes: Participants will understand the key risks (including ethical risks) in building, procuring and operating AI systems and the key questions they should be asking staff. They will also be more prepared for the responsibilities that the Board and executive have for determining trade-offs between ethical and other objectives, and for creating a culture of rigour and accountability around the design, procurement, use and monitoring of AI decision-making systems.

Select Topics: After an introduction to the concepts of ethical AI, participants will explore some of the key risks and challenges in making AI systems ethical and how to manage these risks. Topics include:

  • Risks in operating an AI system - we identify the key risks in data-driven automated decision systems (including examples of where such systems have led to unintended consequences at massive scale).
  • AI system governance - systems that make decisions automatically must still have humans accountable for their actions, and for the design decisions that govern those actions. An aspect of creating ethical AI systems is empowering the right people within the organisation to be responsible for an AI system, and ensuring they have the tools and ability for that responsibility to be meaningful.
  • Building an ethical AI system - we outline some key stages in the development of an ethical AI system. Each stage involves a different mix of designers, decision makers, stakeholders and engineers. We outline how leaders and decision makers are involved in eliciting and ultimately setting the different objectives (ethical and organisational) of the system and how they are balanced.
  • Managing risks of AI systems - we discuss strategies that can be used to manage the risks when building or procuring, operating and monitoring AI systems. Course materials include samples of material that can be used at Board level for AI system risk management.

Leader's Course

This two- to three-hour interactive presentation introduces the key challenges in building ethical AI systems from the perspective of leadership. The course highlights the responsibilities that leaders have for determining trade-offs between ethical and other objectives, and for creating a culture of rigour and accountability around the design, use and monitoring of AI decision-making systems.

“Excellent session, stimulated thought across the teams, good use of practical examples.”

Outcomes: Participants will understand the key challenges in building ethical AI systems from the perspective of leadership. They will see the responsibilities that leaders have for determining trade-offs between ethical and other objectives, and for creating a culture of rigour and accountability around the design, use and monitoring of AI decision-making systems.

Select Topics: After an introduction to the concepts of ethical AI, participants will explore some of the key components and challenges in making AI systems ethical. These include (but are not limited to):

  • Ethical & technical principles - when making ethical AI systems, it is not sufficient to have only an ethical intent for the system. Many unethical consequences result from systems taking actions that do not align with the intent of the system’s creators. Translating ethical intent into ethical outcomes is a technical challenge when implementing these systems. We will discuss both ethical principles, which motivate what an ethical AI system should do, and technical principles, which govern how the system honours its designers’ intent. Example principles we discuss include:
    • Ethical: wellbeing, fairness, autonomy.
    • Technical: science, accountability, assurance.
  • Understanding your system’s actions - most of AI and machine learning is concerned with discovering useful correlations for prediction. However, many AI systems actually interact with the world, thereby changing it! We will discuss in what situations correlational modelling is sufficient, and when a more detailed causal understanding of the world is required when building these systems.
  • Responsibility & oversight - systems that make decisions automatically must still have a human accountable for their actions, and for the design decisions that govern those actions. Part of creating ethical AI systems is empowering the right people to be responsible for an AI system, and ensuring they have the tools and ability for that responsibility to be meaningful.
  • Building an ethical AI system - we outline some key stages in the development of an ethical AI system. Each stage involves a different mix of designers, decision makers, stakeholders and engineers. We outline how leaders and decision makers are involved in eliciting and ultimately setting the different objectives (ethical and organisational) of the system and how they are balanced.

Data Scientist's Course

A two-day hands-on-tools course to develop the technical skills necessary for building systems that use machine learning to make automated decisions whilst accounting for ethical objectives.

“It was very relevant to the work we are doing, and the tools/methodologies learned can improve our AI systems.”

Outcomes: By the end of the course, participants will have built simple example systems that use machine learning to make automated decisions whilst accounting for ethical objectives. Participants will understand and explore some of the technical pitfalls that prevent machine learning systems from behaving ethically, and how to identify and correct for them.

Select topics:

  • Dataset and Problem Shift - Machine learning (ML) models are built on strong assumptions about their data, and it is very easy for real systems to violate these assumptions. We examine the limits of ML and cross-validation, and how pathological circumstances can be addressed with alternative approaches.
  • Causal versus Predictive Models - ML models leverage correlations in data to predict outcomes, on the assumption that the data generating process is fixed. This assumption is generally violated if the goal is to infer the outcome of an intervention in a system, which requires a causal model. We clarify the distinction between causal and predictive models and how they can be used & interpreted.
  • Loss Functions - Building a data-driven automated decision system requires explicitly specifying its objectives, often in the form of a loss function and constraints. The particular choice of loss, including what considerations to include and omit, is the primary mechanism of control designers have over the ethical operation of the system.
  • Fair Machine Learning - ML systems can perform well on average and still systematically err or discriminate against individuals or groups in the wider population. We examine some of the common notions of algorithmic fairness that attempt to measure and correct for such disparate treatment or outcome in ML systems.
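To make the loss-function and fairness topics above concrete, here is a minimal illustrative sketch (not course material): a logistic-regression classifier trained by gradient descent, with a demographic-parity penalty added to the standard loss so that the mean predicted score is pushed to be similar across a protected group. All data, variable names and the penalty weight are invented for the example; real systems would use an established fairness library and a carefully chosen fairness criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (hypothetical): feature x leaks protected-group membership a.
n = 2000
a = rng.integers(0, 2, n)                       # protected attribute (0 or 1)
x = rng.normal(a * 1.0, 1.0, n)                 # feature correlated with a
y = (x + rng.normal(0, 0.5, n) > 0.5).astype(float)
X = np.column_stack([x, np.ones(n)])            # feature + intercept

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, lam):
    """Gradient of: logistic loss + lam * (demographic-parity gap)^2."""
    p = sigmoid(X @ w)
    ce_grad = X.T @ (p - y) / n                 # standard logistic-loss gradient
    gap = p[a == 1].mean() - p[a == 0].mean()   # gap in mean score between groups
    dp = p * (1 - p)                            # derivative of sigmoid
    g1 = (X[a == 1] * dp[a == 1, None]).mean(axis=0)
    g0 = (X[a == 0] * dp[a == 0, None]).mean(axis=0)
    return ce_grad + lam * 2 * gap * (g1 - g0)  # chain rule on gap^2

def train(lam, steps=2000, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, lam)
    return w

for lam in (0.0, 5.0):
    p = sigmoid(X @ train(lam))
    gap = abs(p[a == 1].mean() - p[a == 0].mean())
    print(f"penalty weight {lam}: mean-score gap between groups = {gap:.3f}")
```

Increasing the penalty weight shrinks the score gap between groups at some cost to predictive accuracy; choosing that trade-off is exactly the kind of decision the courses argue belongs with leaders and stakeholders, not the loss function alone.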

More Information

For more information on our courses, please don't hesitate to contact us. A detailed course brochure is also available.