Course for Data Scientists

Two-day course for anyone working closely with AI and machine learning systems

One of the most urgent problems facing Australian business today is developing and deploying artificial intelligence (AI) that is both ethical and trustworthy.

This two-day course aims to develop the technical skills necessary for building systems that use machine learning to make automated decisions whilst accounting for ethical objectives.

Participants in this instructor-led course will explore simplified systems that use ML to make automated decisions whilst accounting for ethical objectives, examine some of the technical pitfalls that prevent ML systems from behaving ethically, and learn how to identify and correct for them.

DATE AND TIME
Tuesday 25 May 2021, 9:00 am – 5:00 pm AEST
Wednesday 26 May 2021, 9:00 am – 5:00 pm AEST

STANDARD TICKET: $1,760 per person
EARLY BIRD TICKETS – $1,600 per person: Book by 11:59pm Monday 3 May 2021 to get the discounted early bird price.
GROUP BOOKINGS – $1,280 per person: Book two or more people to get the discounted group price.
Registrations close at 11:59pm on Monday 10 May 2021. Listed prices include the discount and GST.
WHO SHOULD TAKE THIS COURSE
  • Technical staff who have experience building data-driven models
  • Professionals experienced in interpreting graphs and familiar with terms such as “parameter optimisation”, “overfitting” and “model validation”

While many of the concepts discussed are applicable to a wide range of AI systems, the course content primarily focuses on models built using structured and labelled training data. It includes presentations, discussions, hands-on interactive exercises, and an applied project.

WHAT YOU WILL LEARN
Automated Decision Making
  • Core concepts underlying how ML systems operate and implications for ethics
  • Review of the foundations of model validation
  • How to specify objectives/intent in an ML system
  • ‘Supervised learning’, ‘overfitting’ and ‘underfitting’ (see the sketch after this list)
  • Understanding uncertainty in predictions from automated decision-making systems
  • Classification and regression
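To give a flavour of the notebook exercises, here is a minimal sketch of how a held-out validation set exposes over- and underfitting. It is illustrative only, not actual course material, and assumes Python with scikit-learn, using a synthetic dataset in place of real training data.

    # Illustrative sketch: diagnosing over- and underfitting with a
    # held-out validation set (assumes scikit-learn; synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic labelled data standing in for structured training data.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

    for depth in (1, 3, 10, None):  # None lets the tree grow until the leaves are pure
        model = DecisionTreeClassifier(max_depth=depth, random_state=0)
        model.fit(X_train, y_train)
        print(f"depth={depth}: train={model.score(X_train, y_train):.2f}, "
              f"validation={model.score(X_val, y_val):.2f}")

A large gap between training and validation scores signals overfitting; low scores on both signal underfitting.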
Causal Versus Predictive Models
  • How ML models rely on correlations in data to predict outcomes, and assume the data-generating process is fixed
  • How failure to consider causality in models used to drive decisions and interventions can lead to poor results and unintended consequences, despite good intentions
  • The distinction between causal and predictive models
  • How causal and predictive models can be used and interpreted
  • Identifying when a causal model is required
  • Understanding Simpson’s Paradox (see the sketch after this list)
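As a taste of this topic, the sketch below reproduces Simpson’s Paradox with numbers adapted from a classic medical example (illustrative only; assumes Python with pandas): one treatment has the higher success rate within every subgroup, yet looks worse once the subgroups are pooled.

    # Illustrative sketch of Simpson's Paradox (assumes pandas; the
    # figures are adapted from a classic medical example).
    import pandas as pd

    data = pd.DataFrame({
        "treatment": ["A", "A", "B", "B"],
        "severity":  ["mild", "severe", "mild", "severe"],
        "successes": [81, 192, 234, 55],
        "patients":  [87, 263, 270, 80],
    })
    data["rate"] = data["successes"] / data["patients"]
    print(data)  # treatment A beats B within both severity groups...

    pooled = data.groupby("treatment")[["successes", "patients"]].sum()
    print(pooled["successes"] / pooled["patients"])  # ...yet B beats A overall

The reversal arises because severity influences both the choice of treatment and the outcome, which is exactly why decisions driven by purely predictive models can go wrong.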
Interpretability, Transparency and Accountability
  • Why it’s not always clear what a ‘transparent’ model means
  • The three critical ingredients of an ethical AI system
  • How these help identify when models may break down, lack vital context, or have been designed or motivated in an unacceptable way
  • Tools and techniques to make ML models more interpretable and transparent
  • How to communicate key information about model behaviour and ethical risks to decision makers
  • Motivations and audience for interpretability
  • Feature importance, partial dependence and causality
  • Local interpretability and LIME
  • Global interpretability and surrogate models (see the sketch after this list)
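The sketch below shows one such technique, a global surrogate: a shallow, human-readable decision tree trained to mimic a black-box model (illustrative only; assumes Python with scikit-learn and synthetic data).

    # Illustrative sketch of a global surrogate model (assumes scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Train the surrogate on the black box's predictions, not the true labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # 'Fidelity' is how often the readable surrogate agrees with the black box.
    print("fidelity:", surrogate.score(X, black_box.predict(X)))
    print(export_text(surrogate))  # the human-readable approximation

A surrogate is only trustworthy where its fidelity is high, which is why checking agreement with the black box matters as much as reading the tree.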
Loss Functions and Robust Modelling
  • Building a data-driven automated decision system with explicit objectives (e.g. loss function and constraints)
  • Choice of loss function, including which considerations to include and which to omit, as the primary control mechanism for ethical operation
  • How losses in data-driven systems are specified by data and rely on assumptions about that data
  • Design choices and assumptions made when translating a real-world problem into an algorithmic decision-making system
  • The ethical issues that arise from design choices and assumptions
  • Encoding values in loss functions, and cost-sensitive classification
  • Calibration and decision-making based on predicted probabilities (see the sketch after this list)
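As a small illustration of these ideas, the sketch below shows how an explicit cost matrix, rather than a default 0.5 cut-off, sets the decision threshold applied to calibrated predicted probabilities (illustrative only; assumes Python with NumPy, and the costs are made-up value judgements).

    # Illustrative sketch of cost-sensitive decision making (assumes NumPy;
    # the costs here are made-up value judgements, not course figures).
    import numpy as np

    COST_FP = 1.0   # cost of intervening when we shouldn't (false positive)
    COST_FN = 10.0  # cost of failing to intervene (false negative)

    # Intervening is cheaper in expectation when p * COST_FN >= (1 - p) * COST_FP,
    # i.e. when p >= COST_FP / (COST_FP + COST_FN).
    threshold = COST_FP / (COST_FP + COST_FN)

    p = np.array([0.05, 0.2, 0.6, 0.95])  # illustrative predicted probabilities
    print(threshold, p >= threshold)      # ~0.09; all but the first case trigger action

Note that the threshold only has this meaning if the probabilities are well calibrated, which is why calibration and decision-making are treated together.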
‘Fair’ Machine Learning
  • How ML systems perform well on average, but can still systematically err, or discriminate against individuals or groups
  • Some of the common notions of algorithmic fairness
  • Approaches to measuring and correcting for disparate treatment or outcomes in ML systems
  • Sources of unfairness in ML models
  • Fairness metrics (see the sketch after this list)
  • Approaches to removing bias
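The sketch below computes two common group fairness metrics, the gap in selection rates (demographic parity) and the gap in true-positive rates (equal opportunity), on deliberately biased synthetic decisions (illustrative only; assumes Python with NumPy).

    # Illustrative sketch of two group fairness metrics (assumes NumPy;
    # the data and the biased decision rule are synthetic).
    import numpy as np

    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)             # protected attribute (0 or 1)
    labels = rng.integers(0, 2, size=1000)            # true outcomes
    decisions = rng.random(1000) < 0.4 + 0.2 * group  # a deliberately biased rule

    a, b = group == 0, group == 1

    def tpr(d, y):  # true-positive rate: decisions among the truly positive
        return d[y == 1].mean()

    # Demographic parity: do both groups receive positive decisions at similar rates?
    print("selection-rate gap:", decisions[a].mean() - decisions[b].mean())

    # Equal opportunity: among the truly positive, are both groups treated alike?
    print("TPR gap:", tpr(decisions[a], labels[a]) - tpr(decisions[b], labels[b]))

Which metric matters, and how much disparity is tolerable, are ethical judgements rather than purely technical ones.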
Applied Project

Participants will undertake a project that puts the concepts learned into practice. They will work in teams to:

  • Analyse an algorithmic system
  • Identify potential ethical issues
  • Propose solutions
  • Present the results to the rest of the class at the end of the day
COURSE FORMAT

This course is delivered live and remotely by two instructors. We replicate a classroom setting using video-conferencing, live chat and ‘break-out rooms’. At the start of each topic there is a presentation of key concepts, followed by class discussion. Concepts are then explored in interactive exercises using Jupyter notebooks, with one-on-one guidance from the tutors. The presentation material and notebook solutions are provided as reference material.

To participate in the course, you will need:

  • Reliable internet connection
  • Computer with a webcam, microphone, and audio output
  • Current version of Firefox, Chrome/Chromium, Safari, or Edge browsers
  • Access to Microsoft Teams and gradientinstitute.org from your network

Information about how to join the course will be sent to you closer to the date of the course.

Questions? Email us, or ask us about tailoring this course for your organisation.

Terms and Conditions: By registering for this event, you accept the terms and conditions of Gradient Institute, an independent non-profit that researches, designs, and develops ethical AI systems. All published ticket prices are in Australian dollars, and the course will be taught in English.