In a first-of-its-kind study published in Scientific Reports, machine learning researchers at Gradient Institute, working in partnership with psychology researchers at the Australian National University, have found significant evidence for causal links between students’ self-reported well-being and academic outcomes. The complexity and amount of data on students and school factors used in the study
AI-LEAP is a new Australia-based conference that aims to explore Artificial Intelligence (AI) from a multitude of perspectives: Legal, Ethical, Algorithmic and Political (LEAP). It draws broadly on computer science, the social sciences and the humanities to provide an exciting intellectual forum for a genuinely interdisciplinary exchange of ideas on one of the most pressing issues of our time. The first edition will take place in Canberra in December 2021.
Gradient's Lachlan McCalman and Dan Steinberg presented a tutorial, along with colleagues from Element AI, at the ACM Conference on Fairness, Accountability, and Transparency on 4 March 2021. See the video.
Gradient Institute Fellows Chris Dolman, Seth Lazar and Dimitri Semenovich, alongside Chief Scientist Tiberio Caetano, have written a paper investigating the question of which data should be used to price insurance policies. The paper argues that even if the use of some "rating factor" is lawful and helps predict risk, there can be legitimate reasons to reject its use. This suggests insurers should not only weigh immediate business and legal considerations, but also be guided by a more holistic ethical framework when deciding whether to use a given rating factor to set insurance premiums.
Gradient Institute has written a paper that extends, in a number of ways, the work on fair regression we submitted to the 2020 Ethics of Data Science Conference. First, the methods introduced in the earlier paper for quantifying the fairness of continuous decisions are benchmarked against “gold standard” (but typically intractable) techniques in order to
In this paper we study the problem of how to create quantitative, mathematical representations of fairness that can be incorporated into AI systems to promote fair AI-driven decisions.
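To give a flavour of what such a quantitative representation can look like (this is an illustrative sketch, not the specific measures developed in the paper): one of the simplest fairness metrics is the demographic parity gap, the difference in the rate of favourable decisions between two groups.

```python
# Illustrative sketch only -- a simple quantitative fairness measure,
# not the specific representations studied in the paper.
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Absolute difference in the mean decision between two groups.

    decisions: array of 0/1 (or continuous) model decisions
    groups:    array of 0/1 group-membership labels (hypothetical data)
    """
    decisions = np.asarray(decisions, dtype=float)
    groups = np.asarray(groups)
    return abs(decisions[groups == 0].mean() - decisions[groups == 1].mean())

# Hypothetical example: group 0 receives a favourable decision 75% of
# the time, group 1 only 25% of the time, giving a gap of 0.5.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A measure like this can be computed on a model's outputs and incorporated into training or monitoring as a constraint or penalty, which is the general sense in which mathematical fairness representations are "built into" AI systems.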
Finn Lattimore, a Gradient Principal Researcher, has published her work on developing a Bayesian approach to inferring the impact of interventions or actions. The work, done jointly with David Rohde (Criteo AI Lab), shows that representing causality within a standard Bayesian approach softens the boundary between tractable and impossible queries and opens up potential new
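As a minimal sketch of the general idea of Bayesian inference over an intervention's effect (assumed illustrative data and a conjugate Beta-Binomial model, not the model developed in the paper):

```python
# Illustrative sketch: Bayesian posterior over the effect of an
# intervention, using a conjugate Beta-Binomial model on hypothetical
# treated/control outcome counts. Not the model from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: successes out of trials in each arm.
treated_success, treated_n = 30, 50
control_success, control_n = 20, 50

# Beta(1, 1) priors; the posterior for each arm's outcome rate is
# Beta(1 + successes, 1 + failures). Draw posterior samples.
treated_post = rng.beta(1 + treated_success, 1 + treated_n - treated_success, 100_000)
control_post = rng.beta(1 + control_success, 1 + control_n - control_success, 100_000)

# Posterior samples of the intervention's effect on the outcome rate.
effect = treated_post - control_post
print(f"mean effect ~ {effect.mean():.2f}, "
      f"P(effect > 0) ~ {(effect > 0).mean():.2f}")
```

Because the causal quantity is represented as an ordinary posterior distribution, uncertainty about the effect is reported directly rather than as a point estimate.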