In this article, Gradient’s Dan Steinberg and Finn Lattimore show how machine learning can be used for evidence-based policy. They show how it can capture complex relationships in data, mitigating bias from model mis-specification, and how regularisation can lead to better causal estimates.
AI-LEAP is a new Australia-based conference aimed at exploring Artificial Intelligence (AI) from a multitude of perspectives: Legal, Ethical, Algorithmic and Political (LEAP). It draws broadly on computer science, the social sciences and the humanities to provide an exciting intellectual forum for a genuinely interdisciplinary exchange of ideas on one of the most pressing issues of our time. The first edition will take place in Canberra in December 2021.
At the request of the Australian Government’s Department of Industry, Science, Energy and Resources, Gradient Institute provided an interactive “Artificial Intelligence Primer” training session for attendees at the National AI Summit.
In this article, Finn Lattimore and David Rohde show how a Bayesian approach to inferring the impact of interventions or actions representing causality softens the boundary between tractable and impossible queries, and opens up potential new approaches to causal inference.
Gradient Institute’s Chief Practitioner, Lachlan McCalman, describes our collaborative work with the Monetary Authority of Singapore and industry partners to develop a practical AI Fairness assessment methodology.
Gradient’s Chief Scientist, Tiberio Caetano, explains how next-best-action systems are often used to optimise business metrics and individual customer outcomes, but questions whether they could also become a vehicle for promoting social good.
Gradient’s Lachlan McCalman and Dan Steinberg presented a tutorial, along with colleagues from Element AI, at the ACM Fairness Accountability and Transparency Conference on 4 March 2021. See the video.
An article in The Conversation by Gradient’s Tiberio Caetano and Bill Simpson-Young discussing a technical paper co-written with the Australian Human Rights Commission, the Consumer Policy Research Centre, CHOICE and CSIRO’s Data61.
A new technical paper for the Australian Human Rights Commission produced by Gradient Institute with the Consumer Policy Research Centre, CHOICE and CSIRO’s Data61 shows how businesses can identify algorithmic bias in artificial intelligence systems, and proposes steps they can take to address the problem.
Gradient Institute and Ethical AI Advisory have formed an alliance to tackle one of the most urgent problems facing Australia today: the development and deployment of artificial intelligence that is both ethical and trustworthy.
Gradient Institute Fellows Chris Dolman, Seth Lazar and Dimitri Semenovich, alongside Chief Scientist Tiberio Caetano, have written a paper investigating the question of which data should be used to price insurance policies. The paper argues that even if the use of some “rating factor” is lawful and helps predict risk, there can be legitimate reasons to reject its use. This suggests insurers should not only weigh immediate business and legal considerations but also be guided by a more holistic ethical framework when deciding whether to use a certain rating factor to set insurance premiums.
Gradient Institute Fellow Kimberlee Weatherall and Chief Scientist Tiberio Caetano have written a submission to the Australian Human Rights Commission on their “Human Rights and Technology” discussion paper.
Our Chief Scientist, Tiberio Caetano, has summarised some lessons we have learned over the last year creating practical implementations of AI systems from ethical AI principles. Tiberio is a member of the OECD’s Network of Experts in Artificial Intelligence and wrote this article for the network’s blog.
Gradient Institute has written a paper that extends the work we submitted to the 2020 Ethics of Data Science Conference on fair regression in a number of ways. First, the methods introduced in the earlier paper for quantifying the fairness of continuous decisions are benchmarked against “gold standard” (but typically intractable) techniques…
Gradient Institute Chief Scientist Tiberio Caetano has been appointed to the OECD Network of Experts on AI (ONE AI). The expert group provides policy, technical and business input to inform OECD analysis and recommendations. It is a multi-disciplinary and multi-stakeholder group.
Finn Lattimore, a Gradient Principal Researcher, has published her work on developing a Bayesian approach to inferring the impact of interventions or actions. The work, done jointly with David Rohde (Criteo AI Lab), shows that representing causality within a standard Bayesian approach softens the boundary between tractable and impossible queries and opens up potential new approaches to causal inference.
Gradient has released a White Paper examining four key challenges that must be addressed to make progress towards developing ethical artificial intelligence (AI) systems. These challenges arise from the way existing AI systems reason and make decisions. Unlike humans, AI systems only consider the objectives, data and constraints explicitly provided by their designers and operators.
Gradient Institute made a submission to the Australian Department of Industry, Science, Energy and Resources for the public consultation on the discussion paper “Artificial Intelligence: Australia’s Ethics Framework (A Discussion Paper)” released by the Department of Industry, Innovation and Science on 5 April 2019.
We at the Gradient Institute are often asked who decides the particular ethical stance encoded into an ethical AI system. In particular, because we work on building such systems, the question also takes the form of “whose ethics” we will encode into them. Our Chief Practitioner, Lachlan McCalman, has written a blog post to address such questions.
Societies are increasingly, and legitimately, concerned that automated decisions based on historical data can lead to unfair outcomes for disadvantaged groups. One of the most common pathways to unintended discrimination by AI systems is that they perpetuate historical and societal biases when trained on historical data. Two of our Principal Researchers, Simon O’Callaghan and Alistair Reid, discuss whether we can improve the fairness of a machine learning model by removing sensitive attribute fields from the data.
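The pitfall the post examines can be illustrated with a small simulation (a minimal sketch, not taken from the post itself; all variable names and numbers here are hypothetical). Even when the sensitive attribute is removed from the training data, a correlated "neutral" feature can act as a proxy, so a model trained without the attribute still reproduces the historical disparity between groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
g = rng.integers(0, 2, n)              # sensitive attribute (e.g. group membership)
x = g + rng.normal(0.0, 0.5, n)        # a "neutral" feature that happens to proxy for g
y = 2.0 * g + rng.normal(0.0, 0.1, n)  # historically biased outcome

# Fit a least-squares model using ONLY the proxy feature: g has been removed.
A = np.column_stack([x, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# Average prediction per group: the disparity survives removing g,
# because the model recovers g indirectly through x.
gap = pred[g == 1].mean() - pred[g == 0].mean()
print(f"prediction gap between groups: {gap:.2f}")
```

Under these assumptions the model transfers roughly half of the original outcome gap onto the proxy feature, which is why "fairness through unawareness" alone is generally insufficient.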