Latest News

Articles, opinion pieces, white papers, reports and news releases

  • Explainer on Causal Inference with Bayes Rule
    In this article, Finn Lattimore and David Rohde show how representing causality within a Bayesian approach to inferring the impact of interventions or actions softens the boundary between tractable and impossible queries, and opens up potential new approaches to causal inference.
  • Practical fairness assessments for AI systems in finance
    Gradient Institute’s Chief Practitioner, Lachlan McCalman, describes our collaborative work with the Monetary Authority of Singapore and industry partners to develop a practical AI Fairness assessment methodology.
  • Next-best-action for social good
    Gradient’s Chief Scientist, Tiberio Caetano, explains how next-best-action systems are typically used to optimise business metrics and individual customer outcomes, and asks whether they could also become a vehicle for promoting social good.
  • Tutorial: Using Harms and Benefits to Ground Practical AI Fairness Assessments in Finance
    Gradient’s Lachlan McCalman and Dan Steinberg, along with colleagues from Element AI, presented a tutorial at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) on 4 March 2021. See the video.
  • Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this
    An article in The Conversation by Gradient’s Tiberio Caetano and Bill Simpson-Young discussing a technical paper co-written with the Australian Human Rights Commission, the Consumer Policy Research Centre, CHOICE and CSIRO’s Data61.
  • New tools for fairer AI
    A new technical paper for the Australian Human Rights Commission, produced by Gradient Institute with the Consumer Policy Research Centre, CHOICE and CSIRO’s Data61, shows how businesses can identify algorithmic bias in artificial intelligence systems, and proposes steps they can take to address the problem.
  • Turbocharging ethical AI in Australia
    Gradient Institute and Ethical AI Advisory have formed an alliance to tackle one of the most urgent problems facing Australia today: the development and deployment of artificial intelligence that is both ethical and trustworthy.
  • Ethics of insurance pricing
    Gradient Institute Fellows Chris Dolman, Seth Lazar and Dimitri Semenovich, alongside Chief Scientist Tiberio Caetano, have written a paper investigating the question of which data should be used to price insurance policies. The paper argues that even if the use of some “rating factor” is lawful and helps predict risk, there can be legitimate reasons to reject its use. This suggests insurers should not be guided by immediate business and legal considerations alone, but also by a more holistic ethical framework when deciding whether to use a certain rating factor to set insurance premiums.
  • Submission to Australian Human Rights Commission
    Gradient Institute Fellow Kimberlee Weatherall and Chief Scientist Tiberio Caetano have written a submission to the Australian Human Rights Commission on their “Human Rights and Technology” discussion paper.
  • Converting ethical AI principles into practice
    Our Chief Scientist, Tiberio Caetano, has summarised some lessons we have learned over the last year creating practical implementations of AI systems from ethical AI principles. Tiberio is a member of the OECD’s Network of Experts on Artificial Intelligence and wrote this article for the network’s blog.
  • Fast methods for fair regression
    Gradient Institute has written a paper that extends the work we submitted to the 2020 Ethics of Data Science Conference on fair regression in a number of ways. First, the methods introduced in the earlier paper for quantifying the fairness of continuous decisions are benchmarked against “gold standard” (but typically intractable) techniques.
  • Using probabilistic classification to measure fairness for regression
    In this paper we study the problem of how to create quantitative, mathematical representations of fairness that can be incorporated into AI systems to promote fair AI-driven decisions.
  • Gradient Institute Chief Scientist appointed to OECD advisory group on artificial intelligence
    Gradient Institute Chief Scientist Tiberio Caetano has been appointed to the OECD Network of Experts on AI (ONE AI). The expert group provides policy, technical and business input to inform OECD analysis and recommendations. It is a multi-disciplinary and multi-stakeholder group.
  • Causal inference with Bayes rule
    Finn Lattimore, a Gradient Principal Researcher, has published her work on developing a Bayesian approach to inferring the impact of interventions or actions. The work, done jointly with David Rohde (Criteo AI Lab), shows that representing causality within a standard Bayesian approach softens the boundary between tractable and impossible queries and opens up potential new approaches to causal inference.
  • Practical challenges for ethical AI (White Paper)
    Gradient has released a White Paper examining four key challenges that must be addressed to make progress towards developing ethical artificial intelligence (AI) systems. These challenges arise from the way existing AI systems reason and make decisions. Unlike humans, AI systems only consider the objectives, data and constraints explicitly provided by their designers and operators.
  • Our response to “Artificial Intelligence: Australia’s Ethics Framework”
    Gradient Institute made a submission to the Australian Department of Industry, Science, Energy and Resources for the public consultation on “Artificial Intelligence: Australia’s Ethics Framework (A Discussion Paper)”, released by the Department of Industry, Innovation and Science on 5 April 2019.
  • Whose ethics?
    We at Gradient Institute are often asked who decides the particular ethical stance encoded into an ethical AI system. Because we work on building such systems, the question often takes the form of “whose ethics” we will encode into them. Our Chief Practitioner, Lachlan McCalman, has written a blog post addressing these questions.
  • Ignorance isn’t bliss
    Societies are increasingly, and legitimately, concerned that automated decisions based on historical data can lead to unfair outcomes for disadvantaged groups. One of the most common pathways to unintended discrimination by AI systems is that they perpetuate historical and societal biases when trained on historical data. Two of our Principal Researchers, Simon O’Callaghan and Alistair Reid, discuss whether we can improve the fairness of a machine learning model by removing sensitive attribute fields from the data.
  • Helping machines to help us
    Our Chief Scientist, Tiberio Caetano, has written a blog post outlining Gradient Institute’s approach to building ethical AI.