Yaya’s background includes roles as a researcher and business analyst in the Australian Public Service, focusing on generating user-centric outputs to address Whole-of-Government and agency needs. With a degree in Software Engineering (Honours) from The Australian National University, she is a passionate advocate for bridging the gap between technology and the people who use it.
Gradient Institute submitted a response to the Department of the Prime Minister and Cabinet’s Digital Technology Taskforce Issues Paper “Positioning Australia as a Leader in Digital Economy Regulation – Automated Decision Making and AI Regulation”. In this submission, Gradient Institute provides perspectives on the regulation of the digital economy in Australia, focusing on AI and automated decision making.
Future.NSW, presented by the NSW Government, held its thought-leadership event on 2 May 2022. The event brought together the public, the private sector, and the business and industry leaders and experts who help shape the NSW Government. The NSW Government supports the building of a smarter, customer-centric and fully digital NSW, working together with NSW residents.
Gradient Institute’s Chief Practitioner, Lachlan McCalman, wrote this latest blog post about the metaverse on Medium. The post argues that the metaverse has the potential to have a profound impact on the world, and as a result, we would be wise to plan conservatively and ensure that this technological convergence helps, rather than hurts, humanity.
Our Linkage Grant Proposal “Socially Responsible Insurance in the Age of Artificial Intelligence” has been approved by the Australian Research Council. The project will be led by Gradient Institute Fellow and ANU Professor Seth Lazar, along with Dr Jenny L. Davis and Dr Damian Clifford from the ANU, and a Gradient Institute Fellow based at the University of Sydney.
Today, in collaboration with Minderoo Foundation, we are releasing a report on de-risking automated decisions, which includes practical guidance for AI governance and AI risk management. Many organisations are using AI to make consequential decisions, such as deciding who gets insurance, a loan, or a job. When humans delegate decisions to AI, problems can arise.
In partnership with Minderoo Foundation, Gradient Institute has released the first version of our AI Impact Control Panel software. This tool helps decision-makers balance and constrain their system’s objectives without having to be ML experts. There is no objectively ‘correct’ solution to this balance of objectives: the answer depends on the values and priorities of the decision-makers.
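The core idea behind the control panel – trading off competing objectives under constraints chosen by the decision-maker rather than by an ML engineer – can be sketched in a few lines. The operating points, weights and caps below are purely hypothetical and are not taken from the tool itself; they simply show how different values lead to different choices.

```python
# Minimal sketch (hypothetical numbers, not from the AI Impact Control Panel):
# each candidate operating point trades expected profit against an estimated
# customer-harm score.
candidates = {
    "aggressive": {"profit": 1.00, "harm": 0.40},
    "balanced":   {"profit": 0.80, "harm": 0.20},
    "cautious":   {"profit": 0.55, "harm": 0.05},
}

def pick_operating_point(candidates, harm_weight, harm_cap):
    """Maximise profit - harm_weight * harm, subject to a hard cap on harm.
    The weight and the cap encode the decision-maker's values; there is no
    single 'correct' setting."""
    feasible = {k: v for k, v in candidates.items() if v["harm"] <= harm_cap}
    if not feasible:
        raise ValueError("no candidate satisfies the harm constraint")
    return max(feasible,
               key=lambda k: feasible[k]["profit"] - harm_weight * feasible[k]["harm"])

# Two stakeholders with different values reach different conclusions.
print(pick_operating_point(candidates, harm_weight=0.5, harm_cap=0.50))  # aggressive
print(pick_operating_point(candidates, harm_weight=1.0, harm_cap=0.25))  # balanced
```

The control panel surfaces these trade-offs through an interface rather than code, but the underlying question is the same: which balance of objectives reflects the organisation’s values?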
Gradient Institute, along with collaborators from ServiceNow, Vector Institute and the University of Tübingen, just published an article in the January edition of IEEE Computer laying out conceptual foundations for practical assessment of AI fairness. The article describes an AI fairness assessment approach developed by the authors along with collaborators from financial institutions and technology companies.
In a first-of-its-kind study published in Scientific Reports, machine learning researchers at Gradient Institute, working in partnership with psychology researchers at the Australian National University, have found significant evidence for causal links between students’ self-reported well-being and academic outcomes. The study drew on a large and complex set of data about students and school factors.
The Responsible AI Index report is a study of over 400 Australian-based organisations and their awareness of, intentions around, and use of Responsible AI practices. The report, the first of its kind in Australia, was launched in partnership with IAG, Telstra, Fifth Quadrant CX, and Ethical AI Advisory. Overall, Australian businesses are classified at the ‘Initiating’ stage.
We are excited to announce that Gradient Institute and Ethical AI Advisory have joined together! We have been working together as partners for the last year and realised how complementary we are – with Gradient working on the technical aspects of ensuring AI is used responsibly and Ethical AI Advisory working on the organisational aspects.
In this article, Gradient’s Dan Steinberg and Finn Lattimore show how machine learning can be used for evidence-based policy. They show how it can capture complex relationships in data, helping to mitigate bias from model mis-specification, and how regularisation can lead to better causal estimates.
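As a rough illustration of the regularisation point – a toy sketch on simulated data, not the experiments from the article – the snippet below compares ordinary least squares with ridge regression when estimating a known treatment effect from a small sample with many covariates. (For simplicity the penalty here shrinks every coefficient, including the treatment’s; in practice one would usually leave the treatment coefficient unpenalised.)

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n, p, true_effect = 60, 40, 1.0

# Simulated data: covariates X confound the treatment t and the outcome y.
X = rng.normal(size=(n, p))
t = 0.5 * X[:, 0] + rng.normal(size=n)                  # treatment depends on X[:, 0]
y = true_effect * t + X @ (0.1 * rng.normal(size=p)) + rng.normal(size=n)

features = np.column_stack([t, X])                      # treatment is column 0

ols = LinearRegression().fit(features, y)
ridge = Ridge(alpha=5.0).fit(features, y)

# With n barely larger than p, the OLS treatment coefficient is typically
# noisy; shrinking the covariate coefficients stabilises the estimate.
print("OLS estimate of the treatment effect:  ", round(ols.coef_[0], 2))
print("Ridge estimate of the treatment effect:", round(ridge.coef_[0], 2))
```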
AI-LEAP is a new Australia-based conference exploring Artificial Intelligence (AI) from a multitude of perspectives: Legal, Ethical, Algorithmic and Political (LEAP). It draws broadly on computer science, the social sciences and the humanities to provide an exciting intellectual forum for a genuinely interdisciplinary exchange of ideas on one of the most pressing issues of our time. The first edition will take place in Canberra in December 2021.
At the request of the Australian Government’s Department of Industry, Science, Energy and Resources, Gradient Institute provided an interactive “Artificial Intelligence Primer” training session for attendees at the National AI Summit.
In this article, Finn Lattimore and David Rohde show how representing causality within a standard Bayesian approach to inferring the impact of interventions or actions softens the boundary between tractable and impossible queries, and opens up potential new approaches to causal inference.
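To give a flavour of the idea – a minimal conjugate sketch on simulated data, not the models from the paper – the snippet below treats an interventional quantity as just another derived quantity of a Bayesian posterior: with a Beta-Bernoulli model over a binary confounder, treatment and outcome, the back-door adjusted effect of the treatment comes with a full posterior distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observational data: confounder z influences both treatment x and outcome y.
n = 2000
z = rng.binomial(1, 0.4, n)
x = rng.binomial(1, np.where(z == 1, 0.8, 0.3))
y = rng.binomial(1, 0.3 + 0.2 * x + 0.3 * z)            # true effect of x is 0.2

# Conjugate Beta(1, 1) posteriors for p(z = 1) and for p(y = 1 | x, z).
draws = 5000
post_pz = rng.beta(1 + z.sum(), 1 + (1 - z).sum(), draws)
post_py = {}
for xv in (0, 1):
    for zv in (0, 1):
        cell = (x == xv) & (z == zv)
        post_py[xv, zv] = rng.beta(1 + y[cell].sum(),
                                   1 + cell.sum() - y[cell].sum(), draws)

# Posterior over p(y = 1 | do(x)) via the back-door adjustment: sum_z p(z) p(y | x, z).
do_x1 = (1 - post_pz) * post_py[1, 0] + post_pz * post_py[1, 1]
do_x0 = (1 - post_pz) * post_py[0, 0] + post_pz * post_py[0, 1]
effect = do_x1 - do_x0

print("posterior mean effect:", effect.mean().round(3), "(true value 0.2)")
print("95% credible interval:", np.percentile(effect, [2.5, 97.5]).round(3))
```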
Gradient Institute’s Chief Practitioner, Lachlan McCalman, describes our collaborative work with the Monetary Authority of Singapore and industry partners to develop a practical AI Fairness assessment methodology.
Gradient’s Chief Scientist, Tiberio Caetano, explains how next-best-action systems are often used to optimise business metrics and individual customer outcomes, but questions whether they could also become a vehicle for promoting social good.
Gradient’s Lachlan McCalman and Dan Steinberg presented a tutorial, along with colleagues from Element AI, at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) on 4 March 2021. See the video.
An article in The Conversation by Gradient’s Tiberio Caetano and Bill Simpson-Young discussing a technical paper co-written with the Australian Human Rights Commission, the Consumer Policy Research Centre, CHOICE and CSIRO’s Data61.
A new technical paper for the Australian Human Rights Commission produced by Gradient Institute with the Consumer Policy Research Centre, CHOICE and CSIRO’s Data61 shows how businesses can identify algorithmic bias in artificial intelligence systems, and proposes steps they can take to address the problem.
Gradient Institute and Ethical AI Advisory have formed an alliance to tackle one of the most urgent problems facing Australia today: the development and deployment of artificial intelligence that is both ethical and trustworthy.
Gradient Institute Fellows Chris Dolman, Seth Lazar and Dimitri Semenovich, alongside Chief Scientist Tiberio Caetano, have written a paper investigating the question of which data should be used to price insurance policies. The paper argues that even if the use of some “rating factor” is lawful and helps predict risk, there can be legitimate reasons to reject its use. This suggests insurers should look beyond immediate business and legal considerations and be guided by a more holistic ethical framework when deciding whether to use a particular rating factor to set insurance premiums.
Gradient Institute Fellow Kimberlee Weatherall and Chief Scientist Tiberio Caetano have written a submission to the Australian Human Rights Commission on their “Human Rights and Technology” discussion paper.
Our Chief Scientist, Tiberio Caetano, has summarised some lessons we have learned over the last year creating practical implementations of AI systems from ethical AI principles. Tiberio is a member of the OECD’s Network of Experts in Artificial Intelligence and wrote this article for the network’s blog.
Gradient Institute has written a paper that extends the work we submitted to the 2020 Ethics of Data Science Conference on fair regression in a number of ways. First, the methods introduced in the earlier paper for quantifying the fairness of continuous decisions are benchmarked against “gold standard” (but typically intractable) techniques.
In this paper we study the problem of how to create quantitative, mathematical representations of fairness that can be incorporated into AI systems to promote fair AI-driven decisions.
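To make this concrete – a toy sketch, not one of the formulations studied in the paper – the snippet below encodes one simple quantitative notion of fairness (a demographic-parity gap between two groups’ average scores) and folds it into a penalised objective alongside an accuracy term.

```python
import numpy as np

def demographic_parity_gap(scores, group):
    """Absolute difference in mean score between two groups: one simple
    quantitative representation of (un)fairness."""
    scores, group = np.asarray(scores, float), np.asarray(group)
    return abs(scores[group == 0].mean() - scores[group == 1].mean())

def penalised_objective(scores, labels, group, lam=1.0):
    """Accuracy loss (squared error) plus a weighted fairness penalty."""
    accuracy_loss = np.mean((np.asarray(scores, float) - np.asarray(labels)) ** 2)
    return accuracy_loss + lam * demographic_parity_gap(scores, group)

labels = np.array([1, 0, 1, 0])
group  = np.array([0, 0, 1, 1])

even_scores   = np.array([0.8, 0.2, 0.8, 0.2])   # same average score in both groups
skewed_scores = np.array([0.9, 0.3, 0.7, 0.1])   # group 1 scored lower on average

print(penalised_objective(even_scores, labels, group))    # 0.04 + 1.0 * 0.0
print(penalised_objective(skewed_scores, labels, group))  # 0.05 + 1.0 * 0.2
```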
Gradient Institute Chief Scientist Tiberio Caetano has been appointed to the OECD Network of Experts on AI (ONE AI). The expert group provides policy, technical and business input to inform OECD analysis and recommendations. It is a multi-disciplinary and multi-stakeholder group.
Finn Lattimore, a Gradient Principal Researcher, has published her work on developing a Bayesian approach to inferring the impact of interventions or actions. The work, done jointly with David Rohde (Criteo AI Lab), shows that representing causality within a standard Bayesian approach softens the boundary between tractable and impossible queries and opens up potential new approaches to causal inference.
Gradient has released a White Paper examining four key challenges that must be addressed to make progress towards developing ethical artificial intelligence (AI) systems. These challenges arise from the way existing AI systems reason and make decisions. Unlike humans, AI systems only consider the objectives, data and constraints explicitly provided by their designers and operators.
Gradient Institute made a submission to the Australian Department of Industry, Science, Energy and Resources for the public consultation on the discussion paper “Artificial Intelligence: Australia’s Ethics Framework (A Discussion Paper)” released by the Department of Industry, Innovation and Science on 5 April 2019.
We at the Gradient Institute are often asked who decides the particular ethical stance encoded into an ethical AI system. In particular, because we work on building such systems, the question also takes the form of “whose ethics” we will encode into them. Our Chief Practitioner, Lachlan McCalman, has written a blog post to address such questions.
Societies are increasingly, and legitimately, concerned that automated decisions based on historical data can lead to unfair outcomes for disadvantaged groups. One of the most common pathways to unintended discrimination by AI systems is that they perpetuate historical and societal biases when trained on historical data. Two of our Principal Researchers, Simon O’Callaghan and Alistair Reid, discuss whether we can improve the fairness of a machine learning model by removing sensitive attribute fields from the data.
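Their conclusion can be illustrated with a small simulation – a hypothetical sketch, not the analysis in the post: even when the sensitive attribute is dropped from the training data, a correlated proxy feature lets the model reconstruct much of the historical disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

sensitive = rng.binomial(1, 0.5, n)                 # group membership (not given to the model)
proxy = sensitive + rng.normal(0, 0.5, n)           # feature correlated with the group
skill = rng.normal(0, 1, n)                         # legitimate predictive feature
# Historical labels carry a bias against group 1.
label = (skill - 0.8 * sensitive + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, proxy])                 # sensitive column removed
scores = LogisticRegression().fit(X, label).predict_proba(X)[:, 1]

# Despite 'unawareness', the model scores group 1 systematically lower,
# because the proxy carries the group information.
print("mean score, group 0:", scores[sensitive == 0].mean().round(2))
print("mean score, group 1:", scores[sensitive == 1].mean().round(2))
```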