Australian Human Rights Commission

Case Study
Gradient Institute worked with the Australian Human Rights Commission and others to identify some of the causes of algorithmic bias and examine ways of reducing or mitigating it.

Gradient Institute worked with the Australian Human Rights Commission (AHRC), the Consumer Policy Research Centre, CHOICE and CSIRO’s Data61 to develop a technical paper addressing algorithmic bias.

This formed part of the Human Rights and Technology Project being undertaken by the AHRC. The larger project “aims to advance human rights protection in the context of unprecedented technological change. It considers how law, policy, incentives and other measures can promote and protect human rights in respect of new and emerging technologies.”

In this project, Gradient Institute provided the technical work on algorithmic bias, including running simulations on synthetic data and writing the technical content of the report. Gradient also worked closely with the AHRC on the full report to ensure that it was technically accurate as well as accessible to a wide audience. In addition, Gradient developed an online interactive simulation demonstrating algorithmic bias, which the AHRC released on its website in conjunction with the release of the report.

The final technical paper is titled “Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias” and is available here.

The AHRC intends to use the report to help educate industry, government and the public about algorithmic bias and how to manage it. In particular, as Human Rights Commissioner Ed Santow said on the report’s release, “New technology should give us what we want and need, not what we fear. Australians should be told when AI is used in decisions that affect them. The best way to rebuild public trust in the use of AI by government and corporations is by ensuring transparency, accountability and independent oversight. A clear national strategy and good leadership will give Australia a competitive advantage and technology that Australians can trust.”

The paper shows that algorithmic bias can result in decisions that are unfair, or even unlawful. It demonstrates how businesses can identify algorithmic bias in AI systems and proposes steps they can take to address and mitigate the problem, offering practical guidance to help ensure that decisions made with AI are fair, accurate and comply with human rights. To show how algorithmic bias can arise in practice, the paper works through a detailed hypothetical simulation: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The paper concludes with five recommendations for businesses and five approaches for mitigating algorithmic bias.
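To make the mechanism concrete, the sketch below shows one way this kind of bias can arise in a model trained on synthetic data. It is an illustrative assumption, not the simulation used in the paper: the retailer scenario, variable names (`group`, `income`, `postcode_score`) and all parameters are invented for demonstration.

```python
# A hypothetical retailer trains a model to predict which customers will
# repay, then offers favourable terms to those predicted likely to repay.
# All names and parameters here are illustrative, not from the AHRC paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (e.g., membership of a disadvantaged group).
group = rng.binomial(1, 0.3, n)

# Historical disadvantage: group 1 has lower income on average.
income = rng.normal(50 - 15 * group, 10, n)

# A proxy feature (e.g., a postcode-derived score) correlated with group
# membership but not, in itself, with ability to pay.
postcode_score = rng.normal(group, 0.5, n)

# True repayment behaviour depends on income only.
repays = (income + rng.normal(0, 5, n) > 40).astype(int)

# The retailer excludes the protected attribute but keeps the proxy.
X = np.column_stack([income, postcode_score])
model = LogisticRegression(max_iter=1000).fit(X, repays)

# Offer the favourable product to customers predicted likely to repay.
offer = model.predict(X)

# Compare how often each group receives the favourable offer.
for g in (0, 1):
    print(f"group {g}: favourable-offer rate = {offer[group == g].mean():.2f}")
```

Running this shows a markedly lower favourable-offer rate for the disadvantaged group, even though the protected attribute is never given to the model: societal bias baked into the training data, compounded by a proxy feature, flows straight through into the decisions.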

A summary of the paper was also published as an article in The Conversation.