Report

De-risking automated decisions


Today, in collaboration with the Minderoo Foundation, we are releasing a report on de-risking automated decisions, with practical guidance on AI governance and AI risk management.

Many organisations use AI to make consequential decisions, such as who gets insurance, a loan, or a job. When humans delegate these decisions to AI, problems can arise because AI lacks elements often required for good decision-making, such as common sense, moral reasoning, and a basic understanding of the law. Numerous incidents have shown that AI can produce unlawful, immoral, or discriminatory outcomes for individuals through opaque and unaccountable decisions, including cases of AI discriminating against women and minorities.

What can organisations do to reap the benefits of AI-driven decision-making while preventing these harms? This new report provides general guidelines for organisations to reduce the risks of using AI for automated decision-making. It explains some of the novel risks AI introduces, illustrates them through case studies, and suggests a range of preventative, detective, and corrective actions to reduce and manage those risks.

Related news

Gradient Institute 2025 Impact Report
News

In 2025, Gradient Institute shaped Australia's national AI guidance, contributed to global AI safety science, and helped hundreds of organisations build the capabilities to develop, deploy and use trustworthy AI systems.

Read more
Launching Gradient Gatherings
Event

We're hosting our first Gradient Gatherings event in Sydney, and we'd love to see you there.

Read more
Scaling Sameness
Explainer

There is an intuitive logic to redundancy. Send three engineers to check the bridge. Have two pilots in the cockpit. Run the numbers twice. If independent reviewers reach the same conclusion, we treat that agreement as evidence the conclusi...

Read more
