
New guidance to help businesses implement responsible AI

Gradient Institute, in collaboration with Australia's National Artificial Intelligence Centre (hosted by CSIRO), has released a report that outlines practical measures for businesses to adopt Responsible Artificial Intelligence.

There are many existing high-level principles and frameworks for Responsible AI. Although these are important for establishing organisational intent, they are often not directly helpful to the people building or overseeing such systems. The new report aims to bridge the gap between principles and implementation by exploring a selection of key practices and resources that promote alignment with the Australian Government's AI Ethics Principles. It examines approaches such as impact assessment, data curation, contextualising fairness, pilot studies and organisational training, and provides pointers to useful resources for each.

According to the recent Australian Responsible AI Index, although 82% of businesses believe they practise responsible AI, fewer than 24% have concrete measures in place. Bill Simpson-Young, CEO of Gradient Institute, hopes this report will inspire more organisations to embark on the journey towards responsible AI practices.

Some of the selected practices are very intuitive. For example, transparency towards impacted individuals can be as simple as informing users when they are interacting with an AI system; this builds trust and empowers them to make informed decisions.

Other practices are more nuanced, and we aim to make the reader aware of this nuance and point to resources where they can learn more. For example, while it is broadly accepted that fairness is important, what constitutes fair outcomes or fair treatment is open to interpretation and highly contextual: it depends on the harms and benefits of the system and on how impactful they are. It is the role of the system owner to consult relevant affected parties, domain and legal experts, and system stakeholders to determine how to contextualise fairness for their specific AI use case; that determination then informs all the design and operational decisions they make.
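To make the interpretive nature of fairness concrete, the toy sketch below (our illustration, not a method prescribed in the report; the data, group labels and function names are entirely hypothetical) compares two common statistical fairness metrics on the same set of predictions and shows that they can reach opposite conclusions, which is why the choice of metric must be contextualised with stakeholders.

```python
# Illustrative only: the same toy predictions evaluated under two
# different fairness metrics. All data here is made up.

def selection_rate(preds):
    """Fraction of individuals receiving the positive decision."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of truly-positive individuals who were selected."""
    selected = [p for p, y in zip(preds, labels) if y == 1]
    return sum(selected) / len(selected)

# Hypothetical decisions (1 = approved) and true outcomes for two groups.
preds_a, labels_a = [1, 1, 0, 1], [1, 1, 0, 0]
preds_b, labels_b = [1, 0, 0, 0], [1, 0, 0, 0]

# Demographic parity compares selection rates across groups...
gap_parity = selection_rate(preds_a) - selection_rate(preds_b)  # 0.75 - 0.25 = 0.5

# ...while equal opportunity compares true-positive rates.
gap_tpr = (true_positive_rate(preds_a, labels_a)
           - true_positive_rate(preds_b, labels_b))  # 1.0 - 1.0 = 0.0

# Under demographic parity the system looks unfair (gap of 0.5);
# under equal opportunity it looks fair (gap of 0.0).
print(f"parity gap: {gap_parity}, equal-opportunity gap: {gap_tpr}")
```

Neither metric is "correct" in the abstract; which one matters depends on the harms and benefits at stake in the specific deployment, which is exactly the consultation the report asks system owners to undertake.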

The report is released under a Creative Commons licence by the National AI Centre and Gradient Institute, inviting organisations to actively share their experiences and effective practices.

Read the press release here.

Report in full here.
