Australian Government releases mandatory AI guardrails paper and voluntary AI safety standard

The Australian Government has released two key documents aimed at promoting safe and responsible AI in Australia, with Gradient Institute contributing expertise to their development.

The mandatory AI guardrails paper was developed in consultation with the temporary AI Expert Group, of which Gradient Institute CEO Bill Simpson-Young is a member. The paper proposes:

  • A definition of "high-risk AI"

  • 10 regulatory guardrails for high-risk AI, designed to mitigate potential harms throughout the AI lifecycle

  • Regulatory options to enforce these guardrails, building on existing legal frameworks.

The government is seeking public feedback on this proposal paper, recognising the critical importance of this consultation process for Australia's future. Given AI's far-reaching implications for individuals and society, we encourage broad engagement in this process, and will be making a submission ourselves.

Concurrently released, the new Voluntary AI Safety Standard aims to:

  • Support and promote best-practice governance

  • Help businesses adopt AI technologies safely and responsibly

The Gradient Institute team also contributed to the development of the voluntary standard.

These releases follow the government's interim response to the "Safe and Responsible AI in Australia" consultation, released earlier this year, to which Gradient Institute made a submission.
