Submission

Gradient Institute response to AI regulation discussion paper

Gradient Institute submitted a response to the Department of Industry, Science and Resources' discussion paper “Safe and Responsible AI in Australia”.

Gradient's response highlights two key issues in the discussion paper:

  • The ‘risk-based’ approach to AI regulation outlined in the paper fails to properly target controls towards the context-specific risks posed by an application, and would lead to ineffective management of those risks.

  • The new risks to public safety created by certain highly capable foundation models (‘frontier’ models) are not acknowledged or addressed in the proposed approach.

Based on these observations, we provided the following recommendations to the Government:

  • Treat application-specific risks through existing sector-specific and general regulation by ensuring that existing regulation is being applied, providing guidance on how to apply it, and updating or clarifying it to treat new AI-associated risks as needed.

  • Lead a global regulatory response to the safety risks of frontier models by investing in the development of standards and compliance mechanisms, championing international agreements to apply them, and implementing them in Australia.

  • Create a government body with access to technical expertise that can assist existing regulators with their AI response, create and potentially enforce regulation for frontier model development, and advise the government on the rapidly changing AI technology landscape.

Read the full submission here.

Related news

Gradient Institute 2025 Impact Report
News

In 2025, Gradient Institute shaped Australiaʼs national AI guidance, contributed to global AI safety science, and helped hundreds of organisations build the capabilities to develop, deploy and use trustworthy AI systems.

Launching Gradient Gatherings
Event

We're hosting our first Gradient Gatherings event in Sydney, and we'd love to see you there.

Scaling Sameness
Explainer

There is an intuitive logic to redundancy. Send three engineers to check the bridge. Have two pilots in the cockpit. Run the numbers twice. If independent reviewers reach the same conclusion, we treat that agreement as evidence the conclusi...


Let's navigate AI responsibly together.

Contact us