Regulation

Regulators urged to monitor the AI power struggle closely for potential risks

Dr. Tiberio Caetano, co-founder and chief scientist of Gradient Institute, warns that regulators need to start paying attention to anyone using vast amounts of computer power in machine learning to prevent “bad actors” from creating harmful new AI systems.

While regulators focus on the algorithms and data used in machine learning, not enough attention is paid to the role of computing power in unlocking AI capability.

Dr. Caetano explains that increasing the size of these models, or the computing power used to train them, can result in qualitatively new capabilities. He says that while discussions about regulating AI development are well under way, most proposals have focused only on the algorithms and data used in machine learning, and that compute needs to be added to the regulatory mix.

Read the full AFR article here.

Related news

News

Gradient Institute 2025 Impact Report

In 2025, Gradient Institute shaped Australiaʼs national AI guidance, contributed to global AI safety science, and helped hundreds of organisations build the capabilities to develop, deploy and use trustworthy AI systems.

Read more
Event

Launching Gradient Gatherings

We're hosting our first Gradient Gatherings event in Sydney, and we'd love to see you there.

Read more
Explainer

Scaling Sameness

There is an intuitive logic to redundancy. Send three engineers to check the bridge. Have two pilots in the cockpit. Run the numbers twice. If independent reviewers reach the same conclusion, we treat that agreement as evidence the conclusi...

Read more

Let's navigate AI responsibly together.

Contact us