Gradient Institute submitted a response to the Department of Industry, Science and Resources' discussion paper “Safe and Responsible AI in Australia”.
Gradient's response highlights two key issues in the discussion paper:
- The ‘risk-based’ approach to AI regulation outlined in the paper fails to target controls at the context-specific risks posed by an application, and would therefore lead to ineffective management of those risks.
- The new risks to public safety created by certain highly capable foundation models (‘frontier’ models) are not acknowledged or addressed in the proposed approach.
Based on these observations, Gradient made the following recommendations to the Government:
- Treat application-specific risks through existing sector-specific and general regulation by ensuring that existing regulation is being applied, providing guidance on how to apply it and updating or clarifying it to treat new AI-associated risks as needed.
- Lead a global regulatory response to the safety risks of frontier models by investing in the development of standards and compliance mechanisms, championing international agreements to apply them, and implementing them in Australia.
- Create a government body with access to technical expertise that can assist existing regulators with their AI response, create and potentially enforce regulation for frontier model development, and advise the government on the rapidly changing AI technology landscape.
Read the full submission here.