
Launching Gradient Gatherings
We're hosting our first Gradient Gatherings event in Sydney, and we'd love to see you there.
Read more
Trusted by individuals and organisations across government, industry, academia, civil society, and the community
Resources and reports to enable responsible AI practice.

There is an intuitive logic to redundancy. Send three engineers to check the bridge. Have two pilots in the cockpit. Run the numbers twice. If independent reviewers reach the same conclusion, we treat that agreement as evidence the conclusion is sound. Having a second (or more) pair of eyes on a process, a deliverable, or a product helps reduce the risk that an individual’s blind spot will undermine the desired result.
Read more
The Department of Industry, Science and Resources (DISR) has released Guidance for AI Adoption to enable safe and responsible AI across Australian industry. The Guidance for AI Adoption: Foundations and its supporting templates were developed by Gradient Institute with the National AI Centre. This new package of practical tools and templates helps organisations of all sizes put responsible AI principles into action. Rather than adding complexity, it focuses on essential practices that build trust and accountability, helping organisations adopt AI safely, confidently, and to its fullest potential.
Read more
Gradient Institute has responded to the Productivity Commission's interim report on 'Harnessing data and digital technology', arguing that its proposed regulatory approach to AI applies conventional governance principles to a fundamentally unconventional technology that may lead to a paradigm shift in how society operates.
Read more
Australian not-for-profit organisations (NFPs), with their socially aligned motivations, knowledge, and experience, could significantly amplify their impact by using artificial intelligence (AI) capabilities in a safe and responsible way. AI adoption poses unique challenges for the NFP sector, which often operates with constrained resources and a natural aversion to risk, given its funding environment and impact profile. In response to these challenges, over the past year Gradient Institute delivered a dedicated program aimed at uplifting the capability of Australian NFPs and social enterprises (SEs) to develop and use AI responsibly. With support from Google.org, the program aimed to give mission-driven organisations the knowledge, confidence, and practical guidance to explore AI innovation while remaining mindful of potential risks and ethical considerations. The initiative was delivered through two streams: education, offering a suite of learning options including introductory and specialised courses, live webinars, and self-paced eLearning modules tailored to NFP and SE needs; and advisory, providing NFPs with actionable responsible AI goals, tailored support through exploratory workshops, individual consultations, and assistance with AI governance planning. This approach enabled organisations to build a strong foundation in responsible AI and receive targeted assistance relevant to their operational context and social mission.
Read more
Organisations are starting to adopt AI agents based on large language models to automate complex tasks, with deployments evolving from single agents towards multi-agent systems. While this promises efficiency gains, multi-agent systems fundamentally transform the risk landscape rather than simply adding to it. A collection of safe agents does not guarantee a safe collection of agents – interactions between multiple LLM agents create emergent behaviours and failure modes extending beyond individual components.
Read more
Gradient Institute is an independent nonprofit research organisation helping society understand, manage and shape AI as it transforms our world. We conduct, distil, and interpret scientific research to bring clarity where decisions are complex, stakes are high, and uncertainty is the norm. Our approach is built on:
Rigorous research into AI systems, capabilities and risks, grounded in how they affect people, institutions, and society in practice.
Research designed to support policy, governance, and responsible use, with public benefit, not private advantage, as the guiding principle.
Clear, science-based explanations that help decision-makers and the public understand trade-offs, limits, and consequences, so decisions become responsive rather than reactive.

We help people build calibrated trust in AI: trust that is proportionate to the evidence, no more and no less. More trust than the evidence warrants creates risk. Less holds back AI’s potential. We provide the research, methods, and frameworks that ground trust in evidence and rigorous analysis, so that AI can be used responsibly, governed effectively, and questioned rigorously.
We conduct, distil, and interpret research, including through sponsored research and strategic partnerships, on AI capability, safety, and societal impact. Our work generates new knowledge and insight that helps tackle high-stakes challenges where technical and scientific AI research is essential.
Explore research
We partner with governments, organisations, civil society, and communities to bring scientific understanding into important decisions about AI and its applications. We help people understand the terrain, provide assurance, and support thoughtful implementation to achieve intended impacts.
Explore Advisory
We help people across government, industry, civil society, and the public build genuine understanding of AI: what it is, what it can and cannot do at any point in time, where it can go wrong, and how to engage with it responsibly. Our programs build skills, discernment, and calibrated trust, supporting clear-eyed decision-making rather than blind or reluctant adoption.
Learn more about Education
We collaborate with governments, research institutions, universities, industry, and civil society on work aligned with our public-interest mission. Our partnerships focus on bringing rigorous research and clear judgment into decisions where AI carries real societal consequences.
Want to collaborate with us?
Contact us
Foundation Members:
