Responsible AI (sometimes referred to as ethical AI or trustworthy AI) is a multi-disciplinary effort to design and build AI systems that improve our lives. Responsible AI systems are designed with careful consideration of their fairness, accountability, transparency, and, most importantly, their impact on people and on the world.
The field of responsible AI draws from many disciplines, including computer science, social science, economics, law, policy, and ethics. Creating responsible AI requires engaging all parts of society, especially those people affected by the system, and requires us all to decide how AI should contribute to our future.
- For a more detailed exposition of how Gradient Institute sees the challenges in building responsible AI, please read our White Paper, Practical Challenges for Ethical AI.
- You could also read our technical paper on algorithmic bias, produced with the Australian Human Rights Commission, Using Artificial Intelligence to Make Decisions: Addressing the Problem of Algorithmic Bias.
- Or read some of our blog posts: motivating the need for applying scientific methods to responsible AI development, and exploring some effective (and less effective) approaches to making AI systems fairer.
- Finally, for some concrete examples of our work in responsible AI, we have a selection of case studies.