Today, in collaboration with Minderoo Foundation, we are releasing a report on de-risking automated decisions, which includes practical guidance for AI governance and AI risk management.
Many organisations are using AI to make consequential decisions, such as deciding who gets insurance, a loan, or a job. When humans delegate decisions to AI, problems can arise because AI lacks elements often required for good decision making, such as common sense, moral reasoning, and a basic understanding of the law. Many incidents have made it clear that AI can produce unlawful, immoral, or discriminatory outcomes for individuals through opaque and unaccountable decisions, including cases of AI discriminating against women and minorities.

What can organisations do to reap the benefits of using AI for decision making while preventing these issues from arising? This new report provides general guidelines for organisations to reduce the risks of using AI for automated decision making. It explains some of the novel risks introduced by AI, illustrates them through case studies, and suggests a range of preventative, detective, and corrective actions to reduce and manage those risks.