Monetary Authority of Singapore

Enabling financial institutions to evaluate AI for ethical outcomes

Case Study
Gradient Institute is working with the Monetary Authority of Singapore (MAS) to help ensure that AI is used ethically in Singapore’s financial industry.

Gradient Institute collaboratively developed a methodology, metrics and tools for financial institutions to apply the Monetary Authority of Singapore's principles for responsible AI.

To begin addressing the ethical risks of AI decision-making in finance, and in doing so to encourage AI adoption, the Monetary Authority of Singapore (MAS) released principles for responsible AI in the finance industry. These “FEAT Principles” (Fairness, Ethics, Accountability, Transparency) were developed in partnership with Singaporean and international financial institutions and AI experts, and describe aspirational ethical properties that an AI system should have. The four FEAT Fairness principles, for example, require that for AI and Data Analytics Systems (AIDA):

  1. Individuals or groups of individuals are not systematically disadvantaged through AIDA-driven decisions, unless these decisions can be justified.
  2. Use of personal attributes as input factors for AIDA-driven decisions is justified.
  3. Data and models used for AIDA-driven decisions are regularly reviewed and validated for accuracy and relevance, and to minimise unintentional bias.
  4. AIDA-driven decisions are regularly reviewed so that models behave as designed and intended.

Whilst appearing simple, these principles contain complex and value-laden questions, such as when a group or individual is being ‘systematically disadvantaged’, and what data counts as ‘relevant’ for a particular application. Like the concept of fairness itself, these questions have no single uncontested answer, nor one that is independent of ethical judgement. Nor do the principles specify which (if any) of the myriad fairness measures that have been developed is appropriate for identifying unjustified systematic disadvantage or unintentional bias.
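To see why this matters in practice, consider the sketch below. It is purely illustrative: the lending data is synthetic, and the two measures shown (demographic-parity and equal-opportunity gaps) are just two common examples from the fairness literature, not measures mandated by the FEAT Principles. Applied to the same decisions, they generally yield different gaps, so choosing which one operationalises ‘systematic disadvantage’ is itself an ethical judgement.

```python
# Illustrative sketch only: two common group-fairness measures applied to the
# same decisions can disagree. All data here is synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic loan decisions: group membership, true repayment outcome, approval.
group = rng.integers(0, 2, size=10_000)            # 0 / 1: a protected attribute
repaid = rng.random(10_000) < np.where(group == 1, 0.6, 0.7)
score = 0.5 * repaid + 0.3 * (group == 0) + rng.random(10_000) * 0.4
approved = score > 0.6

def demographic_parity_gap(approved, group):
    """Difference in approval rates between the two groups."""
    return approved[group == 0].mean() - approved[group == 1].mean()

def equal_opportunity_gap(approved, repaid, group):
    """Difference in approval rates among applicants who would have repaid."""
    return (approved[repaid & (group == 0)].mean()
            - approved[repaid & (group == 1)].mean())

print(f"Demographic parity gap: {demographic_parity_gap(approved, group):+.3f}")
print(f"Equal opportunity gap:  {equal_opportunity_gap(approved, repaid, group):+.3f}")
# The two measures differ in size (and can differ in sign), so the principles
# alone cannot settle which one defines unjustified systematic disadvantage.
```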

After releasing the FEAT Principles, MAS convened a consortium of more than 25 banks, insurers and AI firms to work on their practical implementation. As core members of this “Veritas Consortium”, Gradient Institute, AI firm Element AI, IAG’s Firemark Labs Singapore, EY and the banks HSBC and UOB spent 2020 on the first step of that implementation: a methodology for assessing AI systems for alignment with the FEAT Fairness principles (with the principles relating to ethics, accountability and transparency to be tackled in a later phase). The team also developed guidance to help financial institutions ensure that their AI systems align with the principles, and case studies illustrating the application of the methodology to credit scoring and customer marketing systems.
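As a purely hypothetical illustration of the kind of quantitative check such an assessment might include (a sketch under our own assumptions, not the published Veritas methodology), the snippet below probes the second Fairness principle by measuring how much a personal attribute drives a synthetic credit model’s predictions; the features, model and threshold are all invented for the example.

```python
# Hypothetical check for FEAT Fairness principle 2: how much does a personal
# attribute influence a credit model's decisions? Here we re-score applicants
# with the attribute randomly permuted and compare predicted default risks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

income = rng.normal(60, 15, n)                  # hypothetical feature
gender = rng.integers(0, 2, n)                  # a personal attribute
default = rng.random(n) < 1 / (1 + np.exp(0.08 * (income - 60)))

X = np.column_stack([income, gender])
model = LogisticRegression().fit(X, default)

baseline = model.predict_proba(X)[:, 1]
X_perm = X.copy()
X_perm[:, 1] = rng.permutation(X_perm[:, 1])    # break the attribute's link
permuted = model.predict_proba(X_perm)[:, 1]

print(f"Mean |change| in predicted default risk: "
      f"{np.abs(baseline - permuted).mean():.4f}")
# A large shift would flag that the personal attribute materially influences
# decisions, triggering the justification required by the second principle.
```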

The assessment methodology Gradient Institute and the core Veritas team developed was reviewed by the other organisations in the Veritas Consortium, some or all of which are likely to implement it internally. In 2021, work will continue on assessments and guidance for the other FEAT Principles (ethics, accountability and transparency), and on new guidance and case studies for AI systems used in insurance. These concepts are not independent of fairness, so we will likely see the fairness methodology iterated on and integrated into a single, holistic assessment. Finally, there is much work to be done in providing more detailed guidance and case studies for application areas beyond marketing and credit scoring.

It is Gradient Institute’s hope that, whilst voluntary, FEAT Fairness assessments will become common practice in the finance industry, and that regulators around the world will study them carefully to stimulate and inform future guidelines and regulation. We also hope that institutions begin to publish some or all of their FEAT Fairness assessments, giving the wider community the ability to understand, and voice opinions on, systems that have consequential, yet currently opaque, impacts on our lives.
