Gradient Institute, along with collaborators from ServiceNow, Vector Institute and The University of Tübingen, just published an article in the January edition of IEEE Computer laying out conceptual foundations for practical assessment of AI fairness.
The article describes an AI fairness assessment approach developed by the authors along with collaborators from financial institutions, tech companies and the Monetary Authority of Singapore (MAS). It examines a key question posed by MAS when it initiated the work: If society demands that a bank’s use of artificial intelligence systems is “fair,” what is the bank to actually do?
Three key pillars, described in the article, support the authors’ answer to this question:
- Owing to the value-sensitive and context-sensitive nature of fairness, banks must explicitly stake a claim about the appropriate fairness objectives and constraints for each of their systems, and provide evidence that their systems adhere to them.
- The fairness objectives and constraints the banks develop should be built from an understanding of their systems’ impacts, particularly the harms and benefits the systems create for customers and for broader society.
- The time and resources required to assess the fairness of an AI system must scale with the system’s risk level, especially as the number and variety of AI systems in use within banks continue to grow.
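As a concrete illustration of the first pillar, a bank might state a fairness constraint for a lending system and then test its decisions against it. The sketch below is purely hypothetical and not part of the FEAT methodology: it checks one possible constraint, that approval rates for two customer groups differ by no more than a tolerance the bank itself would have to justify.

```python
# Illustrative sketch only -- not the authors' methodology. It checks a
# hypothetical fairness constraint: the approval-rate gap between two
# customer groups must stay within a chosen tolerance.

def approval_rate(decisions):
    """Fraction of positive (approve = 1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def within_parity_constraint(group_a, group_b, tolerance=0.05):
    """Return True if the absolute approval-rate gap is within the tolerance.

    group_a, group_b: lists of 0/1 loan decisions for two customer groups.
    tolerance: a value the bank would justify from its stated fairness
    objectives for this system, not a universal standard.
    """
    gap = abs(approval_rate(group_a) - approval_rate(group_b))
    return gap <= tolerance

# Example: 70% vs 66% approval rates give a gap of 0.04, inside a 0.05 tolerance.
group_a = [1] * 7 + [0] * 3            # approval rate 0.70
group_b = [1] * 33 + [0] * 17          # approval rate 0.66
print(within_parity_constraint(group_a, group_b))  # True
```

The evidence the article calls for would go well beyond a single metric, but a check like this shows the shape of "stake a claim, then verify": the constraint and its tolerance are explicit, so they can be reviewed and contested.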
Link to the article:
L. McCalman et al., “Assessing AI Fairness in Finance,” in Computer, vol. 55, no. 1, pp. 94–97, 2022.
Link to the FEAT Fairness Assessment Methodology (2021 Version):