Assessing AI fairness in finance

Lachlan McCalman
Gradient Institute
Feb 1, 2021

Gradient Institute’s collaborative work to develop a practical fairness assessment for AI systems in finance. Get the methodology and accompanying case studies from this link.

Artificial intelligence (AI) systems are becoming ubiquitous in all kinds of decision-making, impacting many aspects of our lives and perhaps even altering the nature of our society. Belatedly, we have recognised that this change is not always for the better: AI systems have demonstrated again and again that they can cause significant and avoidable harm when poorly designed. AI systems unfairly discriminating against individuals on the basis of their race, gender, or other attributes is a particularly common and disheartening example of this harm. Preventing incidents like these and helping AI live up to its promise of better and fairer decision-making is a tremendous technical and social challenge.

One of the key design mistakes behind harmful AI systems in use today is the absence of explicit and precise ethical objectives or constraints. Unlike humans, AI systems cannot apply even a basic level of moral awareness to their decision-making by default. Only by encoding mathematically precise statements of our ethical standards into our designs can we expect AI systems to meet those standards.

Technical work to develop such ethical encodings is burgeoning, with much of the focus on the fairness of AI systems in particular. This work typically involves developing mathematically precise measures of fairness suitable for designing into AI systems. Fairness measures use the system’s data, predictions and decisions to characterise its fairness according to a specific definition (for example, by comparing the error rates of the system’s predictions between men and women). There now exists a panoply of fairness measures, each corresponding to a different notion of fairness and potentially applicable in different contexts.
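
To make the idea concrete, here is a minimal sketch in Python (not drawn from any particular methodology, and using invented data) of one such measure: comparing the false positive rate of a model’s predictions between two groups and reporting the gap between them.

    import numpy as np

    # Invented example data: actual outcomes, model predictions, and group membership.
    y_true = np.array([0, 1, 0, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([0, 1, 1, 0, 1, 1, 1, 0, 1, 1])
    group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    def false_positive_rate(truth, pred):
        # Fraction of actual negatives that the model wrongly predicts as positive.
        negatives = truth == 0
        return (pred[negatives] == 1).mean()

    # A common group fairness measure: the gap in false positive rates between groups.
    fpr = {g: false_positive_rate(y_true[group == g], y_pred[group == g])
           for g in np.unique(group)}
    print(fpr, abs(fpr["a"] - fpr["b"]))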

Parallel to the work of encoding ethical objectives mathematically is a broader social effort to develop principles and guidelines for ethical AI. These aim to help the designers, maintainers, and overseers of AI systems recognise and ameliorate ethical risks. Governments, corporations and other organisations have released hundreds of such frameworks in the last few years, many with common themes like the importance of explanations of an AI system’s decisions, the need to provide mechanisms for redress when errors occur, and the need to understand and minimise avoidable harms caused by the system.

However, a gap remains between the technical efforts and the broader design principles. Designers building AI systems have access to principles on the one hand, and mathematical tools on the other, but little guidance about how to integrate these two resources and build a system that utilises them in consequential settings.

Gradient Institute recently completed a milestone in ongoing work with the Monetary Authority of Singapore and the Singaporean finance industry that begins to close this gap. The output of the milestone was a methodology for banks to assess the fairness of their existing AI systems and to design fairer AI systems in the future.

The FEAT Principles

One of the early adopters of AI technology outside tech firms has been finance. Banks, for example, increasingly rely on AI systems for determining loan and credit card approvals, for conducting marketing, and for detecting fraudulent behaviour. These decisions carry significant risks for both customers and the banks themselves.

To begin addressing the ethical risks of AI decision-making in finance, and in doing so encourage AI adoption, the Monetary Authority of Singapore (MAS) released principles for responsible AI in the finance industry. These “FEAT Principles” (Fairness, Ethics, Accountability, Transparency) were developed in partnership with Singaporean and international financial institutions and AI experts, and describe aspirational ethical properties that an AI system should have. The four FEAT Fairness principles, for example, require that for AI and Data Analytics Systems (AIDA):

  1. Individuals or groups of individuals are not systematically disadvantaged through AIDA-driven decisions, unless these decisions can be justified.
  2. Use of personal attributes as input factors for AIDA-driven decisions is justified.
  3. Data and models used for AIDA-driven decisions are regularly reviewed and validated for accuracy and relevance, and to minimise unintentional bias.
  4. AIDA-driven decisions are regularly reviewed so that models behave as designed and intended.

Whilst appearing simple, these principles contain within them complex and value-laden questions, such as when a group or individual is being ‘systematically disadvantaged’, and what data counts as ‘relevant’ for a particular application. Like the concept of fairness itself, these questions have no single uncontested answer, nor one that is independent of ethical judgement. Nor do the principles say which (if any) of the myriad fairness measures that have been developed may be appropriate for specifying unjustified systematic disadvantage or unintentional bias.

FEAT Fairness Assessment Methodology

After releasing the FEAT Principles, MAS convened a consortium of more than 25 banks, insurers, and AI firms to work on their practical implementation. As core members of this “Veritas Consortium”, Gradient Institute, the AI firm Element AI, IAG’s Firemark Labs Singapore, EY and the banks HSBC and UOB spent last year working on the first step in that implementation: a methodology for assessing AI systems for alignment with the FEAT Fairness principles (with the other principles, relating to ethics, accountability and transparency, being tackled in a later phase). The team also developed guidance for financial institutions to ensure that their AI systems align with the principles, and case studies illustrating the application of the methodology to credit scoring and customer marketing systems. The methodology and case studies can be downloaded at the bottom of this page.

The methodology comprises a set of 18 questions (and extensive accompanying guidance) that are answered by the people responsible for the design, development and operation of the system. The answers to these questions then go to an independent assessor who makes a judgement about the degree to which the system is aligned with the FEAT Fairness principles. The questions and guidance also serve to help banks design new AI systems, by taking them through the steps required to define, measure, and implement fairness objectives and constraints.

The design of the methodology had to accommodate two critical but conflicting requirements: it had to be generic enough to apply across a whole industry and to systems with different purposes, but specific enough to be useful and implementable by practitioners who may not be experts in algorithmic fairness or ethics.

The final design of the methodology tries to balance these competing requirements with three key design pillars: asking users to stake their ethical claim, focussing on the harms and benefits of the system, and scaling the assessment to the system risk.

Asking system owners to stake a claim

The first design pillar of the methodology is that it asks system owners to stake a claim on what they believe the fairness objectives of the system should be. Any assessment that can be applied to different AI systems cannot itself mandate specific notions or measures of fairness, such as the exact circumstances that constitute unjustified systematic disadvantage (FEAT Principle F1). Different measures of fairness imply different ethical stances, and no methodology could hope to enumerate the right choice in every situation, nor impose a particular choice that aligns with the designers’ (or a particular community’s) ethical stance.

In the philosophical literature, fairness is known as an essentially contested concept. Whilst people may have an intuitive understanding of what fairness is, different people will have different ideas about exactly what is fair in a particular context. This also applies to the selection of precise fairness objectives that can be encoded into an AI system. For example, in a hiring scenario, the application of gender quotas to counteract the effects of past and current discrimination, and blind hiring, in which the gender of applicants is obscured, are just two of many conflicting versions of fair hiring. Each of these approaches entails a different hiring process and will produce different results. Each has proponents and detractors, both with reasoned arguments that may depend on the details of the particular situation and the necessary choice of a baseline against which to compare. Deciding on a particular fairness measure for an AI system is akin to selecting one of these approaches to fair hiring; the use of a particular mathematical measure of fairness implies a specific set of ethical values and priorities.

Imposing particular fairness measures on a whole class of AI systems would ignore the unique circumstances and context of each system, and the ethical preferences of the people responsible for it. Therefore, the set of fairness measures can only be decided at a per-system level. Because no jurisdiction has yet developed regulation that mandates certain measures in certain circumstances (which may not even be possible or advisable), it must be the people responsible for the system who decide how its fairness should be measured.

The FEAT Fairness Assessment Methodology is built around this idea of the system owners ‘staking a claim’ by stating their fairness objectives and how they’re measured, preferably at design time. The assessment then asks them for evidence to convince an independent assessor that the system meets these objectives. This approach separates the question of ‘what is fair in this situation?’ from the question of ‘does this system operate in accordance with its stated fairness objectives?’. An expert can answer the second question with the output of the methodology. By sharing parts of the assessment with people affected by the system, independent ethics panels, external regulators, or the wider public, the answer to the first question can also be examined and critiqued.
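
As a hypothetical illustration of what a staked claim might contain, the snippet below records one stated fairness objective together with the measure that operationalises it and a committed threshold, giving an independent assessor something concrete to check evidence against. The structure, field names and numbers are invented for illustration; the methodology does not prescribe this format.

    # A hypothetical, simplified record of a 'staked claim' (format not prescribed
    # by the methodology; all values invented for illustration).
    staked_claim = {
        "system": "personal loan credit scoring",
        "fairness_objectives": [
            {
                "statement": "Creditworthy applicants should not be wrongly denied "
                             "loans at substantially different rates across age groups.",
                "measure": "difference in false negative rate between age groups",
                "threshold": 0.05,
                "justification": "Wrongful denial is judged the most severe harm the "
                                 "system can cause an individual applicant.",
            },
        ],
    }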

Focus on harms and benefits

The second design pillar of the methodology addresses the problem that, to be useful, the methodology cannot simply offload all the work of developing and measuring fairness objectives and constraints onto its users. To help with this task, the methodology asks system owners to analyse the harms and benefits that the system may create, and the different individuals and groups that it may impact. Once banks have identified these, they can develop fairness measures by estimating how the harms and benefits are distributed across the population. The resulting fairness measures may have already been developed in the literature or could be novel and specific to the system.

This approach inverts the common question of ‘which fairness measure to choose?’ for an AI system: Instead, first decide who the system impacts and under what circumstances (noting that these choices also involve ethical judgement). Specific fairness measures can then be derived from the harms, benefits, and impacted people with guidance from the methodology.
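
A minimal sketch of this inverted workflow, with invented data and illustrative harm and benefit definitions (assumptions made for the example, not taken from the methodology): first record who is affected and what harms and benefits may fall on them, then estimate how each is distributed across groups.

    import pandas as pd

    # Invented data: one row per applicant, with group membership, the decision
    # made, and whether the applicant would in fact have repaid a loan.
    decisions = pd.DataFrame({
        "group":       ["a", "a", "a", "b", "b", "b"],
        "approved":    [1,   0,   1,   0,   0,   1],
        "would_repay": [1,   1,   0,   1,   0,   1],
    })

    # Illustrative harm: a creditworthy applicant is denied.
    # Illustrative benefit: a creditworthy applicant is approved.
    decisions["harm_wrongful_denial"] = ((decisions.approved == 0) &
                                         (decisions.would_repay == 1)).astype(int)
    decisions["benefit_loan_granted"] = ((decisions.approved == 1) &
                                         (decisions.would_repay == 1)).astype(int)

    # The per-group rates of each harm and benefit are candidate fairness measures.
    print(decisions.groupby("group")[["harm_wrongful_denial",
                                      "benefit_loan_granted"]].mean())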

A harms- and benefits-based approach also serves to highlight the assumptions that any set of fairness objectives relies on, and the challenge (really the impossibility) of capturing complex concepts like wellbeing or mental anguish in mathematics. For instance, the majority of standard fairness measures count occurrences of harms or benefits, assuming that every such harm has the same magnitude. This is unlikely to be the case in any real system: the harm of a cancelled credit card, for instance, can be a nuisance for a wealthy person but a devastating blow to someone experiencing temporary financial hardship.

Another challenge in developing precise measures of fairness is the need to compare different kinds of harms and benefits with each other. Most systems will not have a single harm or benefit (and therefore a single measure of fairness) but many. How to make trade-offs between, say, the harm of being wrongly denied a loan and the benefit of receiving one is an unsolved ethical problem.
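
The sketch below makes both of these judgement calls explicit, using invented numbers: each harm and benefit is given an assumed magnitude rather than being simply counted, and an explicit exchange rate is chosen between the harm of a wrongful denial and the benefit of a granted loan. Neither the rates nor the weights come from the methodology; they are ethical choices that system owners would need to make and justify.

    # Invented per-group rates of a harm and a benefit (e.g. estimated from data
    # as in the previous sketch).
    rates = {
        "a": {"wrongful_denial": 0.10, "loan_granted": 0.60},
        "b": {"wrongful_denial": 0.15, "loan_granted": 0.55},
    }

    # Assumed weights encoding an ethical judgement: a wrongful denial is treated
    # as three times as weighty as the benefit of a granted loan.
    weights = {"wrongful_denial": -3.0, "loan_granted": 1.0}

    # Expected net impact per person in each group, and the gap between groups,
    # which is one candidate fairness measure built from these choices.
    net_impact = {g: sum(weights[k] * r for k, r in group_rates.items())
                  for g, group_rates in rates.items()}
    print(net_impact, net_impact["a"] - net_impact["b"])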

The assessment methodology highlights these and other difficult problems for system owners developing fairness measures to align their systems with the FEAT principles, and provides some guidance and case study examples. Nonetheless, understanding and developing measures for a system’s impact is likely a substantial undertaking, especially when the impact may be indirect or difficult to measure. For consequential systems this effort is paramount, but for the potentially hundreds of small, proof-of-concept, or research-style models used within a bank, performing a full assessment may be an impossible workload.

Scaling for risk

The third and final design pillar of the methodology addresses the workload involved in assessing the hundreds of AI systems in a large organisation. It specifies that systems with greater risk, for example those that affect many people or that make consequential decisions, should be assessed in greater detail.

Banks typically already undertake these kinds of risk-scaled model assessments, but with a focus on financial harms. The methodology is designed to be incorporated into these processes, adding considerations of fairness risks for customers. The way it is integrated is not prescribed, because organisations structure their internal processes differently; however, the methodology does make suggestions based on common Model Risk Management approaches within banks.

This integration might, for example, see banks adopt risk levels for their AIDA systems. Models that are used only to inform future research might fall into an ‘exempt from assessment’ category. Other models could be grouped into higher-risk categories based, for instance, on

  • their internal complexity (making their behaviour more difficult to reason about)
  • their degree of autonomy
  • the size of the impacts of their mistakes
  • the number of people they affect.

The methodology recommends that these risk categories be associated with their own customisations of the assessment. Whilst the highest-risk systems should undergo full assessments, lower-risk systems might require less detailed analysis, or may require only a subset of the assessment to be completed.
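
One hypothetical way such risk tiers could be assigned is sketched below, scoring each model on the kinds of factors listed above. The scores, cut-offs and tier names are invented for illustration and are not prescribed by the methodology.

    # A hypothetical risk-tiering rule (factors scored 0 = negligible to 3 = high;
    # all scores and cut-offs invented for illustration).
    def risk_tier(complexity, autonomy, impact_of_mistakes, people_affected):
        score = complexity + autonomy + impact_of_mistakes + people_affected
        if score <= 2:
            return "exempt from assessment"   # e.g. models informing research only
        if score <= 6:
            return "abridged assessment"      # a subset of the questions
        return "full assessment"

    # A highly autonomous credit decisioning model affecting many people:
    print(risk_tier(complexity=2, autonomy=3, impact_of_mistakes=3, people_affected=3))
    # A human-reviewed research prototype:
    print(risk_tier(complexity=1, autonomy=0, impact_of_mistakes=1, people_affected=0))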

Case studies of applying the methodology

To ensure that the final version of the assessment methodology was indeed useful and practical to implement, the Veritas core team applied it to a number of real and synthetic AI systems, releasing these as accompanying case studies. The case studies focused on two application areas in which AI systems are commonly deployed: customer marketing, and credit scoring. Gradient Institute, with IAG’s Firemark Labs Singapore and HSBC, focussed on customer marketing.

We based our approach to the assessment on a principle of responsibility for marketing systems: that if a marketing system causes someone to acquire a product or service they would not otherwise have obtained, then the marketing system bears some responsibility for the harms and benefits associated with that product. A system that encourages people to take up smoking should be examined with the negative effects of smoking in mind. In a fairness context, this means examining how the marketing system increased the harms or benefits for some people compared to others.

Our synthetic case study assessed a system for the marketing of a personal loan, and developed measures that, for example, examined how the marketing system increased the rate of default (compared to ‘walk in’ customers) overall and also for particular at-risk groups.
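
A toy sketch of that kind of measure, with invented data: the increase in default rate among customers acquired through the marketing system, relative to ‘walk in’ customers, overall and for an at-risk group. A real estimate would also need to account for pre-existing differences between the marketed and walk-in populations; this is only an illustration.

    import pandas as pd

    # Invented data: whether each customer was acquired via the marketing system,
    # whether they belong to an at-risk group, and whether they later defaulted.
    customers = pd.DataFrame({
        "acquired_via_marketing": [1, 1, 1, 1, 0, 0, 0, 0],
        "at_risk_group":          [1, 1, 0, 0, 1, 1, 0, 0],
        "defaulted":              [1, 1, 0, 0, 1, 0, 0, 0],
    })

    def default_rate_uplift(df):
        # Increase in default rate among marketed customers relative to walk-ins.
        marketed = df[df.acquired_via_marketing == 1].defaulted.mean()
        walk_in = df[df.acquired_via_marketing == 0].defaulted.mean()
        return marketed - walk_in

    print(default_rate_uplift(customers))                                # overall
    print(default_rate_uplift(customers[customers.at_risk_group == 1]))  # at-risk group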

The project team opted to perform synthetic and open-data case studies to address the difficult challenge of keeping proprietary data about real systems confidential. The information needed to complete the fairness assessment, by its nature, requires detailed disclosure of a system’s objectives, priorities, performance, and customer profiles. All of this information is typically highly sensitive for a bank and its customers, potentially useful for competitors, and may be encompassed by privacy legislation. To ensure that the synthetic and open-data case studies were still informative for real systems, banks in the team performed internal assessments of similar systems.

The next steps

The assessment methodology Gradient Institute and the core Veritas team developed has now been reviewed by the other organisations in the Veritas consortium, some or all of whom are likely to implement it internally. This year, work will continue on assessments and guidance for the other FEAT Principles (ethics, accountability and transparency), and on new guidance and case studies for AI systems used in insurance. These concepts are not independent of fairness, and so we will likely see iteration of the fairness methodology and its integration into a single, holistic assessment. Finally, there is much work to be done providing more detailed guidance and case studies for application areas beyond marketing and credit scoring.

It is Gradient Institute’s hope that, whilst voluntary, FEAT Fairness assessments will become common practice in the finance industry, and that regulators around the world will study them carefully to stimulate and inform future guidelines and regulation. We also hope that institutions begin to publish some or all of their FEAT Fairness assessments, giving the wider community the ability to understand, and voice opinions on, these systems that have consequential yet currently opaque impacts on our lives.
