Using probabilistic classification to measure fairness for regression

Articles
Feb 18, 2020
Technical audience

In this paper (to be presented at the second Ethics of Data Science Conference), we study the problem of how to create quantitative, mathematical representations of fairness that can be incorporated into AI systems to promote fair AI-driven decisions. For discrete decisions (such as accepting or rejecting a loan application), there are well-established ways to quantify fairness. However, many decisions lie in a continuous range, such as setting an interest rate, and there are few methods for quantifying fairness for such continuous decisions, especially methods that require no assumptions about the underlying process. We propose new methods to quantify fairness for continuous decisions, allowing fairness considerations to be incorporated into the design of AI systems used to set interest rates, risk scores, payment amounts, or other values that lie in a continuous range.
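
As an illustrative sketch of the general idea (not the paper's exact estimators), one way to probe whether continuous predictions are independent of a protected attribute is to train a probabilistic classifier to recover that attribute from the predictions: the closer its performance is to chance, the closer the predictions come to an independence-style fairness criterion. The data and model below are synthetic placeholders.

# Illustrative sketch only: gauge how well a probabilistic classifier can
# recover a protected attribute from a regression model's continuous
# outputs. Near-chance AUC suggests the outputs carry little information
# about the attribute (an independence-style criterion).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for real data: a binary protected attribute `a`
# and continuous predictions `y_pred` with a mild dependence on `a`.
n = 5000
a = rng.integers(0, 2, size=n)
y_pred = rng.normal(loc=0.3 * a, scale=1.0, size=n)

X_train, X_test, a_train, a_test = train_test_split(
    y_pred.reshape(-1, 1), a, test_size=0.3, random_state=0
)

# Probabilistic classifier: predict the protected attribute from the output.
clf = LogisticRegression().fit(X_train, a_train)
p = clf.predict_proba(X_test)[:, 1]

# AUC near 0.5 means the predictions reveal little about the attribute;
# larger values indicate a potential independence violation.
print("AUC of attribute from prediction:", roc_auc_score(a_test, p))

The same probing idea extends to other fairness criteria by conditioning on, or predicting, different quantities; the paper develops these measures formally.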

You can find the draft paper on arXiv.