Our team members contribute articles and research papers on our work and on the state of ethical AI in the world. Some articles are intended for a general audience, while others explore more in-depth technical questions in machine learning or causal inference.
This paper extends the work submitted by Gradient to the 2nd Ethics of Data Science Conference on fair regression in a number of ways. First, the methods introduced in the earlier paper for quantifying the fairness of continuous decisions are benchmarked against “gold standard” (but typically intractable) techniques in order to test their efficacy. Next, we adapt these methods to produce a fast and scalable algorithm for adjusting the predictions of regression models to increase the fairness of their outcomes under a multitude of fairness criteria. You can find the draft paper on arXiv.
In this paper (to be presented at the 2nd Ethics of Data Science Conference) we study the problem of how to create quantitative, mathematical representations of fairness that can be incorporated into AI systems to promote fair AI-driven decisions. For discrete decisions (such as accepting or rejecting a loan application), there are well-established ways to quantify fairness. However, many decisions lie in a continuous range, such as setting interest rates. There is a paucity of methods to quantify fairness for such continuous decisions, especially methods that require no assumptions about the underlying process. In this paper, we propose new methods to quantify fairness for continuous decisions, allowing fairness considerations to be incorporated into the design of AI systems used to set interest rates, risk scores, payment amounts or other decisions that lie in a continuous range.
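To make the idea of quantifying fairness for continuous decisions concrete, here is a minimal sketch using two standard distributional disparity measures: the gap between group means and the Kolmogorov–Smirnov distance between the groups' decision distributions. These are illustrative textbook measures only, not the specific methods proposed in the paper; the data is a toy example.

```python
import numpy as np

def mean_disparity(decisions, group):
    """Absolute difference in the average decision between two groups."""
    decisions, group = np.asarray(decisions, dtype=float), np.asarray(group)
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def ks_disparity(decisions, group):
    """Kolmogorov-Smirnov distance between the two groups' decision
    distributions: the maximum gap between their empirical CDFs."""
    decisions, group = np.asarray(decisions, dtype=float), np.asarray(group)
    a = np.sort(decisions[group == 0])
    b = np.sort(decisions[group == 1])
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

# Toy example: interest rates offered to members of two groups.
rates = np.array([4.1, 4.3, 4.2, 4.0, 5.1, 5.3, 5.0, 5.2])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(mean_disparity(rates, group))  # ≈ 1.0: group 1 pays about 1% more
print(ks_disparity(rates, group))    # 1.0: the two distributions do not overlap
```

Unlike the binary case, where a single rate (e.g. approval rate) summarises each group, continuous decisions require comparing whole distributions, which is why several complementary measures are useful.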
You can find the draft paper on arXiv.
In this post we explain a Bayesian approach to inferring the impact of interventions or actions. We show that representing causality within a standard Bayesian approach softens the boundary between tractable and impossible queries and opens up potential new approaches to causal inference. This post is a detailed but informal presentation of our arXiv papers Replacing the do calculus with Bayes rule and Causal inference with Bayes rule; also see our video presentation, Bayesian Causality.
Read the rest of the post here.
This White Paper examines four key challenges that must be addressed to make progress towards developing ethical artificial intelligence (AI) systems. These challenges arise from the way existing AI systems reason and make decisions. Unlike humans, AI systems only consider the objectives, data and constraints explicitly provided by their designers and operators. They possess no intrinsic moral awareness or social context with which to understand the consequences of their actions. To build ethical AI systems, any moral considerations must be explicitly represented in the objectives, data and constraints that govern how AI systems make decisions.
Read the rest of this White Paper here.
We at Gradient Institute are often asked who decides the particular ethical stance encoded into an “ethical AI”. In particular, because we work on building such systems, the question also takes the form of “whose ethics” we will encode into them. This post addresses these questions.
Two of us from Gradient Institute were lucky enough to attend NeurIPS 2018 as co-organisers of the Workshop on Ethical, Social and Governance Issues in AI. With over 1,000 accepted papers at the main conference, we only had time to see a small fraction of the amazing work on display. In this post we give a brief summary of three interesting papers on the topic of discrimination and fairness in machine learning. Concerns around fairness and discrimination arise in machine learning when some metric of an algorithm’s predictions (such as accuracy, the number of people classified as high risk, or error rates) differs between groups of people. These groups are typically defined in terms of an attribute deemed sensitive, such as race, gender or religion.
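The idea of a prediction metric differing between groups can be sketched in a few lines. This is an illustrative example on toy data, not drawn from any of the summarised papers: it computes per-group accuracy and positive-classification rate for a binary classifier, and the absolute gap between two groups on each.

```python
import numpy as np

def group_metric_gaps(y_true, y_pred, group):
    """Per-group accuracy and positive-classification rate for a binary
    classifier, plus the absolute gap between the two groups on each."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    stats = {}
    for g in (0, 1):
        mask = group == g
        stats[g] = {
            "accuracy": float((y_true[mask] == y_pred[mask]).mean()),
            "positive_rate": float(y_pred[mask].mean()),
        }
    gaps = {k: abs(stats[0][k] - stats[1][k]) for k in stats[0]}
    return stats, gaps

# Toy example: same accuracy in both groups, but very different
# rates of positive classification.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
stats, gaps = group_metric_gaps(y_true, y_pred, group)
print(gaps)  # accuracy gap is 0, positive-rate gap is 0.5
```

As the toy example shows, a model can be equally accurate for both groups while still classifying them positively at very different rates, which is why different fairness criteria can disagree about the same model.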
See here for the summary of highlights.
Societies are increasingly, and legitimately, concerned that automated decisions based on historical data can lead to unfair outcomes for disadvantaged groups. One of the most common pathways to unintended discrimination by AI systems is that they perpetuate historical and societal biases when trained on historical data. This is because an AI has no wider knowledge to distinguish between bias and legitimate selection.
In this post we investigate whether we can improve the fairness of a machine learning model by removing sensitive attribute fields from the data. By sensitive attributes we mean attributes that the organisation responsible for the system does not intend to discriminate against because of societal norms, law or policy: for example, gender, race or religion.
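A minimal synthetic sketch of why simply dropping the sensitive column (sometimes called "fairness through unawareness") can fail: if another feature is correlated with the sensitive attribute, a model trained without the sensitive column can still reproduce the historical bias through that proxy. The data and the proxy here are entirely made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, n)           # protected group label (0 or 1)
proxy = sensitive + rng.normal(0, 0.1, n)   # a feature correlated with it,
                                            # e.g. something like postcode
# Historical outcomes are biased: group 1 scores about 0.5 lower on average.
outcome = 1.0 - 0.5 * sensitive + rng.normal(0, 0.1, n)

# "Fairness through unawareness": fit a linear model that never sees the
# sensitive column, only the proxy (plus an intercept).
X = np.column_stack([np.ones(n), proxy])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
pred = X @ coef

# The predictions still differ between groups: the bias leaks via the proxy.
gap = pred[sensitive == 0].mean() - pred[sensitive == 1].mean()
print(round(gap, 2))  # roughly 0.5, almost the full historical bias
```

The model never sees the sensitive attribute, yet its predictions recover nearly all of the historical disparity, because the proxy encodes group membership almost perfectly.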
You can read the original post here.
Never in history has the human race been as powerful as it is today. The technology we are developing is reshaping our societies and our planet at an ever-increasing pace. In the space of decades, artificial intelligence (AI) systems have migrated from science fiction, to the lab, to the real world. AI has been applied to accomplish many tasks, but perhaps its greatest significance is that AI systems are now making consequential decisions about what happens to us. Data-driven algorithms are deciding who gets insurance, who gets a loan, and who gets a job. Parole and sentencing risk scores, social media feeds, web search results, traffic routes, advertising, job recruitment and online dating recommendations are all consequential decisions that are already algorithmically personalised today. Powered by algorithms and data, AI is already steering the lives of billions of people, capable of influencing, in a highly specific and targeted manner, what actually happens to each of them, and the sophistication and reach of this influence are growing rapidly.
Read more here.