
Dec 3, 2019

Practical Challenges For Ethical AI (White Paper)

In our new White Paper, we discuss four of the practical challenges in designing and building ethical AI systems.

This blog entry contains the executive summary of Gradient Institute’s new White Paper. The full paper can be found here.


This White Paper examines four key challenges that must be addressed to make progress towards developing ethical artificial intelligence (AI) systems. These challenges arise from the way existing AI systems reason and make decisions. Unlike humans, AI systems only consider the objectives, data and constraints explicitly provided by their designers and operators. They possess no intrinsic moral awareness or social context with which to understand the consequences of their actions. To build ethical AI systems, designers must therefore represent any moral considerations explicitly in the objectives, data and constraints that govern how those systems make decisions.

The first challenge in creating ethical AI is to define ethical objectives and constraints as precise, measurable quantities. This is necessary because AI systems reason mathematically, rather than through written or spoken language that is open to interpretation. Any mathematical representation will only approximate the ‘true’ intention motivating the deployment of the system, and will inevitably fail to capture the full complexity of human experience. It is therefore crucial that designers maximise the quality of such mathematical approximations by incorporating diverse viewpoints and by building robust mechanisms to detect and mitigate the risks that arise when these approximations prove unsatisfactory.
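To make the idea of a measurable ethical objective concrete, here is a minimal illustrative sketch in Python (not drawn from the White Paper): it treats expected benefit as the system's primary objective and a demographic parity gap as one possible fairness constraint. The metric, the groups, the decision threshold and the tolerance are all assumptions chosen for illustration; selecting them in practice is itself an ethical design decision.

```python
import numpy as np

def selection_rate(decisions, group_mask):
    """Fraction of people in a group who receive a positive decision."""
    return decisions[group_mask].mean()

def demographic_parity_gap(decisions, group_a, group_b):
    """One possible measurable proxy for 'fairness': the absolute difference
    in selection rates between two groups. The choice of metric, and of the
    groups themselves, is an ethical design decision, not a technical given."""
    return abs(selection_rate(decisions, group_a) - selection_rate(decisions, group_b))

def expected_benefit(decisions, predicted_benefit):
    """A measurable proxy for the system's primary objective."""
    return (decisions * predicted_benefit).mean()

# Hypothetical data: model scores and group membership for 1,000 individuals.
rng = np.random.default_rng(0)
predicted_benefit = rng.uniform(0, 1, size=1000)
group_a = rng.random(1000) < 0.5
group_b = ~group_a

# A simple thresholding policy; the 0.7 threshold and the 0.05 tolerance
# below are illustrative assumptions, not recommendations.
decisions = predicted_benefit > 0.7

objective = expected_benefit(decisions, predicted_benefit)
constraint_violated = demographic_parity_gap(decisions, group_a, group_b) > 0.05

print(f"objective (expected benefit): {objective:.3f}")
print(f"fairness constraint violated: {constraint_violated}")
```

Even in this toy setting, the code makes visible what the paragraph above argues: every ethical consideration the system is meant to respect has to appear somewhere as an explicit, measurable quantity.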

Given a well-defined set of ethical objectives, the next challenge is to create a system that will realise them. Doing so requires careful analysis of data bias, causal relationships and predictive uncertainty. Real-world data sets inevitably contain biases for which designers must account. It is also necessary to model the causal effect that design choices are likely to have on the objectives to better ensure the decisions produced by the AI system lead to the intended consequences. A quantitative treatment of uncertainty is also crucial to understand and manage risks associated with deploying the system.
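As an equally illustrative sketch of a quantitative treatment of uncertainty, the following Python snippet uses bootstrap resampling to attach an interval to a decision-relevant quantity and escalates to human review when that interval is too wide. The data, the quantity measured and the escalation rule are assumptions made for the sake of the example, not prescriptions from the paper.

```python
import numpy as np

# Illustrative only: bootstrap resampling as one simple way to attach a
# quantitative uncertainty estimate to a decision-relevant quantity
# (here, the selection rate observed for a single group).
rng = np.random.default_rng(1)
decisions = rng.random(200) < 0.3   # hypothetical observed decisions for one group

boot_rates = [
    rng.choice(decisions, size=decisions.size, replace=True).mean()
    for _ in range(2000)
]
low, high = np.percentile(boot_rates, [2.5, 97.5])

print(f"estimated selection rate: {decisions.mean():.3f}")
print(f"95% bootstrap interval:   [{low:.3f}, {high:.3f}]")

# One way to act on the uncertainty: if the interval is too wide to rule out
# an unacceptable outcome, flag the case for human review rather than
# letting the automated decision stand.
if high - low > 0.10:
    print("Interval is wide: escalate to human oversight before acting.")
```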

The next challenge is to leverage human reasoning and judgement to provide effective oversight of AI-driven decisions. Effective oversight relies on nuanced approaches to transparency and interpretability; simplistic measures such as publishing source code or offering intuitive explanations for individual decisions are unlikely, on their own, to produce robust, reliable and ethical AI systems. More effective approaches will likely be context- and domain-specific, and will require a deeper understanding of how to combine human and machine reasoning.

The fourth and final challenge this White Paper discusses is how to ensure regulation keeps pace with advances in AI development. The urgent need to establish effective systems of accountability for AI-driven decision-making demands a proactive approach to regulation; however, because the scientific understanding of AI ethics is still in its infancy, policymakers should proceed with caution. A multidisciplinary approach is required to solve the challenges outlined in this White Paper, and it is unlikely that general-purpose, cross-domain regulation will prove suitable. Instead, we should consider bolstering sectoral regulatory bodies by identifying how current regulation falls short, making changes to cover existing gaps, and ensuring that all regulation is flexible enough to respond to rapid advances in AI technology.

The challenges of developing ethical AI systems are substantial, but so too are the opportunities to do good. With the right knowledge, we can engineer automated decision-making systems that deliberately minimise harm and can be configured to achieve a variety of ethical objectives. In doing so, we create the opportunity to discuss, re-examine, and perhaps even advance the values that we as a society wish to live by.


You can find the rest of our White Paper here.