We undertake scientific research in collaboration with universities and other research institutions to advance a science of ethics for AI, and share the findings across the academic community through publications and presentations. Current areas of research activity are featured below.
To ensure AI systems make ethical decisions, we must understand how different design choices affect those decisions, and how the decisions in turn affect the ethical outcomes we care about. In other words, we need to understand the causal relationships that run all the way from the equations governing how the system operates to the downstream impact its decisions have on the people affected by it. Causality is the scientific study of cause-and-effect relationships: how they can be mathematically represented, statistically modelled, and used to support effective decision-making. This research is conducted in collaboration with industry partners deploying AI systems at scale.
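The chain from design choice to downstream outcome can be sketched as a minimal structural causal model. The scenario below is entirely hypothetical (a made-up loan-approval system with an illustrative decision threshold); it only shows the idea of intervening on a design choice and measuring its causal effect on an outcome of interest.

```python
import random

random.seed(0)

# Hypothetical structural causal model (SCM) sketch of a loan-approval system.
# The "design choice" is the decision threshold; intervening on it, as in
# do(threshold = t), lets us measure its causal effect on the approval rate.
def simulate(threshold, n=10_000):
    approved = 0
    for _ in range(n):
        income = random.gauss(50, 15)                  # exogenous cause
        score = 0.02 * income + random.gauss(0, 0.2)   # system's risk score
        if score > threshold:                          # design choice under study
            approved += 1
    return approved / n

# Comparing two interventions on the threshold reveals the causal effect of
# that design choice on the downstream outcome (the approval rate).
lenient = simulate(threshold=0.8)
strict = simulate(threshold=1.2)
assert lenient > strict  # a stricter threshold causally lowers approvals
```

In a real deployment the causal graph would be far richer, but the principle is the same: each design choice is a node whose effects can be traced to the outcomes people experience.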
Algorithmic fairness involves expressing notions such as equity, equality, or reasonable treatment as quantifiable measures that a machine learning algorithm can optimise. Mathematising these concepts so they can be inferred from data is challenging, as is deciding how to balance fairness against other objectives, such as accuracy, in a particular application. Our research in this area is motivated by our personal experiences and by industry partner use cases, and thus has a strongly applied focus.
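As one illustration of how a fairness notion can be mathematised, the sketch below computes the demographic-parity gap between two groups: the absolute difference in their positive-decision rates. The function name, groups, and data are illustrative assumptions, not drawn from any real system.

```python
# Hypothetical sketch: "demographic parity" asks that the rate of positive
# decisions be similar across groups. The gap below is one quantifiable
# measure an algorithm could be constrained or optimised against.
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Toy data: group "A" receives a positive decision 3/4 of the time, "B" 1/4.
decisions = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A perfectly parity-fair decision rule would yield a gap of 0; the tension described above arises because driving this gap down typically costs accuracy on other objectives.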
Existing anti-discrimination law was developed on the assumption that decisions are made by people. However, AI systems are playing an increasing role in decision-making, and because people and AI systems process information quite differently, there is a need to revisit anti-discrimination law. We are investigating how existing legislation could be applied to algorithmic decision-making systems, whether there are important gaps, and how those gaps might be addressed. This research is conducted in collaboration with legal scholars with expertise in anti-discrimination law.
Can and should AI systems be used for discretionary decision-making under Australian administrative law? AI systems are routinely used to automate non-discretionary decisions, but it is unclear whether it is technically or legally possible for an AI system to make a discretionary decision. As automated decision-making systems expand their footprint, further work is needed to clarify the appropriate scope for their use by governments in administrative decision-making. This line of research is pursued in conjunction with legal experts in public and administrative law.