Australia’s national science agency, the CSIRO, has warned business leaders not to blindly trust companies selling AI products, in a new report it hopes will encourage responsible AI implementation.

Recognising that many businesses will procure AI products from third parties rather than building them in-house, CSIRO offers a “caution” that buying AI systems “does not absolve [businesses] of responsibility for how the system operates”.

“The purchaser needs to be informed about (or ideally be involved in specifying) the system’s algorithms, data and objectives,” the CSIRO report says.

“A client who simply trusts that a vendor has taken appropriate care – without applying their own due diligence – may be exposing themselves to uncontrolled risks for which they are ultimately accountable.”

The risks of failed due diligence can be severe, CSIRO warns, and could see companies sleepwalking into reputational crises or serious legal trouble by buying off-the-shelf AI products without a second thought.

While Australia is far behind the European Union when it comes to bespoke AI-related legislation, lawmakers appear to be leaning toward an approach that focuses on how AI interacts with existing laws.

“AI is not unregulated,” a government spokesperson recently told Information Age. “Our copyright laws prescribe how data can be collected and used to train large language models, and privacy laws shape the kind of information that can be incorporated.

“Consumer protection laws also apply to the use of AI to mislead or deceive consumers.”

To combat these risks, CSIRO has offered a comprehensive guide for responsible AI implementation that breaks down Australia’s AI Ethics Principles into a series of business actions relevant to system owners, developers, and senior leadership.

The suggested actions range from impact assessments and setting objectives to data processing, privacy protection, and external audits.

CSIRO is hoping the report – along with other work from its Responsible AI Network – will help bridge the gap between the more abstract, theoretical frameworks and real-world business applications.

“We hear from businesses that their ability to innovate with AI is directly correlated with their ability to earn trust from the communities they serve,” National AI Centre director Stela Solar said.

“AI systems that are developed without appropriate checks and balances can have unintended consequences that can significantly damage company reputation and customer loyalty.”

The data used to train AI models can embed bias, as seen with hiring and recruitment algorithms that have ended up baking in subconscious human bias against women.
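As a rough illustration of the kind of due diligence a buyer could apply, the sketch below compares selection rates between groups of candidates scored by a screening system. The column names and the 0.8 threshold (the common “four-fifths” rule of thumb) are illustrative assumptions, not anything prescribed in the CSIRO report.

```python
# A minimal sketch of one pre-deployment check on a vendor's screening model:
# compare selection rates across groups and flag large disparities for review.
# Field names and the 0.8 threshold are illustrative assumptions only.

def selection_rates(records, group_key, selected_key):
    """Return the fraction of candidates selected within each group."""
    totals, selected = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        selected[g] = selected.get(g, 0) + (1 if r[selected_key] else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (closer to 1 is more even)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical shortlisting outcomes produced by a vendor's model.
candidates = [
    {"gender": "female", "shortlisted": False},
    {"gender": "female", "shortlisted": True},
    {"gender": "male", "shortlisted": True},
    {"gender": "male", "shortlisted": True},
]

rates = selection_rates(candidates, "gender", "shortlisted")
ratio = disparate_impact_ratio(rates)
print(rates, ratio)
if ratio < 0.8:  # flag for human review before relying on the system
    print("Warning: selection rates differ substantially between groups")
```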

Even automated decision-making systems that rely on simple arithmetic can lead to irreparable harm, as seen with the Robodebt fiasco.
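The arithmetic at the heart of Robodebt was income averaging: annual income was smeared evenly across 26 fortnights and compared with what a person actually earned each fortnight, which could manufacture an apparent overpayment for anyone with irregular income. The sketch below illustrates that effect with invented figures and a toy benefit rule; none of the numbers reflect real payment rules.

```python
# Illustrative sketch of income averaging, using invented figures and a toy benefit rule.

FORTNIGHTS_PER_YEAR = 26

# Hypothetical casual worker: earned $26,000, but only in the second half of the year.
actual_fortnightly_income = [0.0] * 13 + [2000.0] * 13
annual_income = sum(actual_fortnightly_income)

# Averaging spreads the same income evenly: $1,000 every fortnight.
averaged_fortnightly_income = annual_income / FORTNIGHTS_PER_YEAR

# Toy benefit rule (an assumption for illustration): full payment below a threshold, nothing above it.
def benefit(fortnightly_income, threshold=500.0, payment=300.0):
    return payment if fortnightly_income <= threshold else 0.0

paid = sum(benefit(i) for i in actual_fortnightly_income)            # entitlement on actual income
implied = sum(benefit(averaged_fortnightly_income)
              for _ in range(FORTNIGHTS_PER_YEAR))                   # entitlement implied by averaging

print(f"Paid on actual income:       ${paid:,.0f}")
print(f"Implied by averaged income:  ${implied:,.0f}")
print(f"'Debt' created by averaging: ${paid - implied:,.0f}")
```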

Bill Simpson-Young, CEO of the Gradient Institute, which helped develop the report, said businesses need to start implementing practices that are known to be effective.

“For example, when an AI system is engaging with people, informing users of an AI’s operation builds trust and empowers them to make informed decisions,” he said.

“Transparency for impacted individuals could be as simple as informing the user when they are interacting with an AI system.”