Report

Investigating Manipulative Applications of Generative AI


With support from the Minderoo Foundation, Gradient Institute is exploring how bad actors could leverage the latest pre-trained language models for personalised persuasion and manipulation.

Our focus is on uncovering and illustrating ethical risks associated with this technology, contributing to a more informed and responsible technological landscape. We are consulting our network of experts as we formulate scenarios where bad actors use large language models for applications such as:

  • Covertly collecting personal information by posing as a helpful assistant, exploiting trust to deceive individuals into sharing details unknowingly.

  • Applying personalised persuasion techniques to endorse products or political views without revealing motives, tailoring messages to exploit information asymmetry.

  • Distorting government representatives' perceptions of constituent sentiment and the salience of voting issues in their electorates, thereby influencing policy decision-making and undermining the democratic process.

Moving beyond theory, we are actively developing software that demonstrates the viability of these risks, providing an interactive experience for senior decision-makers in government and industry. Our ultimate aim is to equip them with the knowledge needed to prompt a thoughtful reassessment of risks, practices and legislative imperatives as they navigate the rapidly evolving AI ecosystem.

