The rapid growth of artificial intelligence bots is outpacing the tech industry's ability to understand their uses and guard against them getting out of control, an expert has warned.

With its release in late 2022, ChatGPT ushered in a new era of generative AI, leaving governments and traditional tech companies fighting to keep up, and machine learning researchers frequently baffled by capabilities no one designed or predicted.

“The speed at which capability is growing is greater than the speed at which AI safety is being investigated,” Bill Simpson-Young, chief executive of the Gradient Institute, a non-profit which studies ethical AI use, told Central News.

“We’re dealing with a technology we don’t really understand. The people designing it don’t really understand how it is behaving.

“This is the first time I can think of in computing where new features are emerging that no one saw coming and emerge purely by making systems bigger.”

Numerous industry players, including Sam Altman, chief executive of ChatGPT maker OpenAI, have warned of the risk the technology could get out of control, while others have called for a pause in its development.

Advanced models such as GPT-4 can process language and text with greater accuracy, whilst autonomous frameworks like Auto-GPT can even write their own code.

“What’s already starting to happen is… the combination of large language models and other autonomous agent frameworks like Auto-GPT,” said Simpson-Young.

The risks and benefits of ChatGPT and other forms of AI have divided users over the take-up of the technology.

A joint study by KPMG and the University of Queensland found that while more people felt they could trust AI, they remained wary of how it would be used.

In a survey conducted by Forbes, more than 75 per cent of Americans expressed concerns that AI would lead to job losses and could be used to produce misinformation. Respondents were also wary of how artificial intelligence is used in certain types of content.

Organisations across the world are also feeling the pressure of implementing AI, with a 2021 McKinsey survey identifying cybersecurity as a leading risk of the technology. Between 2020 and 2021, the proportion of organisations in emerging economies reporting efforts to prevent workers from being laid off because of AI rose by five per cent.

Simpson-Young said he is particularly concerned about the ability of AI models to generate misinformation, and called for strong regulation to govern their use.

“The scale at which that can happen now is huge,” he added. “The fact that you can do this interactively, through conversation, means you have much more potential for misinforming people.

“Trying to just tweak existing consumer protection is not going to be enough.

“Someone could use one of these models to plan out some actions that might be really harmful to a lot of people and those actions will then get implemented without humans being in the loop.” 

The risks haven’t stopped the artificial intelligence market from expanding. Between 2018 and 2021, global corporate investment in AI technologies rose by more than $127 billion.

Companies are investing billions of dollars in the research and development of AI technologies that are now guiding the next phase of the digital era, and Australia alone will need more than 160,000 AI specialists in less than a decade.

The possibilities being unearthed by AI place a greater onus on the responsible development and use of these platforms, with even the chatbot itself disclaiming responsibility.

“I am a tool created by humans and my responses are based on patterns and information present in the data I was trained on,” said ChatGPT, when prompted. 

“The responsibility for ensuring the proper use and ethical considerations of AI lies with the developers, policymakers, and users who interact with these systems.”

Main image by Mike MacKenzie/Flickr.