LinkedIn News Australia’s Post


How should Australia regulate AI? Two-thirds of Australians expect the technology to be regulated and want an independent body to oversee it, according to a recent report by consultancy firm KPMG and the Australian Information Industry Association. (Read more: https://lnkd.in/eHWs_umV)

In addition to a dedicated AI regulator, there are also calls to expand the powers of the eSafety Commissioner or the ACCC so that they can better oversee companies developing or deploying the technology. (Read more: https://lnkd.in/eW8rbbmC)

The European Commission has made the most progress in this space so far — it has been drafting an AI Act since 2020. However, rapid advances in generative AI, driven largely by the rise of ChatGPT, forced the Commission to update its draft legislation last week. If passed, the Act will see AI tools ranked according to their perceived risk level (minimal, limited, high and unacceptable). Companies developing or using riskier AI applications will face stronger transparency requirements and will also need to disclose any copyrighted material used to develop their systems. (Read more: https://lnkd.in/gbbbaQwp)

Artificial intelligence is difficult to regulate because of its broad range of uses across a diverse range of industries. The technology is not new, but the popularity of ChatGPT has inspired companies around the world to adopt it. ChatGPT's release has also put AI into the hands of individuals, smaller companies and even so-called 'bad actors' who face little scrutiny.

“We need to empower individual regulators — sector-based regulators. But we need AI-dedicated agencies, just as we have for finance, just as we have for health, just as we have for any other sort of truly high-impact field,” said Tiberio Caetano, Co-Founder and Chief Scientist of the Gradient Institute and one of Australia's leading experts on AI.
Australia's Industry and Science Minister, Ed Husic MP, has directed the National Science and Technology Council to investigate the technology and provide advice to government on how to respond, with findings expected "shortly", according to InnovationAus.com. (Read more: https://lnkd.in/e3p_kvMF)

How can Australia regulate AI in a way that doesn't stifle innovation? Share your thoughts in the comments below. 🖊️ Marty McCarthy

#responsibleai #ai #ethicalai #artificialintelligence #airegulation #TechWrapUpAU

Australian experts want AI regulator, investigation of failures

https://www.innovationaus.com

Jasper Cooper

CEO at AutoRFP.ai | AI Copilot for RFPs & Security Questionnaires

1y

It's crucial for the government to actually understand AI's implications before creating heavy laws or regulations. Doing so too soon will hurt Australia's competitive edge in this now-critical industry. Just like other industries, I agree that AI will need its own rules eventually. But timing is key. Take aviation, for example: Qantas began flying internationally in 1935, but the Civil Aviation Authority, which oversees aviation safety and regulation, wasn't established until 1988. This allowed the public and government to really grasp the industry's true risks and adapt, instead of rushing into creating regulations and authorities based on iffy predictions. I'm certainly not saying we should wait that long, but taking our time with AI regulation will help Australia stay competitive. By understanding AI's true impact, we can legislate incrementally based on reality rather than on unfounded fears of some dystopian future. The real dystopian future is AI taking over the world and Australia not being a part of it.

⭐️ Blair Hudson

AI and Software Engineering Leader • Transforming the Global Workforce with Human and Machine Learning • LinkedIn Top Voice

1y

Rather than jumping straight on the bandwagon with AI-specific regulation, why don’t we continue updating existing regulation to be compatible with our country’s AI goals? Our Privacy Act (1988) is still awaiting its update for the internet era to bring us into line with world leaders; the Attorney-General’s review might get us there. In the EU, the GDPR already provides individual rights regarding automated decision-making and profiling, which is a huge first gap to close before we go any deeper on AI regulation. First things first.

Michael Plis

Follow me for AI, IT, Cybersecurity | Founder @ Cyberkite | Innovator & Educator | Neurodivergent | Millennial | Trekkie 🖖 | Linkedin Top Voice | Born in Poland

1y

The extensive design of AI regulation in the EU is a good start for Australia to adopt and expand on. A risk rating system for all AI products, managed by a number of agencies including the ACMA, which handles entertainment ratings (e.g. G, PG, M, MA, MA15+, R), would be handy, or something along the lines of the types of danger a specific AI product poses. Enforcement and restriction of such tools in certain circumstances in the Australian community is also needed: in schools, for example, certain types of AI must be banned in order to maintain a stable curriculum and teach effectively. In the business world, AI needs to be regulated at the company policy level, defining what is and is not acceptable use of AI. On AI regulation more broadly, government needs to define rules about extremism, Nazism and other democracy-destroying information that could destabilise Australia further than the damage already posed by social media. In the creative and arts industries, including the music and visual arts industries, it is important for government to define copyright laws that allow artists to opt out of generative AI training databases. Data scraping laws should also be tightened to regulate what can be scraped, and only with people's permission!

Henry Patishman

Executive Vice President Identity Verification Solutions at Regula. Entrepreneur, Advisor, Investor, Speaker, Thought Leader, Consultant, Company Director at various.

1y

In my opinion, to effectively regulate AI, the regulators must add value for developers and users. One solution is a rating system similar to the food health star rating but applied to AI. Key criteria such as security, confidence factor, speed, etc. could replace the energy, saturated fat, sugar, sodium and fibre used in calculating a star rating for a given AI solution. This approach adds value for consumers by standardising comparison and allowing informed choice; it also adds great value for developers, by aligning future development to key criteria and enabling the creation of standardised tests for those criteria. For example, let's take "confidence factor": for many AI use cases it is extremely important to understand the trustworthiness of answers provided by AI. Most answers from AI systems today sound valid and plausible, but they may be based on incomplete or outdated information. They may even have used invalid sources to create their "opinion or dissertation". To solve this issue, a standardised confidence factor needs to be created. By developing this type of approach, I believe regulators will create an environment where they are best able to inform and protect consumers whilst adding value for developers of these technologies.
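[Editor's note: the comment above describes a criteria-weighted star rating by analogy with the food health star scheme. The sketch below is purely illustrative — the criteria names, weights and scaling are hypothetical, not part of any real rating scheme.]

```python
# Toy sketch of a criteria-weighted star rating for an AI product.
# Criteria names and weights below are hypothetical.
CRITERIA_WEIGHTS = {
    "security": 0.4,
    "confidence_factor": 0.4,
    "speed": 0.2,
}

def star_rating(scores: dict) -> float:
    """Map per-criterion scores in [0, 1] to a 0.5-5.0 star rating in half-star steps."""
    weighted = sum(CRITERIA_WEIGHTS[name] * scores[name] for name in CRITERIA_WEIGHTS)
    stars = 0.5 + weighted * 4.5  # scale the weighted score onto [0.5, 5.0]
    return round(stars * 2) / 2   # snap to the nearest half star

rating = star_rating({"security": 0.9, "confidence_factor": 0.7, "speed": 0.8})
```

A standardised test for each criterion (e.g. a benchmark producing the "confidence factor" score) would sit upstream of a function like this; the regulator's job would be defining those tests, not the arithmetic.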

Bec Johnson

PhD Researcher in AI ethics, University of Sydney. Formerly Google Research Ethical AI team. Managing Editor AI Ethics Journal. Listed 2020 "100 Brilliant Women in AI Ethics" by LH3. Founder PhD Students AI Ethics

1y

Thanks Marty McCarthy for sharing this piece from The Australian Financial Review. Tiberio Caetano was speaking at an event called ChatLLM23 that I organised and hosted with several others at the University of Sydney last Friday. After several months of planning, we brought together over 34 speakers from Australia and around the world to specifically address the risks and ethical impacts of generative AI. Other speakers included Margaret Mitchell and Toby Walsh, to name but a few. With over 100 people in attendance at the venue and ~200 more online, it was an important event, bringing so many experts together at this critical time to discuss generative AI, and the largest event of its kind held in Australia. You can see the full speaker list, along with their affiliations, abstracts and biographies, on our site: program.ChatLLM23.com. We had many great speakers with many important messages, and the recordings of the day will be posted on our website soon. The intent of the conference was to bring a wide and diverse range of voices together, which we achieved. The result is a great resource for the media and journalists to draw from. Kindest, Bec https://chatllm23.com/program

You can't regulate something that no one has been able to accurately define. What we have currently isn't AI; it's a set of sophisticated algorithms and machine learning... an advanced, clever mash-up of statistical techniques rather than genuine intelligence.

Stela SOLAR

Director @ National AI Centre | Co-chair of The Commonwealth AI Consortium

1y

Regulation is always evolving, especially as new innovations emerge, and encouraging discussion and debate is an essential first step. When I speak with businesses, there’s often a default to connect AI only to ethics. AI is not in a regulatory vacuum: our laws today also apply to AI systems, e.g. the Privacy Act, anti-discrimination laws, etc. AI is a tool, and the outcomes of AI systems are just as related to the contexts they operate in as to how the technology is used, and to the technology itself and how it's created. AI system outcomes are the sum of many micro-actions and multidisciplinary considerations coming together across the AI value chain, and the AI value chain includes AI and non-AI things. #AIAustralia

Jerome Babate

Executive Director, Filipino Nursing Diaspora Network

12mo

Australia can regulate AI in a way that doesn't stifle innovation by implementing a risk-based approach. The federal government could classify AI applications into low-, medium- and high-risk categories based on their potential harm to society.
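[Editor's note: a risk-based approach like the one suggested above, and like the tiered scheme in the EU's draft AI Act described in the post, amounts to mapping applications to tiers and scaling obligations with the tier. The sketch below is illustrative only — the domain-to-tier mapping and the obligations are invented for the example.]

```python
# Toy sketch of risk-tiered obligations, using the four tiers the post
# attributes to the EU draft AI Act. The mappings here are hypothetical.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

# Hypothetical assignment of application domains to tiers.
DOMAIN_RISK = {
    "spam_filter": "minimal",
    "chatbot": "limited",
    "credit_scoring": "high",
    "social_scoring": "unacceptable",
}

def obligations(domain: str) -> list:
    """Return illustrative obligations that scale with the assigned risk tier."""
    tier = DOMAIN_RISK.get(domain, "high")  # unknown domains default to a cautious tier
    rules = []
    if RISK_TIERS.index(tier) >= RISK_TIERS.index("limited"):
        rules.append("transparency: disclose AI use to users")
    if RISK_TIERS.index(tier) >= RISK_TIERS.index("high"):
        rules.append("disclose copyrighted training material")
    if tier == "unacceptable":
        rules = ["prohibited"]  # unacceptable-risk systems are banned outright
    return rules
```

The design point is that low-risk uses carry no new burden at all, which is how a tiered scheme avoids stifling innovation at the bottom of the pyramid.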

People need education. AI was part of education in good institutions like WAIT back in the mid-80s, as were units in ethics, law, philosophy and the human sciences (psychology, social and behavioural science, etc.). But there is a lot of “training” today which does not impart a holistic value process. Training makes “monkey see, monkey do” a reality, and that is AI in a nutshell: unsupervised, “trained” on data sets drawn from the internet. No wonder training had to be supervised to get anything remotely sensible, but the supervision merely produces a cloned, vague image of what the people doing the supervising feed the system. And what does that make it? Still a tool with no “skin in the game”, and thus doomed to make terrible selections. Not exactly intelligent, far from it: a fancy rules-based selection algorithm can regurgitate scored connections fast, but it still has NOT ONE SHRED of comprehension of meaning. That’s the problem. Regulators will permit and turn ON stupid things like this because the regulators THINK they are intelligent. Consider Robodebt. But if used by a person as a tool, trained to fit a purpose, it can be a useful support tool, provided those using it are educated to realise its inherent limitations.

Aaron Dye

Helping the best recruiters make more placements through their website or careers page

11mo

AI won't get regulated, because it is a competitive advantage to every country and person able to use it freely, and businesses will move to places with lower AI regulation. The only way it would work is something akin to a nuclear non-proliferation treaty between countries, and unlike that one, the technical barrier of producing uranium-235 does not exist here. Many countries signed on to that treaty simply because they could not invest the money to build giant centrifuges to extract the lighter isotope. Spinning up an AI with the open-source libraries available right now is orders of magnitude easier, and as such many countries would not sign on to such a treaty. IMO the cat is out of the bag on this one, and there is no way of putting it back in said bag.
