Gradient in the news
Articles in the news media on the work and influence of Gradient Institute
Govt appoints AI Expert Group - Will guide introduction of ‘guardrails’.
15 February 2024
Bill Simpson-Young, chief executive of Gradient Institute, has been appointed by the federal government to sit on a 12-person expert panel.
A 12-person expert panel has been appointed by the federal government to guide the introduction of “guardrails” for the use of artificial intelligence in high-risk settings.
Minister for Industry and Science Ed Husic announced the establishment of the new AI Expert Group on Wednesday, and revealed it held its first meeting earlier this month.
The group includes renowned AI experts from the legal and tech sectors, and will provide advice to the Industry department on transparency, testing and accountability in AI over the next four months.
New artificial intelligence expert group (media release)
14 February 2024
Following the Government’s interim response to the Safe and Responsible AI in Australia consultation, Minister for Industry and Science Ed Husic has today announced the establishment of a new Artificial Intelligence Expert Group.
The Group will provide advice to the Department of Industry, Science and Resources on immediate work on transparency, testing and accountability, including options for AI guardrails in high-risk settings, to help ensure AI systems are safe.
The Group has already started work and met for the first time on Friday 2 February 2024.
Cadent donates undisclosed amount to Gradient Institute to develop AI safety research
9 November 2023
Ethical technology studio Cadent has donated an undisclosed amount to the independent nonprofit research institute Gradient Institute to help it develop research on technical AI safety.
Cadent said the donation will help Gradient Institute’s efforts in addressing this crucial gap.
Cadent’s donation will support a three-month research project of a PhD student working on AI safety, under the supervision of Gradient Institute researchers.
The project will aim to investigate the potential misuse of large language models for manipulating individuals for commercial, political, or criminal purposes, and to explore original technical solutions against such threats.
Tech Policy Design Centre Hopeful on the Potential for Australia’s Artificial Intelligence
25 October 2023
With the rapidly changing social and technical landscape of artificial intelligence (AI), the Tech Policy Design Centre’s podcast Tech Mirror, hosted by Professor Johanna Weaver, today released the episode Beyond The Pause: Australia’s AI Opportunity, featuring Bill Simpson-Young, CEO of Gradient Institute, and Dr Tiberio Caetano, Chief Scientist at Gradient Institute – two of Australia’s leading independent AI technologists.
Both signatories of the Pause Letter published in March 2023, Mr Simpson-Young and Dr Caetano reflect on the proliferation of AI into public consciousness and the steps that led to them signing the letter.
“At the time there was no widespread recognition of the potential issues of large language models. In my judgement, based on scientific research and based on the reaction of reputable colleagues who were legitimately concerned about the issue, it was very clear to me that this was a matter that needed to be brought to the centre of our political discourse. It was about sounding the alarm,” Dr Caetano said.
New report to help businesses implement responsible AI
28 June 2023
The report, ‘Implementing Australia’s AI Ethics Principles: A Selection of Responsible AI Practices and Resources’, was developed by Gradient Institute.
It comes as the recent Australian Responsible AI Index found that despite 82 per cent of businesses believing they were practising AI responsibly, less than 24 per cent had actual measures in place to ensure they were aligned with responsible AI practices.
Guide on how to implement responsible AI
26 June 2023
Bill Simpson-Young, CEO of Gradient Institute said he hoped the report would encourage more businesses to start the journey towards responsible AI practices.
"Even though Responsible AI practices, resources and standards will keep evolving at a fast pace, this should not distract organisations from implementing practices that are known to be effective today,” Mr Simpson-Young said.
“For example, when an AI system is engaging with people, informing users of an AI’s operation builds trust and empowers them to make informed decisions. Transparency for impacted individuals could be as simple as informing the user when they are interacting with an AI system.
“While it is broadly accepted that fairness is important, what constitutes fair outcomes or fair treatment is open to interpretation and highly contextual. What constitutes a fair outcome can depend on the harms and benefits of the system and how impactful they are.
“It is the role of the system owner to consult relevant affected parties, domain and legal experts and system stakeholders to determine how to contextualise fairness to their specific AI use case. The report helps organisations address these challenges,” he said.
CSIRO and Gradient Institute publish report on ethical use of AI
26 June 2023
Released under a Creative Commons licence, the report includes recommendations for practices including impact assessments, data curation, fairness measures, pilot studies and organisational training, all aimed at helping businesses develop robust and responsible AI.
It has been published following research from Fifth Quadrant and the Gradient Institute finding that despite 82% of Australian businesses believing they were practising AI responsibly, less than 24% had measures in place to ensure they were aligned with responsible AI practices.
Don’t blindly trust AI vendors
22 June 2023
Bill Simpson-Young, CEO of the Gradient Institute, which helped develop the report, said businesses need to start implementing practices that are known to be effective.
“For example, when an AI system is engaging with people, informing users of an AI’s operation builds trust and empowers them to make informed decisions,” he said.
“Transparency for impacted individuals could be as simple as informing the user when they are interacting with an AI system.”
CSIRO leads on responsible AI
22 June 2023
“We’ve written this report with the National AI Centre specifically to help organisations, particularly smaller companies, who want to build AI systems responsibly but just don’t know where to start,” said Bill Simpson-Young, chief executive of the Gradient Institute, which wrote the report.
“Even though there aren’t specific AI regulations now, there are lots of laws that apply to AI systems,” he said.
“It’s very easy for a company building an AI system to inadvertently discriminate. It’s possible for an organisation to do things that are illegal.
“You really have to be careful in the design of the system, in the way it’s operated, in the way it’s monitored.”
The report comes in the middle of the government’s eight-week consultation period into AI regulation.
Businesses offered crash course on AI ethics
22 June 2023
Australian organisations are being urged to tighten their use of artificial intelligence with the application of ethics principles ahead of potentially sweeping changes to regulations and standards. Developed in partnership with the Gradient Institute, the report offers 26 practices that promote responsible use of the increasingly popular but high-risk technology. The practices range from building education and awareness to specifically defining how to measure and monitor fairness using data, with each explained in terms of who should be using them, their impact and likely barriers to implementation.
AI poses ‘risk of extinction’ warn industry leaders
31 May 2023
“If the leading developers, you know, you've got Anthropic, you’ve got OpenAI, you’ve got Google, they agree it poses an extinction risk, then people need to know what these organisations think,” said Tiberio Caetano, chief scientist at Australian AI ethics think tank Gradient Institute.
Pentagon AI fake image strikes concern
24 May 2023
Tiberio Caetano, chief scientist at the Gradient Institute, a non-profit AI ethics research organisation, said counterfeit content was a “massive issue” of generative AI.
“Human life, it’s predicated on the idea that we believe in trust,” he said.
“Trust is really the backbone of civilisation. Without trust, we can’t do anything.”
Conditions right for AI to go rogue, warns expert
24 May 2023
“The speed at which capability is growing is greater than the speed at which AI safety is being investigated,” Bill Simpson-Young, chief executive of the Gradient Institute, a non-profit which studies ethical AI use, told Central News.
“We’re dealing with a technology we don’t really understand. The people designing it don’t really understand how it is behaving.
“This is the first time I can think of in computing where new features are emerging that no one saw coming and emerge purely by making systems bigger.”
Control AI or risk ‘1984’ future, says human rights commissioner Lorraine Finlay
15 May 2023
Gradient Institute Chief Scientist Tiberio Caetano said there was the potential that AI could be used to manipulate people. Australia’s human rights commissioner Lorraine Finlay has warned artificial intelligence will turn into an Orwellian nightmare that presents fiction as fact and spreads disinformation if the federal government and business fail to rein in the growing technology.
How should Australia regulate AI?
3 May 2023
“We need to empower individual regulators — sector-based regulators. But we need AI-dedicated agencies, just as we have for finance, just as we have for health, just as we have for any other sort of truly high-impact field,” said Tiberio Caetano, Co-Founder and Chief Scientist of the Gradient Institute, and one of Australia’s leading experts on AI.
Watch for ‘bad actors’ in AI power struggle, regulators urged
2 May 2023
One of the nation’s leading artificial intelligence experts has warned that regulators need to start paying attention to anyone throwing vast amounts of computer power at machine learning, to prevent “bad actors” from creating harmful new capabilities for AI.
The large language models (LLMs) that underpin services such as ChatGPT have placed new opportunities to create AI functionality in the hands of anyone with enough computing power, according to Dr Tiberio Caetano, co-founder and chief scientist of Gradient Institute, a think tank that works on AI ethics, accountability and transparency.
Australian experts want AI regulator, investigation of failures
28 April 2023
Gradient Institute co-founder and chief scientist Tiberio Caetano says the public release of generative AI tools, and access to uncapped compute power to push them further, has upended this paradigm, handing power to organisations and individuals subject to relatively less scrutiny.
Using AI to keep people moving
20 April 2023
Gradient Institute and Transurban are supporting the release of the Responsible AI Index 2022, which measures and tracks how well Australian organisations are designing and implementing AI systems, with a view to fairness, accountability, transparency, and impact on people and society. Now in its second year, the study was conducted by Fifth Quadrant CX, led by the Responsible Metaverse Alliance, supported by Gradient Institute and sponsored by Transurban and IAG.
Read more about the Responsible AI Index.
Kuwaiti newsreader is just one facet of AI penetration of media
12 April 2023
Gradient Institute chief executive Bill Simpson-Young shared this concern about AI generating news, and told The New Daily his guess is that there is a human behind Fedha checking everything, because the technology is not designed to create the news.
“Large language models are incredibly powerful and incredibly impressive, but they’re not good at generating facts. They’re not designed to generate facts,” he said. “They’re designed to generate language.”
IAG Sponsors 2nd Responsible AI Index, Recommits to Responsible AI
4 April 2023
IAG was an early supporter and adopter of responsible AI practices. In 2018 we supported the creation of the not-for-profit Gradient Institute, recognising the importance of responsible AI, and continue to sponsor the research institute today. In 2020-21 we participated in a pilot study of the Australian AI Ethics Principles. In that case study, we illustrate how the car insurance total loss experience has been improved by the application of AI.
Research Shows a Worrying Lack of Action Towards Responsible AI
4 April 2023
Australia risks falling behind on Artificial Intelligence (AI), with new research revealing that Australian businesses are slow to develop and use the technology responsibly.
Now in its second year, the Responsible AI Index 2022, led by Dr Catriona Wallace, of the Responsible Metaverse Alliance, supported by Gradient Institute and sponsored by IAG and Transurban, measures and tracks how well organisations are designing and implementing Responsible AI systems, with a view to fairness, accountability, transparency, and impact on people and society.
CSIRO creates Responsible AI Network
16 March 2023
Gradient Institute joins CSIRO National AI Centre's Responsible AI Network as an expert knowledge partner.
Voice of real Australia: Why marketers love NAPLAN time
10 May 2022
Plus, a press release from the Australian National University and Gradient Institute announcing that kids who are happier get better test results. And, conversely, kids who are unhappy do worse.
AI changes far reaching, metaverse gaining traction
10 May 2022
Artificial intelligence (AI) is likely to increasingly drive improved customer and employee experience after being “slow out the gate” in that area compared to other advantages it is delivering, Gradient Institute Director Catriona Wallace has told the Actuaries Institute summit. “We will start to see a greater emphasis on AI-driven employee, customer experience, really as a result of the pandemic and the challenge organisations and brands are having with regard to retaining customers and retaining and attracting employees,” she said. “We will see a lot greater use of AI and machine learning in our workplaces around employee wellbeing and productivity and customer wellbeing and sales and service.”
NSW gov commissions metaverse study
3 May 2022
Partners with the Gradient Institute. The NSW government has commissioned a study into the metaverse to understand the opportunities and regulatory risks that shared virtual worlds hold. The study will be conducted by the Gradient Institute, a Sydney-based research and advocacy institute, with support to be provided by the Department of Customer Service. The Gradient Institute has previously worked with the department’s digital arm, digital.nsw, to develop the government’s AI ethics policy.
Minderoo supports AI configuration tool
16 March 2022
AI platforms are set to become more readily customisable as a result of new, open-source software created with the support of Minderoo Foundation. That software, dubbed AI Impact Control Panel, will provide companies with a graphic interface to allow them to adjust the technical interface of an AI platform. It comes on the back of a report by developer Gradient Institute, compiled with Minderoo’s assistance, which aims to provide guidance to businesses in de-risking their reliance on computers to make key decisions.
Urgent warning for parents after children exposed to ‘very dark’ side of the metaverse
12 March 2022
Parents are being warned their children could be exposed to ‘virtual crimes’ in the metaverse, which currently have little to no policing. The ‘metaverse’ is an entire virtual world, created using artificial intelligence, where users create lifelike avatars to interact with and live inside the alternate universe.
Study finds happier kids get better test results at school
9 February 2022
A new study has confirmed what many parents would already have thought – happier kids get better test results at school. Researchers from the Australian National University (ANU) studied more than 3,000 students and found that self-reported levels of depression had a negative effect on NAPLAN results. ANU and Gradient Institute performed the work together, with Gradient Institute applying its machine-learning-based causal inference techniques and software tools.
Bradfield Oration 2021: Interconnected cities vision expands to Newcastle and Wollongong
2 December 2021
The Daily Telegraph’s Anna Caldwell chats with Amy Brown, Ann Sherry, Catriona Wallace (Gradient Institute) and Tony Shepherd.
The hidden risks of government by artificial intelligence
30 November 2021
Ever since the computer HAL went rogue in the film 2001: A Space Odyssey, people have worried that one day computers would become so smart they would take over the world. That is yet to happen, but the rapid development of artificial intelligence is starting to raise important issues. The NSW Ombudsman has just issued a report titled The new machinery of government, which acknowledges the technology has uses and benefits but warns governments should be more careful in how they apply it. Gradient Institute contributed to the technical aspects of the report.
The West's strength might be its weakness in our AI-driven world
20 November 2021
Ethics is at the heart of the technological arms race the world is officially not participating in. When the values of the West are leveraged against us by autocrats, they become our weakness as well as our strength. This week the Prime Minister gave a speech as part of the Australian Strategic Policy Institute’s new Sydney Dialogues series that, while accompanied by less fanfare, was more consequential than the AUKUS submarine deal.
UnionBank releases report with recommendations on responsible AI implementation
5 November 2021
Artificial intelligence (AI) continues to become increasingly widespread and organizations are ever becoming more reliant on AI systems for critical decision-making. With this, there is a need for a framework that will guide them in ensuring that they are able to implement their AI initiatives responsibly. Recognizing this need, Union Bank of the Philippines (UnionBank) released a report that will act as a guide on how responsible AI can be enforced on the greater industrial level. The contents of the report were based on a collaborative initiative with the Gradient Institute called Project AI Trust, which has enabled the Bank to consider and implement responsible AI practices for its automated systems.
IAG-backed research reveals need for investment in Responsible AI
14 October 2021
Ethical AI Advisory and Gradient Institute have launched the inaugural Australian Responsible AI Index, sponsored by IAG and Telstra. The index findings, which reveal that less than one in 10 Australia-based organisations have a mature approach to deploying responsible and ethical artificial intelligence (AI), signal the urgent need for Australian organisations to increase investment in responsible AI strategies.
Australian organisations lack maturity in responsible AI
5 October 2021
Less than one in 10 organisations in Australia have a mature approach to deploying responsible artificial intelligence (AI), underscoring a need for greater focus on the ethical considerations related to growing use of the technology.
NSW Artificial Intelligence Advisory Committee inaugural members named
17 March 2021
The New South Wales government has named the 11 individuals who will form the NSW Artificial Intelligence Advisory Committee and play a role in how AI is used in the state, including Gradient Institute CEO Bill Simpson-Young.
Can artificial intelligence now influence human decision-making?
23 February 2021
A new study by researchers from the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61, along with the Australian National University and researchers from Germany, has determined that AI can influence human decision-making.
Can you trust a computer algorithm?
10 February 2021
Artificial intelligence can mimic human decisions — but also drastically amplify hidden biases.
Ethical AI Advisory And Gradient Institute Partner For Australian AI Ethics
30 September 2020
Non-profit Gradient Institute and consultancy Ethical AI Advisory have signed an alliance to tackle the development and deployment of Artificial Intelligence (AI) that is both ethical and trustworthy.
Gradient and Ethical AI to take on biased AI algorithms
29 September 2020
The non-profit Gradient Institute and consultancy Ethical AI Advisory have signed an alliance to collaborate on the development and deployment of Artificial Intelligence (AI) that is both ethical and trustworthy.
Read more ...Gradient and Ethical AI to take on biased AI algorithms
29 September 2020
The non-profit Gradient Institute and consultancy Ethical AI Advisory have signed an alliance to collaborate on the development and deployment of Artificial Intelligence (AI) that is both ethical and trustworthy.
Read more ...Singapore to establish AI framework for ‘fairness’ credit scoring metrics
29 May 2020
Monetary Authority of Singapore tasks two teams, comprising banks and artificial intelligence industry players, to develop metrics that ensure the “responsible use of AI” for credit risk scoring and customer marketing.
MAS, banks creating framework for AI use in assessing credit risk
20 May 2020
The Monetary Authority of Singapore (MAS) is working with banks and technology firms to develop measures to judge customers fairly when artificial intelligence (AI) is used to assess their credit risk.
Autonomous cars ‘won’t kill insurance’
24 February 2020
The chief customer officer of Insurance Australia Group says driverless cars will not kill the insurance sector, but they will substantially change how the industry operates.
Battleground over accountability for AI
13 December 2019
AI deployments are saturating businesses, but few are thinking about the ethics of how algorithms work and the impact they have on people.
Federal govt to create AI ethics guidelines
5 April 2019
The Coalition government has revealed plans to create guidelines to ensure artificial intelligence is responsibly developed and applied in Australia.
NSW govt looks to develop AI ethics policy
28 March 2019
The NSW government has begun considering what an ethics policy framework might look like for artificial intelligence in a bid to drive agencies to adopt the technology.
New institute wants ‘world where systems behave ethically’
14 December 2018
Not-for-profit Gradient Institute will release open-source tools and provide training in a bid to make machine learning fairer.
IAG teams up with Data61, USyd over AI ethics institute
13 December 2018
Insurance Australia Group (IAG) has today unveiled a new partnership with the CSIRO’s Data61 and University of Sydney to launch a research and advocacy institute concerned with the ethical use of artificial intelligence.
Data61 leads new ‘ethical’ artificial intelligence institute
13 December 2018
The Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61, alongside IAG and the University of Sydney, has created a new artificial intelligence (AI)-focused institute, aimed at exploring the ethics of the emerging technology.