- Only one in three Australians is willing to trust AI
- 45% are unwilling to share their data with an AI system
- 42% generally accept AI but only 23% approve or embrace it
- 96% expect AI to be regulated, with the majority expecting government oversight
- But 86% want to know more about AI
More than half (61 per cent) of Australians know little about Artificial Intelligence (AI), and many are unaware that it is being used in everyday applications such as social media. While 42 per cent generally accept it, only 16 per cent approve of AI.
These are some of the key findings of the University of Queensland/KPMG Australia Trust in Artificial Intelligence report launched today. It is the first national survey to take a deep dive into a vital area of technology that is reshaping how we work and live, setting out to understand and quantify the extent of Australians’ trust in and support of AI, and to benchmark these attitudes over time.
“The benefits and promise of AI for society and business are undeniable,” said Professor Nicole Gillespie, KPMG Chair in Organisational Trust and Professor of Management at the University of Queensland Business School. “AI helps people make better predictions and informed decisions, it enables innovation, and can deliver productivity gains, improve efficiency, and drive lower costs. Through such measures as AI-driven fraud detection, it is helping protect physical and financial security – and facilitating the current global fight against COVID-19.”
But Professor Gillespie said that the risks and challenges AI poses for society are equally undeniable. These include the risk of codifying and reinforcing unfair biases, infringing on human rights such as privacy, spreading fake online content, technological unemployment and the dangers stemming from mass surveillance technologies, critical AI failures and autonomous weapons.
“It’s clear that these issues are causing public concern and raising questions about the trustworthiness and regulation of AI. Trust in AI systems is low in Australia, with only one in three Australians reporting that they are willing to trust AI systems. A little under half of the public (45 per cent) are unwilling to share their information or data with an AI system, and two in five (40 per cent) are unwilling to trust the output of an AI system (e.g. a recommendation or decision).”
She said that the report findings also highlighted that most Australians do not view AI systems as trustworthy – however, they are more likely to perceive AI systems as competent than as designed to operate with integrity and humanity. While many in the community are hesitant to trust AI systems, Australians generally accept (42 per cent) or tolerate (28 per cent) AI, but few approve (16 per cent) or embrace (7 per cent) it.
Professor Gillespie said a key insight from the survey shows the Australian public is generally ambivalent in its trust towards AI systems:
“If left unaddressed this is likely to impair societal uptake and the ability of Australia to realise the societal and economic benefits of AI at a time when investment in these new technologies is likely to be critical to our future prosperity. The report provides a roadmap for what to do about this,” she said.
Four key drivers of trust in AI
The report emphasises four key drivers that influence Australians’ trust in AI systems:
- Adequate regulation – beliefs about the adequacy of current regulations and laws to make AI use safe.
- Impact on society – the perceived uncertain impact of AI on society.
- Impact on jobs – the perceived impact of AI on jobs.
- Understanding of AI – the familiarity with and extent of understanding of AI.
“Of these drivers, the perceived adequacy of current regulations and laws is clearly the strongest,” said Professor Gillespie. “This demonstrates the importance of developing adequate regulatory and legal mechanisms that people believe will protect them from the risks associated with AI use. Our findings suggest this is central to shoring up trust in AI.”
She noted that one reason for the lack of confidence in commercial organisations to develop and regulate AI may be that people think such organisations are motivated to innovate to cut labour costs and increase revenue, rather than to help solve societal problems and enhance societal wellbeing.
“About three quarters (76 per cent) of the public believe commercial organisations innovate with AI for financial gain, whereas only a third (35 per cent) believe they innovate with AI for societal benefit,” said Professor Gillespie. “That opens up an opportunity for business to invest in and better communicate to Australians about how they are using AI and emerging technologies to create mutual benefit and societal good.”
James Mabbott, National Leader KPMG Innovate, pointed to the survey finding that Australians generally disagree (43-47 per cent) or are ambivalent (19-21 per cent) about the adequacy of current safeguards around AI (such as rules, regulations and laws).
“Survey respondents question whether the regulations are sufficient to make the use of AI safe or to protect them from problems. Similarly, the majority either disagree or are ambivalent that the government adequately regulates AI,” he said. “This is where innovation is needed – in understanding that trust acts as the central vehicle through which the other drivers influence AI acceptance – and in delivering certainty. We need to be more creative in providing these solutions and assurances, and in communicating them effectively.”
Mr Mabbott said the report’s findings and its defined action plan provided a key opportunity to enhance vitally needed public trust in AI, accelerating its benefits whilst mitigating potential harms.
How to build trust in AI: a “roadmap”
According to the findings from the University of Queensland/KPMG Australia Trust in Artificial Intelligence report, the key ways to build trust in AI are:
- Live up to Australians’ expectations of trustworthy AI
- establish mechanisms to ensure high standards of AI systems in terms of performance and accuracy, data privacy and security, transparency and explainability, accountability, fairness, risk and impact mitigation, and appropriate human oversight. Each of these principles is important for trust.
- undertake regular in-house and independent ethical reviews of AI systems to ensure they operate according to the principles of trustworthy AI.
- develop AI systems for the benefit of citizens, customers and employees, and better demonstrate how the use of AI supports societal health and wellbeing.
- conduct strategic long-range workforce planning and provide retraining opportunities to those affected by automation.
- recognise that while most Australians are comfortable with AI use at work for organisational security and task automation and augmentation, they are less comfortable with AI for employee-focused activities, such as evaluating and monitoring performance, and recruitment and selection.
- build trust with customers, employees and the public more broadly – it is not enough to focus on only one stakeholder group.
- consider that different cohorts in the workplace and community have different views about AI, with younger people and the university educated being more trusting and accepting of AI.
- Strengthen the regulatory framework for governing AI
- government and the companies deploying AI must carefully manage the challenges associated with AI such as fake online content, surveillance, data privacy, cyber security, bias, technological unemployment and autonomous vehicles.
- strengthen the regulatory and legal framework governing AI to better protect people from the risks and support people to feel safe using AI.
- ensure government and existing regulators take the lead in regulating and governing AI systems, rather than leaving it to industry only.
- given the public has the most confidence in Australian universities and research and defence organisations to develop and use, regulate and govern AI systems, there is an opportunity for business and government to partner with these organisations around AI initiatives.
- adopt assurance mechanisms that support the ethical deployment of AI systems such as establishing independent AI ethics reviews, adopting codes of conduct and national standards, and obtaining AI ethics certification.
- Strengthen Australia’s AI literacy
- address the Australian community’s generally low awareness and understanding of AI and its use in everyday life by helping people understand it. Familiarity with and understanding of AI are key drivers of trust in and acceptance of AI.
- educate the community about what AI is and when and how it is being used.
- invest in and enhance Australia’s AI literacy with responsibility shared by government and organisations using or developing AI.
The full report is available here.
About the Report
Definition of Artificial Intelligence
For the purposes of the survey, Artificial Intelligence (AI) refers to computer systems that can perform tasks or make predictions, recommendations or decisions that usually require human intelligence. AI systems can perform these tasks and make these decisions based on objectives set by humans but without explicit human instructions.
Survey Methodology
The University of Queensland/KPMG Australia “Trust in Artificial Intelligence” national survey is the first of its kind to take a deep look at community trust and expectations in relation to AI. The survey involved a nationally representative sample of over 2,500 Australian citizens and was conducted in June and July 2020.