Trust in Artificial Intelligence
A global study
2023
uq.au
KPMG.au

Citation
Gillespie, N., Lockey, S., Curtis, C., Pool, J., &
Akbari, A. (2023). Trust in Artificial Intelligence:
A Global Study. The University of Queensland
and KPMG Australia. doi:10.14264/00d3c94
University of Queensland Researchers
Professor Nicole Gillespie, Dr Steve Lockey,
Dr Caitlin Curtis and Dr Javad Pool.
The University of Queensland team led the
design, conduct, analysis and reporting of
this research.
KPMG Advisors
James Mabbott, Rita Fentener van Vlissingen,
Jessica Wyndham, and Richard Boele.
Acknowledgements
We are grateful for the insightful input, expertise
and feedback on this research provided by
Dr Ali Akbari, Dr Ian Opperman, Rossana Bianchi,
Professor Shazia Sadiq, Mike Richmond, and
Dr Morteza Namvar, and members of the
Trust, Ethics and Governance Alliance at The
University of Queensland, particularly Dr Natalie
Smith, Associate Professor Martin Edwards,
Dr Shannon Colville and Alex Macdade.
Funding
This research was supported by an Australian
Government Research Support Package grant
provided to The University of Queensland AI
Collaboratory, and by the KPMG Chair in Trust
grant (ID 2018001776).
Acknowledgement of Country
The University of Queensland (UQ)
acknowledges the Traditional Owners and their
custodianship of the lands. We pay our respects
to their Ancestors and their descendants, who
continue cultural and spiritual connections
to Country. We recognise their valuable
contributions to Australian and global society.
© 2023 The University of Queensland ABN: 63 942 912 684 CRICOS Provider No: 00025B.
© 2023 KPMG, an Australian partnership and a member firm of the KPMG global organisation of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved. The KPMG name and logo are trademarks used under license by the independent member firms of the KPMG global organisation.
Liability limited by a scheme approved under Professional Standards Legislation.

Contents
Executive summary 02
Introduction 07
How we conducted the research 08
1. To what extent do people trust AI systems? 11
2. How do people perceive the benefits and risks of AI? 22
3. Who is trusted to develop, use and govern AI? 29
4. What do people expect of the management, governance and regulation of AI? 34
5. How do people feel about AI at work? 43
6. How well do people understand AI? 53
7. What are the key drivers of trust in and acceptance of AI? 60
8. How have trust and attitudes towards AI changed over time? 66
Conclusion and implications 70
Appendix 1: Method and statistical notes 73
Appendix 2: Country samples 75
Appendix 3: Key indicators for each country 77
Executive summary
Artificial Intelligence (AI) has become a ubiquitous part of everyday life and work.
AI is enabling rapid innovation that is transforming the way work is done and
how services are delivered. For example, generative AI tools such as ChatGPT
are having a profound impact. Given the many potential and realised benefits for
people, organisations and society, investment in AI continues to grow across all
sectors1, with organisations leveraging AI capabilities to improve predictions,
optimise products and services, augment innovation, enhance productivity and
efficiency, and lower costs, amongst other beneficial applications.
However, the use of AI also poses risks and challenges, raising concerns about
whether AI systems (inclusive of data, algorithms and applications) are worthy
of trust. These concerns have been fuelled by high profile cases of AI use
that were biased, discriminatory, manipulative, unlawful, or violated human
rights. Realising the benefits AI offers and the return on investment in these
technologies requires maintaining the public’s trust: people need to be confident
AI is being developed and used in a responsible and trustworthy manner.
Sustained acceptance and adoption of AI in society are founded on this trust.
This research is the first to take a deep dive examination into the public’s trust
and attitudes towards the use of AI, and expectations of the management and
governance of AI across the globe.
We surveyed over 17,000 people from 17 countries covering all global regions:
Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel,
Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom
(UK), and the United States of America (USA). These countries are leaders in
AI activity and readiness within their region. Each country sample is nationally
representative of the population based on age, gender, and regional distribution.
We asked survey respondents about trust and attitudes towards AI systems in
general, as well as AI use in the context of four application domains where AI is
rapidly being deployed and likely to impact many people: in healthcare, public safety
and security, human resources and consumer recommender applications.
The research provides comprehensive, timely, global insights into the public’s
trust and acceptance of AI systems, including who is trusted to develop,
use and govern AI, the perceived benefits and risks of AI use, community
expectations of the development, regulation and governance of AI, and how
organisations can support trust in their AI use. It also sheds light on how people
feel about the use of AI at work, current understanding and awareness of AI,
and the key drivers of trust in AI systems. We also explore changes in trust and
attitudes to AI over time.
Next, we summarise the key findings.
Most people are wary about trusting AI systems and have low or moderate acceptance of AI; however, trust and acceptance depend on the AI application

Across countries, three out of five people (61%) are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust. Trust is particularly low in Finland and Japan, where less than a quarter of people report trusting AI. In contrast, people in the emerging economies of Brazil, India, China and South Africa (BICS2) have the highest levels

People perceive the risks of AI in a similar way across countries, with cybersecurity rated as the top risk globally

While there are differences in how the AI benefit-risk ratio is viewed, there is considerable consistency across countries in the way the risks of AI are perceived. Just under three-quarters (73%) of people across the globe report feeling concerned about the potential risks of AI. These risks inc