
Background Guide for the UNCTAD

Contents

I. Welcome Letter
II. Introduction to the Committee
A. Function of the UNCTAD
B. Instruments under the UNCTAD
III. Introduction to the Topic
A. Current State of Development and Application Scenarios
1. Rapid expansion and market potential (frontier aspects, market size or growth rate)
1.1 Concentration of Research and Development
1.2 Evolution and Synergy
B. Existing Technological and Societal Risks
1. Technological Risks: Intellectual Property Uncertainty
1.1 Source Ambiguity
1.2 Authorship and Accountability
1.3 Legal and Ethical Lag
2. Societal Risks: Asymmetrical Knowledge Creation and Cultural Bias
2.1 Cultural Bias and the Dilution of Creative Standards
2.2 AI and Social Inequality
3. Economic and Structural Risks
3.1 Labor Market Disruption
3.2 Platform Monopoly and Technological Oligopoly
IV. Inclusive AI Construction
A. Overview of Current Global Technological Developments
B. Existing technical, social risks and issues
1. Technological Concentration and Digital Divide
2. Algorithmic Bias and Cultural Exclusion
3. Intellectual Property Infringement & Misuse of AI-generated Content
4. Lack of Transparency and Public Trust
C. Inclusive development and technology governance
1. Cultural Differences and Consumer Bias Adaptation (Regional AI Innovation Construction)
2. Infrastructure Construction (AI for Creative Empowerment)
3. Data access and feasible technical cooperation
D. Global Governance Mechanism Construction
1. Intellectual property and data ownership issues
2. Global Equal Opportunities and the Participation of Marginal Groups
E. Possible Directions of Solutions
V. Reliable Artificial Intelligence Construction
A. Fundamental Problems in AI safety
1. Negative Side Effects Avoidance
1.1 Ethical side effects: uncontrolled automatic decision-making and the lack of machine ethics
1.2 Social side effects: algorithmic bias and discriminatory impact
1.3 Governance dilemma and public distrust
2. Safe Exploration and Data Fringe
2.1 The Problem of Unsafe Exploration
2.2 Challenges at the Data Fringe
3. Robustness in Distribution Shift
3.1 Understanding Distribution Shift
3.2 Examples of Vulnerability
3.3 Policy and Deployment Considerations
3.4 International Relevance
B. Actions taken by Member States on AI Safety
1. Setting Overarching Approaches and Strategies
1.1 China
1.2 European Union
1.3 United States
2. Countries with policies catching up
2.1 Brazil
2.2 Cote d'Ivoire
2.3 Japan
2.4 Republic of Korea
3. Building Data for Responsible AI
3.1 Chile
3.2 Germany
3.3 India
3.4 Colombia
3.5 Singapore
C. Actions taken by the UN System on AI Safety
1. Actions directly addressing AI Safety
1.1 Actions taken by UNCTAD
1.2 UN General Assembly's DR on AI management
1.3 Principles for the Ethical Use of Artificial Intelligence in the United Nations System
1.4 High-level Advisory Body on AI
2. Instruments interrelated to AI Safety
2.1 UNCTAD eWeek
2.2 UNICRI Centre for AI and Robotics
References

I. Welcome Letter

We are not in Kansas anymore.

Such a revolutionary era, far removed from conventional experience, has been brought to us at a speed beyond our wildest imagination. So fast, indeed, that hardly anyone can foretell where the latest developments in artificial intelligence technology will usher the world.

Those adventurers, filled with the boredom one would get from Kansas, see AI as a stimulant for their creative economy, with scripts and movies, journalistic texts, music, images, captions, animations, and virtual reality content brought about by snaps and clicks: AIGC, or AI-generated content. Even the most seemingly disciplined salaryman would use AI to extract, collect and enhance information for a better post-production workflow. It seems, somehow, that we are in an age of constant improvement and progress, with the tools in hand undergoing nearly the same progress.

However, those who were left inside Kansas might see a different picture. Limited data access might not be a problem for Meta, as it boasts a giant source of chips and of data from those who consume them. Lack of necessary skills might not be a problem for DeepSeek, for which merely handling the massive number of CVs handed in daily is already a great effort. Insufficient digital infrastructure might likewise not be a problem for most countries with an AI technology industry. Such is the fact in a post-Kansas world, where technology, while destroying barriers, has in itself made more.

To bridge the gap with leading economies, UNCTAD promotes "AI for all", with developing countries swiftly implementing AI policies to overcome barriers to diffusion consistent with their development strategies and goals, while addressing the possible economic and social downsides. "AI for all" further highlights the need for global collaboration to make AI accessible and beneficial for all, fostering inclusive innovation to tackle global challenges. As AI development is highly concentrated in a few countries and companies, stronger international cooperation is crucial to co-create inclusive governance mechanisms and to ensure that AI will drive safe and sustainable progress rather than deepen existing inequalities.

We shall meet outside of Kansas, as a whole.

II. Introduction to the Committee

A. Function of the UNCTAD

Established in 1964, the United Nations Conference on Trade and Development (UNCTAD) has been playing a key role in supporting the development agenda as a principal organ under the UN Secretariat. UNCTAD's primary function is to help developing countries participate more equitably in the global economy. It supports their efforts to use trade, investment, finance, and technology as vehicles for inclusive and sustainable development.

One of UNCTAD's core roles is that of a global think tank. It conducts in-depth reports, policy reviews, and economic research and analysis on key areas such as the digital economy, trade and development, and technology and innovation (for details, see Section B: Instruments under the UNCTAD). In addition to research, UNCTAD provides hands-on technical assistance and several comprehensive tools, such as the ASYCUDA customs software and the Empretec programme, its flagship capacity-building initiative. It also provides policy recommendations to support government decision-making. Furthermore, UNCTAD serves as a forum that promotes international cooperation. It holds quadrennial ministerial conferences, board meetings, and expert group meetings, covering a wide range of development-related topics. High-profile events such as the World Investment Forum and the UN Trade Forum are also hosted under its framework (UNCTAD, 2024).

Building on these overarching institutional functions, UNCTAD further helps governments to advance their development goals via a series of targeted actions. According to UNCTAD at a Glance (2023), its work focuses on twelve priority areas, including achieving integration into the global trading system, limiting exposure to financial volatility and debt, attracting development-friendly investment, increasing access to digital technologies, promoting entrepreneurship and innovation, and protecting consumers from abuse. For instance, as part of UNCTAD's engagement with the creative economy, it has highlighted the potential impact of artificial intelligence on creative industries in developing countries and has initiated global dialogue to help policymakers address challenges such as digital divides, data governance, and algorithmic bias (UNCTAD, 2024).

B. Instruments under the UNCTAD

UNCTAD uses a variety of tools to promote inclusive and sustainable development, particularly in developing countries. To begin with, in its capacity as a global think tank, UNCTAD produces analytical reports and policy studies to inform decision-making by providing data support and strategy development guidance. Its flagship publications (UNCTAD, 2023) include the Trade and Development Report, World Investment Report, The Least Developed Countries Report, Economic Development in Africa Report, Digital Economy Report, Technology and Innovation Report, and the Review of Maritime Transport.

In addition to its research functions, UNCTAD also provides practical tools and technical assistance projects, directly serving national development strategies. One notable tool is the Automated System for Customs Data (ASYCUDA), a software system that modernizes and simplifies customs processes, thereby promoting trade efficiency. Another is Empretec (UNCTAD, n.d.), a flagship training programme aimed at fostering entrepreneurship and enhancing the innovative capabilities of small and medium-sized enterprises.

Beyond UNCTAD's research and technical work, it also serves as an intergovernmental forum for policy dialogue and consensus-building. It organizes quadrennial ministerial conferences, board meetings, multi-year expert meetings, and specialized global events such as the World Investment Forum (UNCTAD, 2023), eWeek, and the UN Trade Forum. These platforms bring together governments, academia, business, and civil society to discuss key issues related to trade and development.

More recently, in response to emerging technological trends, UNCTAD has increasingly integrated digitalization and artificial intelligence (AI) issues into its research and policy toolkit. In its 2024 presentation "Digitalisation, artificial intelligence and the creative economy", UNCTAD analyzed the potential impact of AI technology on the creative industries of developing countries (UNCTAD, 2024), identifying both innovation opportunities and structural risks such as algorithmic bias, data concentration, and the widening of the digital divide. Through policy dialogue and knowledge dissemination, UNCTAD is committed to assisting member states in developing strategies that harness the development potential of AI while effectively mitigating its negative impacts, thereby promoting more inclusive and innovation-driven economic growth.

III. Introduction to the Topic

A. Current State of Development and Application Scenarios

1. Rapid expansion and market potential (frontier aspects, market size or growth rate)

Artificial intelligence (AI) has been rapidly reshaping the global creative economy in recent years. From generative image and video creation to AI-assisted music composition, architectural design, and automated journalism, AI technologies are increasingly integrated into the workflows of artists, media producers, and other creative professionals. This not only propels the evolution of cultural production techniques but also disrupts conventional commercial structures and broadens the scope of artistic expression.

The applications of generative AI in the creative sectors are expanding by supporting new forms of content production and collaboration. It has been widely applied to various creative fields, including text, music, video, architecture, and design. It is not only used for creation but can also enhance the quality of finished products, co-produce with humans, and even support immersive experiences. Moreover, beyond digital applications, AI has also been adopted in traditional handicrafts and performing arts to help local communities protect and disseminate their culture as well as adapt traditional expressions to modern platforms (UNCTAD, 2025). These cutting-edge applications thereby demonstrate AI's potential to support both innovation and inclusion in creative economies, especially in developing countries.

This expansion is strongly supported by market data. According to the Generative AI In Creative Industries Global Market Report 2025, the market for generative AI in the creative industries is projected to grow from $3.08 billion in 2024 to $4.09 billion in 2025, a compound annual growth rate (CAGR) of 32.8%, and is expected to reach $12.61 billion in 2029 at a CAGR of 32.5% (The Business Research Company, 2025). The growth can be attributed to the rise of creative coding communities, public awareness and interest, and the rise of open-source frameworks in the past one to two years. For the forecast period, the contributing factors could be improved data efficiency and few-shot learning, ethical and inclusive AI practices, and cross-domain creative applications. Major trends in the forecast period include AI-powered content generation and automation, cross-domain creativity, interactive and immersive experiences, and AI-driven design and innovation.
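
The cited figures are consistent with simple compound growth. As a quick check, the short Python sketch below (illustrative only) recomputes the CAGR implied by the 2024 and 2025 figures and projects the 2025 market size forward to 2029 at the stated rate.

```python
# A quick check of the projection figures cited above (The Business Research
# Company, 2025): market size compounding at a constant annual growth rate.

def project(start_value: float, cagr: float, years: int) -> float:
    """Compound a starting market size forward by `years` at rate `cagr`."""
    return start_value * (1 + cagr) ** years

def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Back out the CAGR implied by two market-size observations."""
    return (end_value / start_value) ** (1 / years) - 1

# 2024 -> 2025: $3.08bn -> $4.09bn
print(f"implied 2024-25 CAGR: {implied_cagr(3.08, 4.09, 1):.1%}")  # ~32.8%
# 2025 -> 2029 at 32.5% per year
print(f"projected 2029 size: ${project(4.09, 0.325, 4):.2f}bn")    # ~$12.61bn
```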

1.1 Concentration of Research and Development

The global landscape of research and development (R&D) in frontier technologies, particularly artificial intelligence (AI), is increasingly dominated by a small number of countries and corporations. According to the 2024 EU Industrial R&D Investment Scoreboard, the world's top 2,000 R&D-investing companies dedicated €1,257.7 billion to R&D in 2023, an all-time record accounting for 85–90% of global private R&D funding (European Commission, Joint Research Centre, 2024). Notably, the top 50 investors alone accounted for over 40% of global R&D expenditure, with 22 based in the United States, followed by 11 in the EU and 5 each in China and Japan.

This trend of R&D concentration extends beyond finance into knowledge creation. According to the 2025 Technology and Innovation Report (UNCTAD, 2025), over the period 2000–2023, for AI alone, more than 713,000 peer-reviewed scientific articles were published and 338,000 patents were filed, with a sharp increase since 2020. However, this rapid expansion is unequally distributed: knowledge creation in frontier technologies is dominated by China and the United States, which together account for around one third of global peer-reviewed articles and two thirds of patents. The two countries are even more dominant in patents than in scientific articles. Moreover, some countries tend to specialize in specific subfields of technology, for example Japan in electric vehicles and the Republic of Korea in 5G technologies (UNCTAD, 2025).

Current AI research and development shows a clear trend toward technological concentration. Industry analysis indicates that generative AI, computer vision, natural language processing (NLP), and autonomous systems constitute the primary areas of technological breakthrough. This trend is particularly evident in investment: deep learning frameworks and large language models are the fastest-growing areas, with a compound annual growth rate of 85.7% (IDC, 2024), and content generation and translation technologies occupy the main application scenarios. At the same time, AI technology is becoming a core driver in multiple high-growth sectors, including virtual reality (VR), blockchain-based authentication of creative works, and emotion recognition systems in the entertainment industry.

This development trend aligns with the observations of the United Nations Conference on Trade and Development (UNCTAD): AI-related R&D is increasingly integrating with the cultural and creative sectors. For instance, in game development, generative AI has enabled the automated generation of scenes and characters; the fashion industry is optimizing design processes through computer vision technology; and music synthesis and digital cultural heritage preservation rely on NLP and deep learning technologies (UNCTAD, 2025). Corporate practices further validate this trend: NVIDIA's Jetson platform is driving the industrial deployment of autonomous systems, while HiDream.ai's visual large-scale model has achieved commercialization, highlighting the synergistic acceleration of technological R&D and industrial applications.

However, this high level of concentration could pose challenges for inclusivity and equitable development. At both the corporate and national levels, market dominance risks widening global technological divides, making it even more difficult for latecomers to catch up, particularly given the slowdown in technology diffusion observed in recent decades. The cost of training frontier AI models has increased 2.4 times annually since 2016 (UNCTAD, 2025). In addition, technology development and innovation in developing countries can also be hindered by data and intellectual property policies in developed countries.

Therefore, despite the rapid advancement of cutting-edge technologies, the barriers to accessing them are becoming increasingly high. UNCTAD calls for the promotion of more inclusive innovation policies globally, particularly to enhance the technological capabilities of countries in the Global South, support the development of AI systems that are multilingual and culturally diverse, and promote South-South cooperation (UNCTAD, 2025). Otherwise, the rapid evolution of AI and other advanced technologies may instead widen existing development gaps.

1.2 Evolution and Synergy

To understand the promises and perils of AI, it is essential to trace its evolution and its convergence with other frontier technologies. Artificial intelligence has progressed in three major waves. The first wave occurred in the 1950s and 1960s, focusing on rule-based systems that used pre-determined rules of choices to make decisions. After periods of stagnation known as "AI winters", the second wave began in the 1990s and was based on statistical learning, driven by advances in computational power, vast data availability, and improved algorithms. Landmark events during this wave were the creation of ImageNet in 2007 and the rise of digital assistants like Siri in 2011. However, AI at this stage was considered narrow AI, confined to specific tasks within limited domains (UNCTAD, 2025).

The third and current wave, beginning in the 2020s, is characterized by the rise of generative AI (GenAI), which utilizes natural language processing and large language models to generate new content rather than just classify data. Unlike predictive AI, which typically analyzes and categorizes data to predict outcomes, GenAI is designed to identify relationships in large datasets and use them to create new, original content such as texts, images, and videos. Notable examples include ChatGPT, DALL·E, and Sora. This shift brings immense potential, but also challenges, particularly in explainability, since the probabilistic nature of GenAI means that identical inputs may yield different results (UNCTAD, 2025).
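
To make the last point concrete, the minimal sketch below (Python, illustrative only and not drawn from the UNCTAD report; the toy distribution is invented) shows why a generative model that samples from a probability distribution over possible continuations can return different outputs for the same prompt, whereas a deterministic predictor always returns the same answer.

```python
import random

# Toy next-token distribution a language model might assign after one prompt
# (hypothetical values for illustration).
next_token_probs = {"river": 0.45, "bank": 0.35, "money": 0.20}

def generate(seed: int) -> str:
    """Sample one continuation; different seeds can give different answers."""
    rng = random.Random(seed)
    tokens, weights = zip(*next_token_probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def predict() -> str:
    """A predictive-style model simply picks the single most likely label."""
    return max(next_token_probs, key=next_token_probs.get)

print([generate(seed) for seed in range(5)])  # identical prompt, varying outputs
print(predict())                              # always the same answer
```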

These AI waves have also catalyzed synergistic transformations across multiple technological domains, making AI a general-purpose technology. According to UNCTAD, AI is already embedded in our daily life and serves as a general-purpose technology that augments other technologies (UNCTAD, 2025, p. 61). The report highlights its integration with several frontier technologies:

Internet of Things (IoT): Connected devices are increasingly enhanced by AI, allowing for autonomous data analysis, decision-making, and real-time actions with minimal human input. This convergence forms what UNCTAD refers to as the artificial intelligence of things, enabling applications like smart factories and intelligent transport systems when combined with 5G infrastructure.

Big Data: A strong synergy exists between AI and big data. AI can improve data analysis and pattern recognition, while big data can be used in training models. Applications include video surveillance that processes large data streams to detect anomalies.

Blockchain: AI is being used in cybersecurity, finance, and supply chain management. It enables threat detection and fraud prevention, while blockchain augments AI-based security measures with linked cryptographic authentication and decentralized computing power.

3D Printing: Tools like Style2Fab and 3D-GPT enable designers to test and simulate many design scenarios. GenAI facilitates design by automating virtual testing and refinement.

Robotics and Drones: AI-powered industrial robots are widely used in manufacturing, while in agriculture, AI supports automated harvesting. Drones also benefit from AI to operate autonomously and adapt to changing scenarios.

Green Technologies: AI is instrumental in optimizing renewable energy management, smart grids, and energy storage, even as its energy use raises sustainability concerns.

Nanotechnology and Gene Editing: AI drives breakthroughs in material design and genetic research, including autonomous nanorobots.

Overall, these synergies make AI not only a field of standalone innovation but also a foundational driver of the next industrial transformation. As noted in the UNCTAD report, AI may be considered the latest in a sequence of industrial revolutions, with its potential to amplify human intelligence and reshape the global economy.

B. Existing Technological and Societal Risks

1. Technological Risks: Intellectual Property Uncertainty

1.1 Source Ambiguity

AI-generated content is based on the ingestion and processing of massive amounts of data, including public databases, user-generated inputs, and online repositories. However, the sources of this training data are usually opaque to users, even when they ask AI tools to cite their sources, making it difficult to determine whether the generated content infringes on existing copyrights. A significant concern arises from the fact that certain publicly available materials are not legally permissible for reuse, yet AI systems are not inherently equipped to discern such limitations.

This ambiguity has led to a series of copyright infringement lawsuits. In 2024, multiple legal cases in the United States challenged AI companies on the grounds of unauthorized use of copyrighted materials during model training (Goetze, 2024; Wired, 2025). The central issue lies in the unresolved debate over whether AI-generated works can be considered "transformative", a key criterion under the fair use doctrine. If AI systems are indeed trained on copyrighted data without consent, the impact could be devastating for the creative industries, particularly for early-career creators, whose works are more vulnerable to such changes.

1.2 Authorship and Accountability

The question of who holds legal and moral responsibility for AI-generated content remains controversial. Current perspectives suggest three possible accountable parties: (1) end users who initiate the generation process, (2) platform operators who provide the technological infrastructure, and (3) developers who design the model's underlying mechanisms. Firstly, those who actively input prompts may be seen as co-creators and thus accountable for potential misuse or infringement, particularly when they generate content intentionally resembling copyrighted works. Secondly, platform operators might be liable for enabling and profiting from mass content production without establishing sufficient safeguards. This view has gained attention in legal discussions such as the U.S. Copyright Office's 2023 inquiry into the authorship and copyrightability of AI works. Thirdly, developers shoulder responsibility for choosing the training data, designing the model, and embedding limitations into the system.

A related dilemma is whether AI-generated content can be protected under current intellectual property laws. Traditional copyright law is based on the assumption that the creator is a human. The advent of AI, which operates autonomously and impersonally, complicates this assumption. Policymakers and regulators therefore need to address issues such as royalties for artists on streaming platforms, the reselling of e-books, and platform liability for unauthorised uploaded content (UNCTAD, 2025). The UK Intellectual Property Office (2023) clarified that works generated autonomously by AI are not eligible for copyright protection unless a human can demonstrate a meaningful creative contribution. In Germany, regulations do not protect AI artwork. Under these regulatory frameworks, AI cannot be an author, although this barrier is continuously challenged (European Parliament, 2020). However, if such content is deemed ineligible for protection, it risks becoming part of the public domain by default, potentially restraining original human creation and diluting creative markets. Scholars argue that without a revised framework that accounts for hybrid authorship, creative ecosystems may suffer both economically and ethically.

1.3 Legal and Ethical Lag

The pace of technological innovation in AI far outstrips the speed at which legal and regulatory systems adapt. This temporal mismatch has led to what scholars refer to as "regulatory lag" or "post hoc legalism". As a result, AI development often occurs in a vacuum of accountability, with "ownerless" content circulating freely in digital ecosystems. Without proactive legal structures, this gap risks enabling unregulated exploitation and undermining ethical standards in digital creativity.

2. Societal Risks: Asymmetrical Knowledge Creation and Cultural Bias

Global knowledge production in frontier technologies is disproportionately concentrated in a few countries. According to UNCTAD (2025), China and the United States lead in knowledge creation in frontier technologies, and the same concentration exists among technology corporations. This imbalance contributes to asymmetrical knowledge creation and entrenches cultural bias within AI systems, widening global technological divides.

2.1 Cultural Bias and the Dilution of Creative Standards

AI systems, mostly trained on English-language content and Western-centric data, often reflect and reproduce inherent cultural biases. Language models, for instance, demonstrate a marked preference for Western aesthetics and values. A notable example involves image generation tools producing default outputs aligned with Western beauty norms when prompted with terms such as "beauty". Moreover, AI's reliance on historical data limits its capacity for genuine innovation. As it generates content through recombination rather than original thought, its outputs are inherently derivative. This tendency can homogenize cultural expression and suppress marginalized narratives. Experiments on Chinese platforms such as Doubao have also shown how AI-generated depictions of individuals from different provinces reinforce socioeconomic stereotypes, presenting people in developed regions as decent and those in underdeveloped areas as poorly dressed and embarrassed.

As AI becomes more widely accepted, the algorithmically determined aesthetic may replace human experiential judgment, threatening the ethical integrity and diversity of cultural production. This phenomenon has been discussed by Benjamin Bratton in The Stack and in Shoshana Zuboff's theory of automated exploitation.

2.2 AI and Social Inequality

Apart from biases in cognition level, AI technologies are also amplifying existing social inequalities. Empirical studies in the U.S. have demonstrated that algorithmic systems frequently reinforce discriminatory practices, particularly against historically marginalized communities. For example, Obermeyer et al. (2019) showed that widely used health care algorithms under-allocate resources to Black patients due to biased training data. Similarly, Eubanks (2018) detailed how automated welfare systems have led to systemic exclusion of low-income individuals.

Global assessments by Trystan S. Goetze (2024) suggest that such disparities are even more pronounced in low- and middle-income countries, where infrastructural and regulatory capacities are limited. In countries lacking robust digital literacy or data protection laws, AI systems may disproportionately profile users or restrict access to essential services based on flawed or biased algorithms.

The mechanisms through which AI exacerbates inequality are manifold: 1) Algorithmic profiling: AI systems may perpetuate historical prejudices embedded in datasets. 2) Data surveillance: low-income populations are often subject to more intrusive data collection. 3) Unequal access: technological infrastructure is unevenly distributed, limiting equitable benefits. 4) Labor displacement: workers in routine or semi-skilled jobs are more likely to be replaced by automation. These systemic imbalances can exacerbate social instability and erode trust in institutions, necessitating coordinated international policy responses that prioritize fairness, transparency, and accountability in AI deployment.

3. Economic and Structural Risks

3.1 Labor Market Disruption

AIs ability to automate tasks traditionally performed by human creatives poses significant risks to the labor market. Entry-level content creators—such as copywriters, illustrators, and junior designers—are particularly vulnerable to displacement. This trend may transform temporary frictional unemployment into long-term structural unemployment if not mitigated by proactive reskilling initiatives.

Currently, the educational and vocational training systems in most countries lag behind the evolving skill demands of the AI-driven economy. Without targeted investment in AI literacy and digital upskilling, the workforce faces a paradox: while innovation drives demand for new competencies, widespread job displacement continues due to technological substitution.

3.2 Platform Monopoly and Technological Oligopoly

The intensification of AI innovation demands substantial capital investments in computing resources, talent acquisition, and data infrastructure. These barriers to entry have resulted in a market dominated by a few transnational corporations with access to proprietary algorithms and high-performance hardware.

This technological oligopoly not only stifles competition from smaller enterprises but also deepens the global digital divide. Developing countries and smaller firms often lack the infrastructural backbone and policy support to engage meaningfully in the AI economy. Consequently, AI markets increasingly exhibit characteristics of monopolistic competition, with a handful of actors controlling both innovation trajectories and value distribution.

IV. Inclusive AI Construction

A. Overview of Current Global Technological Developments

Concern for man himself and his fate must always form the chief interest of all technical endeavors. Never forget this in the midst of your diagrams and equations.

-- Albert Einstein, Out of My Later Years

Artificial intelligence (AI) is sparking interest among policymakers globally as a powerful tool to unlock new opportunities for sustainable development. Over 70 countries have already published AI policies or initiatives, with numerous more in progress around the world. AI technology and applications are developing at record pace, as evidenced by the rapid and widespread adoption of generative AI (GenAI) – new tools and applications which create original text, audio, image, and video content. One striking benchmark is to consider the pace at which various technologies have permeated our lives. It took 75 years for fixed telephones to reach 100 million users globally.

Frontier technologies are advancing rapidly, with a market size projected to grow sixfold by 2033, to $16.4 trillion. Market power, research and development (R&D) investment, knowledge creation and the development and deployment of these technologies are dominated by technology giants from developed countries.

Knowledge creation in frontier technologies has been gathering pace, with a rapid rise in research publications and patents. Over the period 2000–2023, for AI alone, more than 713,000 peer-reviewed scientific articles were published and 338,000 patents were filed, yet this output remains concentrated in a few countries and companies, further widening existing gaps. For much of this period, AI was largely confined to specific tasks within limited domains and did not possess human-like intelligence; this is considered narrow artificial intelligence, or weak AI. The third and current wave gathered momentum in the 2020s, with significant computing power applied to systems that are not only rule-based but also seek contextual adaptation, factor in contexts and explain decisions. Recent years have seen the emergence of GenAI, driven by advances in natural language processing, into a general-purpose technology.

The responsible adoption of AI has substantial potential to drive inclusive growth and economic development in emerging economies. Investing in AI and digital innovation prepares countries to generate new business models and participate in the global economy. According to UNESCO, AI may add $13 trillion to the global economy by 2030 and increase global GDP by 1.2%. It can boost productivity and efficiency in key economic sectors and in public services to overcome resource gaps, ranging from health and education to transportation and finance. Strategic adoption of technologies can provide employment opportunities for youth, innovators and entrepreneurs to participate in global AI value chains.

When it comes to global AI governance and collaboration, international AI governance initiatives are highly fragmented and dominated by developed countries. AI technology is largely controlled by a few technology giants, which are likely to prioritize profits over societal benefits, and it can be deployed virtually anywhere, extending its influence beyond borders. Governments are therefore striving to establish international guidance on AI development that favors the public interest and promotes AI as a public good. Most developing countries have significant stakes in the future of AI but have limited influence over the direction it takes, which may result in a failure of global AI governance. This requires multi-stakeholder cooperation to make AI accessible and beneficial for everyone and to foster inclusive innovation in tackling global challenges. A comprehensive global framework for AI should incorporate accountability mechanisms for companies, governments and institutions. In its report, UNCTAD advocates an "AI for all" approach, addressing infrastructure, data and skills, to steer the technology towards shared goals and values.

Over the years, the United Nations has made a significant contribution to the global discourse on AI governance. For example, since 2017, ITU has organized sessions of the AI for Good Global Summit, a key platform that identifies AI applications to advance the Sustainable Development Goals and scales such applications for global impact. Other important United Nations platforms for advancing understanding of science and technology are the Commission on Science and Technology for Development (CSTD) and the Multi-stakeholder Forum on Science, Technology and Innovation for the Sustainable Development Goals (STI Forum). In 2024, the United Nations General Assembly adopted two resolutions, one on seizing the opportunities of safe, secure and trustworthy AI systems for sustainable development (United Nations General Assembly, 2024a) and one on enhancing international cooperation on capacity-building of AI (United Nations General Assembly, 2024b). At present, international governance mechanisms and institutions are broadly stagnating, making it increasingly difficult to reach basic consensus and establish collective rules on global issues. It is therefore all the more urgent to restore basic international cooperation under the framework of international organizations such as UNCTAD.

B. Existing technical, social risks and issues

1. Technological Concentration and Digital Divide

The rapid advancements in artificial intelligence have widened the digital divide, creating what is now known as the AI divide. This divide represents the unequal access, benefits, and opportunities in AI technology across various regions, communities, and socioeconomic groups. The most marginalized communities, including women, people of color, disabled individuals, LGBTQ+ persons, and others, bear the brunt of this divide. To bridge this gap, embracing and promoting AI literacy is paramount. Understanding the basics of AI is essential for everyone to thrive in this rapidly evolving landscape.

Fear is a significant barrier to AI literacy. Many people are apprehensive about AI, as evidenced by a recent survey across 31 countries in which nearly equal numbers of adults reported being nervous (52%) and excited (54%) about AI products and services. This fear often overshadows the natural curiosity and excitement that new technologies typically generate. To overcome this challenge, it is crucial to provide accessible and relatable AI education that addresses these fears and stimulates curiosity.

Studies indicate that the increasing prevalence of AI varies in terms of understanding and awareness, particularly among underrepresented groups. A fear of AI-biased outcomes and concerns about the negative impacts of AI are stifling interest in understanding how to use the technology to improve lives. This gap is evident in the workforce, where women are more likely to be exposed to AI-related job changes but face a significant skills gap compared to men.

2. Algorithmic Bias and Cultural Exclusion

Racial bias remains a significant issue in the development of AI systems. There are already several examples of AI bias relating to race, including the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, which predicted the likelihood that US criminals would re-offend. In 2016, ProPublica investigated COMPAS and found that the system was far more likely to label black defendants as at risk of reoffending than their white counterparts. While it correctly predicted reoffending at a rate of around 60% for both black and white defendants, COMPAS misclassified almost twice as many black defendants (45%) as higher risk compared with white defendants (23%).
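
The 45% and 23% figures are group-wise false positive rates. The sketch below (Python, using a handful of made-up toy records rather than the actual ProPublica data) shows how such a rate is computed: among defendants who did not reoffend, the share who were nonetheless labelled high risk.

```python
# Hypothetical toy records: (group, labelled_high_risk, reoffended).
# Illustrative only; not the ProPublica dataset.
records = [
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", True, True),  ("black", False, True),
    ("white", True, False), ("white", False, False), ("white", False, False),
    ("white", True, True),  ("white", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` who were labelled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("black", "white"):
    print(g, f"{false_positive_rate(g):.0%}")  # here: black 67%, white 33%
```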

AI can also reflect racial prejudices in healthcare, as was the case with an algorithm used by US hospitals. This algorithm, applied to over 200 million people, was designed to predict which patients required extra medical care. It analysed their healthcare cost history, presuming that cost reflects a person's healthcare needs. However, this presumption failed to account for the different ways in which black and white patients incur healthcare expenses. A 2019 paper in Science explains how black patients are more likely to pay only for active interventions such as emergency hospital visits, despite showing signs of uncontrolled illnesses. As a result, black patients received lower risk scores than their white counterparts.
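
A minimal sketch of the mechanism described above, using invented numbers rather than the actual hospital data: if a model scores "risk" from past healthcare cost as a proxy for healthcare need, a group that incurs lower costs at the same severity of illness will systematically receive lower scores.

```python
# Hypothetical illustration of proxy-label bias (made-up figures, not the
# real algorithm): "risk" is keyed to past cost, and one group incurs lower
# cost at the same severity of illness because of barriers to care.

# (group, illness_severity on a 0-10 scale, annual healthcare cost in $)
patients = [
    ("A", 8, 9000), ("A", 8, 8500), ("A", 5, 4000),
    ("B", 8, 5000), ("B", 8, 4500), ("B", 5, 2500),
]

def cost_proxy_score(cost: float, max_cost: float = 10000) -> float:
    """A 'risk' score that simply rescales past cost, used as a proxy for need."""
    return min(cost / max_cost, 1.0)

for group in ("A", "B"):
    same_severity = [p for p in patients if p[0] == group and p[1] == 8]
    avg = sum(cost_proxy_score(c) for _, _, c in same_severity) / len(same_severity)
    print(group, f"avg score at severity 8: {avg:.2f}")
# Group B is equally ill but, having spent less, is ranked as lower risk and
# would be allocated less extra care under a cost-based cut-off.
```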

At the same time, many communities and cultures are excluded from AI tools, leading to missed opportunities and increased risks from bias and misinformation. Scholars find that large language models suffer from a digital divide: the ChatGPTs and Geminis of the world work well for the 1.52 billion people who speak English, but they underperform for the world's 97 million Vietnamese speakers, and do even worse for the 1.5 million people who speak the Uto-Aztecan language Nahuatl.

The main culprit is data: these non-English languages lack the quantity and quality of data needed to build and train effective models. That means most major LLMs are predominantly trained on English (or other high-resource language) data, or on poor-quality local language data, and are not attuned to the contexts and cultures of the rest of the world.

3. Intellectual Property Infringement & Misuse of AI-generated Content

Generative artificial intelligence is one of the most conspicuous examples of how AI can infringe on intellectual property. It utilizes data lakes and content snippets to uncover patterns and relationships, and it is becoming increasingly prevalent in creative industries. However, the legal implications of using generative AI remain unclear, particularly concerning copyright infringement, ownership of AI-generated works, and unlicensed content in training data. Courts are currently attempting to establish how intellectual property laws should be applied to generative AI, and several cases have already been initiated. To protect themselves from these risks, companies utilizing generative AI need to ensure compliance with the law and take steps to mitigate potential risks, such as ensuring they use training data free from unlicensed content and developing methods to demonstrate the provenance of generated content.

4. Lack of Transparency and Public Trust

With tens of billions invested in AI last year and leading players such as OpenAI seeking trillions more, the tech industry is racing to add to the ever-growing pile of generative AI models. The goal is to consistently demonstrate improved performance and, in doing so, narrow the gap between human capabilities and what can be achieved with AI.

While technology promises immense benefits, it also presents considerable challenges. One of the most significant issues facing the adoption of AI is the enigma of its inner workings, often referred to as the black box problem. AI technologies are based on complex algorithms and mathematical models that are not easily understood, even by experts in the field. As AI continues to integrate into critical decision-making systems, the lack of understanding about how it arrives at certain conclusions becomes a key concern. Understanding these algorithms is crucial for ethical implementation, risk assessment, and potential regulation. Despite numerous efforts to develop explainable AI systems, many AI technologies remain opaque. As we proceed with the adoption of AI across various sectors, it becomes crucial to address this lack of transparency. Failure to do so could result in ethical dilemmas, regulatory hurdles, and a general mistrust of the technology among the public.

C. Inclusive development and technology governance

1. Cultural Differences and Consumer Bias Adaptation (Regional AI Innovation Construction)

The globalization of artificial intelligence (AI) technologies in the creative economy has amplified concerns surrounding cultural homogenization, implicit bias, and the dominance of Anglo-American content paradigms. The architecture of most AI models is inherently influenced by the nature of their training datasets, which are often skewed towards Global North cultures and major world languages. This leads to algorithmic outputs that may not adequately reflect or respect the diversity of regional aesthetics, traditions, or social narratives.

To mitigate this, inclusive development strategies must prioritize regional AI innovation ecosystems that embed cultural specificity into AI design. This includes the development of linguistically localized models, training datasets representing diverse cultural outputs, and governance structures that empower local stakeholders to define what constitutes "culturally appropriate" content. For example, UNESCO's Recommendation on the Ethics of Artificial Intelligence (2021) emphasizes the need for cultural pluralism in digital systems and supports region-specific innovation frameworks. Furthermore, incentives for local creative industries and research institutions to engage in AI co-development can promote participatory technology design and greater trust in AI applications.

2. Infrastructure Construction (AI for Creative Empowerment)
2. 基础设施建设 AI for Creative Empowerment

Digital infrastructure remains a decisive factor in determining whether nations and communities can benefit from AI-based creative technologies. The International Telecommunication Union (ITU) reports that as of 2023, only 36% of individuals in least developed countries (LDCs) had access to the Internet, compared to over 90% in high-income countries. This divide limits access to AI tools, cloud computing services, digital content platforms, and opportunities for creative entrepreneurship.
数字基础设施仍然是决定国家和社区能否从基于 AI 的创意技术中受益的决定性因素。国际电信联盟 (ITU) 报告称,截至 2023 年,最不发达国家 (LDC) 只有 36% 的个人能够访问互联网,而高收入国家的这一比例超过 90%。这种鸿沟限制了获得 AI 工具、云计算服务、数字内容平台和创意创业的机会。

A governance-oriented response requires multilevel infrastructure planning that encompasses broadband expansion, access to low-cost computing resources, and the establishment of AI-enabling environments such as innovation labs and community tech hubs. Multilateral financing mechanisms, including development banks and UN partnerships, can support the deployment of shared digital platforms for AI content generation. Additionally, international cooperation under the UNCTAD framework may facilitate open-source AI solutions for small and medium-sized creative enterprises, enabling a bottom-up model of empowerment where marginalized voices can access the tools necessary for participation in the digital creative economy.
以治理为导向的应对措施需要多层次的基础设施规划,包括宽带扩展、获得低成本计算资源以及建立支持 AI 的环境,例如创新实验室和社区技术中心。包括开发银行和联合国伙伴关系在内的多边融资机制可以支持部署用于 AI 内容生成的共享数字平台 。此外,贸发会议框架下的国际合作可以促进为中小型创意企业提供开源人工智能解决方案,实现自下而上的赋权模式,让边缘化的声音能够获得参与数字创意经济所需的工具。

To design equitable and sustainable infrastructure strategies, policy interventions must be guided by accurate data and regional benchmarking. Recent assessments show a clear disparity in AI infrastructure readiness across UN regions, reflecting historical asymmetries in digital investment and innovation capacity. Figure 1 below visualizes this disparity using three normalized indicators—Internet penetration, cloud services accessibility, and the density of AI innovation hubs—across four major UN regions.
为了设计公平和可持续的基础设施战略,政策干预必须以准确的数据和区域基准为指导。最近的评估显示,联合国各区域的人工智能基础设施准备情况存在明显差异,这反映了数字投资和创新能力的历史不对称。下面的图 1 使用三个标准化指标(互联网普及率、云服务可访问性和 AI 创新中心的密度)将这种差异可视化,涵盖四个主要联合国区域。

Figure 1. Global AI Infrastructure Readiness by UN Region (2023)
图 1.按联合国区域划分的全球 AI 基础设施准备情况(2023 年)

This heatmap illustrates regional disparities in AI-enabling infrastructure using normalized indicators for internet penetration (ITU), cloud service accessibility (World Bank estimates), and AI innovation hub density (UNCTAD/UNESCO sources).
该热图使用互联网普及率 (ITU)、云服务可访问性(世界银行估计)和 AI 创新中心密度(UNCTAD/UNESCO 来源)的标准化指标,说明了 AI 支持基础设施的地区差异。

This visualization highlights the concentration of AI-enabling infrastructure in developed regions, where access to broadband, cloud services, and research institutions is significantly more widespread. Conversely, in regions such as Sub-Saharan Africa and parts of Asia-Pacific, these deficits create structural barriers to inclusive participation in the AI-driven creative economy.
这种可视化突出了支持 AI 的基础设施集中在发达地区,这些地区的宽带、云服务和研究机构的访问要广泛得多。相反,在撒哈拉以南非洲和亚太地区部分地区,这些赤字为包容性参与人工智能驱动的创意经济造成了结构性障碍。
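For delegates interested in how the normalized indicators behind a comparison such as Figure 1 can be derived, the sketch below applies simple min-max normalization to three hypothetical regional indicators and combines them into an equally weighted composite readiness score. The region names, raw values, and weighting are illustrative assumptions, not UNCTAD, ITU, or World Bank data.

```python
# A minimal sketch of min-max normalization and a composite readiness score.
# Regions, raw values, and equal weighting are illustrative assumptions.
import numpy as np

regions = ["Europe", "Asia-Pacific", "Americas", "Africa"]
# Columns: internet penetration (%), cloud accessibility index, AI hubs per 10M people
raw = np.array([
    [89.0, 0.82, 4.1],
    [64.0, 0.55, 2.3],
    [81.0, 0.74, 3.0],
    [37.0, 0.28, 0.6],
])

# Min-max normalization per indicator: (x - min) / (max - min), giving values in [0, 1]
normed = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

# Equally weighted composite readiness score per region
composite = normed.mean(axis=1)
for region, score in zip(regions, composite):
    print(f"{region}: readiness = {score:.2f}")
```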

Bridging these gaps requires not only capital investment but governance frameworks that prioritize inclusivity and regional equity. These may include regulatory reforms to facilitate infrastructure co-financing, regional capacity-building programs, and public-private partnerships that support AI infrastructure aligned with cultural and linguistic contexts. In this way, infrastructure becomes not only a technical prerequisite, but also a foundation for a more just, pluralistic, and sustainable global creative economy.
弥合这些差距不仅需要资本投资,还需要优先考虑包容性和区域公平的治理框架。这些可能包括促进基础设施联合融资的监管改革、区域能力建设计划以及支持与文化和语言背景相一致的 AI 基础设施的公私合作伙伴关系。通过这种方式,基础设施不仅成为技术先决条件,而且成为更加公正、多元化和可持续的全球创意经济的基础。

3. Data access and feasible technical cooperation
3. 数据接入和可行的技术合作

AI governance cannot be inclusive without fair and ethical access to data — the fundamental input for machine learning and generative content systems. However, data asymmetry remains a structural barrier: high-income countries and large technology firms control the majority of high-quality datasets, while developing economies often lack the infrastructure and legal frameworks for data collection, storage, and processing. This exacerbates the digital dependency of the Global South and limits local model development.
如果没有公平和合乎道德地访问数据,AI 治理就不可能具有包容性,而数据是机器学习和生成内容系统的基本输入。然而,数据不对称仍然是一个结构性障碍:高收入国家和大型科技公司控制着大部分高质量数据集,而发展中经济体往往缺乏数据收集、存储和处理的基础设施和法律框架。这加剧了全球南方的数字依赖性,并限制了本地模型的开发。

To address this, inclusive governance must promote technical cooperation through open data frameworks, federated learning models, and interregional data-sharing protocols that safeguard privacy and sovereignty. The African Union's Data Policy Framework (2022) provides a regional blueprint for such cooperation, emphasizing data governance aligned with development goals. In addition, the creation of multistakeholder platforms under the UN system could support South-South cooperation in data governance, technical standardization, and joint AI model training initiatives. These approaches facilitate not only equitable technological diffusion but also the emergence of context-relevant AI applications in sectors such as culture, education, and media.
为了解决这个问题,包容性治理必须通过保护隐私和主权的开放数据框架、联合学习模型和区域间数据共享协议来促进技术合作。非洲联盟的数据政策框架(2022 年) 为此类合作提供了区域蓝图,强调与发展目标相一致的数据治理。此外,在联合国系统下创建多利益相关方平台可以支持数据治理、技术标准化和联合 AI 模型训练计划方面的南南合作。这些方法不仅促进了公平的技术传播,还促进了文化、教育和媒体等领域与背景相关的 AI 应用的出现。
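The federated learning models mentioned above can be illustrated with a minimal sketch of federated averaging (FedAvg), in which participants train locally on their own data and share only model parameters with an aggregator, never the raw data itself. The toy datasets, the logistic-regression model, and the number of training rounds below are illustrative assumptions.

```python
# A minimal sketch of federated averaging (FedAvg): local training on private
# data, followed by size-weighted aggregation of parameters on a server.
# The datasets, model, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One participant's local training: logistic regression via gradient descent."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y)
        w = w - lr * grad
    return w

# Three "regions" hold private datasets of different sizes.
datasets = []
for n in (200, 80, 50):
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] - X[:, 2] > 0).astype(float)
    datasets.append((X, y))

w_global = np.zeros(3)
for _ in range(10):                      # federated rounds
    local_weights, sizes = [], []
    for X, y in datasets:
        local_weights.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    # The server aggregates parameters weighted by local dataset size.
    w_global = np.average(np.stack(local_weights), axis=0,
                          weights=np.array(sizes, dtype=float))

print("Aggregated global weights:", np.round(w_global, 2))
```

Because only parameters circulate, arrangements of this kind can let data remain under national or community control while still contributing to a shared model, which is why they feature in discussions of data sovereignty and South-South cooperation.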

4. Education in related professional fields and localization of digital creative talents
相关专业领域教育与数字创意人才本土化

Long-term inclusiveness in the AI-driven creative economy depends on the ability of all regions to cultivate local expertise in both technical and creative domains. Despite global awareness of the digital skills gap, UNESCO estimates that less than 10% of students in Sub-Saharan Africa receive any form of digital literacy training during secondary education. Moreover, there exists a mismatch between AI development skills and cultural production competencies in many regions.
人工智能驱动的创意经济的长期包容性取决于所有地区在技术和创意领域培养当地专业知识的能力。尽管全球都意识到数字技能差距,但联合国教科文组织估计,撒哈拉以南非洲只有不到 10% 的学生在中学教育期间接受了任何形式的数字素养培训。此外,许多地区的人工智能开发技能和文化生产能力之间存在不匹配。

Governments must adopt integrated education strategies that combine AI technical training with creative industry education, while also ensuring gender equity and access for marginalized populations. The localization of digital talent requires curricula that are responsive to local cultural contexts and delivered through accessible channels, including online education platforms and community-based training programs. International organizations can contribute by establishing regional AI education hubs, sponsoring scholarship schemes, and promoting mobility programs for digital artists and AI professionals from underrepresented communities.
政府必须采取综合教育战略,将人工智能技术培训与创意产业教育相结合,同时确保性别平等和边缘化人群的机会。数字人才的本地化需要课程响应当地文化背景,并通过可访问的渠道提供,包括在线教育平台和基于社区的培训计划。国际组织可以通过建立区域 AI 教育中心、赞助奖学金计划以及为来自代表性不足社区的数字艺术家和 AI 专业人士推广移动计划来做出贡献。

D. Global Governance Mechanism Construction
D. 全球治理机制构建

1. Intellectual property and data ownership issues
1. 知识产权和数据所有权问题

This issue can be divided into three parts: enhancing productive capacity via technology transactions, the structure of intellectual property, and international partnerships on IP and the SDGs.
这个问题可以分为三个部分:通过技术交易提高生产力能力、知识产权结构以及知识产权和可持续发展目标的国际伙伴关系

Firstly, only industries with a certain level of technological capacity can take advantage of the opportunities arising from the expiry of patents, licenses on patented products, collaborative research, and the flexibilities provided under international treaties, including the WTO Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). In addition to the use of TRIPS flexibilities, UNCTAD's work programme focuses on policy frameworks to facilitate technology transactions and related capacity building, which includes designing appropriate frameworks for international technology transactions, building capacity to negotiate transactions involving technology and innovation, and maintaining an online database of case law and jurisprudence from various jurisdictions on the use of TRIPS flexibilities.
首先,只有具有一定技术能力水平的行业才能利用专利到期、专利产品许可、合作研究以及国际条约(包括世贸组织关于贸易相关知识产权协定 (TRIPS))提供的灵活性所带来的机会。除了利用 TRIPS 的灵活性外,贸发会议的工作方案还侧重于促进技术交易和相关能力建设的政策框架,其中包括为国际技术交易设计适当的框架,建设涉及技术和创新的交易的谈判能力,来自不同司法管辖区的判例法和判例的在线数据库,关于 TRIPS 的使用。

Secondly, the IP structure needs to evolve as technology develops by leaps and bounds, which means the IP architecture must respond to the dynamics and economic realities of big data and artificial intelligence, climate change technologies, genetic engineering, and other emerging technologies. UNCTAD uses expert meetings, policy advice, and research to analyze the socio-economic aspects of these emerging technologies and the role of IP rights in the development context. Because the unlimited grant or exercise of rights, without corresponding and appropriate limitations and exceptions, has serious adverse long-term implications not only for development priorities but also for the creative and innovation process itself, limitations and exceptions that support creativity, competition, and economic development are of great significance.
其次,知识产权结构需要随着技术的飞速发展而发展,这意味着知识产权架构需要对大数据/人工智能、气候变化技术、基因工程和其他新兴技术的动态和经济现实做出反应。贸发会议利用专家会议、政策咨询和研究来分析这些新兴技术的社会经济方面以及知识产权在发展环境中的作用。考虑到在没有相应和适当的限制与例外的情况下无限制地授予或行使权利 不仅对发展优先事项,而且实际上对创造和创新过程本身都有严重的不利长期影响,因此对创造、竞争和经济发展的限制与例外具有重要意义。

Last but not least, IP is an important element in the realization of other domestic public policy objectives beyond technology. To avoid conflict, governments need to design coherent domestic policies and pursue international cooperation. Countries are engaged at the regional and multilateral levels on various issues of technology, IP, and investment as part of negotiations on global challenges such as climate change and initiatives to tackle antimicrobial resistance (AMR).
最后但并非最不重要的一点是,知识产权是实现技术以外的其他国内公共政策目标的重要因素。为避免冲突,政府需要制定连贯的国内政策并寻求国际合作。各国在区域和多边层面参与技术、知识产权和投资等各种问题,作为应对气候变化或应对抗菌素耐药性 (AMR) 倡议的全球挑战谈判的一部分。

2. Global Equal Opportunities and the Participation of Marginal Groups
2. 全球机会均等和边缘群体的参与

The global governance of artificial intelligence (AI) must be premised on the principle of inclusivity, ensuring that all individuals—regardless of geography, gender, socioeconomic status, or identity—have the opportunity to meaningfully participate in and benefit from AI-driven creative economies. However, current global trends reflect a widening participation gap, whereby marginalized populations—particularly in least developed countries (LDCs), indigenous communities, and displaced populations—face significant structural, technological, and linguistic barriers to access.
人工智能 (AI) 的全球治理必须以包容性原则为前提,确保所有人(无论地理位置、性别、社会经济地位或身份如何)都有机会有意义地参与人工智能驱动的创意经济并从中受益。然而,当前的全球趋势反映了不断扩大的参与差距,边缘化人群——尤其是最不发达国家 (LDC)、土著社区和流离失所人口——在获得参与方面面临重大的结构、技术和语言障碍。

Digital exclusion is not merely a matter of connectivity but of systemic inequality. For example, according to the International Telecommunication Union (ITU), women in LDCs are 17% less likely than men to use the Internet. Meanwhile, linguistic diversity remains severely underrepresented in major generative AI systems, limiting the capacity of marginalized groups to engage with or contribute to digital content production in their native languages. These asymmetries risk entrenching existing power structures within the digital creative economy, wherein content production, visibility, and monetization remain concentrated in a handful of global regions and cultural contexts.
数字排斥不仅仅是一个连通性问题,也是系统性不平等的问题。例如,根据国际电信联盟 (ITU) 的数据,最不发达国家女性使用互联网的可能性比男性低 17%。与此同时,语言多样性在主要的生成式 AI 系统中仍然严重代表性不足,这限制了边缘化群体以他们的母语参与或促进数字内容制作的能力。这些不对称性有可能巩固数字创意经济中现有的权力结构,其中内容生产、可见性和货币化仍然集中在少数全球地区和文化背景下。

To address these disparities, inclusive AI governance must be anchored in rights-based, participatory mechanisms that promote equal access to AI infrastructure, education, and creative platforms. At the international level, UN agencies and regional bodies can support the creation of multi-stakeholder platforms dedicated to amplifying the voices of marginalized creators and cultural producers. National governments, meanwhile, should prioritize the integration of underrepresented groups into AI-related policy consultations and ensure that national digital strategies include provisions for linguistic inclusion, disability accessibility, and gender equity.
为了解决这些差异,包容性 AI 治理必须以基于权利的参与性机制为基础,促进平等获得 AI 基础设施、教育和创意平台。在国际层面,联合国机构和区域机构可以支持创建多利益攸关方平台,致力于扩大边缘化创作者和文化生产者的声音。与此同时,各国政府应优先考虑将代表性不足的群体纳入人工智能相关政策磋商,并确保国家数字战略包括语言包容性、残疾无障碍和性别平等的规定。

Moreover, legal frameworks must protect the intellectual property rights of creators from marginalized communities and support mechanisms for fair remuneration. Capacity-building initiatives—including digital literacy programs, open education resources in indigenous and minority languages, and creative incubators—can further democratize access to AI-enabled creative opportunities.
此外,法律框架必须保护来自边缘化社区的创作者的知识产权,并支持公平报酬机制。能力建设计划(包括数字素养计划、土著和少数民族语言的开放教育资源以及创意孵化器)可以进一步使获得 AI 支持的创意机会的机会民主化。

Finally, global governance mechanisms should encourage South–South cooperation, whereby knowledge, tools, and inclusive innovation practices are shared horizontally among developing countries. Only through a coordinated, inclusive, and equity-focused governance agenda can the promise of AI for all be realized in practice—not just in principle.
最后,全球治理机制应鼓励南南合作,从而在发展中国家之间横向共享知识、工具和包容性创新实践。只有通过协调、包容和注重公平的治理议程 ,才能在实践中实现 AI for all 的承诺 ,而不仅仅是在原则上。

E. Possible Directions of Solutions
E. 解决方案的可能方向

1. Global Channels for Financial Support for Technological Industries: Developing global platforms and systems that enable efficient fund-raising and build financial infrastructure, especially to support investment in developing economies.
1. 技术产业金融支持的全球渠道: 制定全球平台和系统,以实现高效的资金筹集和建设金融基础设施,特别是支持对发展中经济体的投资。

2. Transformation of Traditional Industries in Less Developed Economies and Narrowing of the Technological Gap: Transforming traditional industries and modernizing related infrastructure in less developed countries through technology adoption, reducing the technological gap, and fostering innovation to boost economic growth.
2. 欠发达国家的传统产业转型和技术差距缩小 :欠发达国家的传统产业转型和相关基础设施现代化,通过技术采用、缩小技术差距和促进创新以促进经济增长。

3. Improving the Competitiveness of Developing Countries in Sharing Technological Benefits and Equal Opportunities: Enhancing the global competitiveness of developing countries and ensuring that they have equal access to international trade opportunities and can compete on a level playing field.
3. 提高发展中国家参与技术利益和平等机会的竞争力 :增强发展中国家的全球竞争力,强调需要确保他们平等获得国际贸易机会,并能够在公平的竞争环境中竞争。

V. Reliable Artificial Intelligence Construction
五、可靠的人工智能建设

A. Fundamental Problems in AI safety
A. AI 安全的基本问题

As Artificial Intelligence becomes increasingly powerful and widespread, ensuring its safe and reliable operation has become a critical issue of global concern. Modern AI is no longer limited to narrowly defined, rule-based environments. On the contrary, AI systems increasingly make decisions in open, dynamic, and high-risk domains, from autonomous driving and medical diagnostics to financial systems and military applications. Nevertheless, this growing capability also brings significant safety threats and challenges. Unlike traditional software, AI is generally driven by complex optimization processes, trained on vast datasets, and can exhibit unpredictable behavior. This makes it difficult to predict an AI system's behavior in new situations, especially when its objectives or constraints are specified inaccurately. In particular, an AI system may optimize for its stated objective in a way that is technically correct but ethically wrong.
随着人工智能的功能日益强大和广泛普及,确保其安全可靠运行已成为全球关注的关键问题。现代 AI 不再局限于定义狭隘、基于规则的环境。相反,AI 在 开放、动态和高风险的领域中做出决策的能力更强,从自动驾驶和医疗诊断到金融系统和军事应用。然而,不断增长的能力也带来了巨大的安全威胁和挑战。与传统软件不同,AI 通常由复杂的优化过程驱动,该过程在大量数据集上进行训练并学习不可预测的行为 。这使得我们更难预测 AI 在新情况下的行为,尤其是当其目标或限制不准确时。具体来说,AI 可能会以技术上正确但在道德上错误的方式针对对象进行优化。

Figure V.A.1: Wide application of AI
V.A.1:AI 的广泛应用

To promote the reliable development of AI, the international community must address the fundamental problems of safety. In the section below, we highlight three main points: avoiding negative side effects, safe exploration and the data fringe, and robustness under distribution shift, each of which plays an essential role in building reliable and sustainable AI systems.
为了促进人工智能的可靠发展,国际社会必须努力解决安全方面的根本问题。在下面的部分中,我们重点介绍了 3 个要点:避免负面副作用、安全勘探和数据边缘以及分布转移下的稳健性 —— 每一项都在构建可靠和可持续的 AI 系统方面发挥着至关重要的作用。

1. Negative Side Effects Avoidance
1. 避免负面副作用

Avoiding negative side effects in AI development is one of the core challenges. Many incidents have illustrated a key problem: an AI system may make harmful decisions in pursuit of its goals. The results can appear satisfactory even when the process that produced them is not correct. The root of the challenge lies in the difficulty of specifying reward functions or objective criteria that fully capture human intent. Unlike human beings, AI systems lack common sense and social understanding, because sufficiently explicit instructions and sufficiently representative data are often absent. Consequently, at the current stage of development, AI cannot perfectly balance task completion against social protection. Although less frequent than adversarial risks or data issues, negative behavioral spillovers remain crucial and under-addressed.
如何避免 AI 开发中的负面副作用是核心挑战之一。许多现象都说明了一个关键问题 —— 人工智能可能会做出有害的决定来实现目标。结果有时令人满意 ,但并不意味着它纠正了岁差。挑战的根源在于难以指定完全捕捉人类意图的奖励函数或客观标准。与人类不同,AI 系统缺乏常识和社会理解力,因为缺乏足够明确的程序顺序和大量具有代表性的数据模型。因此, 在当前的发展形势下,人工智能不可能完美地平衡任务与社会保障之间的关系。尽管不如对抗性风险或数据问题发生,但负面行为溢出仍然至关重要且未得到解决。

As early as 2016, Alex Moltzau pointed out that unintended and harmful behavior may emerge from the poor design of real-world AI systems.
2016 年初,Alex Moltzau 指出, 现实世界 AI 系统的不良设计可能会出现无意的有害行为

Figure V.A.1.1: Overview of Concrete Safety Risks in Modern AI that Go Beyond Speculative Existential Threats
V.A.1.1:现代 AI 中超越推测性生存威胁的具体安全风险概述

Moltzau highlights practical examples: A robot arm optimizing for efficiency might knock over a vase—destroying something not explicitly prohibited. A cleaning robot, in trying to minimize dirt, might block people from entering a room—disrupting human activity. These behaviors are neither malevolent nor random but emerge logically when an AI is strictly bound to an incomplete objective definition. Beyond these, ethical and social negative side effects are more serious.
Moltzau 强调了实际示例:为提高效率而优化的机械臂可能会撞倒花瓶,从而破坏未明确禁止的东西。清洁机器人在试图减少污垢时,可能会阻止人们进入房间,从而扰乱人类活动。这些行为既不是恶意的,也不是随机的,而是当 AI 被严格绑定到不完整的客观定义时,它们会合乎逻辑地出现。 除此之外 ,道德和社会的负面影响更为严重。
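One family of technical responses to such side effects is to augment the task reward with an impact penalty, so that changes to parts of the environment that the objective never mentioned (such as the vase above) reduce the agent's reward. The sketch below is a minimal, hypothetical illustration of this idea; the grid world, the penalty weight, and the outcome states are assumptions for demonstration only, not a standard implementation.

```python
# A minimal sketch of reward shaping with an impact penalty, one way to
# discourage negative side effects. The environment and weights are
# illustrative assumptions.
def task_reward(state, goal):
    """+1 when the agent reaches the goal cell, 0 otherwise."""
    return 1.0 if state["agent"] == goal else 0.0

def impact_penalty(state, baseline_state, weight=0.5):
    """Penalize deviations of non-task features (here: the vase) from a
    baseline world in which the agent did nothing."""
    changed = int(state["vase_intact"] != baseline_state["vase_intact"])
    return -weight * changed

baseline = {"agent": (0, 0), "vase_intact": True}
goal = (2, 2)

# Two candidate outcomes: a short path that breaks the vase vs. a longer safe path.
outcomes = {
    "short_path_breaks_vase": {"agent": goal, "vase_intact": False},
    "longer_safe_path":       {"agent": goal, "vase_intact": True},
}
for name, s in outcomes.items():
    total = task_reward(s, goal) + impact_penalty(s, baseline)
    print(f"{name}: shaped reward = {total:.2f}")
```

Under the shaped reward, the vase-breaking shortcut is no longer the best-scoring outcome, even though the original task specification never mentioned vases at all.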

1.1 Ethical side effects: uncontrolled automatic decision-making and the lack of machine ethics.
1.1 道德副作用:不受控制的自动决策和缺乏机器道德

The remarkable capabilities of AI and the risk of possible abuse have caused long-standing concern in the scientific, humanistic, and policy communities. Observers have cited the views of Henry Kissinger and Stephen Hawking, who warned that AI may pose a "fundamental threat" to human survival in the future. Even though current systems are not general AI, sufficient safety guardrails should be established in advance.
人工智能的惊人能力和可能被滥用的风险引起了科学界、人文界和政策界的长期担忧。 作者引用了亨利·基辛格和斯蒂芬·霍金的观点,警告人工智能在未来可能会对人类生存构成 ' 根本威胁 '。即使不是现在的 通用 AI,也应该提前建立足够的安全围栏。

To this end, many regions, including the European Union, have tried to set basic norms. For example, the European Union's 2019 Ethics Guidelines for Trustworthy AI state that AI systems should meet seven key requirements, including human oversight, technical robustness and safety, privacy protection, diversity and fairness, and accountability. These principles are intended precisely to prevent systems from inadvertently violating human rights or disrupting social structures while completing their tasks.
为此,包括欧盟在内的许多地区都试图制定基本规范。例如,2019 年的 欧盟值得信赖的 AI 伦理指南 明确指出,AI 系统应具备七项原则:人工监督、技术稳健性、安全性、隐私保护、多样性、公平性、问责制等。这些原则,正是为了防止系统在完成任务时不小心侵犯人权,扰乱社会结构。

1.2 Social side effects: algorithmic bias and discriminatory impact
1.2 社交副作用:算法偏见和歧视性影响

AI has been widely deployed in sensitive fields such as medicine, justice, recruitment, and education, where bias and mistakes may directly affect people's lives.
人工智能已广泛应用于医学、司法、招聘和教育等敏感领域,偏见和错误可能直接影响人类命运。

Figure V.A.1.2: Example of AI's positive and negative impacts in healthcare, suitable as a domain-specific case
V.A.1.2 人工智能对医疗保健的积极和消极影响的示例,适合作为特定领域的案例

For example, Google's image recognition system has failed to accurately identify minorities, and Amazon's recruitment algorithm systematically gave men higher scores, even though its designers did not intend this. Such side effects stem from historical injustices in the training data and implicit assumptions in model design. The result is that AI tools not only fail to correct social inequality but may exacerbate discrimination.
例如:“ 谷歌的图像识别系统无法准确识别少数族裔;亚马逊的招聘算法系统地给男性更高的分数,尽管设计师并不打算这样做。 这些副作用来自训练数据中的历史不公正和模型设计中的隐含假设。结果是,人工智能工具不仅无法纠正社会不平等,而且可能加剧歧视。
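Bias of this kind can be made measurable with simple audit metrics. The sketch below computes one of them, the demographic parity difference, on hypothetical screening decisions; the groups, selection probabilities, and data are illustrative assumptions and do not reconstruct any real system mentioned above.

```python
# A minimal sketch of a demographic parity audit: compare positive-outcome
# rates across two groups in a screening model's decisions.
# Groups, probabilities, and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)

group = rng.choice(["group_a", "group_b"], size=2000, p=[0.5, 0.5])
# Hypothetical screening decisions that favor group_a.
p_select = np.where(group == "group_a", 0.30, 0.18)
selected = rng.random(2000) < p_select

rates = {g: float(selected[group == g].mean()) for g in ("group_a", "group_b")}
parity_gap = abs(rates["group_a"] - rates["group_b"])
print("selection rates:", {g: round(r, 3) for g, r in rates.items()})
print(f"demographic parity difference = {parity_gap:.3f}")
```

A large gap does not by itself prove discrimination, but routine reporting of such metrics is one concrete way to surface the disparities described above before a system is deployed at scale.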

1.3 Governance dilemma and public distrust
1.3 治理困境和公众不信任

Although AI ethics guidelines are flourishing, implementing them is very difficult. Google attempted to set up an AI ethics committee in 2019, but it was quickly disbanded amid widespread criticism of its membership. This shows that the public lacks trust in the ability of large companies to formulate ethical standards, and it highlights an important side effect: ethical fatigue and a crisis of trust in governance.
尽管 AI 伦理准则蓬勃发展,但其实施存在巨大困难。谷歌试图在 2019 年成立一个人工智能道德委员会,但由于对其成员的广泛批评,该委员会很快被解散。这表明公众对大公司制定道德标准的能力缺乏信任,也凸显了一个重要的副作用:道德疲劳和对治理的信任危机。

2. Safe Exploration and Data Fringe
2.安全探索和数据边缘

As AI systems increasingly operate in complex, uncertain, and real-time environments, their ability to learn through exploration becomes critical. However, exploration—particularly in reinforcement learning (RL) and interactive settings—can introduce unintended risks if not properly constrained. This challenge, known as unsafe exploration, refers to scenarios in which AI agents gather experience or test new strategies in ways that may violate safety boundaries or lead to system failures.
随着 AI 系统越来越多地在复杂、不确定和实时的环境中运行,它们通过探索学习的能力变得至关重要。然而,如果约束不当,探索( 尤其是在强化学习 (RL) 和交互式环境中)可能会带来意想不到的风险。这种挑战称为不安全探索,是指 AI 代理以可能违反安全边界或导致系统故障的方式收集经验或测试新策略的场景。

2.1 The Problem of Unsafe Exploration
2.1 不安全勘探的问题

Exploration is fundamental to AI learning, especially in tasks where agents must balance exploitation (using what is known) and exploration (trying new actions). Yet, unregulated exploration poses serious dangers in real-world systems. For example: a self-driving car testing new paths might enter unsafe terrain or violate traffic rules; a medical diagnosis system attempting novel treatment combinations may cause harm before feedback is available.
探索是 AI 学习的基础,尤其是在代理必须平衡利用(使用已知内容)和探索(尝试新作)的任务中。然而,不受监管的勘探在现实世界的系统中构成了严重的危险。例如:测试新路径的自动驾驶汽车可能会进入不安全的地形或违反交通规则;尝试新的治疗组合的医学诊断系统可能会在获得反馈之前造成伤害。
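One common way to keep exploration within safety boundaries is to restrict an agent's exploratory choices to a pre-screened safe action set, sometimes described as constraint-aware learning. The sketch below shows a minimal epsilon-greedy policy under that assumption; the action names, safety rule, and Q-values are hypothetical and chosen only to illustrate the idea.

```python
# A minimal sketch of constraint-aware exploration: an epsilon-greedy agent
# samples exploratory actions only from a pre-screened safe set, so
# trial-and-error never leaves the permissible region.
# The safety check and Q-values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

ACTIONS = ["slow_down", "keep_lane", "change_lane", "hard_swerve"]

def is_safe(action, state):
    """Hypothetical safety screen, e.g. derived from verified driving rules."""
    if state["obstacle_close"] and action == "hard_swerve":
        return False
    return True

def choose_action(q_values, state, epsilon=0.2):
    safe_idx = [i for i, a in enumerate(ACTIONS) if is_safe(a, state)]
    if rng.random() < epsilon:
        return ACTIONS[rng.choice(safe_idx)]          # explore, but only among safe actions
    best = max(safe_idx, key=lambda i: q_values[i])   # exploit within the safe set
    return ACTIONS[best]

state = {"obstacle_close": True}
q = np.array([0.2, 0.5, 0.4, 0.9])  # the unsafe action looks most rewarding
print(choose_action(q, state))
```

Note that the seemingly highest-value action is never selected because the safety screen removes it from consideration; the learning problem is solved inside the constrained set rather than corrected after the fact.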

According to Gyevnár and Kasirzadeh (2025), unsafe exploration was one of the most actively researched risk types in peer-reviewed AI safety literature, with 54 papers (11.4%) directly addressing it. The majority of these works focus on reinforcement learning settings, where trial-and-error learning naturally conflicts with strict safety requirements.
根据 Gyevnár 和 Kasirzadeh (2025) 的说法,不安全探索是同行评审的 AI 安全文献中研究最积极的风险类型之一,有 54 篇论文 (11.4%) 直接涉及它。这些工作大多集中在强化学习设置上,其中试错学习自然会与严格的安全要求相冲突。

Figure V.A.2.1: Proportion of papers addressing each AI safety risk type
V.A.2.1: 针对每种 AI 安全风险类型的论文比例

2.2 Challenges at the Data Fringe
2.2 数据边缘的挑战

The data fringe refers to the outer boundary of an AI system's training distribution: those states, inputs, or environments that it has rarely or never seen. At this fringe, the system is more likely to behave unpredictably, misclassify inputs, or act overconfidently on faulty reasoning. The review highlights that unsafe exploration often emerges in this fringe zone due to inaccurate reward signals or misspecified training objectives; incomplete constraint modeling, especially in simulation-to-reality transfer; and delayed feedback, which makes it hard to know that an action was unsafe until after harm is done.
数据边缘 是指 AI 系统训练分布的外部边界 ,即它很少或从未见过的状态、输入或环境。在这个边缘,系统更有可能表现得不可预测、对输入进行错误分类或过度自信地根据错误的推理采取行动。该审查强调,由于以下原因,这个边缘区域经常出现不安全的探索:奖励信号不准确或训练目标指定错误 。不完整的约束建模 ,尤其是在仿真到现实的传输中。延迟反馈,使得在造成伤害之前很难知道某项作是不安全的。

These problems become especially pronounced in interactive or open-ended systems, such as chatbots, robotic manipulators, and autonomous drones, where the system's choices generate new states and novel situations.
这些问题在交互式或开放式系统中尤为明显,例如聊天机器人、机器人纵器和自主无人机,在这些系统中,系统选择会产生新的状态和新情况。
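A simple operational response to the data fringe is to have a system abstain when it is not confident. The sketch below flags low-confidence inputs using a maximum-softmax threshold and routes them to a fallback process; the logits, the 0.8 threshold, and the fallback label are illustrative assumptions rather than a recommended standard.

```python
# A minimal sketch of flagging inputs near the data fringe: abstain and
# defer to human review when the maximum softmax confidence is low.
# Logits, threshold, and fallback label are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_with_fallback(logits, threshold=0.8):
    probs = softmax(logits)
    confidence = probs.max(axis=-1)
    decisions = probs.argmax(axis=-1)
    # Inputs below the confidence threshold are routed to a fallback process.
    return [
        int(d) if c >= threshold else "defer_to_human"
        for d, c in zip(decisions, confidence)
    ]

in_distribution = rng.normal(loc=[4.0, 0.0, 0.0], size=(3, 3))   # confident cases
fringe_inputs = rng.normal(loc=[0.3, 0.2, 0.1], size=(3, 3))     # ambiguous cases
print(classify_with_fallback(np.vstack([in_distribution, fringe_inputs])))
```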

In sum, research on safe exploration focuses on ensuring that AI agents operate within permissible bounds even when trying new actions in uncertain or changing environments.
最重要的是,对安全探索的研究侧重于确保 AI 代理在允许的范围内运行,即使在不确定或不断变化的环境中尝试新作时也是如此。

3. Robustness in Distribution Shift
3.Distribution Shift 中的稳健性

One of the central assumptions in machine learning is that the data encountered during deployment will closely resemble the data used during training. In practice, this assumption is often violated. Distribution shift occurs when the statistical properties of the input data change between training and deployment environments, rendering AI systems less reliable, unpredictable, or even unsafe. Building systems that remain robust under such shifts is an essential pillar of AI safety.
机器学习的一个主要假设是,部署期间遇到的数据将与训练期间使用的数据非常相似。在实践中,这一假设经常被违反。当输入数据的统计属性在训练和部署环境之间发生变化时,就会发生分布偏移 从而使 AI 系统变得不那么可靠、不可预测甚至不安全。构建在这种转变下保持稳健的系统是 AI 安全的重要支柱。

3.1 Understanding Distribution Shift
3.1 了解分布偏移

Distribution shift can take many forms, including:
分销转变可以采取多种形式,包括:

Covariate shift: The distribution of inputs changes (e.g., new camera angles in a vision system).
协变量偏移:输入的分布发生变化(例如,视觉系统中的新相机角度)。

Label shift: The prevalence of output categories changes (e.g., rare diseases becoming more common).
标签偏移:输出类别的患病率发生变化(例如,罕见病变得更加常见)。

Concept shift: The meaning of categories changes over time (e.g., evolving social language or fraud patterns).
概念转变:类别的含义会随着时间的推移而变化(例如,不断发展的社交语言或欺诈模式)。

These shifts can arise from real-world dynamics such as technological updates, adversarial manipulation, or social change. A system trained on static data may fail catastrophically when facing such novel inputs, without any warning or graceful degradation.
这些转变可能来自现实世界的动态,例如技术更新、对抗性纵或社会变革。在静态数据上训练的系统在面对此类新输入时可能会发生灾难性故障 而不会发出任何警告或正常降级。
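Covariate shift of the kind described above can be monitored with lightweight statistics. The sketch below computes the population stability index (PSI) for a single feature, comparing the training-time distribution with live deployment data; the synthetic data, the number of bins, and the 0.2 alert threshold are illustrative assumptions, not an agreed standard.

```python
# A minimal sketch of covariate-shift monitoring with the population
# stability index (PSI) on one feature. Data, bins, and the alert
# threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
live_feature = rng.normal(loc=0.6, scale=1.2, size=5000)   # shifted deployment data

def psi(expected, actual, bins=10, eps=1e-6):
    # Interior bin edges taken from the training distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e_counts = np.bincount(np.searchsorted(edges, expected), minlength=bins)
    a_counts = np.bincount(np.searchsorted(edges, actual), minlength=bins)
    e_frac = e_counts / len(expected) + eps
    a_frac = a_counts / len(actual) + eps
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

score = psi(train_feature, live_feature)
print(f"PSI = {score:.3f}",
      "-> investigate shift" if score > 0.2 else "-> distribution stable")
```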

3.2 Examples of Vulnerability
3.2 漏洞示例

Distributional robustness failures have been observed in a wide range of applications. In medical AI, diagnostic models trained in hospital A often underperform in hospital B due to differing protocols, populations, or equipment. Language models trained on curated web data may misinterpret user-generated slang, dialects, or regional variants. Vision systems in autonomous vehicles perform poorly when lighting, weather, or road signage differs from the training distribution. In safety-critical settings, these failures are not just inconvenient; they can be life-threatening.
分布稳健性失败已在广泛的应用中观察到:医疗 AI:由于方案、人群或设备不同,在医院 A 上训练的诊断模型在医院 B 中通常表现不佳。语言模型:在精选 Web 数据上训练的 NLP 系统可能会误解用户生成的俚语、方言或区域变体。自动驾驶汽车:当照明、天气或道路标志与训练分布发生变化时,视觉系统性能不佳。在安全关键型环境中,这些故障不仅不方便 而且可能危及生命。

3.3 Policy and Deployment Considerations
3.3 策略和部署注意事项

From a governance standpoint, robustness to distribution shift implies that AI systems must be monitored continuously, not just evaluated at the point of deployment; tested across multiple domains, populations, and time frames; and audited for shift sensitivity, including retrospective analyses of misbehavior in deployment environments.
从治理的角度来看,分布式转变的稳健性意味着 AI 系统必须:持续监控,而不仅仅是在部署时进行评估。在多个领域、人群和时间框架中进行测试。对轮班敏感性进行了审核,包括对 部署环境中不当行为的回顾性分析。

In high-stakes applications such as public health, criminal justice, or autonomous weapons, distribution shift is not an edge case but an inevitability. Institutions deploying AI must therefore ensure that fallback procedures exist for handling unfamiliar inputs, that ongoing retraining or recalibration is budgeted for and operationalized, and that performance guarantees include robustness metrics, not just aggregate accuracy.
在高风险应用中 例如公共卫生、刑事司法或自主武器 ), 分发转移不是边缘情况,而是不可避免的。因此,部署 AI 的机构必须确保:存在处理不熟悉输入的回退程序;持续的再培训或重新校准被预算和实施;性能保证包括稳健性指标,而不仅仅是聚合准确性。
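The robustness metrics called for above can be as simple as reporting worst-group performance next to the aggregate figure. The sketch below contrasts aggregate accuracy with per-group and worst-group accuracy on hypothetical deployment slices; the group labels, error rates, and data are assumptions for illustration only.

```python
# A minimal sketch of a robustness metric: worst-group accuracy across
# deployment slices, reported alongside the aggregate figure.
# Groups, error rates, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

groups = np.array(["region_A"] * 800 + ["region_B"] * 200)
y_true = rng.integers(0, 2, size=1000)
# Hypothetical model: accurate on region_A, much weaker on region_B.
y_pred = y_true.copy()
flip = (groups == "region_B") & (rng.random(1000) < 0.4)
y_pred[flip] = 1 - y_pred[flip]

aggregate = float(np.mean(y_true == y_pred))
per_group = {
    g: float(np.mean((y_true == y_pred)[groups == g])) for g in np.unique(groups)
}
print(f"aggregate accuracy   = {aggregate:.2f}")
print("per-group accuracy   =", {g: round(a, 2) for g, a in per_group.items()})
print(f"worst-group accuracy = {min(per_group.values()):.2f}")
```

A model that looks acceptable in aggregate can still fail badly on one subpopulation; requiring worst-group figures in performance guarantees makes that failure visible to regulators and deployers.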

3.4 International Relevance
3.4 国际相关性

Distribution shift also has geopolitical and cross-cultural dimensions. A model trained in one country may fail when applied in another, due to differences in language, law, infrastructure, or demographics. This makes localization, diverse training, and regional robustness audits critical for international AI deployment.
分销转移还涉及地缘政治和跨文化维度。由于语言、法律、基础设施或人口统计的差异,在一个国家/地区训练的模型在另一个国家/地区应用时可能会失败 。这使得本地化、多样化的培训和区域稳健性审计对于国际 AI 部署至关重要。

As the global AI community moves toward shared standards, incorporating distributional robustness into regulatory frameworks will be necessary to ensure fairness, safety, and long-term trust.
随着全球 AI 社区朝着共享标准迈进,有必要将分布式稳健性纳入监管框架,以确保公平性、安全性和长期信任。

Risk | Method
Undesirable Behavior | Corrigibility via rational utility agents
Lack of Control Enforcement | Inverse Reinforcement Learning
Unsafe Exploration | Constraint-Aware Learning
System Misspecification | Machine Learning, Retraining under New Data

Table V.A.3.4: Technical and Governance Solutions
表 V.A.3.4:技术和治理解决方案

B. Actions taken by Member States on AI Safety
B. 会员国在 AI 安全方面采取的行动

1. Setting Overarching Approaches and Strategies
1. 设定总体方法和策略

For the digital economy, there are three primary regulatory approaches. One option, favored in China, involves direct intervention to support national political goals through strict regulations. A second approach, as seen in the European Union, focuses on strong regulations aimed at protecting fundamental rights and values. A third approach, preferred in the United States, involves a light regulatory framework. Recently, the development of AI and its extensive societal and economic impacts have influenced national strategies, with emerging similarities in approaches.
对于数字经济,有三种主要的监管方法。中国青睐的一种选择是直接干预,通过严格的监管来支持国家政治目标。第二种方法,如欧盟所见,侧重于旨在保护基本权利和价值观的强有力法规。第三种方法在美国更受欢迎,涉及宽松的监管框架。最近,人工智能的发展及其广泛的社会和经济影响影响了国家战略,方法上出现了相似之处。

The initial step of a national AI strategy is to identify and address coordination failures and weaknesses in the innovation system. Governments can, for instance, support applied research through project grants for AI-related business activities. Pilot AI use cases in specific sectors and knowledge and technology transfer mechanisms can contribute to accelerating the adoption of AI. Countries may consider a multi-step approach, as implemented in China, first incentivizing the private sector to adopt, adapt, and develop AI, and then supervising and regulating the AI industry.
国家 AI 战略的第一步是识别和解决创新系统中的协调失败和弱点。例如,政府可以通过为 AI 相关商业活动提供项目资助来支持应用研究。在特定部门试点 AI 用例以及知识和技术转让机制有助于加速 AI 的采用。各国可以考虑采取多步骤的方法,就像在中国实施的那样,首先激励私营部门采用、适应和开发人工智能,然后监督和监管人工智能行业。

Governments need to promote good practices and enforce rules and standards while revising regulations and policies to adapt to changing circumstances. For example, the European Union provides a coherent framework that integrates new legislation as it emerges, addressing issues such as consumer protection and regulating platforms to counterbalance concentration and ensure data protection.
政府需要推广良好实践并执行规则和标准,同时修订法规和政策以适应不断变化的环境。例如,欧盟提供了一个连贯的框架,该框架整合了新出现的新立法,解决了消费者保护和监管平台等问题,以平衡集中并确保数据保护。

Policy formulation and implementation are interactive and iterative processes that require continuous evaluation, and expectations need to be aligned with feasibility. Failures should be accepted, as they are with regard to new ventures in the private sector, but evaluation mechanisms should be put in place to improve design and implementation. Currently, only about 10 per cent of the AI policies surveyed by OECD have been evaluated, based on data from the AI Policy Observatory.
政策制定和实施是交互式和迭代过程,需要持续评估,期望需要与可行性保持一致。 应该接受失败,就像私营部门的新企业一样,但应该建立评估机制以改进设计和实施。目前,根据人工智能政策观察站的数据,经合组织调查的人工智能政策中只有约 10% 进行了评估。

1.1 China
1.1 中国

The Government of China has adopted an increasingly active role in AI. In 2017, it outlined a long-term strategic plan to transform China by 2030 from an AI contributor to a primary AI innovator.
中国政府在人工智能方面发挥着越来越积极的作用。2017 年,它概述了一项长期战略计划,旨在到 2030 年将中国从 AI 贡献者转变为主要的 AI 创新者。

China is currently formulating industry standards and expanding regulatory oversight, and has recently moved to more direct supervision of AI, introducing some of the world's first binding national regulations. These regulations define requirements for how algorithms are built and deployed and establish the information that developers must disclose to the Government and the public.
中国目前正在制定行业标准并扩大监管,最近还转向对 AI 进行更直接的监管,推出了世界上第一个具有约束力的国家法规。这些法规定义了算法的构建和部署方式的要求,并确定了开发人员必须向政府和公众披露的信息。

In 2023, the Cyberspace Administration introduced Interim Measures for the Administration of Generative Artificial Intelligence Services, for regulating research, development, and the use of GenAI. The measures impose various obligations on GenAI providers to ensure that models, contents, and services comply with national requirements and uphold core socialist values and national security. They also aim to ensure the transparency of GenAI services and the accuracy and reliability of generated content, to prevent discrimination and respect intellectual property and individual rights. In this last aspect, the measures echo earlier provisions targeting deepfakes and fake news. In 2024, the Government launched a National Data Bureau to coordinate and support the development of foundational data systems, and to integrate, share, develop, and apply data resources.
2023 年,国家互联网信息办公室出台了《生成式人工智能服务管理暂行办法》,以规范 GenAI 的研究、开发和使用。这些措施对 GenAI 提供商施加了各种义务,以确保模型、内容和服务符合国家要求,并维护 社会主义核心价值观 和国家安全。他们还旨在确保 GenAI 服务的透明度以及生成内容的准确性和可靠性,以防止歧视并尊重知识产权和个人权利。在最后一个方面,这些措施与早期针对深度伪造和假新闻的规定相呼应。2024 年,政府成立了国家数据局,以协调和支持基础数据系统的开发,并整合、共享、开发和应用数据资源。

China relies on a series of technical and administrative tools, such as disclosure requirements, model auditing mechanisms, and technical performance standards, as well as measures to ensure that public bodies are responsive to technological development. Focusing on particular emerging issues and technologies reduces the burden of generalization but demands a high level of responsiveness to technological advances and strong coordination among public bodies.
中国依靠一系列技术和管理工具,例如披露要求、模型审计机制和技术性能标准,以及确保公共机构对技术发展做出反应的措施。关注特定的新出现的问题和技术可以减轻泛化的负担,但需要对技术进步的高度响应和公共机构之间的强有力协调。

1.2 European Union
1.2 欧盟

In 2024, the European Union passed the AI Act, which defines rules according to the associated level of risk, namely, unacceptable, high, limited, or minimal. Most applications, such as video games or spam filters, fall into the minimal risk category, and companies are only advised to adopt voluntary codes of conduct. The Act allows high-risk AI systems but stipulates that these should include complete, clear, and accessible instructions, which should be stored in an open database maintained by the European Commission in collaboration with member states.
2024 年,欧盟通过了 AI 法案,该法案根据相关风险级别(即不可接受、高、有限或最小)定义规则。大多数应用程序(例如视频游戏或垃圾邮件过滤器)都属于最低风险类别,仅建议公司采用自愿行为准则。该法案允许高风险的 AI 系统,但规定这些系统应包括完整、清晰和可访问的指令,这些指令应存储在欧盟委员会与成员国合作维护的开放数据库中。

The Act prohibits uses that present unacceptable risks, such as cognitive behavioral manipulation, social scoring, biometric identification, and categorization, as well as remote biometric identification systems such as facial recognition. This is known as a risk-based approach.
该法案禁止存在不可接受风险的使用,例如认知行为纵、社交评分、生物特征识别和分类,以及面部识别等远程生物特征识别系统。这被称为基于风险的方法。

The AI Act builds on previous legislation such as the General Data Protection Regulation of 2016, which guarantees privacy and respect for human rights. The Digital Services Act of 2022 aims to establish a level playing field, to promote innovation and competitiveness in information services, from websites to digital platforms, and to prevent large providers from imposing unfair conditions that damage other businesses or limit consumer choice.
AI 法案建立在以前的立法之上,例如 2016 年的《通用数据保护条例》,该条例保障隐私和尊重人权。2022 年《数字服务法案》旨在建立公平的竞争环境,促进从网站到数字平台的信息服务的创新和竞争力,并防止大型提供商施加损害其他企业或限制消费者选择的不公平条件。

The European Union has also revised its industrial strategy to address external dependencies on critical technologies. Strategic areas related to the AI value chain are critical raw materials, semiconductors, quantum technologies, and cloud computing. In these areas, the European Union is building industrial, research, and trade policies, fostering co-investment across member states, and bringing together stakeholders in industrial alliances. In 2023, to strengthen competitiveness and resilience in semiconductor technologies and applications, the European Union passed the European Chips Act, aiming to mobilize more than €43 billion of public and private investments and setting out measures to prepare for, anticipate, and respond to possible supply chain disruptions, while strengthening its technological leadership. The European Union has also allocated funds for AI research and innovation. The European Research Executive Agency manages more than 1,000 research projects, with pioneering projects in AI and quantum technologies.
欧盟还修订了其工业战略,以解决对关键技术的外部依赖问题。与 AI 价值链相关的战略领域是关键原材料、半导体、量子技术和云计算。在这些领域,欧盟正在制定工业、研究和贸易政策,促进成员国之间的共同投资,并将利益相关者聚集在工业联盟中。2023 年,为了加强半导体技术和应用的竞争力和韧性,欧盟通过了《欧洲芯片法案》,旨在调动超过 430 亿欧元的公共和私人投资,并制定措施来准备、预测和应对可能的供应链中断,同时加强其技术领先地位。欧盟还为人工智能研究和创新拨款。欧洲研究执行署管理着 1,000 多个研究项目,其中包括人工智能和量子技术方面的开创性项目。

1.3 United States
1.3 美国

In 2022, the United States Congress passed the CHIPS (Creating Helpful Incentives to Produce Semiconductors) and Science Act to boost scientific research and advanced semiconductor manufacturing capacity. The act was motivated by increasing dependency in chip manufacturing and the fact that federal R&D spending had neared its lowest point in 60 years, and targets frontier technologies, including AI. Of the $250 billion budgeted, 80 per cent are allocated to research activities and the rest to tax credits for chip manufacturers.
2022 年,美国国会通过了 CHIPS(创造有益的激励措施来生产半导体)和科学法案,以促进科学研究和先进半导体制造能力。该法案的动机是对芯片制造的依赖性增加,以及联邦研发支出已接近 60 年来的最低点,并针对包括人工智能在内的前沿技术。在 2500 亿美元的预算中,80% 用于研究活动,其余用于芯片制造商的税收抵免。

The Act exemplifies key aspects of policies for emerging technologies. It adopts an anticipatory approach, supporting technologies that could shape future industries. It addresses coordination failures and leverages complementarities through a supply chain approach, supporting activities from hardware production to computing infrastructure, research, and skill development.
该法案体现了新兴技术政策的关键方面。它采用前瞻性方法,支持可能塑造未来行业的技术。它通过供应链方法解决协调失败问题并利用互补性,支持从硬件生产到计算基础设施、研究和技能开发的活动。

New talent will be trained through a national network for microelectronics education and cybersecurity workforce development programs. To retain talent, an AI scholarship program has been established for students who commit to a period of government service. The Act also promotes safe and trustworthy AI systems and the collection of best practices for artificial intelligence and data science. Finally, it envisions public-private partnerships that would establish virtual testbeds to examine potential vulnerabilities to failure, malfunction, or cyberattack.
新人才将通过全国微电子教育和网络安全劳动力发展计划网络进行培训。为了留住人才,已经为承诺在政府服务一段时间的学生建立了人工智能奖学金计划。该法案还促进了安全可靠的人工智能系统以及人工智能和数据科学最佳实践的集合。最后,它设想了公私合作伙伴关系,建立虚拟测试平台,以检查故障、故障或网络攻击的潜在漏洞。

The Blueprint for an AI Bill of Rights noted that AI and automated decision systems should not advance at the cost of civil rights, democratic values, or foundational American principles, and set out principles to guide the design, use, and deployment of automated systems to protect the public. Action is also being taken by individual states. In California, for example, an AI bill in 2024 required firms to commit to model testing and the disclosure of safety protocols and made compulsory a series of requirements that were previously only voluntary. This could represent a major shift in the way emerging and potentially disruptive technologies are dealt with in the United States.
《人工智能权利法案蓝图》指出,人工智能和自动决策系统的发展不应以牺牲民权、民主价值观或美国基本原则为代价,并规定了指导自动化系统的设计、使用和部署以保护公众的原则。各个州也在采取行动。例如,在加利福尼亚州,2024 年的一项人工智能法案要求公司承诺进行模型测试和披露安全协议,并将以前只是自愿的一系列要求强制要求定为强制性的。这可能代表美国处理新兴和潜在颠覆性技术的方式发生重大转变。

2. Countries with policies catching up
2. 政策迎头赶上的国家

AI policies in major economies can create significant spillover effects, shaping the policy choices of other countries. As leading countries set higher benchmarks, particularly in boosting competition and prioritizing R&D, not all countries are equally positioned to keep up. Many may struggle to match increasing R&D budgets, and the focus on future technologies can deepen disparities, widening the gaps between advanced economies and those working to catch up. This highlights the challenges faced by smaller or less advanced countries in keeping pace with global innovation leaders. Working directly with communities, industrial representatives, and individuals can help pinpoint specific business or geographical issues and the need for partnerships with private actors.
主要经济体的人工智能政策可以产生巨大的溢出效应,影响其他国家的政策选择。随着领先国家设定更高的基准,特别是在提高竞争和优先考虑研发方面,并非所有国家都能平等地跟上。许多人可能难以匹配不断增长的研发预算,而对未来技术的关注可能会加深差距,扩大发达经济体与努力迎头赶上的经济体之间的差距。这凸显了较小或欠发达国家在跟上全球创新领导者的步伐方面面临的挑战。直接与社区、行业代表和个人合作有助于确定特定的商业或地理问题以及与私营行为者合作的必要性。

Improvements in wireless technologies and devices can facilitate small-scale AI adoption, but scaling up is much more demanding. Without adequate computing power and digital skills, connectivity alone risks turning an economy into a data exporter and missing opportunities to generate local benefits. The rise of cloud computing is a response to the increasing dependence of AI on data and computing power. When enhancing infrastructure systems, countries should prioritize connectivity, interoperability, and standardization across systems, sectors, actors, users, and providers, including across regional and national boundaries.
无线技术和设备的改进可以促进小规模的 AI 采用,但扩大规模的要求要高得多。如果没有足够的计算能力和数字技能,仅靠连接就有可能使一个经济体成为数据输出国,并错失创造当地利益的机会。云计算的兴起是对 AI 对数据和计算能力日益增长的依赖的回应。在加强基础设施系统时,各国应优先考虑跨系统、部门、行为体、用户和提供商的连通性、互作性和标准化,包括跨地区和跨国界。

2.1 Brazil
2.1 巴西

In 2023, the New Growth Acceleration Programme planned a $5.7 billion investment to foster the transition to a digital economy through public-private partnerships for digital infrastructure; the federal Government would contribute about 44 per cent of the overall budget, State-owned companies, 20 per cent, and private companies, 36 per cent. The plan is to expand 4G networks across the country, deploy new 5G networks, and reinforce infrastructure with fiber-optic cables, such as the 587 km-long cables that will connect the capitals of two northern states, Amapá and Pará, on opposite sides of the Amazon delta. This connectivity upgrade is aimed at reaching all public schools and healthcare units, contributing to the modernization of the public sector.
2023 年,新增长加速计划计划投资 57 亿澳元,通过数字基础设施的公私合作伙伴关系促进向数字经济过渡;联邦政府将贡献总预算的 44% 左右,国有企业占 20%,私营公司占 36%。该计划是在全国范围内扩展 4G 网络,部署新的 5G 网络,并通过缆加固基础设施 ,例如连接亚马逊三角洲两侧两个北部州的首府阿马帕和巴拉那州的 587 公里长的电缆。这种连接升级旨在覆盖所有公立学校和医疗保健单位,为公共部门的现代化做出贡献。

2.2 Côte d'Ivoire
2.2 科特迪瓦

Targeted infrastructure can support the adoption of AI in particular sectors. For example, the e-Agriculture project is aimed at increasing the use of digital technologies and improving farm productivity and access to markets. This is being pursued by improving Internet coverage and adoption, fostering the use of large-scale digital platforms, rehabilitating rural access roads, and adopting sustainable digital services to diffuse e-agriculture. Focusing on both physical infrastructure and digital services, the project represents a value-chain approach that can respond to community needs.
有针对性的基础设施可以支持人工智能在特定领域的采用。例如,e-Agriculture 项目旨在增加数字技术的使用,提高农场生产力和市场准入。为此,我们通过提高互联网覆盖率和采用率、促进大规模数字平台的使用、修复农村通道以及采用可持续的数字服务来推广电子农业。该项目专注于物理基础设施和数字服务,代表了一种可以响应社区需求的价值链方法。

2.3 Japan
2.3 日本

The High-Performance Computing Infrastructure project strengthens national computing capacity for AI development. The project uses an existing supercomputer and connects major universities and national laboratories via a high-speed network. By decentralizing access and networking institutions, the project increases computing power availability and supports innovation in computing-intensive sectors, increasing the number of new actors in the AI ecosystem. Decentralized organizational systems and distributed networks are crucial aspects of the digital revolution and a cornerstone of advanced AI ecosystems.
高性能计算基础设施项目加强了国家对 AI 开发的计算能力。该项目使用现有的超级计算机,并通过高速网络连接主要大学和国家实验室。通过分散访问和网络机构,该项目提高了计算能力的可用性,并支持计算密集型领域的创新,从而增加了 AI 生态系统中新参与者的数量。去中心化的组织系统和分布式网络是数字革命的关键方面,也是高级 AI 生态系统的基石。

2.4 Republic of Korea
2.4 韩国

The K-Chips Act increases tax credits for investments in semiconductor enterprises and other national strategic technologies, with a focus on SMEs. The policy supports the development and production of essential hardware components of the AI value chain by streamlining regulation and standardization in the field of microchips, to provide a common and clear playing field for business development.
K-Chips 法案增加了对半导体企业和其他国家战略技术投资的税收抵免,重点是中小企业。该政策通过简化微芯片领域的监管和标准化,支持人工智能价值链关键硬件组件的开发和生产,为业务发展提供共同和清晰的竞争环境。

3. Building Data for Responsible AI
3. 为负责任的 AI 构建数据

Data is a key production factor in the knowledge economy. Many countries already had data policies in place before the advent of AI, but will need to update them, while others still lack national data frameworks. Data policies should ensure that databases are interoperable and available across the economy, with privacy protection for both inputs and outputs, relying on consent and taking account of possible biases.
数据是知识经济中的关键生产要素。许多国家在 AI 出现之前就已经制定了数据政策,但需要更新这些政策,而其他国家/地区仍然缺乏国家数据框架。数据政策应确保数据库在整个经济体中具有互作性和可用性,对输入和输出进行隐私保护,依赖于同意并考虑到可能的偏见。

AI systems add concerns related to ownership, while also raising questions of intellectual property or fairness and accountability when generating data and decisions. Supporting AI development may require rethinking intellectual property provisions and creating mechanisms to facilitate public-private collaboration. Such efforts should promote AI innovation while safeguarding human rights and addressing potential vulnerabilities and malfunctions.
AI 系统增加了与所有权相关的问题,同时在生成数据和决策时也引发了知识产权或公平和问责制的问题。支持 AI 开发可能需要重新考虑知识产权条款并创建促进公私合作的机制。这些努力应促进人工智能创新,同时保护人权并解决潜在的脆弱性和故障。

Policies should also respond to the international and transboundary nature of AI. Using cloud computing available from international markets can reduce costs, but it is important to avoid increasing data and information dependency and stifling the future development of a domestic service market.
政策还应应对人工智能的国际和跨界性质。使用国际市场提供的云计算可以降低成本,但重要的是要避免增加数据和信息依赖性并扼杀国内服务市场的未来发展。

Countries need to consider all levels of the data value chain. Policies should clearly define which types of data can be made publicly available, and how they should be handled, and favor standards for data and metadata. Countries can also collect and provide open data, either through AI-specific programmes or through open-data initiatives and hubs, to streamline data integration, storage, access, and collaboration. This could improve transparency, promote innovation, and encourage public engagement in the adoption and development of AI.
各国需要考虑数据价值链的所有层面。政策应明确定义哪些类型的数据可以公开可用,以及应如何处理这些数据,并支持 数据和元数据的标准。各国还可以通过人工智能特定计划或通过开放数据计划和中心收集和提供开放数据,以简化数据集成、存储、访问和协作。这可以提高透明度,促进创新,并鼓励公众参与 AI 的采用和开发。

Governments can also rely on industrial players to leverage existing strengths by supporting platforms for data exchange and aggregation and for data monetization and the development of AI for particular uses. Different types of data have their own requirements. In particular, for data on humans, or AI applications making decisions for humans, there should be higher standards for privacy and responsibility, and accountability in case of errors. Policies and standards can be developed through public consultations and open forums, to incorporate the views and concerns of different stakeholders, increase accountability and transparency, and foster trust. Countries can support open data to facilitate access, data integration, and collaboration.
政府还可以依靠工业参与者利用现有优势,支持数据交换和聚合平台以及数据货币化和开发用于特定用途的 AI。不同类型的数据有其自己的要求。特别是,对于人类数据或为人类做出决策的 AI 应用程序,应该有更高的隐私和责任标准,以及在出现错误时的责任。可以通过公众咨询和公开论坛制定政策和标准,以纳入不同利益相关者的观点和关切,提高问责制和透明度,并促进信任。国家/地区可以支持开放数据,以促进访问、数据集成和协作。

3.1 Chile
3.1 智利

The Ministry of Science, Technology, Knowledge, and Innovation, and the Ministry of Economy, Development, and Tourism have established the Data Observatory, a public-private-academia collaboration that seeks to maximize the benefits from data for science, research, and productive development. As a multi-stakeholder organization, the Observatory leverages the competences and resources of a variety of actors for developing STI and data-based services and analyses in different fields, from natural science to urban planning. It uses open-data platforms that facilitate the participation of small providers and supports projects and initiatives related to data analysis for social impact.
科学、技术、知识和创新部以及经济、发展和旅游部建立了数据观察站,这是一个公私学术合作机构,旨在最大限度地利用数据为科学、研究和生产性发展带来的好处。作为一个多方利益相关者组织,天文台利用各种参与者的能力和资源,在从自然科学到城市规划的不同领域开发 STI 和基于数据的服务和分析。它使用开放数据平台,促进小型提供商的参与,并支持与数据分析相关的项目和倡议,以实现社会影响。

3.2 Germany
3.2 德国

The Federal Ministry of Digital Affairs and Transport has launched Mobility Data Space, which brings together automobile companies, organizations, and institutions that wish to monetize their data, seek data exchanges that bring mutual benefits, or need data for innovative AI mobility solutions. A market-based platform, it incentivizes participation by offering the potential for financial remuneration, representing a model that leverages existing industrial strengths to support the diffusion of AI.
德国联邦数字事务和交通部推出了移动数据空间,将希望将数据货币化、寻求互惠互利的数据交换或需要数据用于创新 AI 移动解决方案的汽车公司、组织和机构聚集在一起。作为一个基于市场的平台,它通过提供潜在的经济报酬来激励参与,代表了一种利用现有行业优势来支持人工智能传播的模式。

3.3 India
3.3 印度

The Council of Medical Research has issued Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare, to direct AI adoption and development involving humans or their data. These recognize the importance of processes for safety and minimizing risk to prevent unintended or deliberate misuses that can harm patients. Data sets used by AI should avoid biases by adequately representing the population and guaranteeing the highest privacy and security standards for patient data.
医学研究委员会发布了人工智能在生物医学研究和医疗保健中的应用伦理准则,以指导涉及人类或其数据的人工智能的采用和开发。这些标准认识到流程对安全和最大限度地降低风险的重要性,以防止可能伤害患者的意外或故意误用。AI 使用的数据集应通过充分代表人群并保证患者数据的最高隐私和安全标准来避免偏见。

3.4 Colombia
3.4 哥伦比亚

The Data Protection Authority has created a Sandbox on Privacy by Design and by Default in Artificial Intelligence Projects. This is an experimental space where AI companies can collaborate on solutions that respect personal information and rights, by design and in compliance with national data processing regulations. The Authority accompanies the process and gathers information about possible regulatory adaptations, to keep pace with technological advances, thereby also making the sandbox a tool for policy learning.
数据保护局 (Data Protection Authority) 在人工智能项目中创建了一个关于隐私设计和默认隐私的沙盒。这是一个实验空间,AI 公司可以在其中协作开发尊重个人信息和权利的解决方案,这些解决方案的设计符合国家数据处理法规。管理局伴随这一过程并收集有关可能的监管调整的信息,以跟上技术进步的步伐,从而也使沙盒成为政策学习的工具。

3.5 Singapore
3.5 新加坡

In the Copyright Act 2021, Singapore redesigned the copyright regime to take account of how copyrighted works are created, distributed, accessed, and used. The Act is aimed at making available large and diverse data sets for algorithmic training. The Act introduces an exception to the current regime that permits the copying of copyrighted works for the purpose of computational data analysis such as text and data mining and the training of machine-learning algorithms. It also introduces conditions and safeguards to protect the commercial interests of copyright owners.
在 2021 年版权法中,新加坡重新设计了版权制度,以考虑受版权保护的作品是如何创作、分发、访问和使用。该法案旨在为算法训练提供大量多样的数据集。该法案为现行制度引入了一个例外,允许出于计算数据分析(例如文本和数据挖掘以及机器学习算法的训练)目的复制受版权保护的作品。它还引入了保护版权所有者商业利益的条件和保护措施。

C. Actions taken by the UN System on AI Safety
C. 联合国系统在人工智能安全方面采取的行动

1. Actions directly addressing AI Safety
1. 直接解决 AI 安全的行动

1.1 Actions taken by UNCTAD
1.1 贸发会议采取的行动

UNCTAD has long focused on AI safety and regulation and has made significant contributions to the field. Direct actions taken by UNCTAD include advocating for the establishment of a regulatory and disclosure regime and building institutional routines based on it, and providing platforms for multilateral dialogue on reliable AI through international forums such as the CSTD.
UNCTAD 长期专注于 AI 安全和监管领域,并为该领域做出了突出贡献。贸发会议采取的直接行动包括:倡导建立监管和披露制度,并在此基础上建立机构惯例;以及在 CSTD 等国际会议上建立一个与可信人工智能相关的多边对话平台。

1.1.1 Regulatory and disclosure regime based on ESG framework
1.1.1 基于 ESG 框架的监管和披露制度

In February 2025, UNCTAD released a policy brief entitled "Global collaboration for inclusive and equitable artificial intelligence". In the brief, UNCTAD first proposed that "establishing an artificial intelligence public disclosure mechanism, drawing from experiences related to the environmental, social and governance (ESG) reporting framework, could help enhance accountability and ensure that global commitments lead to tangible outcomes". This recommendation was refined in the Technology and Innovation Report 2025, released in April, which suggests that an accountability and oversight mechanism could be established on the basis of the ESG framework, that is, requiring technology companies to report publicly and be evaluated along the environmental, social, and governance dimensions.
2025 年 2 月,联合国贸易和发展会议 (UNCTAD) 发布了一份题为《全球合作促进包容性和公平的人工智能》的政策简报。在简报中,联合国贸易和发展会议 (UNCTAD) 首次提出 建立人工智能公开披露机制,借鉴与环境、社会和治理 (ESG) 报告框架相关的经验,有助于加强问责制并确保全球承诺带来切实的成果 ”。 这一建议在 4 月发布的《2025 年科技与创新报告》中得到了完善:报告建议可以基于 ESG 框架建立问责和监督机制,即基于环境、社会和治理三个维度,要求科技公司公开报告并受制于报告建议,可以在 ESG 框架的基础上建立问责和监督机制, 即要求科技公司公开报告并根据环境、社会和治理方面进行评估。

Subsequently, the conference pointed out three different approaches to AI regulation:
随后,会议指出了 AI 监管的三种不同方法:

(1) Principles-based, which implies the establishment of internationally fair guiding principles throughout the AI life cycle to be voluntarily adhered to by companies;
(1) 基于原则,这意味着在整个人工智能生命周期中建立国际公平的指导原则,供公司自愿遵守;

(2) Risk-based, which implies risk grading of AI applications and adopting different regulatory measures for different risk levels;
(2) 风险为本,即对人工智能应用进行风险分级,并针对不同的风险级别采取不同的监管措施;

(3) Liability-based, which implies making it clear that AI developers bear the legal responsibility for their safety issues.
(3) 基于责任,这意味着明确 AI 开发者对其安全问题承担法律责任。

1.1.2 CSTD: International platform on AI safety
1.1.2 CSTD:人工智能安全国际平台

UNCTAD has established a number of reliable AI-related multilateral dialog platforms, including CSTD and eWeek, and of these platforms, CSTD has the most direct relevance to AI safety.
贸发会议已经建立了许多可靠的人工智能相关多边对话平台,包括 CSTD 和 eWeek,在这些平台中,CSTD 与 AI 安全最直接相关。

The United Nations Commission on Science and Technology for Development (CSTD) is a subsidiary body of the Economic and Social Council (ECOSOC). It holds an annual intergovernmental forum for discussion on timely and pertinent issues affecting science, technology and development. Since 2005, the Commission has been mandated by ECOSOC to serve as the focal point in the system-wide follow-up to the outcomes of the World Summit on the information Society (WSIS). Since 2023, CSTD has included AI, data governance, and algorithmic ethics as a core theme, convening the 28th CSTD side event on April 9, 2025, with the theme Data, AI, and human rights: frameworks and use cases for responsible deployment.
联合国科学技术促进发展委员会 (CSTD) 是经济及社会理事会 (ECOSOC) 的附属机构。它每年举办一次政府间论坛,讨论影响科学、技术和发展的及时和相关问题。自 2005 年以来,经社理事会授权该委员会作为信息社会世界峰会 (WSIS) 成果全系统后续行动的协调中心。 自 2023 年以来,CSTD 将人工智能、数据治理和算法伦理作为核心主题,并于 2025 年 4 月 9 日召开了第 28 届 CSTD 会外活动,主题为数据、人工智能和人权:负责任部署的框架和用例。

The side event aims to:
会外活动旨在:

(1) Critically examine the relationship between AI technologies, data governance, and human rights protection.
(1) 批判性地审视人工智能技术、数据治理和人权保护之间的关系。

(2) Analyze the implementation of AI systems through the lens of human rights, specifically with respect to privacy, non-discrimination, access and the role of the private sector.

(3) Present and assess global case studies where AI has been applied in human rights contexts, offering practical insights and lessons learned.

(4) Engage in a dialogue on strategies for ensuring the rights-respecting use of AI technologies, with a particular focus on marginalized and vulnerable communities.

(5) Facilitate collaboration among stakeholders in addressing the challenges and opportunities presented by AI in human rights-sensitive contexts.

The side event explored AI reliability in the context of human rights through its two segments, "AI Deployment in Global Contexts" and "Data and AI Governance in Human Rights-Sensitive Contexts", and built an annual international platform on AI safety.

1.2 UN General Assembly resolution on AI management

On March 21, 2024, the United Nations General Assembly adopted the resolution "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development", marking the first global resolution on artificial intelligence in the history of the United Nations.

The resolution covers the management of reliable AI use extensively: it encourages Member States to assess AI risks such as algorithmic bias, discriminatory content in data sets, and misuse of or over-reliance on AI; promotes transparent AI accountability and complaint mechanisms; calls for the establishment of international standards for traceability, correction, privacy and risk monitoring; and encourages the private sector to participate in the governance of reliable AI use within the framework of international and local regulations.

As the first global consensus document on artificial intelligence, the resolution provides guidance for other UN AI governance mechanisms and serves as a reference for the formulation of AI regulations by individual countries.

1.3 Principles for the Ethical Use of Artificial Intelligence in the United Nations System

In September 2022, the United Nations System Chief Executives Board for Coordination endorsed the Principles for the Ethical Use of Artificial Intelligence in the United Nations System, developed through the High-level Committee on Programmes (HLCP), which approved the Principles at an intersessional meeting in July 2022.

(1) Do no harm

AI systems must not cause or exacerbate harm to individuals or society, including social, cultural, economic, environmental, or political harm, and should be monitored to prevent violations of human rights or fundamental freedoms throughout their life cycle.

(2) Defined purpose, necessity, and proportionality

AI use must have a clearly defined and legitimate purpose, be necessary within its context, and be proportionate to the goals pursued, aligned with the mandates and rules of the respective UN entity.

(3) Safety and security

Potential and actual risks to human beings, the environment, or ecosystems must be identified and mitigated throughout the AI system's life cycle through robust frameworks that ensure safe and secure operation.

(4) Fairness and non-discrimination

AI systems should promote fair distribution of benefits and risks, prevent bias and discrimination, and must not lead to deception or unjust restrictions on human rights.

(5) Sustainability

AI development and use should promote environmental, social, and economic sustainability, with continuous assessment and mitigation of long-term and intergenerational impacts.

(6) Right to privacy, data protection, and data governance

AI systems must respect and protect individuals' data rights, requiring strong data protection frameworks and governance to safeguard data integrity and privacy.

(7) Human autonomy and oversight

AI must support human decision-making and freedom, ensuring meaningful human control, especially in critical or rights-impacting decisions, which must never be fully delegated to machines.

(8) Transparency and explainability

The functioning and decisions of AI systems must be transparent and technically explainable; individuals must be informed when AI affects their rights and be able to understand the logic behind those decisions.

(9) Responsibility and accountability

UN organizations must establish mechanisms for oversight, audit, and accountability, clearly assigning ethical and legal responsibility for AI-based decisions and ensuring whistleblower protections.

(10) Inclusion and participation

AI systems must be designed and deployed through inclusive, interdisciplinary, and participatory processes that engage affected communities and promote gender equality and stakeholder consultation.

The Principles are intended to be read together with other related policies and international law, and all organizations of the United Nations system are required to follow them.

1.4 High-level Advisory Body on AI

On October 26, 2023, the UN Secretary-General officially announced the establishment of the High-level Advisory Body on Artificial Intelligence. The Body was tasked with undertaking analysis and advancing recommendations for the international governance of AI.

In September 2024, the Body published its final report, Governing AI for Humanity. The report identifies five major problems associated with AI: disinformation, militarized abuse, mass surveillance, social inequality, and increased energy consumption. Crucially, it puts forward seven key recommendations to address these problems:

(1) International Scientific Panel on AI

Establish a UN-backed, multidisciplinary expert panel to assess AI capabilities, risks, and uncertainties.

This body would publish regular global reports, build scientific consensus, and inform evidence-based policymaking across countries.

(2) Global Policy Dialogue on AI Governance

Create a recurring, UN-hosted forum to foster alignment among countries on AI governance.

It would promote the exchange of regulatory best practices, address emerging risks, and strengthen coordination around human rights and safety frameworks.

(3) AI Standards Exchange

Develop a global coordination hub to map and compare existing AI technical and ethical standards.

This would enhance interoperability, identify gaps, and support the convergence of global norms across sectors and jurisdictions.

(4) Global AI Capacity Development Network

Build a network of linked AI centers to support training, research, and local innovation, particularly in underserved regions.

The network would share resources such as compute access, open datasets, and testing environments to enable responsible AI development worldwide.

(5) Global Fund for AI

Launch a multilateral funding mechanism to support equitable access to AI technologies and infrastructure.

The fund would promote inclusive innovation, help countries meet global safety standards, and align AI use with the Sustainable Development Goals (SDGs).

(6) Global AI Data Framework

Develop a globally aligned framework for ethical, secure, and inclusive data use in AI systems.

This would promote trusted data sharing, address cultural and legal differences, and ensure fairness in access to training data.

(7) United Nations AI Office

Create a dedicated UN AI Office to coordinate governance efforts and serve as a central point of engagement.

The office would work across UN agencies, governments, industry, and civil society to ensure coherent, inclusive, and responsible global AI action.

2. Instruments related to AI Safety

2.1 UNCTAD eWeek

eWeek is a global multilateral forum held annually by the United Nations Conference on Trade and Development since 2016. The UNCTAD eWeek has become the leading forum for Ministers, senior government officials, CEOs and other business representatives, international organizations, development banks, academics and civil society to discuss the development opportunities and challenges associated with the digital economy.

As a forum for digital governance in the field of reliable AI use, eWeek has put forward many constructive suggestions through roundtable discussions and joint statements. In 2023, eWeek included a session themed "AI Governance: Ensuring equity and accountability in the digital economy". The meeting pointed out that technological development in the AI field currently far outpaces regulation, and that the existing legal framework struggles to adapt to cutting-edge issues such as large models, automated decision-making, and deepfakes. It called for strengthening a responsibility framework featuring transparent algorithms, traceable decision-making and human oversight; establishing a global AI governance platform or mechanism; promoting the interoperability of international technical standards and data; and encouraging small countries and small and medium-sized enterprises to participate in pilot projects and innovation.

2.2 UNICRI Centre for AI and Robotics

Launched on July 10, 2019, the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI) was established to support and assist UN Member States in understanding the risks and benefits of AI and robotics and in exploring their use for contributing to a future free of violence and crime.

UNICRI works with national authorities, law enforcement agencies, the public and private sectors, and civil society actors to harness the opportunities of new and emerging technologies related to justice, as well as to advance understanding of their potential risks to justice, including how their use may impact human rights. With its focus on justice, UNICRI also continues to support the UN Common Agenda's goals of enhancing global collaboration to address the societal, ethical, legal, and economic impacts of digital technologies, so as to maximize benefits and minimize harm to society. This includes UNICRI's Future Series webinars, which promote research, knowledge sharing, and dissemination related to new and emerging technologies such as Web 3.0, the metaverse, and augmented reality.

References

[1] European Commission, Joint Research Centre. (2024). *2024 EU Industrial R&D Investment Scoreboard*. Publications Office of the European Union. [https://joint-research-centre.ec.europa.eu/jrc-news-and-updates/eu-companies-lead-global-rd-investment-growth-breaking-decade-long-trend-2024-12-18_en](https://joint-research-centre.ec.europa.eu/jrc-news-and-updates/eu-companies-lead-global-rd-investment-growth-breaking-decade-long-trend-2024-12-18_en)

[2] Gartner. (2024). *Hype Cycle for Emerging Technologies, 2024*. Gartner, Inc.

[3] IDC. (2024). *Worldwide Artificial Intelligence Spending Guide*. International Data Corporation.

[4] McKinsey & Company. (2023). *The state of AI in 2023: Generative AI's breakout year*. [https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023)

[5] The Business Research Company. (2025). *Generative AI in creative industries global market report 2025*. [https://www.thebusinessresearchcompany.com/report/generative-ai-in-creative-industries-global-market-report](https://www.thebusinessresearchcompany.com/report/generative-ai-in-creative-industries-global-market-report)

[6] UNCTAD. (2022). *Creative economy outlook 2022*. [https://unctad.org/webflyer/creative-economy-outlook-2022](https://unctad.org/webflyer/creative-economy-outlook-2022)

[7] UNCTAD. (2023). *UNCTAD at a glance*. [https://unctad.org/webflyer/unctad-glance](https://unctad.org/webflyer/unctad-glance)

[8] UNCTAD. (2024). *Digitalisation, artificial intelligence and the creative economy*. [https://unctad.org/webflyer/digitalisation-artificial-intelligence-and-creative-economy](https://unctad.org/webflyer/digitalisation-artificial-intelligence-and-creative-economy)

[9] UNCTAD. (2025). *Creative Economy Development Report 2025*. [https://unctad.org](https://unctad.org)

[10] UNCTAD. (n.d.-a). *ASYCUDA – Automated System for Customs Data*. [https://unctad.org/topic/customs/asycuda](https://unctad.org/topic/customs/asycuda)

[11] UNCTAD. (n.d.-b). *Empretec programme*. [https://empretec.unctad.org/](https://empretec.unctad.org/)

[12] Goetze, T. S. (2024). *AI Art is Theft: Labour, Extraction, and Exploitation*. arXiv preprint.

[13] Wired. (2025). *Disney and Universal Sue Midjourney for Copyright Infringement*. [https://www.wired.com/story/disney-universal-sue-midjourney/](https://www.wired.com/story/disney-universal-sue-midjourney/)

[14] UK Intellectual Property Office. (2023). *The government's code of practice on copyright and AI*. [https://www.gov.uk/guidance/the-governments-code-of-practice-on-copyright-and-ai](https://www.gov.uk/guidance/the-governments-code-of-practice-on-copyright-and-ai)

[15] Gervais, D. (2022). *AI and Copyright: The Upside-Down World*. *Houston Law Review*.

[16] UNESCO. (2022). *Re|Shaping Policies for Creativity Report*. [https://www.unesco.org/creativity/sites/default/files/medias/fichiers/2023/01/380474eng.pdf](https://www.unesco.org/creativity/sites/default/files/medias/fichiers/2023/01/380474eng.pdf)

[17] Zuboff, S. (2019). *The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power*. PublicAffairs.

[18] Stanford HAI. (2023). *The Foundation Model Transparency Index*. [https://crfm.stanford.edu/fmti/May-2024/index.html](https://crfm.stanford.edu/fmti/May-2024/index.html)

[19] Hagerty, A., & Rubinov, I. (2023). *Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence*.

[20] Lark. (2023). *Ethical Implications of Artificial Intelligence*. [https://www.larksuite.com/en_us/topics/ai-glossary/ethical-implications-of-artificial-intelligence](https://www.larksuite.com/en_us/topics/ai-glossary/ethical-implications-of-artificial-intelligence)

[21] OECD. (2023). *AI Patents and Global Technology Diffusion*.

[22] Bratton, B. (2016). *The Stack: On Software and Sovereignty*. [https://observatory.constantvzw.org/books/benjamin-h-bratton-the-stack-on-software-and-sovereignty-2.pdf](https://observatory.constantvzw.org/books/benjamin-h-bratton-the-stack-on-software-and-sovereignty-2.pdf)

[23] Obermeyer, Z., et al. (2019). *Dissecting racial bias in an algorithm used to manage the health of populations*. *Science*. [https://www.science.org/doi/10.1126/science.aax2342](https://www.science.org/doi/10.1126/science.aax2342)

[24] Eubanks, V. (2018). *Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor*.

[25] World Bank. (2024). *Global Trends in AI Governance*.

[26] UNCTAD. (2025). *Technology and Innovation Report*. [https://unctad.org/system/files/official-document/tir2025ch1_en.pdf](https://unctad.org/system/files/official-document/tir2025ch1_en.pdf)

[27] UNESCO. (2024). unesdoc.unesco.org/ark:/48223/pf0000382570

[28] UNCTAD. (2025). *Science, Technology and Innovation*. [https://unctad.org/topic/science-technology-and-innovation](https://unctad.org/topic/science-technology-and-innovation)

[29] UNESCO. (2024). *Ethics of Artificial Intelligence*. [https://www.unesco.org/en/artificial-intelligence/recommendation-ethics](https://www.unesco.org/en/artificial-intelligence/recommendation-ethics)

[30] UNESCO. (2021). *Recommendation on the Ethics of Artificial Intelligence*. [https://unesdoc.unesco.org/ark:/48223/pf0000381137](https://unesdoc.unesco.org/ark:/48223/pf0000381137)

[31] International Telecommunication Union. (2023). *Facts and Figures 2023*. [https://www.itu.int/en/ITU-D/Statistics/Pages/facts/default.aspx](https://www.itu.int/en/ITU-D/Statistics/Pages/facts/default.aspx)

[32] World Bank. (2021). *GovTech Maturity Index 2021*. [https://www.worldbank.org/en/topic/governance/brief/govtech](https://www.worldbank.org/en/topic/governance/brief/govtech)

[33] UNCTAD. (2021). *Technology and Innovation Report 2021*. [https://unctad.org/webflyer/technology-and-innovation-report-2021](https://unctad.org/webflyer/technology-and-innovation-report-2021)

[34] African Union. (2022). *Data Policy Framework for Africa*. [https://au.int/en/documents/20220307/data-policy-framework](https://au.int/en/documents/20220307/data-policy-framework)

[35] UNESCO Institute for Statistics. (2022). *Digital Literacy in Sub-Saharan Africa*. [http://uis.unesco.org](http://uis.unesco.org)

[36] UNCTAD. *TRIPS & Emerging Technologies*. [https://unctad.org/Topic/Science-Technology-and-Innovation/Intellectual-Property/IP-Emerging-Technologies](https://unctad.org/Topic/Science-Technology-and-Innovation/Intellectual-Property/IP-Emerging-Technologies)

[37] Okediji, R. (2006). *The International Copyright System: Limitations, Exceptions and Public Interest Considerations for Developing Countries in the Digital Environment*. UNCTAD/ICTSD. [https://unctad.org/system/files/official-document/ictsd2006ipd15_en.pdf](https://unctad.org/system/files/official-document/ictsd2006ipd15_en.pdf)

[38] International Telecommunication Union. (2023). *Achieving gender equality in the digital age: Addressing the digital gender divide*. [https://www.itu.int/en/ITU-D/Statistics/Pages/facts/default.aspx](https://www.itu.int/en/ITU-D/Statistics/Pages/facts/default.aspx)

[39] Moltzau, A. (2016). *Concrete Problems in AI Safety*.

[40] Gyevnár, B., & Kasirzadeh, A. (2025). *AI safety for everyone*. *Nature Machine Intelligence*, 7, 531–542.

[41] Engelke, P. (2020). *AI, Society, and Governance: An Introduction*. Atlantic Council.

[42] Pavaloaia, V., & Necula, S.-C. (2023). *Artificial Intelligence as a Disruptive Technology—A Systematic Literature Review*. *Electronics*, 12(1102). [https://doi.org/10.3390/electronics12051102](https://doi.org/10.3390/electronics12051102)

[43] UNCTAD. (2025). *Technology and Innovation Report 2025*.

[44] UNCTAD. *Commission on Science and Technology for Development*. [https://unctad.org/topic/commission-on-science-and-technology-for-development](https://unctad.org/topic/commission-on-science-and-technology-for-development)

[45] UNCTAD. *28th CSTD Side Event: Data, AI and Human Rights Frameworks and Use Cases for Responsible AI*. [https://unctad.org/meeting/28th-cstd-side-event-data-ai-and-human-rights-frameworks-and-use-cases-responsible](https://unctad.org/meeting/28th-cstd-side-event-data-ai-and-human-rights-frameworks-and-use-cases-responsible)

[46] UNGA. (2024). *Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development*.

[47] UNSCEB. *Principles for the Ethical Use of Artificial Intelligence in the United Nations System*. [https://unsceb.org/principles-ethical-use-artificial-intelligence-united-nations-system](https://unsceb.org/principles-ethical-use-artificial-intelligence-united-nations-system)

[48] UNSCEB. (2022). *Principles for the Ethical Use of Artificial Intelligence in the United Nations System*.

[49] UN AI Advisory Body. (2024). *Governing AI for humanity*.

[50] UNICRI. *Responsible new technologies to address crime and exploitation*. [https://unicri.org/Responsible-new-technologies-address-crime-exploitation](https://unicri.org/Responsible-new-technologies-address-crime-exploitation)

[51] UNESCO. *Digital divide*. [https://www.unesco.org/en/articles/ai-literacy-and-new-digital-divide-global-call-action](https://www.unesco.org/en/articles/ai-literacy-and-new-digital-divide-global-call-action)

[52] *Eight shocking AI bias examples*. [https://www.prolific.com/resources/shocking-ai-bias](https://www.prolific.com/resources/shocking-ai-bias)

[53] Stanford Report. *AI bias and culture exclusion*. [https://news.stanford.edu/stories/2025/05/digital-divide-ai-llms-exclusion-non-english-speakers-research](https://news.stanford.edu/stories/2025/05/digital-divide-ai-llms-exclusion-non-english-speakers-research)