AI Governance: New Cooperation or New Competition for Europe and America?

Published in Guangming Daily (China) on 22 December 2023
by Jiao Zhusong
Translated from Chinese by Jo Sharp. Edited by Helaine Schweitzer.


With the rapid development of artificial intelligence in recent years, the competition among Western countries for dominance in AI regulation has become increasingly fierce. The European Parliament, EU member states and the European Commission recently reached agreement on the Artificial Intelligence Act. This will become the world's first comprehensive regulatory framework in the field of artificial intelligence. EU Commissioner for the Internal Market Thierry Breton said that the Artificial Intelligence Act is not only a rule book but will also act as a booster for EU startups and researchers to lead the global AI race.

EU: Legislation Takes the Lead in Artificial Intelligence Governance

The EU's approach to making AI rules is to legislate first, gradually refining its legal framework over the past few years. As early as April 2018, the European Commission issued relevant material outlining approaches to coordinating the development and deployment of artificial intelligence, aiming to increase investment in AI research and innovation. In February 2020, the European Commission issued the White Paper on Artificial Intelligence, proposing a regulatory framework for AI covering issues such as risk assessment, transparency, data use and legal liability. It suggested categorizing and regulating different risk levels of AI applications. In April 2021, the European Commission proposed the world's first regulation of artificial intelligence to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally sustainable. In June 2022, the European Parliament approved the draft negotiating mandate for the proposed Artificial Intelligence Act, which classifies AI systems by risk, restricts "deepfakes" and imposes higher transparency requirements on generative artificial intelligence.

In addition to leading the world in legislative speed, the EU is also trying to push for global regulatory standards for AI. Brando Benifei, one of the members of the European Parliament behind the Artificial Intelligence Act, stated that what EU legislators construct could set a path toward responsible artificial intelligence for other parts of the world. The media report that the EU is carrying out a lobbying "blitz" to persuade Asian countries to acknowledge the EU's leading position in the regulation of artificial intelligence and to make the EU's Artificial Intelligence Act the global standard in the field of AI governance. Currently, the EU and its member states have sent officials to discuss the issue with at least 10 Asian countries, including India, Japan, South Korea, Singapore and the Philippines.

United States: Artificial Intelligence Policy Encourages Technology Development and Application

U.S. policy on the governance of artificial intelligence focuses on the development and application of technology and, compared to the EU, its policies on AI are relatively lenient. Although it also faces the security challenges of AI, the United States has not conducted specialized research and classification of AI risks as the EU has. Instead, it has concentrated on dealing with issues of fairness and data privacy security arising from algorithmic discrimination and aims to maintain the innovation and commercial competitiveness of its domestic AI technology while safeguarding fairness and privacy. In terms of governance tools, the U.S. tends to address relevant security risks through industry self-regulation and guidelines. For example, it urges companies to establish their own AI code of ethics and reduce the risk of algorithmic discrimination through internal audits and self-monitoring.

In recent years, technologies such as generative artificial intelligence have continued to mature. To ensure the safety and reliability of generative AI technology, the U.S. government has gradually strengthened regulation and released a series of policy documents to strengthen AI governance, maintaining the country's dominant position in the field and reducing the security risks of AI. Since its release of the Blueprint for an AI “Bill of Rights” in 2022, the U.S. government has issued several guiding principle documents to direct the design, development, deployment and use of artificial intelligence systems. It encourages industries to voluntarily comply with these principles and engage in self-regulation, forming the basic framework of AI governance in the United States.

EU and US: Competition and Cooperation in Rulemaking

As the "front runners" in the field of artificial intelligence rulemaking, the EU and the U.S. have both a competitive and a cooperative relationship. Their cooperation is primarily facilitated through the U.S.-EU Trade and Technology Council, established in June 2021, using each side's related regulations and policy documents as blueprints. Guided by shared values, both sides aim to reach consensus on emerging technology development and approaches to risk regulation. In December 2022, the U.S.-EU Trade and Technology Council released its first Joint Roadmap for Trustworthy AI and Risk Management, which guides the two sides' risk management and trustworthy AI development in three areas: standardized terminology, standards development and risk monitoring, and promotes the development of related international standards. The document highlighted joint EU and U.S. support for, and leadership of, international technical standards development.

However, from a practical point of view, although the United States has begun to align with the European Union on AI risk regulation, there are still structural issues between the two that are difficult to reconcile. At the strategic level, U.S. security and foreign policy institutions view AI as a crucial national security asset in great power competition, aiming to use it as a tool to expand technological influence, whereas the EU, grounded in economic development and its values, focuses more on the ethical challenges posed by AI technology. At the level of risk management concepts, the U.S. encourages innovation and development of AI technology and emphasizes scientific, flexible regulation, while the EU's regulatory style balances development and regulation, hoping to reshape the global digital development model through high-standard legislation. In terms of regulatory approaches, the U.S. focuses on technology application and process regulation, while the EU focuses on organizational management and risk management. In addition, the two sides' ideas conflict in areas such as data governance and privacy protection. The media report that the U.S. has warned the EU that its proposed AI regulation would benefit companies with the resources to bear the costs of compliance while hurting smaller companies, a concrete manifestation of the ideological conflict between the two sides.

Global Collaboration in Building the Artificial Intelligence Development Environment

At the inaugural AI Safety Summit held in early November, representatives from the United States, the United Kingdom, the European Union, China, India and other parties discussed the risks and opportunities prompted by the rapid development of artificial intelligence technology. After the summit, 28 countries and the European Union signed the Bletchley Declaration, committing to work together to create trustworthy and responsible artificial intelligence. The U.K., which hosted the summit, also announced that the next AI Safety Summit would be held in France next year, while South Korea will co-host a small virtual summit within the next six months.

It is foreseeable that as artificial intelligence continues to develop and spread, its regulation and standardization by the international community will become a global issue. Legislation and cooperation between Europe and the U.S. on AI offers some lessons for other countries around the world. However, in the process of establishing and improving an international AI governance standard system, there are many other participants, not only the few "leaders." All countries should actively exert their influence and jointly promote the creation of an equal, open and mutually beneficial environment for the development of artificial intelligence.


