With the rapid development of artificial intelligence in recent years, the competition among Western countries for dominance in AI regulation has become increasingly fierce. The European Parliament, EU member states and the European Commission recently reached agreement on the Artificial Intelligence Act. This will become the world’s first comprehensive regulatory framework in the field of artificial intelligence. EU Commissioner for the Internal Market Thierry Breton said that the Artificial Intelligence Act is not only a rule book but will also act as a booster for EU startups and researchers to lead the global AI race.
EU: Legislation Takes the Lead in Artificial Intelligence Governance
The EU’s approach to AI rulemaking has been to legislate first, gradually refining its legal framework over the past several years. As early as April 2018, the European Commission issued a communication outlining a coordinated European approach to the development and deployment of artificial intelligence, aiming to increase investment in AI research and innovation. In February 2020, the Commission published the White Paper on Artificial Intelligence, proposing a regulatory framework covering issues such as risk assessment, transparency, data use and legal liability, and suggesting that AI applications be categorized and regulated according to their level of risk. In April 2021, the Commission proposed the world’s first dedicated regulation on artificial intelligence, intended to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally sustainable. In June 2023, the European Parliament approved its negotiating mandate on the proposed Artificial Intelligence Act, which classifies AI systems by risk, restricts “deepfakes” and imposes stricter transparency requirements on generative artificial intelligence.
In addition to leading the world in legislative speed, the EU is also trying to push for global regulatory standards for AI. Brando Benifei, one of the members of the European Parliament behind the Artificial Intelligence Act, has said that what EU legislators construct could serve as a “blueprint” for responsible artificial intelligence in other parts of the world. Media reports indicate that the EU is carrying out a lobbying “blitz” to persuade Asian countries to acknowledge the EU’s leading position in AI regulation and to adopt the EU’s Artificial Intelligence Act as the global standard for AI governance. So far, the EU and its member states have sent officials to discuss the issue with at least 10 Asian countries, including India, Japan, South Korea, Singapore and the Philippines.
United States: Artificial Intelligence Policy Encourages Technology Development and Application
U.S. policy on artificial intelligence governance focuses on the development and application of technology and, compared with the EU’s, is relatively lenient. Although the United States also faces AI security challenges, it has not undertaken the kind of systematic study and classification of AI risks that the EU has. Instead, it has concentrated on issues of fairness and data privacy arising from algorithmic discrimination, aiming to preserve the innovation and commercial competitiveness of its domestic AI industry while safeguarding fairness and privacy. In terms of governance tools, the U.S. tends to address security risks through industry self-regulation and guidelines, for example by urging companies to establish their own AI codes of ethics and to reduce the risk of algorithmic discrimination through internal audits and self-monitoring.
In recent years, technologies such as generative artificial intelligence have continued to mature. To ensure that generative AI is safe and reliable, the U.S. government has gradually tightened oversight, releasing a series of policy documents intended to strengthen AI governance, maintain the country’s dominant position in the field and reduce AI security risks. Since releasing the Blueprint for an AI Bill of Rights in 2022, the U.S. government has issued several guiding-principle documents to direct the design, development, deployment and use of artificial intelligence systems. It encourages industry to voluntarily comply with these principles and engage in self-regulation, and these documents form the basic framework of AI governance in the United States.
EU and US: Competition and Cooperation in Rulemaking
As the “front runners” in artificial intelligence rulemaking, the EU and the U.S. have a relationship that is both competitive and cooperative. Their cooperation is conducted primarily through the U.S.-EU Trade and Technology Council, established in June 2021, with related regulations and policy documents serving as blueprints. Guided by shared values, the two sides aim to reach consensus on the development of emerging technologies and on the approach to regulating their risks. In December 2022, the U.S.-EU Trade and Technology Council released the Joint Roadmap for Trustworthy AI and Risk Management, which uses evaluation and measurement tools to guide risk management and trustworthy AI development in three areas: standardized terminology, standards development and risk monitoring. The document highlighted “EU and U.S. … leadership and cooperation in international technical standards development activities.”
In practice, however, although the United States has begun to align with the European Union on risk-based AI regulation, some structural differences between the two remain difficult to reconcile. At the strategic level, U.S. security and foreign policy institutions view AI as a crucial national security asset in great power competition and aim to use it as a tool to expand technological influence, whereas the EU, guided by economic development and values, focuses more on the ethical challenges posed by AI technology. At the level of risk management concepts, the U.S. encourages the innovation and development of AI technology and emphasizes scientific, flexible regulation, while the EU’s regulatory style balances development and regulation, hoping to reshape the global digital development model through high-standard legislation. In terms of regulatory approaches, the U.S. focuses on technology application and process regulation, while the EU focuses on organizational management and risk management. In addition, the two sides disagree in areas such as data governance and privacy protection. Media reports say the U.S. has warned the EU that its proposed AI regulation would benefit companies with the resources to bear compliance costs and hurt smaller companies, a clear manifestation of the ideological conflict between the two sides.
Global Collaboration in Building the Artificial Intelligence Development Environment
At the inaugural AI Safety Summit held in early November, representatives from the United States, the United Kingdom, the European Union, China, India and other parties discussed the risks and opportunities prompted by the rapid development of artificial intelligence technology. After the summit, 28 countries and the European Union signed the Bletchley Declaration, committing to work together to create trustworthy and responsible artificial intelligence. The U.K., which hosted the summit, also announced that the next AI Safety Summit would be held in France next year, while South Korea will co-host a small virtual summit within the next six months.
It is foreseeable that as artificial intelligence continues to develop, its regulation and standardization will become a global issue. European and U.S. legislation and cooperation on AI offer a useful reference for other countries. However, the process of establishing and improving an international AI governance system involves many participants, not only a few “leaders.” All countries should actively exert their influence and jointly promote the creation of an equal, open and mutually beneficial environment for the development of artificial intelligence.