Voice
Convergence despite differences
By Thorsten Jelinek  ·  2024-04-08  ·   Source: NO.15 APRIL 11, 2024
The Q Family humanoid robots developed by the Chinese Academy of Sciences (XINHUA)

This year is significant for establishing guardrails around artificial intelligence (AI), with major regions like the European Union, the United States and China converging on a risk-based regulatory approach, albeit with distinct differences. This convergence reflects a broader trend toward digital sovereignty, with governments seeking increased control over their digital markets and technologies to ensure safety and security, while aiming to boost their AI competitiveness.

The pursuit of digital sovereignty, while both necessary and legitimate, carries the risk of erecting new barriers. This requires global efforts to strike a balance between maintaining control and fostering collaboration and openness.

Qiao Hong, a member of the Chinese Academy of Sciences, poses with a humanoid robot developed by her research team, on January 31 (XINHUA)

Regulatory initiatives

From a regulatory standpoint, the EU is at the forefront with its first comprehensive AI legislation, the AI Act, which was adopted by the European Parliament in March. The act establishes a four-tiered risk framework: it prohibits certain AI applications, enforces strict regulations and conformity assessments for high-risk uses, mandates transparency for limited-risk applications, and proposes voluntary guidelines for minimal-risk applications.

Stringent obligations apply to generative AI only if it is categorized as a high-risk application. The act exempts open-source models unless they are deployed in high-risk contexts. A new oversight mechanism has also been established, including two EU-level institutions: the AI Office and the AI Board. These bodies are tasked with ensuring compliance and facilitating the development of codes of practice, but aim to do so without overstepping the national supervisory authorities of the member states, which still need to be established.

While the act is a landmark achievement, it remains controversial. Internal critics argue that it could stifle innovation and competition; others counter that strong guardrails spur innovation because they provide not only safety and security but also legal certainty. Additionally, some note that since most applications are expected to fall into the lowest category, they will not face any mandatory obligations.

However, the act's extraterritoriality clause, under which the act governs both AI systems operating in the EU and foreign systems whose output enters the EU market, could cause friction, especially with the U.S., where it is perceived as protectionist. This is the flipside of all new guardrails, as illustrated by the EU's comprehensive landscape of privacy, cybersecurity and digital market regulations.

The U.S., in contrast, has taken a different approach. Rather than enacting a comprehensive law, the U.S. Government introduced a presidential executive order on AI on October 30, 2023, encompassing a broad array of guidelines, recommendations and specific actions. This strategy aligns with the U.S. precedent of not having federal laws in other pivotal areas of digital governance, such as cybersecurity and privacy protection.

Despite growing recognition of the need for a more comprehensive risk-based approach, bipartisan support in these areas remains elusive. While the absence of federal legislation introduces legal uncertainty, it also allows for flexibility and an issue-focused approach to AI safety and security, notably for high-risk applications such as dual-use foundation models.

The executive order is not only a set of targeted restrictions with sectoral policies, like in transportation and healthcare, but also aims to foster AI talent, education, research and innovation, thus enhancing the U.S.' competitiveness. The competitive dimension is not part of the EU AI Act. Some argue that this is symptomatic of the EU's regulatory focus and the U.S.' liability-oriented and competition-driven approach. 

Nevertheless, security concerns are paramount, as evidenced by proposed mandates requiring U.S. cloud companies to vet and potentially limit foreign access to AI training data centers or by provisions ensuring government access to AI training and safety data. This strategy underscores a deliberate effort to protect U.S. interests amid the dynamic AI domain and intense competition with China for global AI dominance. The U.S. strategy faces a significant drawback—the lack of legislative permanence. This precariousness means a new presidential election could easily revoke the Joe Biden-Kamala Harris administration's executive order, undermining its stability and long-term impact.

China is likely to be the next major country to introduce a dedicated AI law by 2025, a path that was already signaled in the government's new-generation AI development plan released in 2017. The plan proposed the initial establishment of AI laws and regulations, ethical norms and policy systems, with the aim of forming AI security assessment and control capabilities by 2025, and a more complete system of laws, regulations, ethics and policies on AI by 2030. The 2030 objective indicates that AI governance is an ongoing pursuit.

For now, the Chinese Government follows an issue-focused approach regulating the specific aspects of AI that are deemed most urgent. It's a centralized approach that successively introduces a regulatory framework of provisions, interim measures and requirements designed to balance innovation and competitiveness with social stability and security.

On the regulatory side, over the past three years, the Cyberspace Administration of China and other departments have issued three key regulations explicitly to guide AI development and use, including the Provisions on the Administration of Algorithm-Generated Recommendations for Internet Information Services passed in 2021, the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services issued in 2022 and the Interim Measures for the Administration of Generative AI Services in 2023.

The legal discourse in China covers not only ethics, safety and security, but also issues concerning AI liability, intellectual property and commercial rights. These areas have ignited significant debate, especially in relation to China's Civil Code that came into force in 2021, a pivotal legislation aimed at substantially enhancing the protection of a wide range of individual rights. 

Importantly, China's legislators use public consultations and feedback mechanisms to find a suitable balance between safety and innovation. To boost AI innovation and competitiveness, the government has approved more than 40 AI models for public use since November 2023, including large models from tech giants such as Baidu, Alibaba and ByteDance.

An artificial intelligence-powered transparent display laptop developed by Chinese personal computer giant Lenovo on show at the Mobile World Congress in Barcelona, Spain, on February 28 (XINHUA)

Global consensus

In parallel to those national measures, there have been significant efforts to forge AI collaboration at the international and multilateral levels, given that no country or region alone can address the disruptions that the widespread application of advanced AI will bring. Frameworks that promote responsible AI include the first global, albeit non-binding, agreement on AI ethics—the Recommendation on the Ethics of Artificial Intelligence, adopted by 193 UNESCO member countries in 2021.

AI safety was also addressed for the first time by the UN Security Council, in July 2023. Most recently, the UN secretary general's AI advisory body released its interim report, Governing AI for Humanity. Its final version will be presented at the UN's Summit of the Future in September.

High-level consensus was also reached at the level of the Group of 20, which represents around 85 percent of global GDP, in support of the "principles for responsible stewardship of trustworthy AI," which were drawn from the Organization for Economic Cooperation and Development's AI principles and recommendations.

Another significant step forward in bridging the divide between the Western world and the Global South was achieved during the UK-hosted AI Safety Summit. For the first time, in November 2023, the EU, the U.S., China and other countries jointly signed the Bletchley Declaration pledging to collectively manage the risk from AI. Adding to this positive momentum, we have seen an AI dialogue initiated between China and the EU and between China and the U.S.

Despite such advancements, gaps in international collaboration remain, particularly with countries in the Global South. The exclusivity of the Global North is evident in initiatives like the Group of Seven's Hiroshima AI Process Comprehensive Policy Framework and the Council of Europe's efforts, which led to agreement on the first international treaty on AI in March. This convention, awaiting adoption by the council's 46 member countries, marks a significant step as it encompasses government and private sector cooperation, but it predominantly promotes Western values.

In response to the notable lack of international collaboration with the Global South, China has stepped up its efforts by unveiling the Global AI Governance Initiative during the Third Belt and Road Forum for International Cooperation in Beijing in October 2023. This move aims to promote a more inclusive global discourse on AI governance. At a press conference on the sidelines of the annual session of China's top legislature in March, Foreign Minister Wang Yi highlighted the significance of this initiative, underlining its three core principles: viewing AI as a force for good, ensuring safety and promoting fairness.

Amid various major international initiatives and frameworks, it is essential to establish and nurture communication channels among these different international efforts. Those channels must aim to bridge differences and gradually reduce them over time. Developing governance interoperability frameworks could serve as a practical approach for addressing these differences.

The author is Europe director and senior fellow at Taihe Institute, a Beijing-based nongovernmental think tank. This article was first published in China Today magazine.

Copyedited by G.P. Wilson

Comments to yanwei@cicgamericas.com