Much of modern life is spent in the digital world – the internet, the Cloud, the Internet of Things – and is shaped by an explosion of knowledge fueled by big data and Artificial Intelligence (AI). AI, in particular, now touches nearly every segment of society and every profession. Even in historically conservative professions like law, major firms are planning to introduce, or already have introduced, AI applications such as automated contracting. These applications are designed to take over roles previously performed by lawyers and non-legal staff alike.
AI promises major leaps forward, or backward, depending on how wisely it is used and for what purpose. What is not up for debate, however, is the disruption now underway, which must be managed in the context of an evolving legal framework and a challenging international environment.
Countries have moved to create legal frameworks that promote the advantages of AI while putting checks on its potential harms.
For example, China has invested heavily in AI development and is closing in on, and in some respects surpassing, the early lead held by countries such as the U.S. China now publishes more AI research papers and has filed more patents in key areas such as speech and facial recognition. Other factors behind China's progress include significant private and public investment, a favorable regulatory environment, steady improvements to the country's university system and a large market conducive to the adoption and gradual refinement of AI applications. As a result, China today ranks among the world's leading countries in AI.
In April 2021, the European Commission introduced its draft Regulation on Artificial Intelligence, known as the AI Act, the first proposed AI regulation of its kind. Like the General Data Protection Regulation, it would apply not only within the EU's 27 member states but also to AI applications from other countries that have an impact inside the region. The Act would regulate AI systems according to the risk they pose: the greater the risk, the stricter the rules. Systems deemed highest-risk, such as those involving significant invasions of privacy through biometric recognition or social scoring, could be prohibited outright. In short, the proposal focuses on high-risk AI, for which it specifies extensive transparency, diligence and documentation requirements.
In the U.S., in addition to its detailed intellectual property and commercial law regime, President Joe Biden's administration is calling for an AI Bill of Rights. Gaining consensus for such a move will likely prove difficult, given the country's democratic values and its strong focus on privacy and the protection of individual rights. AI applications have already faced significant legal challenges over inherent biases that can lead to unsatisfactory and unfair outcomes in areas such as hiring and criminal profiling. Powerful technologies should be required to respect democratic values and abide by the central tenet of equality.
In the U.K., the government released its National Artificial Intelligence Strategy on Sept. 22, 2021. This comes within the context of related plans and strategies, including the 2020 National Data Strategy, the 2021 Plan for Digital Regulation, the Innovation Strategy and the AI Council's 2021 AI Roadmap.
The National AI Strategy sets out a 10-year plan to make the U.K. "a global AI superpower," building on its significant research and development, its AI Sector Deal investment and the establishment of AI bodies and structures, including the AI Council and the Centre for Data Ethics and Innovation (CDEI).
The strategy lists "people," "data," "computing power" and "finance" as key drivers of progress, discovery and strategic advantage in AI. It is built around three core pillars: investing in the long-term needs of the AI ecosystem to ensure competitiveness; supporting the U.K.'s transition to an AI-enabled economy that considers all sectors and regions; and ensuring the right national and international governance of AI technologies by working with global partners to promote responsible AI development.
Australia has announced a new plan to protect and promote technologies critical to the national interest. From a list of 63 critical technologies, the government has identified nine priorities, which include AI as well as a special focus on quantum technologies, which use quantum physics to access, transmit and process vast quantities of information. Advanced 5G communications and genetic engineering are also on the list. The Australian government will invest $51 million in a Quantum Commercialization Hub designed to commercialize Australian quantum research, attract private investment and forge links with global markets, supply chains and international partners. For example, Australia, the U.K., Japan, India and the U.S. are exploring joint capabilities in quantum technologies, cyber and artificial intelligence.
While the above initiatives hold great promise for the future of AI development, significant uncertainties and obstacles remain. In a post-COVID environment, progress in international trade has declined and protectionism has increased. So, too, have conflicts over intellectual property protection.
Concerns about what constitutes appropriate and ethical AI systems, along with the need to protect privacy and human rights amid growing distrust of institutions and big tech, are likely to present further impediments to rapid AI development. In overcoming these barriers, governments must incentivize innovation and foster a supportive regulatory climate that strikes the appropriate balance between public and private risks and rewards. In this process, it is important that perfection not become the enemy of the good.
The author is a columnist with China.org.cn.