Can AI be governed?

October 30, 2023

By Branka Marijan

As the United Kingdom prepares to open its AI Safety Summit, there is no shortage of concern about artificial intelligence (AI) and the potential risks it poses. Some of the top Silicon Valley executives attending the Summit have expressed their worries about AI, even referring to it as an existential risk.

Some leading tech founders and global leaders have responded by calling for the creation of an international body to govern AI, akin to the Intergovernmental Panel on Climate Change (IPCC) and, eventually, to the International Atomic Energy Agency (IAEA), but one that would audit AI systems rather than atomic energy installations.

The question remains: Can these governance structures and processes be replicated for AI? More importantly, can they even be developed, given the nature of the technology and the unwillingness of several powerful states to engage in multilateral governance?

The call for an international body has been echoed by British Prime Minister Rishi Sunak, who proposes the formation of a “truly” global expert panel that would publish a “state of AI science report.” How truly global the panel would be is not clear.

The proverbial elephant in the room is China. It is not known whether China will attend the Summit or whether it would be invited to join this global panel. With days left before the Summit, Sunak was unwilling to confirm China’s attendance.

Without China, any ability to address AI at a global level is diminished. With China, concerns about using AI to limit democratic freedoms are unlikely to be raised. Consider, for example, the use of AI for domestic surveillance or surveillance of minority populations – all activities that we know have occurred in China.

An accelerating geopolitical and technological rivalry between the United States and China is an additional challenge. Consider the U.S. export controls that deny China access to the sophisticated chips needed for AI advancement. Would an expert panel attempting to put guidelines in place for AI systems be taken seriously by great powers that view AI supremacy as key to both economic development and military power?

Both the United States and China see AI as central to military applications. The Pentagon’s recently announced Replicator program, which would see the deployment of thousands of autonomous drones that leverage AI, is one of several initiatives specifically meant to counter China’s capabilities. U.S. Deputy Defense Secretary Kathleen Hicks noted that the program addresses China’s “biggest advantage, which is mass: more ships, more missiles, more people.” China likewise sees new military technologies as crucial to its global standing and to countering what it perceives as escalating threats. China’s military-civil fusion strategy has received a great deal of attention as it seeks to modernize the Chinese military and ensure that civilian technological advancements are integrated into military applications.

In United Nations discussions on autonomous weapons, the United States and China have adopted different strategies that have produced the same result: pushback against governance efforts. Though discussions on autonomous weapons are sometimes treated as niche or falling outside of the umbrella of wider AI governance efforts, such discussions likely provide the best illustration of how the two powers will approach regulatory processes.

China has seemed open to regulation that would prohibit offensive uses of autonomous weapons while preserving the ability to develop such weapons. Because the line demarcating offensive uses is so thin, this stance would result in no effective regulation, as China well knows.

It is unlikely that China will compromise on regulation, although it will probably not be completely obstructionist. At the global level, this means that any AI governance regulations that include China will likely reflect China’s preferences and avoid addressing concerns about surveillance and policing.

For its part, the United States has proposed voluntary measures. Earlier this year, at the REAIM Summit in the Netherlands, it launched a political declaration on the responsible military use of AI and autonomy. This approach reflects the broader U.S. view of AI governance and its efforts to work with the top, largely U.S.-based AI companies.

In contrast, the European Union (EU) wants major tech companies to be subject to regulations, not voluntary compliance requests. However, EU efforts to develop a stronger AI Act have been undermined by the influence of major U.S. companies, whose principal concern is that their large language models would be designated “high risk.” Their lobbying shows that major AI companies will actively push back against regulation. The upcoming Biden administration executive order on AI signals a recognition that safety standards are needed, but it remains to be seen how major companies will work with U.S. government departments.

An additional challenge is the nature of AI technologies. Unlike nuclear weapons, for example, AI is a multipurpose technology that does not require special materials. Access to advanced chips and supercomputers is necessary for more sophisticated applications, but the hardware and software needed for many AI applications are not difficult to obtain.

Advisory bodies and panels are helpful. This is also true of informal processes and even voluntary standards. Still, at the end of the day, AI governance is, and will remain, firmly in the hands of states.

AI can be governed. What is needed is a multi-level governance model in which state institutions engage and respond to global institutions. Also needed are formal, legally binding agreements covering the applications that raise the most concern, such as autonomous weapons. Major global powers will need to muster the political will to develop frameworks that will not always reflect their preferences but will serve the wider good. After all, despite the differences between the major powers, it is in their interest to avoid unintended impacts of AI.
