AI’s Oppenheimer moment?

August 8, 2023

Photo: J. Robert Oppenheimer is seen in a still from documentary footage made of the assembly of the "Trinity" gadget, the first atomic bomb tested in New Mexico in July 1945. Public Domain

The recently released biopic Oppenheimer raises awareness of the ultimate weapon as J. Robert Oppenheimer, “the father of the atomic bomb,” struggles with his own ultimate question: was he responsible for the destruction of Hiroshima and Nagasaki and the nuclear arms race?

Being accountable then and now

We know that Oppenheimer, who directed the Los Alamos Laboratory that developed the first atomic bombs in the 1940s, came to regret his role. He made unsuccessful efforts to stop the nuclear arms race and felt guilt about the development of atomic weapons for the rest of his life. But should some of this burden of culpability or accountability be shared with others? Many engineers and scientists from both sides in World War II worked to unleash the power of the atom. Could they not have seen where their research was going and done something to control or limit it?

Oppenheimer director Christopher Nolan remarked in an interview that leading researchers of artificial intelligence (AI) see the current time as their “Oppenheimer moment.” It would appear then that AI researchers recognize that they bear some responsibility for how the technology they create is used. And perhaps this time, there is enough time to act responsibly and prevent horrific and unintended consequences.

Controlling AI

However, AI is proving to be even harder to manage than nuclear weapons. Because so much benign and commercial AI technology can also be used to harm opponents – and society in general – scientists and engineers can find themselves indirectly contributing to major geopolitical struggles like the current competition between the United States and China, and wars like the ongoing conflict in Ukraine. Ukraine, in particular, has shown the extent to which advances in computer vision technology, developed for a variety of commercial purposes, can be adapted to a conflict context. While this technological ingenuity has allowed Ukraine to fight back against an invading force, wider adoption of AI technologies will transform the global security environment. Some of the technology will certainly be used by non-state armed groups.

It might seem obvious that the message of responsible technological development would resonate with members of the AI community. And it has with some. Over the past several years, some technologists have called on companies such as Google and Microsoft not to work on surveillance and AI weapons.

Not everyone's goal

But others involved in AI have more complex responses and don’t seem to take the same message from the Oppenheimer case. For example, both Sam Altman, CEO of OpenAI, the company behind ChatGPT, and Tesla CEO Elon Musk have spoken out about the existential risk that AI poses to humanity and have called for an international regulatory body or a pause on AI research.

But both appear to have missed the connection between Oppenheimer's role in developing destructive weapons and the role of contemporary scientists and researchers in the field of AI. Altman tweeted: “i was hoping that the oppenheimer movie would inspire a generation of kids to be physicists but it really missed the mark on that. let's get that movie made! (i think the social network managed to do this for startup founders.)”

To this comment Musk responded: “Indeed.”

They are not alone.

In an op-ed in The New York Times, Palantir CEO Alex Karp does refer to AI’s Oppenheimer moment but avoids any deep reflection on accountability. Instead, Karp argues that the United States must immediately adopt AI or cede its position of world dominance to China. He sees AI as the key factor in determining global power dynamics. And so Karp calls on AI researchers in the United States and allied countries to seize this moment to prioritize technological opportunities – including the development of AI weapons – to ensure Western global dominance.

Karp seems to see the loss of lives – particularly non-Western lives – to AI weapons as inevitable, and a price worth paying. The key concern for Karp is maintaining the U.S. technological edge over China. Former Google CEO Eric Schmidt has expressed similar sentiments.

This AI nationalism reflects a self-proclaimed patriotic movement that calls on the brightest minds to build the “sharpest tools” to compete with China. But such a response leads to the conclusion that regulating new AI technologies – and assuming responsibility for them – would be detrimental to Western security interests.

Avoiding unintended consequences

More thought must be given to controlling the development of AI technologies – particularly military applications of AI – if the world is to avoid possibly dire effects. In the immediate and medium terms, the main danger lies in AI’s unpredictability, its persistent brittleness in complex contexts, and the push for further integration of AI into weapons systems and platforms. Countries such as Israel are already using AI to select targets for airstrikes.

These risks are real, not imaginary or theoretical. Yet the global response, particularly to military and security applications of AI, has been slow and inadequate. Discussions on autonomous weapons at the United Nations Convention on Certain Conventional Weapons (CCW) have been taking place for nearly a decade but have not moved beyond the development of voluntary principles.

Given the stalled CCW discussions, a number of states such as Costa Rica are calling for the creation of specific, legally binding instruments – developed outside the CCW, if necessary. Any state-led initiatives will need to move quickly to catch up with the technological advances emanating from an almost totally unregulated industry.

Still, it is also necessary for the AI community to reflect deeply on how it can ensure responsible development of technologies. Warnings of potential dangers are not enough.

Oppenheimer saw his work contribute to a nuclear arms race that produced weapons capable of destroying all life on Earth. If the present time is AI’s Oppenheimer moment, we must do everything in our power not to repeat history.
