Ploughshares at Work: An interview with Senior Researcher Dr. Branka Marijan
Published in The Ploughshares Monitor Volume 42 Issue 3 Autumn 2021
Project Ploughshares: Branka, you analyze the military and security implications of emerging technologies—a program area you developed. Explain how this came about.
Branka Marijan: When I started with Ploughshares in 2015, I did a scan of our work and saw that new technologies were transforming and amplifying existing security concerns across our programs—outer space security, arms control, the abolition of nuclear weapons, the nature and causes of armed conflict.
Since its founding in 1976, Ploughshares had been focused on disarmament and effective arms control. This focus remained. But it was clear that we also needed to formulate a response to the challenges of modern tech.
As an organization, we decided to weigh in on potentially harmful developments, such as autonomous weapons, and to examine the security and humanitarian benefits of new technologies like open-source intelligence, which can monitor and track weapons and military technologies. I believe this decision displayed the foresight that peace organizations need: to be responsive and to offer more diplomacy-oriented and inclusive worldviews that counter those dominated by conflict and power politics. While I specifically examine new technologies of warfare and the security applications of these technologies, my colleagues are also paying attention to the technological transformations in their respective areas.
Our perspective adds value and direction to national and global conversations on modern technology, especially tech aided by artificial intelligence (AI). Yes, it can be challenging to raise security and defence concerns with a group of technologists who don’t envision their technologies on a battlefield or being used to violate human rights. But such conversations are necessary, because many modern technologies are multi-use: developed for one purpose but easily weaponized or adapted for others.
I think the decision we made in 2015 was the right one and really put us at the forefront of some of the discussions happening now in Canada and globally.
PP: Analyzing emerging technologies can be, well, pretty technical. Are you a techno-nerd? What particular skills or traits do you bring to this study?
BM: Until I started at Ploughshares, I really wasn’t a techno-nerd. As a kid, I was fascinated by innovations and their societal and global impacts. And I’m pretty intuitive with tech. I have taken great care to learn from technologists who can communicate with a non-specialist audience. Now I consider myself a proud member of the techno-nerd community!
More seriously, my formal university education in the social sciences provided key transferable skills: analytical thinking, a facility in research, and adaptability. I learned to examine each problem in its social context. This training has given me a perspective that I believe is badly needed—and often absent—in discussions on technology. Techno-optimism dominates modern society. While not all optimism is misplaced, technology is often adopted uncritically. Only after some misuse or abuse are shortcomings even considered.
My role is to encourage policy circles to adopt a critical perspective much earlier. Because technological change is not inevitable or linear. It is the result of human choices, shaped by policies and standards and ethics and personal viewpoints. Technology can and must be controlled and shaped to meet human needs.
PP: Go into a little more detail about your education and background.
BM: My path to this work has been a bit unconventional. My PhD thesis focused on peacebuilding in divided societies, specifically Bosnia-Herzegovina and Northern Ireland. I came to understand how important all members of society are in the peacebuilding process. Top-down initiatives often fail if they are not supported by bottom-up initiatives.
I have retained the belief that multi-level governance is critical to ensure policy follow-through. Treaties and political declarations at the international level are important and needed, but we also need national legislation and codes of conduct and standards for people building the technologies.
And, as a civilian survivor of war, I never forget—and describe as often as I can—the impacts that conflict has on ordinary people. My own story fuels my drive to save others from experiencing the effects of armed conflict.
PP: You bring a lot of energy, skill, and commitment to your work. What does that work look like these days?
BM: My biggest current research project relates to the responsible uses of AI in defence. The discussion on autonomous weapons had stalled at the United Nations Conference on Certain Conventional Weapons, but efforts to reenergize it began with two weeks of discussion in August and more scheduled over the next few months. I will be paying close attention to these discussions and doing more research and writing on these issues.
I am also trying to keep up with new technologies that are being introduced or sped up in response to the pandemic. I’m concerned about some of the ways in which this tech, like facial recognition, is being used. I am also beginning more writing on the need for data protections and tools that preserve privacy. All of these concerns relate directly to human security.
PP: A helpful overview. Can we discuss the responsible uses of AI first? With the help of a Mobilizing Insights in Defence and Security (MINDS) grant from the Canadian Department of National Defence, you are researching efforts to regulate and control the military uses of AI. What research findings were most promising? Worrying?
BM: The effort to develop clearer norms for the use of AI in defence is promising, but it is largely happening among traditional allies. My research focuses on the countries in the United States-led “AI Partnership for Defense”—Australia, Canada, Denmark, Estonia, Finland, France, Israel, Japan, Norway, the Republic of Korea, Sweden, and the United Kingdom—as well as likeminded states such as Germany and Spain.
Some of their efforts are in response to perceived Chinese and Russian use of AI for military purposes. The United States, in particular, seeks to thwart such efforts. Canada, with its AI talent, is a valued ally.
But more effort is needed to develop global norms and standards. Global norms are key to ensuring that misperceptions about intent and capability do not result in the development and use of technology that is immature and insufficiently tested.
The challenge is to achieve agreement on norms that respond to concerns for transparency and confidence building. Many states see AI as providing an advantage over their adversaries and are unlikely to fully disclose their capabilities. Without full transparency, some parties will not be confident in participating in agreements, fearful of unknowingly losing an advantage.
In the end, we all lose because we must contend with unreliable and unpredictable technology.
Still, efforts by allies to develop standards and norms will shape the types of policy responses that emerge. Countries outside these circles recognize this; they are paying attention and see the need to develop their own approaches. As a result, opportunities are emerging to build shared understandings of responsible AI behaviour at a more global level and to introduce confidence-building measures that support international stability.
PP: As we have already discussed, a lot of the tech you study isn’t used only by the military. It’s also used by police and other domestic security agencies. And, to be clear, we should note that you aren’t examining ALL new tech. You are looking at tech that is used in various security operations. This includes data collection and analysis, facial recognition, tracking, and surveillance.
BM: Yes, that’s correct. As I said earlier, much of this tech is multi-use. For example, the company Clearview AI’s facial recognition technology was used by local police services across Canada and by the Royal Canadian Mounted Police. However, it turns out that Clearview AI broke Canada’s privacy laws when it scraped images from public websites and social media profiles and assembled the information in a database to sell to clients.
I believe that, as a society, we must be concerned about how data is collected, what data is collected, how it is used, and how governments regulate the process. In the United States, innocent people are being apprehended because their location-tracking device places them near the scene of a crime.
We must also consider the role of industry and how information is being collected and used. Some concerns relate more to privacy and consumer protections than security and defence. Consider, for example, insurance companies that monitor the driving of their clients with an app.
Still, in some countries, governments can track their own citizens, monitoring their social and political engagements across a number of platforms and applications and collecting seemingly innocuous information, such as the use of particular applications. Some authoritarian regimes try to use tech to quell dissent from their diaspora communities. In conflict zones, peacebuilders encounter social media disinformation campaigns. Recent cyber attacks by what are believed to be state-affiliated groups show how important protecting citizens’ data is to national security. Technology is transforming the global security environment, and we need to pay attention to these shifts.
PP: I know that you often collaborate or connect with other groups. Can you talk about that?
BM: Acting with civil-society colleagues through networks such as the Campaign to Stop Killer Robots amplifies common concerns and raises global awareness. This sort of networking has also introduced me to some incredible thinkers and doers from around the world. Their perspectives are critical in shaping my understanding.
I often connect with academic experts, who help me see trends and deepen my understanding of technologies and their impacts. Many in the science and technology communities are very helpful in explaining their work so that a non-specialist can understand; they even offer useful social and political perspectives.
Civil society and international organizations, such as the International Committee of the Red Cross, are at the forefront in pushing for better regulations and policies. Working with them magnifies the impact of a small organization like Project Ploughshares. They help us punch above our weight.
Photo: Branka Marijan at the Civic Tech Toronto Presentation on Killer Robots in 2019.