Unintended consequences and malicious uses of AI
News stories on artificial intelligence (AI) herald advancements in medicine, applaud the ability of programs to defeat human players in complex games, and anticipate the ability to read our minds. While it is important to highlight these positive contributions, we must stay alert to the full range of effects of AI on our society and the world—the unintended as well as the deliberately harmful.
How could “bad guys” use or misuse AI to threaten regional, national, even global security? Answers can be found in The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, a report released last month by 26 researchers from U.S. and British universities, think tanks, and nongovernmental organizations.
As the report shows, digital, physical, and political security is already being compromised. For example, non-state actors are weaponizing drones.
In early January, a Russian air base in Syria was attacked by 10 small homemade drones armed with explosives. The attack was not particularly sophisticated, and Russian forces were able to capture or disable the drones. This was not the first time militants or terrorists had equipped drones with explosives. However, according to journalist David Hambling, the Syrian attack marked the first time a “significant swarm” had been used.
Swarm software will likely be commercially available soon; Intel, for example, has already used drone swarms for light shows. Software developed for entertainment could be repurposed to enable swarm attacks at a scale we have not yet seen. In many ways, the attack on the Russian air base epitomizes the challenges of dual-use technology.
The “Malicious AI” report urges engineers and AI researchers to consider, from the earliest stages of development, the ways in which the technology can be misused. This need for responsible and ethical discussion has been recognized by many scientists working in AI and such related fields as robotics. Scientists, developers, and industry leaders have co-authored open letters that urge governments to address concerns related to autonomous weapons. At least one company, Clearpath Robotics of Waterloo, Ontario, has stated that it will not develop killer robots.
Many coders accept responsibility for AI’s impact on society. According to the latest Stack Overflow annual developer survey of 100,000 coders in 183 countries, 47.8 per cent of respondents felt that those creating AI are “ultimately most responsible for the societal issues” surrounding AI, while 27.9 per cent felt that the government and other regulatory bodies were responsible. Industry leaders were held responsible by 16.6 per cent, while 7.7 per cent believed that no one should be held responsible.
But how does responsibility play out in actual research and development? Are possible adverse effects a concern at all stages of development? Or is it likelier that researchers and developers focused on solving a particular problem will not immediately consider such effects, or will come to feel that they cannot be held accountable for unintended uses?
At the Artificial Intelligence, Ethics and Society conference in New Orleans in February, one presentation described an algorithm that could identify gang-related crimes from just four pieces of information: the weapon used, the number of suspects, the neighbourhood, and the exact nature of the location (an alley or a street corner, for example). Some police forces already use predictive policing technologies that suggest where crimes are likely to occur, and this algorithm fits into that trend.
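To give a rough sense of what a classifier built on those four inputs might look like, here is a minimal, purely illustrative sketch. The feature values, data, and model choice are assumptions made for the example; they are not taken from the presentation or its underlying research.

```python
# Hypothetical sketch only: a toy classifier over the four features described
# above (weapon, number of suspects, neighbourhood, location type). The records
# and labels below are invented for illustration and have no relation to any
# real dataset or to the model presented at the conference.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic incident records (entirely made up).
incidents = [
    {"weapon": "handgun", "num_suspects": 3, "neighbourhood": "A", "location": "alley"},
    {"weapon": "knife",   "num_suspects": 1, "neighbourhood": "B", "location": "street corner"},
    {"weapon": "handgun", "num_suspects": 4, "neighbourhood": "A", "location": "street corner"},
    {"weapon": "none",    "num_suspects": 1, "neighbourhood": "C", "location": "residence"},
]
labels = [1, 0, 1, 0]  # 1 = flagged as gang-related, 0 = not (fabricated labels)

# One-hot encode the categorical features and fit a simple logistic regression.
model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
model.fit(incidents, labels)

# Score a new incident. In practice such a prediction would carry real
# consequences for real people, which is exactly why the ethical questions
# raised at the conference matter.
print(model.predict([{"weapon": "handgun", "num_suspects": 2,
                      "neighbourhood": "A", "location": "alley"}]))
```

Even in this toy form, the sketch makes the risks easy to see: whatever bias sits in the training labels and the neighbourhood feature is learned and reproduced by the model.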
But the presenters gave no meaningful answers to questions about unintended side effects, such as misidentifying someone as a gang member, or possible bias in the data used. In response to the ethical concerns raised, one presenter, a computer science professor at Harvard, said, “I am just an engineer.” Such a response was not well received. As conference attendees and contributors to social media indicated, engineers SHOULD be considering exactly these ethical questions.
Researchers whose work is tied to the military and law enforcement must be especially reflective. Recently, news emerged that Google was working with the U.S. Department of Defense to develop AI programs that would improve the analysis of drone footage by identifying objects. Some Google employees were outraged. Although Google spokespeople were quick to reassure everyone that the technology was being used for non-offensive purposes only, it is worth asking how it could be used offensively and what the unintended consequences of its use might be.
What should be done? Viable options exist. The recommendations in the “Malicious AI” report include having developers and regulators give greater consideration to ethical concerns, unintended consequences, and malicious uses of the technology. And while dual-use AI technology might seem to defy effective government regulation, especially around weaponization, citizens and consumers should demand responsible, cautious, informed use and regulation based on sound humanitarian principles.
Photo: An artist’s rendition of a swarm of drones (iStock/Getty Images Plus)