On May 22–23, EWI's Vice President of the Asia-Pacific program, Dr. Lora Saalman, joined the first in a series of workshops on "Mapping the impact of machine learning and autonomy on strategic stability and nuclear risk," hosted by SIPRI in Stockholm.
Responding to some of the initial findings from the workshop, Dr. Saalman commented: "Even in the European and trans-Atlantic context, where there would seem to be the most agreement on the impact of machine learning and autonomy on nuclear risk, there is still a great deal of work to be done. Participants noted that while the unpredictability of 'black box' systems may be useful against an adversary, such systems do not fulfill the demands of a commander who seeks predictability in his or her own weapon systems. In addressing how these issues interact with international law, Article 36 weapon reviews were cited as anchors for the legal compliance of systems and even for future controls and norms. However, some argued that the inability of Article 36 reviews to capture decision-support systems points to the need for a supplement. While the workshop largely focused on risks brought on by new technologies, experts also emphasized that machine learning and autonomy can have positive applications in the realm of nuclear controls and confidence-building measures, including better surveillance to protect and verify nuclear facilities, facilitating the measurement of fissile material production, and supporting disarmament measures."
Click here to read the SIPRI event post.