Commitment to Control Weaponised Artificial Intelligence: A Step Forward for the OSCE and European Security
- Author:
- Anna Nadibaidze
- Publication Date:
- January 2022
- Content Type:
- Policy Brief
- Institution:
- The Geneva Centre for Security Policy
- Abstract:
- Recent technological and political developments in OSCE participating States suggest a strong interest in pursuing, testing and using weaponised AI and weapons systems with increasingly autonomous, algorithm-controlled features. In May 2021 Russian Defence Minister Sergei Shoigu announced that Russia had begun producing combat robots “capable of fighting on their own”,[1] while the French Army plans to introduce robotic systems by 2040.[2] The United Kingdom (UK) government has stated its objective of achieving “a leading role in critical and emerging technologies”[3] and has established a Defence Artificial Intelligence and Autonomy Unit to better understand them.[4] In the United States, the National Security Commission on Artificial Intelligence has urged the government “not [to] be a witness to the AI revolution in military affairs”.[5] The global discussion about autonomous weapons systems is often framed in futuristic terms and focuses on lethal autonomous weapons systems (LAWS), colloquially called “killer robots”, or the “AI arms race”. But weaponised AI is already a reality of European security. Thus far, participating States have been reluctant to use the OSCE platform to address the risks posed by the increasing autonomy of weapons systems. Against this background, this brief addresses two questions: (1) how does the lack of regulation of weaponised AI affect security and stability in Europe? and (2) what role can the OSCE play in mitigating the risks related to weaponised AI?
- Topic:
- Science and Technology, Military Strategy, Artificial Intelligence, and Emerging Technology
- Political Geography:
- Global Focus