Weapons powered by artificial intelligence pose a frontier risk and need to be regulated
Major advances in artificial intelligence (AI) technology in recent years have led to the growing sophistication of military weaponry. Among the most noteworthy new devices are Lethal Autonomous Weapon Systems (L.A.W.S.): machines engineered to identify, engage and destroy targets without human control.
These machines, and the risks posed by their deployment, have long been fodder for science fiction, from the German film Metropolis in 1927 to Ray Bradbury’s 1953 dystopian novel Fahrenheit 451. But L.A.W.S. are no longer mere speculation. AI systems can now independently use data, predict human behavior, gather intelligence, perform surveillance, identify potential targets and make tactical decisions in virtual combat. The use of robots in the military is becoming commonplace, and the leap to fully autonomous robots optimized for warfare is already realistic. The big question now is not what kinds of L.A.W.S. could exist, but whether they pose a frontier risk to humanity.