At a meeting in Geneva Dec. 13-17, 2021, the United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons systems, the Terminator-style killer robots of popular imagination, and failed to place restrictive controls on the development of such lethal weaponry. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.
The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people.
Human operators of drone strikes already misidentify targets with troubling frequency, as in the recent U.S. drone strike in Afghanistan. When selecting a target, will weaponized artificial intelligence be able to distinguish between hostile soldiers and children playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? We already have an example of image recognition software used by Google labeling Black people as gorillas. AI systems err, and when they err, their makers often don't know why and, therefore, don't know how to correct them.
Lastly, how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier supposedly at the touch-pad? The soldier's commanders who issued the instructions? The corporation that manufactured the weapon's software and hardware?