Recent US Military Decisions:

In response to the US Army’s recent decision to develop drones that use artificial intelligence (AI) to spot and target vehicles and people, Security and Risk Theme Director Dr Peter Lee has written an article for The Conversation expressing concerns about the impact such a decision may have on future warfare.

The Implications:

Dr Lee discusses the implications that leaving such decisions to drones will have on the humanity of warfare, and draws attention to recent failed civilian experiments with artificial intelligence, such as those carried out by Uber and Tesla, which resulted in civilian deaths.

Dr Lee writes, “If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed. In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.”

To find out more, you can read the full article, “Drones will soon decide who to kill”, on The Conversation.