Militaries are responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing “priority” technologies such as artificial intelligence, big data and automation.
Since the war began, the UK has launched a new AI strategy specifically for defence, and Germany has earmarked just under half a billion dollars for AI research as part of a $100 billion injection into its military.
“War is a catalyst for change,” says Kenneth Payne, who heads defence studies research at King’s College London and is the author of I, Warbot: The Dawn of Artificially Intelligent Conflict.
The war in Ukraine has added urgency to the push to put more AI tools on the battlefield. Those with the most to gain are startups like Palantir, which hope to cash in as soldiers race to update their arsenals with the latest technologies. But long-standing ethical concerns about the use of AI in warfare have become more pressing as the technology becomes more advanced, while the prospect of restrictions and regulations governing its use seems as remote as ever.
The relationship between technology and the military was not always so friendly. In 2018, after protests and employee outrage, Google pulled out of the Pentagon’s Project Maven, an attempt to build image recognition systems to improve drone strikes. The episode sparked a fierce debate about human rights and the morality of developing AI for autonomous weapons.
It also prompted high-profile AI researchers, including Turing Award winner Yoshua Bengio and DeepMind founders Demis Hassabis, Shane Legg and Mustafa Suleyman, to pledge not to work on lethal AI.
But four years later, Silicon Valley is closer to the world’s militaries than ever. And it’s not just big companies — startups are finally getting a look in, says Yll Bajraktari, formerly the executive director of the US National Security Commission on AI (NSCAI) and now part of the Special Competitive Studies Project, a group that lobbies for greater AI adoption across the board.
Companies selling military AI make expansive claims about what their technology can do. They say it can help with everything from the mundane to the deadly: reviewing resumes, processing satellite data, or recognizing patterns in data to help soldiers make faster decisions on the battlefield. Image recognition software can help identify targets. Autonomous drones can be used for surveillance or strikes in the air, on land or at sea, or can deliver supplies to soldiers more safely than ground transport allows.