In 2019, guards on the borders of Greece, Hungary, and Latvia began testing a lie detector powered by artificial intelligence. The system, called iBorderCtrl, analyzed facial movements to try to spot signs that a person was lying to a border agent. The trial was backed by nearly $5 million in European Union research funds and almost 20 years of research at Manchester Metropolitan University in the UK.
The trial caused controversy. Polygraphs and other technologies designed to detect lies from physical attributes have been widely dismissed by psychologists as unreliable. Soon, errors were reported with iBorderCtrl as well: media reports indicated that its lie-prediction algorithm did not work, and the project’s own website acknowledged that the technology “may imply risks to basic human rights”.
This month, Silent Talker, the company spun out of Manchester Met that created the technology behind iBorderCtrl, went under. But that’s not the end of the story. Lawyers, activists and lawmakers are pushing for a European Union law to regulate AI that would ban systems claiming to detect human deception in migration, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.
A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by EU officials and members of the European Parliament. The legislation is intended to protect the fundamental rights of EU citizens, such as the right to live free from discrimination or to seek asylum. It labels some use cases of artificial intelligence “high risk”, some “low risk”, and bans others outright. Those lobbying to change the AI Act include human rights groups, trade unions, and companies like Google and Microsoft, which want the act to distinguish between those who build general-purpose AI systems and those who deploy them for specific uses.
Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the act to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, published an analysis warning that the AI Act as written would permit systems like iBorderCtrl, adding to Europe’s existing “publicly funded border AI ecosystem”. The analysis calculated that, over the past two decades, roughly half of the 341 million euros ($356 million) in funding for the use of AI at borders, such as profiling migrants, went to private companies.
Reliance on AI lie detectors at borders effectively creates new immigration policy through technology, one that labels everyone as suspect, says Petra Molnar, associate director of the nonprofit Refugee Law Lab. “You have to prove you’re a refugee, and you’re presumed to be a liar unless proven otherwise,” she says. “That logic underlies everything. It supports AI lie detectors and supports more surveillance and enforcement at borders.”
Molnar, an immigration lawyer, says people often avoid eye contact with border or immigration officials for innocuous reasons, whether culture, religion, or trauma, but that this is sometimes misinterpreted as a sign that a person is hiding something. Humans often struggle to communicate across cultures or to speak with people who have experienced trauma, she says, so why should anyone believe that a machine can do better?