
EU panel calls for ban on AI for mass surveillance

28 June 2019



A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass “scoring of individuals”.

For those who came in late, there is some concern about using AI to collect data about citizens — everything from criminal records to their behaviour on social media — and then using it to assess their moral or ethical integrity.

The recommendations are part of the EU’s ongoing efforts to establish itself as a leader in so-called “ethical AI”.

The new report identifies areas of AI research that require funding, encourages the EU to incorporate AI training into schools and universities, and suggests new methods for monitoring the impact of AI.

Among its few concrete recommendations, the report suggests that the EU should ban AI-enabled mass scoring and limit mass surveillance.

The fear of AI-enabled mass scoring stems mainly from reports about China’s nascent social credit system. That programme is often presented as a dystopian tool that will give the Chinese government huge control over citizens’ behaviour, allowing it to dole out punishments (like banning someone from travelling on high-speed rail) in response to ideological infractions (like criticising the Communist party on social media).

So far, though, the system has been more benign than such reports suggest. It is split among dozens of pilot programmes, most of which focus on stamping out everyday corruption in Chinese society rather than punishing would-be thought crime.

Experts have also noted that similar systems of surveillance and punishment already exist in the West, except that they’re run by private companies rather than overseen by governments. With this additional context, it’s not clear what an EU-wide ban on “mass scoring” would actually entail. Would it extend to the activities of insurance companies, creditors, or social media platforms, for example?

Elsewhere in today’s report, the EU’s experts suggest that citizens should not be “subject to unjustified personal, physical or mental tracking or identification” using AI.

This might include using AI to identify emotions in someone’s voice or to track their facial expressions, the experts suggest. But again, companies are already deploying these methods for tasks such as tracking employee productivity.

Fanny Hidvegi, a member of the expert group that authored the report and a policy analyst at nonprofit Access Now, said the document was overly vague, lacking “clarity on safeguards, red lines, and enforcement mechanisms.” Others involved have criticised the EU’s process for being steered by corporate interests.

Philosopher Thomas Metzinger, another member of the AI expert group, has pointed out that the group’s initial “red lines” around how AI should not be used were watered down to mere “critical concerns”.

 
