Government’s AI adviser calls for tougher regulation of social media
4 February 2020, 00:04
The Centre for Data Ethics and Innovation says online targeting should be a part of any new legislation around internet harms.
New regulations on social media companies should be tightened to include more transparency around how firms target users online, an independent advisory board on artificial intelligence (AI) has said.
The Centre for Data Ethics and Innovation (CDEI) has urged the Government to also focus on online targeting because the public is concerned about how the technology is being used.
In a new report, it said an analysis of public attitudes on the issue found that many appreciated the value of targeting – where platforms use people’s online habits to target them with content they believe will interest them – but were also concerned about the potential for such data to be exploited.
The CDEI has published three sets of recommendations as part of its research: it urges the Government to hold any companies which use online targeting to a higher standard of accountability, calls for greater transparency of online targeting systems, and asks that users be given more control over how they are targeted.
A number of high-profile internet and social media platforms, including Google and Facebook, use different forms of online targeting to show users adverts or other content which they believe will interest them.
CDEI chairman Roger Taylor said: “Most people do not want targeting stopped. But they do want to know that it is being done safely and responsibly. And they want more control.
“Tech platforms’ ability to decide what information people see puts them in a position of real power. To build public trust over the long term it is vital for the Government to ensure that the new online harms regulator looks at how platforms recommend content, establishing robust processes to protect vulnerable people.”
Last year, a White Paper on online harms published by the Government proposed stricter regulation for internet and social media companies, including a statutory duty of care and measures to increase web safety, particularly in protecting young and vulnerable people from illegal content. Under the proposals, tech giants would be liable to fines or criminal prosecution if they breach their responsibilities.
According to the CDEI report, only 29% of people trust platforms to target them in a responsible way, and 61% said they were in favour of greater regulatory oversight of online targeting.
Only around a third of those asked (34%) trust internet companies to change their settings when users ask them to.
Last month, a separate report by the Royal College of Psychiatrists said social media giants should be forced to hand over data and pay towards research into their potential harms.
In response to the CDEI research, Dr Bernadka Dubicka, chairwoman of the child and adolescent faculty at the Royal College of Psychiatrists, said: “We completely agree that there needs to be greater accountability, transparency and control in the online world.
“It is fantastic to see the Centre for Data Ethics and Innovation join our call for the regulator to be able to compel social media companies to give independent researchers secure access to their data.”