Rīga STRATCOMCOE report calls for "algorithmic transparency"

A new publication from the Rīga-based NATO Strategic Communications Centre of Excellence (STRATCOMCOE) takes an original look at the problems currently plaguing social media: the dissemination of fake news, propaganda, and outright lies.

Titled "Government Responses to Malicious Use of Social Media", the report was written by researchers at the University of Oxford in the United Kingdom.

"Since 2016, at least 43 countries around the globe have proposed or implemented regulations specifically designed to tackle different aspects of influence campaigns, including both real and perceived threats of fake news, social media abuse, and election interference. Some governments are in the early stages of designing regulatory measures specifically for digital contexts so they can tackle issues related to the malicious use of social media. For others, existing legal mechanisms regulating speech and information are already well established, and the digital aspect merely adds an additional dimension to law enforcement," the authors write in their introduction.

"Blocking, filtering, censorship mechanisms, and digital literacy campaigns have generally been the cornerstones of regulatory frameworks introduced in most countries, but with the growing challenges surrounding disinformation and propaganda, new approaches for addressing old problems are flourishing. This paper provides an updated inventory of these new measures and interventions," the report promises.

Nearly 20 pages of research later, the report concludes that "There is no simple blueprint solution to tackling the multiple challenges presented by the malicious use of social media. In the current, highly-politicised environment driving legal and regulatory interventions, many proposed countermeasures remain fragmentary, heavy-handed, and ill-equipped to deal with the malicious use of social media. Government regulations thus far have focused mainly on regulating speech online—through the redefinition of what constitutes harmful content, to measures that require platforms to take a more authoritative role in taking down information with limited government oversight."

"In the future, we encourage policymakers to shift away from crude measures to control and criminalise content and to focus instead on issues surrounding algorithmic transparency, digital advertising, and data privacy. Thus far, countermeasures have not addressed issues surrounding algorithmic transparency and platform accountability.

"As [developers of] algorithms and artificial intelligence have been protective of their innovations and reluctant to share open access data for research, technologies are black-boxed to an extent that sustainable public scrutiny, oversight and regulation demands the cooperation of platforms," the report concludes.

"However, harmful content is only the symptom of a much broader problem underlying the current information ecosystem, and measures that attempt to redefine harmful content or place the burden on social media platforms fail to address deeper systemic challenges, and could result in a number of unintended consequences stifling freedom of speech online and restricting citizen liberties," the authors warn.

The full report is available to download online free of charge.

