Consuming online news: do we need new standards for citizens and consumers?

01 December 2018

“Now is the time to bring democratic standards to the internet — ones that let us own and articulate how our digital society should work… If we combine our civic society, legal, academic, business and technical expertise, we can set a standard for the world. But we need to act now, or else we’ll lose control of our digital destinies.”

- Martha Lane Fox

Drawing on our research into online social media scams, digital expert Xanthe Couture examines how social media platforms shape information-sharing between citizens and consumers.

The ways we consume online news

Standards around quality and reliability exist for the papers we read, the radio we listen to and the TV shows we watch. These standards are debated and refined through national and international rules, as well as guidelines that put consumers at the centre.

Yet the news we access through Facebook, Twitter, WeChat, YouTube, Reddit and other social media platforms is created and shared in new ways, while the lines between advertising, news and personal messages become increasingly blurred.

In addition, consumers are being targeted in ways that traditional news channels and advertising never could. As recent cases have highlighted, personal data, including preferences, views and activities, can be misused by individuals and firms without consumers' consent. Bots, which are automated accounts, can be programmed to direct their tweets at influential users, making misinformation and fake news even more likely to be shared.

Research has found that bots can also spread inflammatory news articles on social media, fanning tension on controversial issues at a national level. False rumours and misinformation can also spread more deeply and quickly than the truth, precisely because they arouse strong emotions.

Cases being investigated by election watchdogs on both sides of the Atlantic highlight that the way consumer data is used on social media platforms to target advertising, including political messages and news stories, has become an issue that goes beyond the quality of the information and consumer trust.

These are not just problems for citizens whose political views risk being manipulated; they also raise important questions about our rights and expectations as consumers when personal data is used to target information online.

What next?

The design of social media platforms means they act as intermediaries, offering a service somewhere between a platform and a publisher, with no clear pathway for redress if something that has been posted is untrue or harmful.

Yet given the growing number of elections around the world where online misinformation has been found to be a potential influence on outcomes, questions are now being asked as to whether the platforms consumers scroll through, comment on and share are liable for what goes on their sites, or whether they are merely the pipes carrying the content.

For example, a recent study of 10 elections across nine African nations between June 2017 and March 2018 found that bots were used as an important means of spreading misinformation on major issues, candidates and perceived electoral irregularities.

This question has wide-reaching ramifications for regulatory options, and links to the responsibility of platforms to uphold consumer rights: consider, for example, a digital platform like Amazon's responsibility to remove unsafe products or take down scams.

Evidence gathered by the recent UK parliamentary inquiry into disinformation and fake news has found that, in this new world where platforms and publishers merge, the consumers of social media have become the companies' product. The inquiry has put forward several options that could establish better protection for consumers.

These measures could include verifying and grading information on the internet according to agreed definitions or criteria; ensuring national advertising regulators have powers to protect consumers from misleading and harmful digital advertising and supply chains; or changing national laws to make it clearer what kinds of consumer harm tech companies may be liable for. There are also suggestions for changing user interfaces.

To agree standards around tackling misinformation, the European Commission has recently published a set of voluntary measures and best practices developed with the platforms and the advertising industry. The Code of Practice on Disinformation includes measures to ensure better transparency of ad placements, close fake accounts and help consumers better identify promoted content.

Anni Hellman, the Deputy Head of Unit Media Convergence and Social Media at the European Commission’s DG Connect, says: “The Code is a good step towards effective collaboration between platforms as well as the advertising industry, in committing to fight disinformation and taking up concrete measures aiming at measurable results.

We are closely following progress made on implementation of the Code, and if the results prove unsatisfactory, the Commission could propose regulatory measures.”

Ways to increase quality and consumer safety

It is widely accepted that social media channels are increasingly replacing, or being used alongside, traditional sources of news content. 66% of Brazilians reported using social media as a news source in the previous week, compared to 45% of Americans and 39% of Britons.

Therefore, some argue that the platforms and internet providers should help sustain the ecosystem for investigative journalism, so that quality is encouraged by a sustainable free press as the switch to digital news continues. There are also proposals to use funds to better resource regulators, expand their role in monitoring influential digital platforms, and establish digital literacy initiatives for consumers.

At the very outset, any workable plan requires meaningful transparency from social media companies on what data they hold on consumers, how fake news, scams and ‘dark ads’ target consumers, and how to stop this in future.

Charlie Beckett, Director of Polis, the Media Policy Project and the LSE Truth, Trust and Technology Commission explains, “Social media platforms are big and complicated. What we shouldn’t do is compromise the benefits of freedom of expression and wider societal benefits these platforms bring to consumers.

The regulation of social media platforms should be viewed as an opportunity to ensure competition and diversity of information, so that news organisations can benefit from the innovative access that these platforms bring.

While there are examples of platforms working carefully to improve moderation, transparency and accountability, any voluntary code should be the minimum measure. A special mechanism, such as national specialist institutions, could be established to enforce responsibility and hold the platforms to account, with its powers negotiated amongst policy makers and the public.”

While there are no quick wins or easy solutions to protecting the integrity of the online spaces we all use, and a persistent need to protect free speech, it is only by asking the difficult questions and having all interested players in the room that solutions to misleading and fake information on social media can be uncovered.

We are at a turning point in the conversation, and there is a fundamental role for civil society to take the lead in demanding robust solutions from government and tech, so that consumers around the globe can trust social media products and services.