Social media – it’s time for global legal action against misinformation

1. Introduction

The COVID-19 pandemic created a new environment in which social media demonstrated its negative role as a space for spreading misinformation, fake news and conspiracy theories, ultimately endangering the lives of millions of people.

After a series of cases confirmed that unreliable and manipulative content on social media led to political decisions made without regard to facts, the pandemic has made the regulation of content on social media a global priority.

Attempts at self-regulation by technology companies have not come close to making social media more reliable, nor have they reduced its manipulative influence. Other actors, such as state institutions and professional associations, therefore need to take an active part in this process, despite the risks it carries for the protection of freedom of speech.

2. Social media is, after all, media

During the pandemic, trust in social media fell to a historic low. The steady decline in trust reached its lowest point in 2020, when only one third of people believed in the reliability of the information they find there. As a source of reliable information, social media still ranks far below search engines, traditional media and owned media[1]. Despite such low trust, social media remains a highly influential channel for the spread of malignant content, given the number of its users and its global reach.

Social media and the companies that own them have long avoided any responsibility for the content they broadcast, holding the view that they are not media but merely technological platforms that enable the expression of individual views whose content they cannot influence. This rejection of responsibility, however, has long since become unsustainable, given the abundant evidence that the very lack of content regulation on social media has led to severe negative consequences for various aspects of social life.

Faced with numerous criticisms, accusations and even penalties for failing to prevent the spread of fake news, as well as for abusing its users’ data, Facebook admitted in early 2018 that social media contributes to a negative impact on democracy. According to a senior representative of the company, Facebook determined that Russian actors created some 80,000 posts on Facebook that reached 126 million people in the United States over a two-year period, setting up and promoting fake pages on the network in order to influence people’s attitudes and using social media as an “information weapon”[2].

A giant step in the evolution of social media’s position, from technical service providers with no influence over content to media in the true sense of the word with means of regulating content, occurred after the US presidential election in 2020 and the violence by supporters of the defeated candidate Donald Trump at the US Capitol on January 6, 2021. By massively using social networks as a channel for spreading fake news about a stolen election and for mobilizing supporters to violent resistance, Donald Trump’s movement forced technology companies into radical moves, culminating in the closure of his user accounts, from which he had broadcast calls for a violent coup.


[1] Edelman Trust Barometer 2021, Global Report (https://www.edelman.com/sites/g/files/aatuss191/files/2021-03/2021%20Edelman%20Trust%20Barometer.pdf)

[2] S. Chakrabarti: Hard Questions: What Effect Does Social Media Have on Democracy?, FB Newsroom (2018)

3. Self-regulation is not enough

In the first half of 2020, technology companies removed close to 3.5 billion messages from their social media platforms (Facebook, YouTube, and Twitter) based on various criteria for harmful content. Although most of these were spam removed from Facebook (about 3.3 billion), an impressive number of messages were removed for harmful content such as hate speech (32 million on Facebook), violence, promotion of violence and extremism (3.1 million on YouTube), and hateful conduct (955 thousand on Twitter)[1].

This type of content intervention is partly carried out by software and artificial intelligence, which in a sense could “absolve” these platforms of the classic media role. In addition to this technological component, however, human hands, thousands of them, also take part in evaluating content on social media against certain criteria and deciding its “fate”. On Facebook alone, about 15,000 moderators monitor content and assess whether particular messages violate the rules and thus spread a negative influence on the public[2].

Despite these large resources, technology companies cannot be the only ones entrusted with regulating content on social media, because self-regulation alone cannot suppress malignant influence to the extent necessary.

Nobel laureate Joseph Stiglitz, a strong proponent of social media regulation, primarily because of the proven misuse of users’ personal data, argues that the regulatory space must not be left to technology companies and their self-regulation alone. He insists that all changes must be made publicly, and proposes rules on what user data companies may store, what data they may use, whether they may merge different databases, for what purposes they have the right to use the data, and what degree of transparency they must provide when doing anything with the data they have collected. “All these issues must be decided. You can’t let technology giants decide that. This must be done publicly, with full awareness of the dangers posed by technology companies”[3].


[1] Siripurapu A, Merrow W: Social Media and Online Speech: How Should Countries Regulate Tech Giants?, Council on Foreign Relations, February 2021 (https://www.cfr.org/in-brief/social-media-and-online-speech-how-should-countries-regulate-tech-giants)

[2] Siripurapu A, Merrow W: Social Media and Online Speech: How Should Countries Regulate Tech Giants?, Council on Foreign Relations, February 2021 (https://www.cfr.org/in-brief/social-media-and-online-speech-how-should-countries-regulate-tech-giants)

[3] Joseph Stiglitz on artificial intelligence: “We’re going towards a more divided society”, The Guardian (2018)

4. Towards new models of combating misinformation

There is room for regulation by others, not only by technology companies, and it has already opened in some countries. In the United States, however, the biggest obstacle to more serious regulation of content on social media and to narrowing the space for misinformation remains Section 230 of the 1996 Communications Decency Act. It stipulates that Internet platforms, described at the time as “interactive computer services”, cannot be treated as publishers or speakers of content provided by their users. This means that no message posted on these platforms by their users creates legal liability for the platforms, even if those messages are offensive, dangerous, or even encourage terrorism or promote unscientific methods of treatment, which was especially prevalent during the COVID-19 pandemic.

This provision was written in the prehistory of the Internet (1996), and whatever its motives at the time, protecting the network from external influence and preserving its democratic character, today it is the biggest obstacle to protecting the Internet’s integrity. Hence calls to amend this act as soon as possible are understandable, because it leaves the door wide open to misinformation, hate speech, threats and violence.

Some authors rightly warn that content on social media cannot be regulated with the same legal “tools” as traditional media, but this should not be an obstacle to regulation as such; on the contrary, regulation is necessary. Dipayan Ghosh points out that traditional media, unlike social media, have limited bandwidth, that their content is produced under editorial supervision, and that the audience of traditional media proactively chooses the content it consumes[1].

However, the ways in which social media differ from traditional media cannot justify leaving their content outside the sphere of regulation, or leaving it solely in the hands of their owners and their self-regulation. An initial step may be a legal obligation on platforms and their owners to immediately remove content that violates the laws otherwise applicable to traditional media, for example hate speech, calls for terrorism, or misinformation concerning the protection of public health. Such regulations already exist in some countries (for example, Germany and Australia), and an important part of them are high fines for the owners of digital platforms if it is determined that their inaction enabled the distribution of illegal content. These legal experiences are a good signpost for a global confrontation with a common problem: the mass distribution of harmful content, which year after year endangers not only democratic orders around the world but also elementary values such as human health.


[1] Ghosh D: Are We Entering a New Era of Social Media Regulation?, Harvard Business Review, January 2021 (https://hbr.org/2021/01/are-we-entering-a-new-era-of-social-media-regulation)