From content to architecture: a new rationale for regulating social media platforms

Yasmeen Moreau
Oct 16, 2019 · 8 min read

61% of adults and 79% of children have had “potentially harmful” experiences online in the last year, according to a recent survey conducted by the UK Office of Communications (Ofcom). This concerning figure is hardly surprising given the ever-growing mass of negative content propagated online. Whether platforms are being used to circulate fake news, facilitate foreign interference in electoral campaigns, perpetrate harassment or organize terrorist attacks, social media seems to be becoming a hunting ground for ill-intentioned users.

Over the past few years, the issue of hate speech and fake news has triggered a wide-ranging debate in many countries. The flow of harmful content on social media has reached a point where platforms themselves seem overtaken by events. With public controversy swirling around them, leading internet companies are now being forced to confront their role in the digital ecosystem. Apple’s CEO Tim Cook admitted, in a 2018 interview, that regulation was “inevitable”. The Ofcom survey revealed similar attitudes among the UK public, with 70% of adults supporting more regulation of social media sites. The report pointed to an increase in this figure since the previous survey in 2018, indicating rapidly growing concern over content circulated online.

That being said, to what extent content should be regulated, and how, remains a matter of ongoing debate.

In terms of what should be regulated, what we are really asking is: to what extent should freedom of expression be limited to protect individuals? The answer largely depends on how we interpret freedom of speech, and that interpretation is far from consensual. Put simply, we can contrast two main approaches: the American approach, characterized by a broad understanding of freedom of speech entrenched in the First Amendment, and the European approach, where freedom of speech is subject to far more restrictions. The American approach has historically favored the protection of speech over any other consideration. Applied to online content, this conception implies that the potential psychological harm caused by hate speech does not justify banning that speech from social media platforms. The European approach, on the other hand, is framed by the prohibition of incitement to hatred or discrimination, regardless of whether such incitement leads to concrete actions. The restrictions on freedom of expression defined in the European Convention on Human Rights follow the idea that words may harm, and that expressing oneself therefore comes with duties and responsibilities. It is easy to see how these different trade-offs between guaranteeing freedom of speech and protecting individuals call for different levels of regulation of online content. And this raises important regulatory questions about the possibility (and legitimacy) of applying national trade-offs to content expressed in a transnational space.

When it comes to how content should be regulated, it seems that no solution so far has been really satisfactory. Proponents of self-regulation argue that this model is the most effective, as it ensures an automatic application of rules and provides the necessary flexibility to adapt to the ever-changing norms of the digital world. From a legal perspective, self-regulation also solves the tricky issue of applying national regulations to a transnational, borderless space: the Internet. However, self-regulation raises serious concerns in terms of legitimacy — do we really want platforms (i.e. private actors pursuing private interests) to have the power to decide what type of content is acceptable or not?

On the other hand, even in countries where calls for new regulatory powers have led to the adoption of strict laws aimed at curbing online hate speech and abusive content, many issues remain. Germany’s Network Enforcement Act (“NetzDG”), which came into effect in January 2018 and requires platforms to delete any “manifestly unlawful” post within 24 hours, has been criticized on several grounds. Many have expressed concern that this regulation creates incentives for “preventive” over-blocking, as platforms are given very limited time to address complaints and face heavy fines (up to 50 million euros) if they fail to do so adequately. Human Rights Watch has called for the law to be reversed, describing it as “fundamentally flawed” and a clear violation of free speech.

Ultimately, the broad consensus on the need to regulate online content has yet to produce an adequate regulatory solution.

Why is regulating online content so difficult?

Maybe one of the reasons we’re struggling so much with online content regulation is that we’re not looking at the problem from the right angle. Think about it: why are we so concerned about harmful content online? It is not as though hate speech, harassment or even fake news were born with the rise of social media platforms. This kind of harmful content has been around for a long time. But what social media changed — and this is what is really at the heart of the issue — is that such content can now easily spread and go viral, thereby affecting a much larger number of people than it would have offline.

Although in terms of content, speech shared online may not be different from speech expressed offline, certain dynamics inherent to social media platforms raise additional challenges that are specific to online content. The architecture of a platform can significantly shape the reach of, and exposure to, (harmful) content online. For instance, Twitter’s reliance on “trending topics” makes it possible for content to go viral quickly, while Facebook’s focus on more restricted groups limits the potential reach of a user’s post. More importantly, on social media, a piece of content’s visibility is not determined by a user’s decision to post it but is produced ex post through the interactions it receives from other users (likes, shares, retweets, etc.). And the possibility for this type of interaction depends largely on the algorithms platforms use to rank and sort the content users are exposed to. In essence, these algorithmic rules give platforms the ability — the power — to either accelerate or slow down the propagation of content.
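
To make this mechanism concrete, here is a minimal, purely illustrative sketch in Python. The names, weights and formula are hypothetical and do not describe any platform’s actual ranking system; the point is only that visibility is computed from the interactions a post receives, so a single parameter (here, the weight given to shares) determines how fast heavily reshared content rises.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    age_hours: float

def visibility_score(post, share_weight=3.0):
    """Toy engagement-weighted score: interactions push a post up,
    recency decay pulls it down. Purely illustrative, not a real ranker."""
    engagement = post.likes + share_weight * post.shares
    decay = 1.0 / (1.0 + post.age_hours)
    return engagement * decay

def rank_feed(posts, share_weight=3.0):
    # share_weight is the kind of architectural "knob" discussed above:
    # lowering it slows the propagation of heavily reshared content.
    return sorted(posts, key=lambda p: visibility_score(p, share_weight), reverse=True)

feed = [
    Post("measured news report", likes=40, shares=2, age_hours=5),
    Post("outrage-bait rumour", likes=30, shares=25, age_hours=5),
]
for post in rank_feed(feed):
    print(f"{visibility_score(post):5.1f}  {post.text}")
```

With the default weight, the heavily reshared rumour outranks the report; dial the share weight down and the gap shrinks, which is exactly the kind of propagation-slowing adjustment discussed below.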

If platforms have that power, it would make sense for them to accelerate the propagation of the most profitable type of content. Platforms’ business model is anchored in what Michael Goldhaber theorized in 1997 as the “attention economy” — i.e. an economy in which companies compete for human attention. The logic is simple: the more of your attention they have, the longer you stay on their platform, and the more money they make. And how do they capture your attention? By promoting the most sensational content, which, unfortunately, tends to come in the form of extremist speech, fake news or conspiracy theory videos.

Therefore, the real problem comes not from the content itself, but from the potential for virality created by platforms’ architecture. This calls for a shift in the way we think about online content regulation so that we tackle the real issue: moving from content regulation to content propagation regulation.

Shifting the debate: from content regulation to “systemic regulation”

This rationale has been supported by a number of scholars. In a 2018 article, Fagan makes the case for what he calls “systemic regulation” of platforms [1]. He explains that the horizontal and open network structures that characterize social media facilitate the proliferation and replication of claims (including false and harmful ones), which in turn shapes patterns of approval and disapproval of those claims across the network. In this sense, platforms play a role in shaping social facts and norms. He argues that adjusting the configuration of platform architecture can have a significant impact on the type of content that is approved and replicated. More specifically, systemic adjustments can counter social media’s tendency toward group polarization, reduce the proliferation and visibility of harmful content, and “nudge” users towards network locations with higher-quality content. Therefore, if we were to regulate, he concludes that “law should focus on systemic adjustment of platform architecture and avoid targeting and suppressing speech contents”.

On top of being more efficient, such a model would have the advantage of resolving the seemingly insurmountable tension between regulating content and protecting freedom of expression. Platforms have been coming up with creative ways of limiting the reach of undesirable user behavior without blocking profiles or deleting content. One example, mentioned by Fagan, is Reddit’s “shadow banning” of users who troll: the technique prevents other users from seeing the messages a troll posts on the forum, without alerting the troll. The troll’s freedom of speech remains untouched (they may continue posting whatever they want), yet the harmful content no longer reaches others. On a more serious note, we can point to WhatsApp, which recently changed its rules to limit the number of users in a group as well as users’ ability to forward content, in an effort to fight fake news.
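
As a rough sketch of how such mechanisms operate, the snippet below expresses shadow banning and forward caps as simple filters on reach rather than judgments on content. The user names, cap value and data shapes are invented for illustration; this is not Reddit’s or WhatsApp’s actual implementation.

```python
shadow_banned = {"troll_42"}   # authors whose posts are silently hidden from others
MAX_FORWARDS = 5               # cap on how many chats a message can be forwarded to

def visible_posts(posts, viewer):
    """A shadow-banned author still sees their own posts, but nobody else does:
    the speech is not deleted, its reach is simply cut to zero."""
    return [p for p in posts
            if p["author"] == viewer or p["author"] not in shadow_banned]

def forward_message(message, target_chats):
    """Refuse bulk forwarding beyond the cap instead of judging the content itself."""
    if len(target_chats) > MAX_FORWARDS:
        raise ValueError(f"a message can be forwarded to at most {MAX_FORWARDS} chats")
    return [(chat, message) for chat in target_chats]

posts = [{"author": "troll_42", "text": "abusive rant"},
         {"author": "alice", "text": "holiday photos"}]
print([p["text"] for p in visible_posts(posts, viewer="bob")])       # troll's post hidden
print([p["text"] for p in visible_posts(posts, viewer="troll_42")])  # troll still sees it
```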

These examples show that platforms are able to limit the propagation of harmful content, and this is what regulation should focus on. We need to move towards cooperation between platforms and governments to set standards not on content but on platform design, promoting architectures that don’t rely on the algorithmic amplification of negative content.

The first step in this direction is to impose transparency obligations on platforms regarding their algorithms. Insight into how these algorithms work can serve two important purposes. First, as Renée DiResta underlined in a 2018 article, it gives users “a better understanding of why they see what they see”, thereby increasing awareness of content propagation patterns and fostering a more critical look at the content we are exposed to. Second, it makes it possible to identify what is wrong with these algorithms in order to reshape them along more ethical lines. It is well known that algorithms are not neutral — those driving content ranking and recommendation on social media platforms are no exception. And while Facebook or Twitter may not be held responsible for the content circulated on their platforms, they are responsible for the way they design their algorithms. This is what platform liability would rest on in such a regulatory model: demonstrating that they are taking ownership of the issue and putting in place mechanisms to review and fix their curatorial algorithms.
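
One hedged illustration of what “understanding why you see what you see” could look like in practice: each ranked item carries a human-readable account of the signals that surfaced it. The field names and wording below are invented for the sake of the example; no platform exposes exactly this today.

```python
def explain_item(item):
    """Hypothetical transparency hook: alongside each post it surfaces, the platform
    reports the factors behind the ranking. Field names are invented for illustration."""
    return (f"Shown because it received {item['likes']} likes and {item['shares']} shares, "
            f"matches your followed topic '{item['topic']}', ranking score {item['score']:.1f}")

print(explain_item({"likes": 30, "shares": 25, "topic": "politics", "score": 17.5}))
```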

The recent mission report on social media regulation submitted to the French Secretary of State for Digital Affairs [2] describes how platform regulation could take inspiration from the regulatory framework for the banking system, which is based on obligations of means rather than results. Banks are liable for implementing measures in compliance with certain preventive rules — as long as they can transparently justify that these measures are in place, they will not be held accountable if risks materialize. The idea is to “create targeted incentives for platforms to participate in achieving a public interest objective without having a direct normative action on the service offered”. Applied to platform regulation, this model would require imposing strong transparency requirements on key internal mechanisms such as moderation rules and algorithmic design, thereby making platforms legally responsible for developing (and giving full account of) means to fight harmful content propagation.

All this requires a proactive role on the part of governments. As explained above, platforms have a monetary interest in the algorithmic amplification of sensational content, and it is difficult to believe that they would, voluntarily and on their own, put the general interest before their private interests. At the same time, given the global climate of concern over online content and the trend towards stringent regulation, platforms do seem increasingly open to collaborating with governments on this matter, if only to limit the burden placed on them. By toning down the blame directed at platforms and pushing for a common discussion on structural factors (platform architecture and algorithms), governments can take the lead in shifting the way we think about social media regulation and start tackling the real issue.

[1] Frank Fagan, “Systemic Social Media Regulation”, Duke Law & Technology Review, vol. 16, no.1, 2018, pp.393–439.

[2] Mission “Regulation of social networks — Facebook experiments”, Creating a French framework to make social platforms more accountable: Acting in France with a European vision, May 2019.

