For many Americans, social media has become a monster. Platforms like Twitter, Facebook and YouTube are seen as festering hotbeds of hate and misinformation that threaten the very foundations of American democracy and civility. Calls for regulation have intensified, with some prominent voices looking across the pond for a model to regulate social media in the public interest.
In November, the European Union’s Digital Services Act took effect, with enforcement beginning for some businesses during the next year and for the rest in January 2024. The stated purpose of the law is to end the supposed “Wild West” of the internet and replace it with a rules-based digital order across the EU’s member states. The sweeping piece of legislation includes an obligation for platforms to evaluate and remove illegal content, such as “hate speech,” as fast as possible. It also mandates that the largest social networks assess and mitigate “systemic risks,” which may include the nebulous concept of “disinformation.”
This is in stark contrast to the US, where platforms enjoy broad immunity from responsibility for content created by users, and where the 1st Amendment protects against most government restrictions of speech.
The European law, by contrast, may sound like a godsend to those Americans concerned about social media’s weaponization against democracy, tolerance and truth after the 2020 election and the Jan. 6 insurrection. Former Secretary of State Hillary Clinton enthusiastically supported the European clampdown on Big Tech’s amplification of what she considers “disinformation and extremism.”
But when it comes to regulating speech, good intentions do not necessarily result in desirable outcomes. In fact, there are strong reasons to believe that the law is a cure worse than the disease, likely to result in serious collateral damage to free expression across the EU and anywhere else legislators try to emulate it.
Removing illegal content sounds innocent enough. It’s not. “Illegal content” is defined very differently across Europe. In France, protesters have been fined for depicting President Macron as Hitler, and illegal hate speech may encompass offensive humor. Austria and Finland criminalize blasphemy, and in Viktor Orban’s Hungary, certain forms of “LGBT propaganda” are banned.
The Digital Services Act will essentially oblige Big Tech to act as a privatized censor on behalf of governments -- censors who will enjoy wide discretion under vague and subjective standards. Add to this the EU’s own laws banning Russian propaganda and plans to toughen EU-wide hate speech laws, and you have a wide-ranging, incoherent, multilevel censorship regime operating at scale.
The obligation to assess and mitigate risks is not limited to illegal content, though. Lawful content could also come under review if it has “any actual or foreseeable negative effect” on a number of competing interests, including “fundamental rights,” “the protection of public health and minors” or “civic discourse, the electoral processes and public security.”
What this laundry list actually means is unclear. What we do know is that the unelected European Commission, the EU’s powerful executive arm, will act as a regulator and thus have a decisive say in whether large platforms have done enough to counter both illegal and “harmful” content. You don’t have to be a psychic to predict that the commission could use such ill-defined terms to push for suppression of perfectly lawful speech that rubs it -- or influential member states -- the wrong way.
For instance, Thierry Breton, a powerful European commissioner responsible for implementing the Digital Services Act, has already taken aim at Twitter, now run by Elon Musk. Last month, Breton gave Musk an ultimatum: Abide by the new rules or risk getting banned from the EU.
Such moves will only lead to excessive content moderation by other social media companies. Most large platforms already remove a lot of “lawful but awful” speech. But given the legal uncertainty, and the risk of huge fines, platforms are likely to further err on the side of safety and adopt even more restrictive policies than the new law requires. In fact, Musk called the Digital Services Act “very sensible,” signaling his intent to comply in response to Breton’s warning. This flies in the face of Musk’s techno-optimistic commitment to remove only illegal content and his condemnation of Old Twitter’s opaque dealings with politicians and government officials seeking to influence content moderation.
So why should Americans care?
The European policies do not apply in the US, but given the size of the European market and the risk of legal liability, it will be tempting and financially wise for US-based tech companies to skew their global content moderation policies toward the European approach, protecting their bottom lines and streamlining their global standards. Invoking European legal standards may thus provide both formal legitimacy and a convenient excuse when platforms remove political speech that is protected under US law, and that Americans would expect private platforms facilitating public debate to safeguard as well.
The result could subject American social media users to moderation policies imposed by another government, one constrained by far weaker free speech guarantees than the 1st Amendment. And American politicians may unwisely find the Digital Services Act’s approach appealing. Rep. Adam B. Schiff (D-Burbank) recently demanded that Twitter take action against alleged increases in hate speech, while Republican lawmakers in Texas have proposed a bill barring minors from social media.
Americans cast off the fetters of the Old World long ago. They should avoid having to do so again.
Jacob Mchangama
Jacob Mchangama is CEO of Justitia, a senior fellow at the Foundation for Individual Rights and Expression, and the author of “Free Speech: A History From Socrates to Social Media.” He wrote this for the Los Angeles Times. -- Ed.
(Tribune Content Agency)