Free speech but no fake news: Germany tasks Big Tech with battle against disinformation

Illustration by Ata Ojani

Last March, pro-Kremlin content creator Alina Lipp shared a video of a Russian-speaking woman claiming a group of Ukrainian refugees had attacked a 16-year-old boy in Euskirchen, a small town in western Germany.

The teenager was beaten so severely that he fell into a coma and later died, the woman in the video said. “People! He died! The boy was only guilty of speaking Russian. I’m afraid this is just the beginning,” she said.

The attack never happened, according to a German police investigation. “It is an intentional fake video intended to stir up hatred,” police said in an official statement. A German counterterrorism official told the Washington Post that the way the clip went viral showed signs of a disinformation campaign by Russian state-linked actors or those “acting on behalf of Russia.”

Lipp, 28, was among the first to circulate the fake video online and include a German-language description, which she broadcast to 180,000 followers on her Telegram channel, Neues aus Russland (News from Russia). Social media users then reposted Lipp’s message to German forums disseminating conspiracy theories.

For over a year, Lipp, a German citizen of German-Russian heritage, has broadcast what she claims to be the truth to her social media followers. She has explained that Russia’s February 2022 full-scale invasion of Ukraine was justified to “liberate” and “de-Nazify” the country — a line often used by the Kremlin.

Within months, Lipp’s content led Berlin to open an investigation into the influencer, who has since moved to Russia. Lipp’s case exemplifies Germany’s willingness to act quickly to enforce tough laws on illegal online content, like hate speech and defamation. In Germany, publicly expressing approval of Russia’s war on Ukraine, as Lipp did, could constitute the criminal offence of condoning a crime of aggression.

Germany is now preparing for the Digital Services Act (DSA), a new Brussels-led, European Union-wide playbook inspired by Germany’s own laws. In recent years, the country’s attempts to clamp down on illegal content on internet platforms have met with mixed reactions across the political spectrum, reflecting the difficult balancing act of setting online rules while safeguarding civil liberties. As that debate continues, pro-Kremlin actors are intensifying their disinformation efforts targeting Germans.

Information operations and an unprecedented law

Germany has been a key target of Moscow’s information operations ever since the Cold War. From 2015 to 2021, EUvsDisinfo — a watchdog that tracks disinformation campaigns originating from pro-Kremlin media — logged 700 cases of intentionally fake or misleading reporting about Germany, the highest number among EU countries.

By the mid-2010s, as Germany was grappling with the influx of over one million refugees from Middle Eastern and North African countries, anti-migrant, anti-Muslim and other xenophobic content proliferated on the German internet.

Within German society, right-wing elements like the AfD political party and its supporters exploited the growing discord, often wielding misinformation and disinformation to deepen societal fault lines and undermine public trust in the government. So, too, did pro-Kremlin actors.

In 2016, for instance, Russian state media, including the Kremlin-funded Channel One Russia and RT Deutsch, amplified the story of a 13-year-old German-Russian girl purportedly kidnapped and raped by several Muslim migrants. A German police investigation concluded the story was false and established that the teenager had spent the evening with a friend. But Russian state media continued to circulate the story, helping stoke outrage against refugees online and offline.

The disinformation deluge and its consequences spurred Berlin to pass an unprecedented, and controversial, law in 2017: the Network Enforcement Act (NetzDG). The regulation requires social media platforms with more than two million users, such as Facebook, Twitter and YouTube, to flag and delete content that is illegal under German law, like hate speech, defamation and incitement to violence, within 24 hours of being notified, or within one week for complex cases. Non-compliance can result in fines of up to €50 million.
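
To make those obligations concrete, here is a minimal sketch in Python of the thresholds and deadlines described above. The two-million-user threshold, the 24-hour and one-week deadlines and the €50-million fine ceiling come from the article; the function and type names are hypothetical illustrations, not part of the law or of any real compliance system.

```python
from dataclasses import dataclass
from datetime import timedelta

# Figures as described in the article; a simplified illustration,
# not a legal encoding of the NetzDG.
USER_THRESHOLD = 2_000_000               # platforms above this size are covered
STANDARD_DEADLINE = timedelta(hours=24)  # clear-cut illegal content
COMPLEX_DEADLINE = timedelta(weeks=1)    # complex cases
MAX_FINE_EUR = 50_000_000                # non-compliance fines of up to €50M

@dataclass
class Complaint:
    content_id: str
    is_complex_case: bool

def is_covered(platform_users: int) -> bool:
    """A platform falls under the NetzDG once it exceeds the user threshold."""
    return platform_users > USER_THRESHOLD

def removal_deadline(complaint: Complaint) -> timedelta:
    """Time the platform has to act, counted from the moment it is notified."""
    return COMPLEX_DEADLINE if complaint.is_complex_case else STANDARD_DEADLINE

# Example: a large platform receives a straightforward hate-speech report.
if is_covered(platform_users=30_000_000):
    print(removal_deadline(Complaint("post-123", is_complex_case=False)))
    # -> 1 day, 0:00:00, i.e. 24 hours
```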

The legislation didn’t explicitly target disinformation. But it provided a tool for users to report potentially false content that could be interpreted as hate speech or incitement to violence, says Helena Schwertheim, senior digital policy research manager at the Institute for Strategic Dialogue (ISD), a disinformation- and extremism-focused think tank.

But critics of the law, ranging from far-right actors to human rights and journalism groups, argued that the hastily created regulation was an ineffective form of overreach that foisted the burden of resolving a complex social issue onto private companies.

There were also fears that authoritarian nations would be inspired to create their own versions of the law to censor oppositional voices. NGO Justitia found that 24 countries, from Russia to Venezuela and Singapore, followed Berlin’s template and created their own rulebooks — but most did not include the “rule of law and free speech protections equivalent to their German counterpart.” Stephan Mündges, a digital communications researcher at Technical University Dortmund, explains: “There was merit to the critiques, as the chances for overreach were quite real.”

Still, Berlin was an early mover in pressuring the world’s biggest internet platforms to be more transparent and accountable — one of the most important elements in the disinformation fight, Mündges says.

Platforms did comply to an extent. In the first six months after the rule took effect, Facebook deleted 362 posts classified as insults, defamation or incitement to hatred and crime under Germany’s criminal code. As a result, some extremist voices and conspiracy theorists moved to the messaging platform Telegram, which they viewed as more lax on content regulation. The share of Germans using Telegram doubled from seven to 15 per cent between 2018 and 2021. But German lawmakers have since cracked down on Telegram, too, with the Justice Ministry designating it a social network, meaning the platform must abide by the NetzDG’s “binding rules.”

New rules and a new war

Over the past year, German-language, Kremlin-linked disinformation related to Russia’s war on Ukraine has surged. A complex web of pro-Russia and state-linked disinformation merchants has spread false claims about Ukrainian refugees, for instance, painting them as terrorists, criminals and Nazis taking advantage of European aid. And Germans’ alignment with pro-Kremlin narratives “increased significantly” between the spring and fall of 2022, according to a February report by the non-profit CeMAS, which researches disinformation and conspiracy theories.

Yet Brussels has been the trailblazer in acting specifically against online disinformation, experts say. In 2018, the EU created its Code of Practice on Disinformation, a voluntary agreement under which Big Tech signatories committed to self-monitoring disinformation on their platforms.

Now, Germany and the rest of the EU are gearing up for a new, sweeping rulebook, also spearheaded by Brussels. The DSA, passed in November 2022, will be fully in place by next year and will apply to most companies offering digital services in the EU. Its goals are similar to the NetzDG’s: push Big Tech to root out illegal content online and become more transparent in doing so.

Berlin’s NetzDG rules helped spark an EU debate on how to curb such content online and inspired the DSA, says Julian Jaursch, a project director at the think tank Stiftung Neue Verantwortung who studies disinformation and Big Tech regulation. The new EU law incorporates critical elements of the German one, like mandatory public reporting on content moderation.

Yet Brussels’ new rules cast a far deeper and wider net with their due diligence approach, Schwertheim says. Tech platforms must routinely perform assessments to identify and address systemic risks, rather than simply deleting individual pieces of flagged content. Last month, Brussels announced that the largest platforms, those with more than 45 million EU users, must identify and mitigate risks such as the spread of disinformation and illegal content on their platforms, including how those risks affect freedom of expression and media freedom, she says. The companies’ risk mitigation plans will be subject to independent audits.
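
To make that scale test concrete, the sketch below applies the 45-million-user cutoff cited above; the platform names and user counts are invented for illustration and are not official EU designations.

```python
# The 45M-user cutoff is from the article; the names and figures below
# are hypothetical, not official EU designations.
VLOP_THRESHOLD = 45_000_000  # EU users

def has_systemic_risk_duties(eu_users: int) -> bool:
    """The largest platforms must identify and mitigate systemic risks."""
    return eu_users > VLOP_THRESHOLD

for name, users in {"ExampleTube": 90_000_000, "NicheForum": 3_500_000}.items():
    duty = "risk assessments required" if has_systemic_risk_duties(users) else "lighter duties"
    print(f"{name}: {duty}")
```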

The DSA also prescribes tougher penalties for non-compliant platforms. Maximum fines are set at six per cent of a company’s global annual revenue. For the world’s biggest internet companies, that could mean penalties in the billions, a significant enough hit to push firms toward compliance. Overall, the act is “a fairly powerful tool to force [internet] platforms to actually be active in the fight against disinformation,” Mündges says.
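
For a sense of what that cap means in practice, here is a back-of-the-envelope calculation; the six per cent rate comes from the article, while the revenue figure is an assumption chosen purely for illustration.

```python
# DSA fine ceiling: six per cent of annual revenue (the cap cited above).
DSA_FINE_RATE = 0.06

def max_fine_eur(annual_revenue_eur: float) -> float:
    return DSA_FINE_RATE * annual_revenue_eur

# A hypothetical platform with 100 billion euros in annual revenue would
# face a ceiling of 6 billion euros -- hence "penalties in the billions."
print(f"EUR {max_fine_eur(100e9):,.0f}")  # EUR 6,000,000,000
```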

The EU’s disinformation code will likely work in conjunction with the DSA to aid platforms’ efforts in battling disinformation. “Taken together, these are promising steps that might create more transparency and accountability” — but the end result depends on how strongly the rules are enforced, Jaursch says.

The influence of the new regulation has already been felt, though the law isn’t yet in full swing.

This February, the popular short-video platform TikTok publicly identified a Russian disinformation network spreading anti-Ukrainian and pro-Kremlin narratives about Russia’s war on Ukraine; the network targeted EU nations, primarily Germany and Italy. The social media giant disclosed the campaign under the EU’s disinformation code.

Challenges remain in compelling Big Tech to comply. EU officials have expressed skepticism that Elon Musk’s Twitter will be able to abide by its commitment to the EU’s disinformation code, especially given the billionaire’s cutbacks on content moderators. At the same time, disinformation actors are continually eyeing loopholes in platform moderation policies to evade labels of hate speech or false information, Schwertheim says.

The dilemma of how to regulate disinformation hasn’t yet been solved, Mündges says. But “we should try to regulate our information spaces in a way that they offer the possibility to say what you want openly … without repercussions. On the other hand, we need to protect people from malicious actors, inflammatory content, and incitements to violence. It’s a big task. But we must try.”
