Sunday, February 15, 2026

Two-Thirds of EU Social Media Users Seek News There—But an Equal Share Encounter Disinformation


Sixty-six percent of social media users in the European Union say they actively look to these platforms for information about current social and political events, according to the Eurobarometer Social Media Survey conducted in June 2025. At the same time, a similar share of EU citizens report encountering disinformation and fake news within the seven days preceding the survey. Experts warn that the fight against disinformation must more strongly involve digital platforms—and that this should include changes to recommendation and advertising algorithms.

“We need to dismantle the technological mechanisms that fuel disinformation. We can’t treat it as an accident. Let’s look at the facts: disinformation is a great business for large platforms, and its popularity stems from how algorithms work and from the advertising system. Both of these systems—algorithmic recommendations and ads—must be modified so that reliable, high-quality content is visible to people, not misleading material,”
says Katarzyna Szymielewicz, President of the Panoptykon Foundation, in an interview with the Newseria agency.


Poles Acknowledge the Threat—But Rarely Act

According to the study “Poles’ Attitudes Toward Cybersecurity 2025,” conducted by SW Research for the Warsaw Institute of Banking (WIB), 32% of respondents identified disinformation and fake news as the biggest threat in the digital space. While many say they know how to protect themselves, practice lags behind intent: only 38% say they do not treat social media as a primary source of knowledge, 31% verify information across multiple sources, and a mere 17% report or block content of dubious origin.

The WIB report also shows that 43% of Poles claim to check the credibility of online information; 17% do so using several methods, while 26% rely on just one. Meanwhile, 45% admit they verify information irregularly, depending on the source.

Digital platforms argue they already deploy measures to curb disinformation. Experts, however, say these steps are insufficient.

“For two years now we’ve had the Digital Services Act in force. Within this framework, the EU has tools to pressure platforms to adjust their algorithms—Articles 34 and 35 allow for defining mitigation measures that platforms should implement,”
Szymielewicz recalls.

The DSA has applied since 17 February 2024 to all intermediary service providers, including social networks, e-commerce platforms, and hosting services. Its goals include tackling illegal online content, mitigating systemic societal risks, and increasing transparency in how platforms operate.


Change Algorithms—Not Speech

“This isn’t about penalties alone. It’s about cooperating with platforms—or compelling them when necessary—to change algorithmic logic so it serves people, democracy, and society. This has nothing to do with content moderation in the censorial sense. I’m not proposing someone decide what is true or false. I’m proposing algorithmic changes for everyone so we start talking to each other—as a society and as politicians—instead of passively consuming low-quality content that harms public debate,”
emphasizes the Panoptykon president.

Similar adjustments, she adds, should apply to AI-generated content.

“It’s becoming cheaper to produce synthetic content that pretends to be something it isn’t. That’s a major problem for media—and it tempts creators to produce such content instead of journalism,”
Szymielewicz says.
“Under AI regulation, such content should be labeled by its creators as synthetic. Platforms could then treat it as lower priority, not boosting it on par with high-quality, authentic content. This is another technological solution I’m waiting for—one the EU supports under the European Democracy Shield.”


Audiences Want Transparency Around AI

A study titled “Artificial Intelligence in the Media,” commissioned by Poland’s National Broadcasting Council, finds that over 92% of respondents know journalists use AI in TV, radio, and online newsrooms. While audiences recognize benefits, they approach AI in media with caution: two-thirds fear manipulation of public opinion via algorithm-generated content, and nearly 45% believe it could increase the risk of disinformation. Almost 80% think AI use may reduce accuracy and truthfulness in media messages. At the same time, nearly 96% expect AI-created materials to be clearly labeled.

“We must stand together against being treated as people who don’t deserve authentic, high-quality content. The more people make their voice heard—also by not clicking on such content—the faster we’ll achieve change,”
the expert concludes.
“The way social platforms operate today is deeply damaging to communities, dialogue, and—indirectly—democratic processes. They polarize us, trap us in information bubbles, and encourage passive consumption. When debate happens, it’s often full of hate, sensationalism, and extreme emotions. That won’t build consensus—and in democracy, consensus is the foundation for decisions and for breaking deadlock.”
