Disinformation, hate speech, and AI-generated content are increasingly alarming Europeans. A January Eurobarometer survey shows that nearly 70% of EU citizens see serious risks in online communication, and the share is even higher in Poland. Deepfakes trigger particular concern: they are used not only for political manipulation but also to produce pornographic content. Members of the European Parliament agree that regulation is needed, but they differ on how far it should go.
“AI’s massive development means that on the internet, on Facebook or TikTok, there are so many generated videos that are hard to distinguish from real ones. Tools should be available to indicate that a given video was generated by AI. This should be clearly specified online,” Arkadiusz Mularczyk, a Law and Justice (PiS) MEP, told the Newseria news agency. “If the recipient online knows that a given video was generated by AI rather than by real people, then, in my view, that information alone provides sufficient protection against harmful content on the internet. Citizens must have the right to information.”
The obligation to label AI-generated content stems from the EU’s Artificial Intelligence Act (AI Act). Some politicians, however, argue that labeling alone does not provide sufficient protection against abuse involving AI and deepfakes. According to estimates cited in a European Parliament report from July 2025, as many as 8 million deepfakes may have been shared in the previous year, about 7.5 million more than in 2023. The European Commission says pornographic materials account for around 98% of such content.
“Artificial intelligence, which was supposed to bring us only benefits, is also bringing a lot of harm. Our task as Europeans—and also as Polish politicians—is to prepare laws so that deepfakes, irregularities, lies, and manipulation are removed quickly,” argues Joanna Scheuring-Wielgus, an MEP from the New Left. “Let me remind you of the very heated case from December last year, when someone used artificial intelligence—using the images of young Polish women—to encourage Poland to leave the European Union. The reaction of the Polish government was immediate, as was the reaction of the European Union. The false information was removed, but unfortunately it also caused a lot of harm.”
She was referring to a TikTok profile called Prawilne_Polki, where AI-generated images of young Polish women presented “benefits” of Poland leaving the EU. According to the Institute of Media Monitoring (Instytut Monitorowania Mediów), the videos amassed hundreds of thousands of views.
“Deepfakes and disinformation are today a weapon aimed straight at Europe—at our democracy and its principles—but also a tool for undermining elections. Europe must have a firm, clear, and comprehensive response to this. But there are also limits we must not cross. That is the safety of our children online. This is the most important thing. Most of these deepfakes are pornographic materials, so we cannot allow children to be exposed to something so vile instead of having a safe internet. That is why we are preparing rules that will protect children, women, and citizens in general—not only the principles of democracy and free elections,” says Mirosława Nykiel, an MEP from the Civic Coalition.
A report titled “Nastolatki” (Teenagers) by NASK–PIB found that 34% of second-year high school students had heard of the concept of “deepfake,” compared with a slightly lower 30% of first-year students. Of those who had heard the term, 63% of second-year and 74% of first-year students said they had encountered deepfakes. The concept is less familiar to seventh- and eighth-grade primary school students, at 16% and 26% respectively, but among those who did recognize the term, 60% and 64% said they had come across deepfakes. According to this group of teenagers, deepfakes are most often created “as a joke,” an answer given by 43% of NASK respondents. About 29% believed the technology is used to create pornographic content. In addition, 7% of respondents admitted they had become victims of deepfakes.
Poland’s Personal Data Protection Office (UODO) emphasizes that new provisions are needed to protect users from abuses linked to the phenomenon. The head of the institution argues that both EU rules such as the AI Act and the Digital Services Act (DSA), as well as Polish regulations (the Civil Code, the Criminal Code, and copyright law), provide only selective and insufficient protection. One opportunity to improve the situation, in this view, would be to resume work on national provisions implementing the DSA—after a recent veto by Poland’s president.
“The Digital Services Act, which was approved in the European Parliament, is a very good project. The bad news is that Poland is the only country that has not implemented it. It was passed in the Sejm and the Senate, but President Karol Nawrocki did not sign the law. That is bad news for Polish women and men, because it means a lack of defense against disinformation, scammers, and manipulation,” says Joanna Scheuring-Wielgus.
At the beginning of January, Karol Nawrocki vetoed the law implementing the EU’s DSA, which, according to the government, was intended to enable effective action against illegal content, including pedophilia and scams, by allowing such content to be blocked. It was precisely this element that sparked the president’s opposition: he pointed to the risk of censorship and of limiting freedom of speech. The government stressed that the parliamentary bill incorporated Senate amendments strengthening judicial oversight of the blocking procedure, including limits on the automatic immediate enforceability of decisions that were challenged.
“I do not trust the actions of the European Commission aimed at fighting for internet transparency, because behind that there can always be actions whose aim is to restrict freedom of speech. Of course, some provisions may be directed at limiting improper content online. But some may be aimed at the opposition that criticizes the actions of the European Commission, Ursula von der Leyen, the commissioners, or the entire ideology of the European Union—often promoting values that differ greatly from what ordinary EU citizens think,” Arkadiusz Mularczyk assesses.
He argues that protecting minors from illegal content could serve as a pretext for maintaining a monopoly over the media narrative.
“With technology platforms, we need to enter into dialogue. We see that Elon Musk’s X becoming involved in President Trump’s campaign caused liberal and left-wing elites in the European Union to suddenly decide that social media is a threat to their primacy, to their monopoly on power. So under the banner of protecting the internet and protecting minors online, a fight against digital platforms began,” the PiS MEP says.
“Digital platforms, above all, should pay taxes in our country. They should respond to manipulation, disinformation, hate, and scams—and unfortunately they do not,” says Joanna Scheuring-Wielgus.
On 26 January, the European Commission announced the opening of formal proceedings against X. The Commission will examine whether the platform assessed and mitigated the risks linked to deploying Grok, the AI-based chatbot, in the EU. This includes the risk of disseminating illegal content, such as sexually explicit manipulated images, including those that may constitute child sexual abuse material.
“Do current EU rules provide sufficient tools to respond to such cases of using artificial intelligence? I think to a large extent they do. But we have to take into account that these technologies are changing at an exponential pace, so we cannot say we have something safe and permanent,” says Mirosława Nykiel. “Security has to be flexible—very accessible and convenient—so that at any moment you can add something when something changes. Creating excessive regulations is not our badge of honor. We are now working on ‘omnibus’ packages to move away from an excess of rules, so we don’t want to create new ones in their place. But the world is changing so fast that we have to regulate all of this. Citizens are, in fact, demanding it from us.”
According to the Eurobarometer survey published in January, Europeans are highly concerned about phenomena such as disinformation, hate speech online and offline, false content created by AI, the protection of personal data online, and threats to freedom of speech. Each of these risks was cited by around 67–69% of EU respondents; only one in ten said they saw no risk in this area. Among Polish respondents, the share of those concerned was even higher, between 70% and 75%.