AI-Driven Information Threats Have Accompanied at Least 15 Major Global Crises, Report Finds

Between July 2024 and December 2025, the world experienced at least 15 major crises in which AI-related information threats played a significant role. These included deepfakes, bot networks, and data poisoning attacks—efforts to corrupt the datasets used to train AI models—according to a report titled “Adding Fuel to the Fire: AI Information Threats and Crisis Events” published by the UK’s Alan Turing Institute. Such tools have the potential to disrupt electoral processes, although so far they have mostly contributed to confusing voters rather than decisively changing outcomes. Paradoxically, AI tools are also proving useful in countering these threats.
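
The report groups these threats into several categories; data poisoning, probably the least familiar of them, is easiest to grasp with a small sketch. The Python example below is purely illustrative and not drawn from the report: it shows a label-flipping attack, in which an adversary silently corrupts a fraction of training labels and degrades the resulting model.

```python
# Hypothetical illustration of label-flipping data poisoning; not from the
# CETaS report. It demonstrates the general idea: corrupting even a modest
# share of training labels measurably degrades a learned model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

def poison_labels(labels, fraction):
    """Flip the binary labels of a random `fraction` of training examples."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, poison_labels(y_train, fraction))
    print(f"labels flipped: {fraction:.0%} -> "
          f"test accuracy {clf.score(X_test, y_test):.3f}")
```

Running the sketch shows test accuracy falling as the poisoned fraction grows, which is why the provenance of training data matters for any AI system embedded in public information infrastructure.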

The dual use of AI—as a vehicle for influence operations, disinformation, and attempts to undermine elections, and as a means of defending against them—was discussed last week during a roundtable at the British Embassy in Warsaw. Participants included representatives of government, NGOs, and academia.

“Artificial intelligence is often used to manipulate public opinion through so-called deepfakes,” said Sam Stockwell, a researcher at the Centre for Emerging Technology and Security (CETaS) at the Alan Turing Institute, who took part in the debate and spoke to the Newseria news agency. “These are highly realistic—but entirely AI-generated—videos, images, and even audio recordings designed to show people doing something they never did, often in a compromising way or in a manner that damages their reputation.”

The CETaS report sheds new light on how the growing use of AI-based chatbots in digital infrastructure, as well as the widespread availability of AI content generators, creates serious risks in the aftermath of crisis events. In each of the 15 crises identified, disinformation campaigns used deepfakes, among other tools, to spread false narratives and harmful conspiracy theories, and in some cases to incite violence.

“For example, we saw many cases where deepfakes targeted politicians during election campaigns, trying to depict them in situations that never actually happened,” Stockwell noted. “The aim is to convince people that these individuals are untrustworthy and should not be voted for.”

British researchers argue that AI tools can also pose a systemic threat to democratic governance. Previous CETaS research on the June and July 2024 elections in the United Kingdom, the European Union, and France found that AI use in election campaigns had a troubling effect: it polluted the online information space, blurred the line between truth and fiction, and incited harassment of political candidates.

“So far, we have no evidence that AI has directly influenced election results,” the CETaS researcher explained. “We have not yet observed content generated at the scale required to shift public attitudes significantly. For example, in the European elections, only around 4–5% of disinformation came from AI tools.” At the same time, he added, AI is clearly increasing confusion about what can be trusted online, precisely because AI-generated content can be extremely realistic.

According to the report, AI tools were already being used in 2023 to deepen divisions, including around the UK’s Remembrance Day commemorations. Amid rising tensions over a large pro-Palestinian protest planned for the same period, London Mayor Sadiq Khan became the target of a viral deepfake that falsely suggested he was criticizing the commemoration and supporting pro-Palestinian marches.

“Disinformation is a major problem in Europe. It affects people in Poland as well as in the United Kingdom,” said Craig Mills, Counsellor and Head of Political Section at the British Embassy in Warsaw. “We are working together on regulations that could address this gap and improve citizens’ safety. We need better rules, but we also have to cooperate with industry in designing them. We must consider how to apply these regulations so people can access the information they need—without being exposed to information they do not need, that may be dangerous, or that is intended to divide our societies.”

“Alongside the war being fought across our eastern border, Poland is subjected to many activities that use technology to influence how we understand the world and what information we consider important,” said Jakub Szymik, founder of the Digital Democracy Observatory Foundation. “What comes from the East often takes the form of automated comments, bot farms, or AI-generated content that is not always true—and sometimes is deliberately designed to mislead and inflame public debate in Poland.”

The British researchers underline that cybercriminals manipulate the ways in which AI bots and social media platforms deliver personalized, real-time information to users. AI-driven bot networks that imitate human behavior and share content can interact with unsuspecting users, with the aim of shaping their attitudes following a major crisis.

“The threat this poses to Poles is, above all, social polarization,” Szymik said. “As a society, we no longer share one common information space. Each of us may believe in completely different narratives and ‘facts’ seen on our phones. These feeds are fully personalized, which makes it extremely difficult to determine who is exposed to what—and how that influences daily life, family relationships, and social and political choices.”

An EU social media study conducted in June 2025 found that 37% of Poles believed they had been exposed to disinformation or fake news very often or often in the previous seven days. Another 32% said they encountered such content sometimes. Only 4% stated they had never come across it.

As NASK, Poland’s National Research Institute, confirms, disinformation has become a tool for manipulating public opinion. It fosters groups that increasingly struggle to find common ground, sways public sentiment, can be used to weaken rivals, may deepen citizens’ distrust of public institutions, and can undermine the foundations of democracy. In the era of rapidly advancing AI, disinformation tools are becoming even more dangerous.

“The ‘flooding the zone’ phenomenon is about how easy and cheap it has become to generate huge volumes of information that is false or comes from a single source,” Szymik explained. “A typical example is when thousands of comments are generated by one actor to shift perceptions of a given issue.” Flooding the information space can also take the form of fake photos or videos that are mass-shared and amplified on social platforms.
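
Neither Szymik nor the report prescribes a specific detection method, but one common heuristic, sketched below in Python with made-up comments and an assumed similarity threshold, is to flag bursts of near-identical posts: a single actor mass-generating content tends to leave lexical fingerprints that simple text similarity can surface.

```python
# Hypothetical sketch: surface "flooding" by flagging clusters of
# near-duplicate comments. All comments and the threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "The election was clearly stolen, wake up people!",
    "Wake up, people, the election was clearly stolen!",
    "People need to wake up: the election was stolen!",
    "Lovely weather in Warsaw today.",
    "Has anyone tried the new tram line?",
]

# Vectorize the comments and compute pairwise cosine similarity.
vectors = TfidfVectorizer().fit_transform(comments)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.6  # in practice, tuned on labelled examples
for i, text in enumerate(comments):
    near_duplicates = [j for j in range(len(comments))
                       if j != i and similarity[i, j] >= THRESHOLD]
    if near_duplicates:
        print(f"comment {i} ({text[:30]!r}) resembles {near_duplicates}")
```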

“People should be able to create and publish content using AI,” Stockwell argued, “but public figures—especially politicians and candidates—should ensure such materials are clearly labeled. In some cases, such content is defamatory, such as a political deepfake targeting a candidate during an election, and can pose a serious threat to their safety. In such situations, we move beyond freedom of expression, and it becomes essential to have regulations that allow harmful content to be removed quickly, before it causes real damage.”

At the same time, Stockwell noted, AI tools designed to counter information threats are also improving. For example, they can detect bot accounts: automated fake profiles that impersonate real users and are used to spread disinformation.
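
Stockwell did not describe the detectors in technical detail. A minimal sketch of one widely used approach, shown below with entirely synthetic data and hypothetical features, trains a classifier on account-level signals such as posting rate, account age, and follower ratio.

```python
# Hypothetical bot-detection sketch on synthetic data; the report does not
# specify the platforms' actual methods. Features loosely follow signals
# commonly cited in the bot-detection literature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Columns: posts per day, account age (days), follower/following ratio.
humans = np.column_stack([
    rng.gamma(2.0, 2.0, n),      # modest posting rate
    rng.uniform(100, 4000, n),   # mostly older accounts
    rng.lognormal(0.0, 1.0, n),  # balanced follower ratio
])
bots = np.column_stack([
    rng.gamma(10.0, 5.0, n),     # very high posting rate
    rng.uniform(1, 200, n),      # young accounts
    rng.lognormal(-1.5, 1.0, n), # follow many, followed by few
])

X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy on synthetic accounts: {clf.score(X_test, y_test):.3f}")
```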

“AI tools have proven quite effective at identifying such bots, which can then be removed by social media platforms to limit the impact of an operation,” the CETaS expert said. “These methods were used, among other contexts, around the European elections.” He added that in the UK, where violent riots broke out in the summer of 2024, researchers were able to use AI to determine where related accounts were located across the country, which made it possible to target strategic counter-narratives and reduce the risk of further escalation.

Stockwell was referring to the riots that broke out in Southport in July 2024 after the murder of three young girls. At the time, unverified rumors began circulating that the suspected perpetrator was a Muslim immigrant. According to researchers, social media algorithms and posts by influential political figures helped amplify these claims, which ultimately proved false.

“One very important area is AI’s ability to debunk conspiracy theories,” Stockwell said. “In one case, we found a tool that had been trained on various conspiracy theories as well as on effective ways of refuting them—based on experiments, research, and conversations with people who believe in those theories.”

In September 2024, the journal Science published a study involving 2,000 participants. It found that some people changed their minds when fact-based arguments were presented by an AI chatbot rather than by another human being. These conversations reduced belief in conspiracy theories by an average of about 20%.
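
The study’s exact protocol is not reproduced here, but its core pattern, a dialogue in which a model answers a stated belief with specific, verifiable evidence, can be sketched with the OpenAI Python client. The model name and system prompt below are illustrative assumptions, not the researchers’ actual configuration.

```python
# Illustrative sketch of a fact-based debunking dialogue; this is NOT the
# setup used in the Science study. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (  # hypothetical prompt, written for this sketch
    "You are a patient fact-checker. The user will state a belief. "
    "Respond with specific, verifiable evidence and respectful questions; "
    "do not mock or lecture."
)

def debunking_turn(user_belief: str) -> str:
    """Send one turn of the dialogue and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_belief},
        ],
    )
    return response.choices[0].message.content

print(debunking_turn("The moon landings were staged in a film studio."))
```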
