The game of “good and bad” artificial intelligence will start soon

Mikko Hyppönen – a cybersecurity expert at WithSecure – identifies five key areas in which cybercriminals will use artificial intelligence (AI) to conduct attacks. While the proliferation of deepfakes online and widespread fraud are important issues, the main threat this year is anticipated to be the automation of malicious software. According to Hyppönen, the battle between good and evil AI will begin soon, and its effects could be alarming for everyone.

1. Deepfakes

Deepfake is a technique for creating fake images or videos that mimic the appearance and voice of real people or events. It can be used to manipulate public opinion, blackmail victims, or steal identities. The WithSecure expert notes that while sophisticated frauds using deepfakes are not yet as common as other cyberattacks, it is only a matter of time before their number increases.

To reduce the risk of falling for such scams, Mikko Hyppönen recommends reverting to the old-fashioned method of “safety words”. These established terms can be used between family members, friends, and colleagues to confirm the authenticity of a conversation. This method is cost-free and provides basic protection against scams. Everyone should adopt this approach in 2024.
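The idea is low-tech, but its digital analogue is easy to picture. Below is a minimal sketch in Python, with hypothetical contacts and words invented purely for illustration: a pre-agreed word is checked with a constant-time comparison before the conversation is trusted.

```python
import hmac

# Hypothetical pre-agreed safety words, one per trusted contact.
# In real life these are agreed in person and never sent over the
# channel being verified; hard-coding them is for illustration only.
SAFETY_WORDS = {
    "mum": "bluebird",
    "cfo": "gravel-kite",
}

def verify_caller(contact: str, spoken_word: str) -> bool:
    """Check a claimed safety word using a constant-time comparison."""
    expected = SAFETY_WORDS.get(contact)
    if expected is None:
        return False
    return hmac.compare_digest(expected, spoken_word)

print(verify_caller("cfo", "gravel-kite"))  # True  -- proceed with the conversation
print(verify_caller("cfo", "seagull"))      # False -- treat as a possible deepfake
```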

2. Deep Scams

In this context, ‘deep’ refers to the massive scale of fraud that artificial intelligence makes possible. These scams can encompass investment fraud, phishing, ransomware attacks, and any other crime whose processes can be automated.
Hyppönen points to the use of AI to scale up deep scams in the online short-term rental industry. Thanks to AI, creating unique and realistic listing images is no longer a challenge, which makes such scams harder to detect.

3. Large Language Models in the Hands of Cybercriminals

AI already writes malicious software. Large Language Models (LLMs), like those that power ChatGPT, pose a serious threat to global digital security when utilized by cybercriminals. Using LLMs, cybercriminals can automatically generate unique malicious code for every target, making such attacks even harder to detect.
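To see why per-target generation is so troubling, consider a minimal Python sketch (the snippets and the toy "scanner" below are illustrative assumptions, not any real security product): a signature-based scanner fingerprints known samples, so a freshly regenerated but behaviourally identical variant carries an unknown fingerprint and passes unnoticed.

```python
import hashlib

# Two behaviourally identical snippets, standing in for the way an LLM
# might regenerate the same payload logic with fresh names and structure
# for each target. Both are harmless toy code here.
variant_a = "def run(x):\n    total = 0\n    for v in x:\n        total += v\n    return total\n"
variant_b = "def run(items):\n    return sum(items)\n"

# A toy signature scanner: it only knows exact fingerprints of samples
# that have already been caught and catalogued.
known_signatures = {hashlib.sha256(variant_a.encode()).hexdigest()}

def flagged(sample: str) -> bool:
    """Return True if the sample matches a known fingerprint."""
    return hashlib.sha256(sample.encode()).hexdigest() in known_signatures

print(flagged(variant_a))  # True  -- the catalogued sample is detected
print(flagged(variant_b))  # False -- the regenerated variant slips through
```

This is why defenders are shifting from exact signatures toward behavioural detection: the fingerprint changes with every generation, but the behaviour does not.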

4. Zero-Day Vulnerabilities

Zero-day vulnerabilities are flaws in software that its creators do not yet know about, or have not yet disclosed and fixed. Cybercriminals can exploit these vulnerabilities for phishing, penetrating systems, installing malicious software, or stealing data. Not only software developers but also criminals now use AI to find zero-day vulnerabilities in code effectively.
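Automated vulnerability hunting predates AI: fuzzing, the classical technique that AI-assisted discovery builds on, simply throws large volumes of malformed input at a program until something breaks. A minimal sketch in Python follows; the parse_record function and its hidden flaw are invented for illustration.

```python
import random
import string

def parse_record(data: str) -> dict:
    """A toy parser with a hidden flaw: it assumes a value always follows '='."""
    key, _, value = data.partition("=")
    return {key: value[0]}  # IndexError when the value is empty -- the unknown bug

def fuzz(trials: int = 10_000) -> None:
    """Throw random inputs at the parser and report the first one that crashes it."""
    alphabet = string.ascii_letters + "="
    for _ in range(trials):
        candidate = "".join(random.choices(alphabet, k=random.randint(1, 8)))
        try:
            parse_record(candidate)
        except Exception as exc:
            print(f"crash on {candidate!r}: {exc}")
            return
    print("no crash found")

fuzz()
```

Where a random fuzzer stumbles on bugs by brute force, an AI-guided search can prioritise the inputs and code paths most likely to fail, which is exactly what makes it attractive to both sides.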

5. Automated Malicious Software

WithSecure’s expert suggests that cybercriminals, aiming to enhance the effectiveness of their attacks, will strive to fully automate their malware campaigns. AI-driven malware could rewrite its own code, learn from its mistakes and correct them, evade detection, and adapt to new environments. This could lead to a clash between good and bad AI.

Mikko Hyppönen has repeatedly emphasized throughout his career that any smart device is vulnerable to attacks. If you add artificial intelligence and all the machines powered by it into this equation, the future doesn’t look bright. WithSecure’s expert suggests that humans may soon become the second most intelligent beings in the world after AI.

About the expert:

Mikko Hyppönen, originally from Finland, is a cybersecurity expert, speaker, and author. He serves as Chief Research Officer at WithSecure.
