ChatGPT Users Targeted by Cybercriminals

  • The digital transformation of companies increasingly relies on artificial intelligence.
  • In the software industry, for example, 85% of developers are expected to use generative AI within the next two years.
  • The widespread business use of AI is also an opportunity for cybercriminals, warn ESET analysts, who in the second half of 2023 blocked 650,000 user attempts to reach malicious domains whose names include ‘chatgpt’ or similar text suggesting a connection to the most popular AI tool.

AI – A Great Opportunity for Digital Transformation

The use of generative artificial intelligence in business has become a necessity. Managers and directors leading digital transformations should recognise that implementing AI is no longer a PR novelty but a necessary move in a competitive business environment. The trend is already clearly visible among IT companies. According to the Capgemini report “Gen AI in software”, as many as 85% of developers will use generative AI within the next two years. Moreover, 80% of respondents believe that automating simple, repetitive tasks with generative AI tools will significantly change their work and let them focus on higher-value tasks.

For 46% of people in software engineering roles, AI tools already support daily tasks, and generative AI is increasingly used by specialists in many other industries as well. The question is no longer whether to use AI, but how to do so safely for the company.

ChatGPT – Beware of Fake Web Tools and Apps

ChatGPT is breaking popularity records: it is estimated to hold 60% of the market for AI tools[i]. The scale of the threats reflects this.

“The statistics are devastating – in the second half of 2023 alone, ESET blocked over 650,000 attempts by users to visit fake pages with ‘chatgpt’ or similar text in the name. The second major attack vector is advertising that tries to convince users to install fake apps deceptively similar to ChatGPT. App stores, especially mobile ones, are swarming with fake copies that mimic the latest tools, e.g. ChatGPT 5, Sora or Midjourney, and contain malicious software,” describes Beniamin Szczepankiewicz, an analyst at ESET’s antivirus lab.
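To illustrate the first vector, here is a minimal, hypothetical sketch of how a web gateway might flag lookalike domains that trade on the ‘chatgpt’ name. The allowlist, the similarity threshold and the function name are illustrative assumptions, not ESET’s actual detection logic.

```python
# Hypothetical sketch: flag domains that piggyback on the "chatgpt" brand.
# OFFICIAL_HOSTS and the 0.8 threshold are assumptions for illustration.
from difflib import SequenceMatcher

OFFICIAL_HOSTS = {"chatgpt.com", "chat.openai.com", "openai.com"}

def looks_like_chatgpt_scam(host: str) -> bool:
    """Return True if the host trades on 'chatgpt' but is not a known-good host."""
    host = host.lower().rstrip(".")
    if host in OFFICIAL_HOSTS or any(host.endswith("." + h) for h in OFFICIAL_HOSTS):
        return False  # exact match or subdomain of an official host
    if "chatgpt" in host or "chat-gpt" in host:
        return True   # brand name embedded in an unknown domain
    # Near-miss spellings such as "chatgtp" are caught by string similarity.
    first_label = host.split(".")[0]
    return SequenceMatcher(None, first_label, "chatgpt").ratio() > 0.8

for candidate in ("chat.openai.com", "chatgpt-free-login.app", "chatgtp.io"):
    print(candidate, "->", "block" if looks_like_chatgpt_scam(candidate) else "allow")
```

Real-world blocking relies on curated reputation feeds rather than string checks alone, but the sketch shows why a well-known brand name inside an unfamiliar domain is a strong phishing signal.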

It is often the employee, tempted by the prospect of saving time and improving their work, who becomes the weakest link in the system and creates dangerous situations. Once downloaded and installed, fake apps can bombard users with ads, push in-app purchases, or demand a subscription to non-existent or very poor-quality services.

“We also discovered scams in which cybercriminals promised access to advanced AI capabilities for an additional fee. A significant part of the attacks is conducted through ads on social media, where fake pages or apps are pushed to users on a massive scale,” describes Beniamin Szczepankiewicz.

Untrusted desktop or mobile applications, once installed, can also carry malicious software that lets fraudsters take control of the computer, phone or other device and gain access to all stored data: personal (including customers’), financial and other. This in turn opens the door to fraud or blackmail. Moreover, credentials for business accounts can be intercepted, which is a direct path to an attack on the company or its customers.

As practice shows, employees using AI tools often forget basic rules of digital security. According to data from Cyberhaven Labs, the volume of data employees enter into generative AI tools has grown dramatically: it has nearly sextupled since March 2023. Employees feed AI tools confidential information such as financial data, client data and employee data. It is worth remembering that data entered into AI tools can later be used to train newer versions of the models, reducing its privacy.
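One practical mitigation that follows from this is to scrub obviously sensitive values before a prompt ever leaves the company. The sketch below is a minimal, assumed example using simple regular expressions for e-mail addresses, card-like numbers and IBAN-like strings; a real deployment would rely on a proper data-loss-prevention tool with far broader patterns.

```python
# Minimal sketch (an assumption, not any vendor's API): redact obvious
# secrets from text before it is sent to a generative-AI tool.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Very rough card-number shape: 13-16 digits with optional separators.
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    # IBAN-like account numbers: two letters, two digits, 9-28 more characters.
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{9,28}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Invoice for jan.kowalski@example.com, card 4111 1111 1111 1111"))
# -> Invoice for [EMAIL REDACTED], card [CARD REDACTED]
```

Such a filter does not replace policy or training, but it makes the default path safer when employees paste text into external tools.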

Translator Plugin – Be Vigilant

Another way cybercriminals exploit the popularity of AI is a fake browser plugin that imitates Google Translate; victims are directed to it through fake Facebook ads that use OpenAI or Gemini branding. Although the plugin deceptively resembles the real one, it is in fact malicious software designed to steal login credentials, including for Facebook. In one year, ESET systems registered and thwarted 4,000 attempts at this type of fraud.

Generative artificial intelligence is a great opportunity for business, but it also requires managers and boards to implement tools responsibly. Giving employees free rein can lead to data leaks and, consequently, financial and reputational losses. How can companies avoid this? By developing in-house AI policies and procedures, keeping antivirus software up to date, and educating employees on the need to carefully verify both the authenticity of the AI apps they intend to use and the data they enter into AI systems. It is always worth repeating the general truths: stay vigilant, verify the links you click, and use multi-factor authentication.

[i] Source: https://www.visualcapitalist.com/ranked-the-most-popular-ai-tools/

Source: https://managerplus.pl/uzytkownicy-chatgpt-na-celowniku-cyberprzestepcow-74508
