Over 1/3 of regulated data goes to AI

Netskope has published new research indicating that regulated data (data that organizations are legally required to protect) makes up more than one-third of the sensitive data shared with generative artificial intelligence (genAI) apps, exposing companies to the risk of costly data breaches. The risk is all the more acute given that, according to the report, the use of generative AI has more than tripled over the past 12 months.

The new analysis by Netskope Threat Labs reveals that three-quarters of the surveyed companies completely block at least one genAI app, reflecting enterprise technology leaders’ efforts to limit the risk of sensitive data exfiltration. However, fewer than half of organizations apply data-focused controls that prevent confidential information from being shared in prompts.
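
To illustrate what such a data-focused control might look like in principle, the minimal Python sketch below scans a prompt for patterns of regulated data before it is sent to a genAI app. The pattern names and regular expressions are assumptions chosen for this example, not Netskope's implementation.

    import re

    # Illustrative patterns for regulated data; the regexes below are
    # simplified assumptions for this sketch, not production classifiers.
    REGULATED_PATTERNS = {
        "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def scan_prompt(prompt: str) -> list[str]:
        """Return the names of regulated-data patterns found in the prompt."""
        return [name for name, rx in REGULATED_PATTERNS.items() if rx.search(prompt)]

    def allow_prompt(prompt: str) -> bool:
        """Block the request if any regulated-data pattern is detected."""
        findings = scan_prompt(prompt)
        if findings:
            print("Blocked: prompt appears to contain " + ", ".join(findings))
            return False
        return True

    if __name__ == "__main__":
        allow_prompt("Summarize this contract for jan.kowalski@example.com")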

Netskope’s analysis also shows that 96% of companies now use genAI, a share that has tripled over the last 12 months. On average, companies now use nearly 10 genAI apps, up from 3 a year ago. Alongside this growth, companies saw a sharp rise in proprietary source code being shared with genAI apps, which accounted for 46% of all documented data policy violations.

There are also positive signs of proactive risk management in how organizations tune their security settings and data loss controls. For example, 65% of companies now implement real-time user coaching to help steer user interactions with genAI apps. According to Netskope’s analysts, effective user coaching has played a critical role in mitigating data-related risk, prompting 57% of employees to change their behavior after receiving coaching alerts.
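
In the spirit of the real-time user coaching the report describes, a control like the one sketched above could warn the user and ask for confirmation rather than blocking outright. The following hypothetical snippet assumes the list of findings comes from a check such as scan_prompt in the previous sketch; it is not any vendor's API.

    def coach_user(findings: list[str]) -> bool:
        """Show a coaching alert for detected regulated data and let the user decide."""
        if not findings:
            return True  # nothing suspicious; send the prompt unchanged
        print("Coaching alert: this prompt may contain " + ", ".join(findings) + ".")
        answer = input("Send it to the genAI app anyway? [y/N] ")
        return answer.strip().lower() == "y"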

“Securing the use of generative artificial intelligence requires further investment and increased attention, as its use is permeating enterprises and shows no signs of slowing down soon. Companies must realize that genAI outputs can inadvertently disclose confidential information, propagate disinformation, and even introduce malicious content. This calls for a robust approach to risk management to protect data, reputation, and business continuity,” said James Robinson, Chief Information Security Officer at Netskope.

The Netskope report also reveals that:

  • ChatGPT remains the most popular app, used by over 80% of companies.
  • Microsoft Copilot has shown the most dramatic growth in usage since its launch in January 2024 and is now used by 57% of organizations.
  • 19% of organizations have imposed a total ban on GitHub Copilot.

Source: https://managerplus.pl/netskope-alarmuje-ponad-1-3-danych-regulowanych-trafia-do-ai-86551
