Security Alert for Developers: Critical Vulnerabilities Discovered in Claude Code

SECURITYSecurity Alert for Developers: Critical Vulnerabilities Discovered in Claude Code

While the world was recently focused on the disagreement between Anthropic and the Pentagon regarding AI safety “guardrails,” security experts have identified a more immediate threat: two critical vulnerabilities in Claude Code. The tool, designed to assist developers in writing and analyzing code, was found to pose a significant security risk, according to an analysis by Check Point Research.

In the landscape of AI-driven coding assistants, a disturbing new attack vector has emerged—where a repository can become a trap before a single line of an application is even executed. Check Point Research published a detailed breakdown of two vulnerabilities in Anthropic’s Claude Code, demonstrating how maliciously crafted configuration files within a project repository could exploit the tool’s automation.

According to the researchers, once a user enters a compromised project directory, the tool could automatically apply settings and trigger actions that appear “normal” to the user but actually execute unauthorized system commands.

Exploiting Automation and Hijacking API Keys

The analysis reveals that an attacker could leverage built-in automation mechanisms (Hooks), Model Context Protocol (MCP) integrations, and environment settings to run system commands without any additional user interaction.

Furthermore, the researchers demonstrated two alarming scenarios:

  1. Consent Bypass: Circumventing trust prompts and consent mechanisms within the MCP.
  2. API Key Theft: Redirecting authorized traffic to an attacker’s infrastructure to intercept active API keys before the user even confirms they trust the project.

The most dangerous aspect of this discovery is the potential scale of a breach following an API key theft. In the Workspaces model, these keys often grant access to shared cloud files. Consequently, a single compromised key could serve as a gateway to an entire team’s resources—allowing attackers to view, modify, or delete sensitive files, and rack up unauthorized API usage costs.

A Shift in the Security Paradigm

“This research highlights a fundamental shift in how we must perceive risk in the AI era,” says Oded Vanunu, one of the lead experts behind the discovery. “AI-powered development tools are no longer just side utilities; they are becoming core infrastructure. When automation layers gain the ability to influence command execution and environment behavior, the boundaries of trust change. Organizations accelerating AI adoption must ensure their security models evolve at the same pace.”

Response and Remediation

The vulnerabilities have been assigned the following identifiers:

  • CVE-2025-59536: Relating to the bypass of consent mechanisms in MCP.
  • CVE-2026-21852: Relating to API key theft prior to trust confirmation.

Check Point Research shared its findings with Anthropic, and the developer has already implemented security patches. The fixes include strengthened trust prompts, blocking the execution of external tools prior to explicit consent, and suspending API communication until project trust is fully confirmed.


Source: Manager Plus

Check out our other content
Related Articles
The Latest Articles