Is Anthropic’s Git Server a Security Risk?

The seamless integration of powerful AI assistants into developer workflows has quietly opened a new and treacherous front in the battle for cybersecurity, a reality brought into sharp focus by a recent investigation. A detailed analysis from the cybersecurity firm Cyata has revealed three significant vulnerabilities in Anthropic’s official Git Model Context Protocol (MCP) Server. These security flaws could be exploited by malicious actors to tamper with Large Language Models (LLMs), execute unauthorized code, and ultimately compromise the integrity of interconnected AI systems, serving as a critical warning to the developer community.

The central issue stems from the potential for threat actors to leverage prompt injection attacks against the vulnerable server. This sophisticated method allows an attacker to manipulate an LLM’s input, causing it to trigger actions on connected systems with malicious, attacker-controlled arguments. The consequences are severe, ranging from the subtle manipulation of an LLM’s responses to the overt execution of arbitrary code on the underlying system, creating a significant security risk for any organization deploying these advanced tools.

Unveiling Critical Vulnerabilities in AI Infrastructure

This research summary examines the recent discovery of three significant security vulnerabilities in Anthropic’s official Git Model Context Protocol (MCP) Server. It addresses how these flaws could allow malicious actors to exploit Large Language Models (LLMs) through prompt injection, leading to unauthorized code execution and compromising the integrity of AI systems. The alert, issued by researchers at Cyata, highlights a new class of threats emerging at the confluence of AI and traditional IT infrastructure.

These vulnerabilities are particularly alarming because they affect any default, out-of-the-box installation of the server software prior to the patch released in late 2025. Unlike previous issues that required specific, non-default configurations, these flaws are universally applicable, dramatically widening the pool of potentially affected systems. This accessibility lowers the barrier for attackers, making the threat both widespread and immediate for organizations that have not yet applied the necessary updates.

The Context: AI Integration and the Model Context Protocol

The Model Context Protocol (MCP), an open standard introduced by Anthropic, is designed to create a unified framework for AI assistants to interact with external tools and data sources. MCP servers function as the crucial bridge between the LLM and these external systems, which can include filesystems, databases, APIs, and development tools like Git. They execute real-world actions based on the decisions and instructions generated by the LLM, enabling the AI to perform complex, multi-step tasks that go far beyond simple text generation.
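To make that interaction concrete, the sketch below shows roughly what a tool invocation might look like when an MCP client asks the Git server to run a diff. It is purely illustrative: the JSON-RPC framing follows the MCP specification, the git_diff tool name comes from the advisory discussed here, and the exact argument names and repository path are assumptions.

```python
# Illustrative only: an MCP "tools/call" request, expressed as a Python dict,
# asking the Git server to diff a repository against a target revision.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "git_diff",
        "arguments": {
            "repo_path": "/home/dev/project",  # assumed argument name and path
            "target": "main",                  # assumed argument name
        },
    },
}
```

Because the LLM decides which tool to call and with what arguments, anything that can influence the model’s context can, in principle, influence these fields.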

This research is critical as it highlights the emerging security challenges at the intersection of LLMs and traditional IT infrastructure, where vulnerabilities can have far-reaching consequences. As AI agents are granted more autonomy and deeper access to sensitive systems, the protocols governing their actions become a prime target. The security of the MCP is therefore not just a technical concern but a foundational requirement for safely deploying advanced AI tools within an enterprise environment.

Research Methodology, Findings, and Implications

Methodology

The analysis is based on research from the cybersecurity firm Cyata, which identified the vulnerabilities through methodical security testing and vulnerability research. The primary attack vector demonstrated by the firm involves prompt injection, a technique where an attacker tricks an AI agent into reading and processing controlled content, such as a malicious document or a compromised code repository.

Once the AI’s context is influenced by this malicious input, it can be manipulated into triggering vulnerable tool calls through the MCP server. These calls are then executed with malicious, attacker-controlled arguments, effectively turning the AI agent into an unwitting accomplice. This methodology underscores how an LLM’s trust in its input can be weaponized to compromise the very systems it is designed to assist.

Findings

The research identified three critical vulnerabilities affecting all mcp-server-git versions prior to the 2025.12.18 patch, each exploitable on any default installation. The first, CVE-2025-68143 (Unrestricted git_init), allows an attacker to create a Git repository in an arbitrary location on the filesystem via prompt injection. Because the server fails to sufficiently validate the path it is given, a threat actor can instruct the LLM to initialize a repository in a sensitive directory, which can then be populated with malicious code.
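As a rough illustration of the missing control, the snippet below sketches the kind of path check that would confine repository creation to an approved workspace. It is a minimal defensive sketch under assumed directory names, not a reconstruction of Anthropic’s actual patch.

```python
from pathlib import Path

# Hypothetical allowlist: repositories may only be created beneath these roots.
ALLOWED_ROOTS = [Path("/home/dev/workspaces").resolve()]

def validate_repo_path(requested: str) -> Path:
    """Resolve the requested path and refuse anything outside an allowed root."""
    resolved = Path(requested).resolve()
    for root in ALLOWED_ROOTS:
        if resolved == root or root in resolved.parents:
            return resolved
    raise ValueError(f"refusing to create a repository outside allowed roots: {resolved}")

# Example: validate_repo_path("/home/dev/workspaces/new-project") succeeds,
# while validate_repo_path("/etc/cron.d") raises ValueError.
```

Resolving the path before comparison matters, since “..” segments or symlinks could otherwise escape the allowlisted root.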

The other two flaws compound this risk significantly. CVE-2025-68145 (Path Validation Bypass) is a fundamental failure to properly validate file paths, enabling malicious instructions to target and access protected system directories that should be off-limits. Furthermore, CVE-2025-68144 (Argument Injection in git_diff) enables the injection of arbitrary command-line flags into Git commands. This critical flaw can be used to overwrite files, delete data, or, most alarmingly, achieve arbitrary code execution by leveraging Git’s own internal mechanisms.
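To show why an option-like argument is dangerous, the sketch below applies one common hardening pattern to a git_diff-style call: reject values that begin with a dash and ask Git to stop parsing options. This is a generic mitigation sketch with assumed function and parameter names, not the project’s actual fix.

```python
import subprocess

def safe_git_diff(repo_path: str, target: str) -> str:
    """Diff a repository against a single target, refusing option-like values."""
    # A leading '-' would let the value be parsed as a Git flag rather than a
    # revision or path, which is the essence of the argument-injection flaw.
    if target.startswith("-"):
        raise ValueError(f"rejecting option-like diff target: {target!r}")
    result = subprocess.run(
        # '--end-of-options' (available in modern Git releases) tells Git to
        # stop treating later arguments as flags, adding a second layer of defence.
        ["git", "-C", repo_path, "diff", "--end-of-options", target],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```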

Implications

The findings reveal a significant security risk, especially when organizations run both Git and Filesystem MCP servers concurrently. Security experts describe this configuration as a “toxic combination” that dramatically expands the attack surface, putting the AI agent at critical risk by allowing an attacker to more easily stage malicious files and then use the Git server’s vulnerabilities to execute them. The synergy between the two servers creates a powerful exploit chain that can be difficult to defend against without proper precautions.

Moreover, the disclosure timeline, which spanned several months from the initial report in mid-2025 to the final patch in December, underscores the challenges in addressing novel AI-centric vulnerabilities. It highlights a learning curve for the industry in recognizing and responding to complex exploits that cross the boundary between natural language processing and system command execution. This case emphasizes the paramount importance of securing the protocols that connect LLMs to external systems.

Reflection and Future Directions

Reflection

This study highlights that completely solving prompt injection is an inherently difficult, if not impossible, challenge. The fluid and contextual nature of LLM processing makes it difficult to create rigid rules that can distinguish malicious instructions from legitimate ones. Therefore, a defense-in-depth security posture is not just recommended; it is essential. Key defensive strategies must begin with immediate software updates to patched versions, but they cannot end there.

Organizations must also audit their server configurations to eliminate unnecessary risks, such as running multiple MCP servers without a clear business need. Actively monitoring for anomalous filesystem activity, such as the sudden appearance of .git directories in unexpected locations, can serve as an early warning of a potential compromise. Broader security principles, including implementing the principle of least privilege, strengthening input validation in all connected tools, and enhancing logging capabilities, are essential for mitigating these types of risks.
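As one example of the monitoring idea mentioned above, the short sketch below sweeps a set of directories for .git folders that appear outside the locations where repositories are expected. The paths are placeholders; a real deployment would substitute its own roots and feed the results into existing alerting.

```python
import os
from pathlib import Path

# Placeholder paths: adjust to the directories relevant to your environment.
EXPECTED_REPO_ROOTS = {Path("/home/dev/workspaces").resolve()}
SCAN_ROOTS = [Path("/home/dev").resolve(), Path("/etc").resolve()]

def find_unexpected_git_dirs():
    """Yield .git directories that live outside the expected repository roots."""
    for scan_root in SCAN_ROOTS:
        for dirpath, dirnames, _ in os.walk(scan_root):
            if ".git" in dirnames:
                repo = Path(dirpath).resolve()
                expected = any(
                    repo == root or root in repo.parents
                    for root in EXPECTED_REPO_ROOTS
                )
                if not expected:
                    yield repo / ".git"

if __name__ == "__main__":
    for suspicious in find_unexpected_git_dirs():
        print(f"unexpected Git repository: {suspicious}")
```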

Future Directions

Future research and development must focus on building more secure and resilient frameworks for LLM-to-tool interaction. This includes developing robust input sanitization and validation protocols that are designed specifically for the nuances of AI-driven commands, moving beyond the traditional security models that were not built for this new paradigm. A deeper exploration into creating fine-grained access controls and sandboxing environments is also urgently needed.

Such environments could effectively limit the potential damage from a compromised AI agent, ensuring that the functionality of integrated tools cannot be abused for malicious purposes. By isolating the AI’s actions and strictly defining its permissions, organizations can create a safety net that protects critical infrastructure even if the AI itself is successfully manipulated. The goal is to allow AI to be a powerful tool without making it a powerful weapon.

Securing the Future of AI Integration

The vulnerabilities discovered in Anthropic’s Git MCP Server are a stark reminder that as AI becomes more powerful and integrated into core business processes, its attack surface expands in tandem. The research by Cyata provided a critical case study in a new class of security risks that live at the intersection of natural language and executable code. For organizations to safely leverage the transformative power of LLMs, they must move beyond a reactive stance and adopt a proactive security posture. This approach necessarily combines timely patching, architectural best practices, and continuous, vigilant monitoring to defend against the sophisticated, AI-driven threats that are becoming the new frontier of cybersecurity.
