In April 2026, Mercor, a company that supports several major LLM developers (Anthropic, Meta, and OpenAI), was the victim of a breach. The attack was an extension of the LiteLLM supply chain compromise, which granted the attackers access to Mercor’s environment and internal data.
Inside the Attack
The hack began when attackers compromised Trivy, a security scanning tool. From there, they were able to insert malicious code into LiteLLM, a widely used Python library that gives applications a unified interface for calling LLM APIs.
Mercor is a contractor that hires subject-matter experts to help train AI models. The company uses LiteLLM, and the infostealer malware injected into the library harvested credentials for Mercor systems and environments. With those credentials, the attackers accessed various internal systems and tools, stealing an estimated 4TB of Slack data, ticketing information, videos of contractors, database records, and source code.
Compromised contractor data includes passport scans, Social Security Numbers (SSNs), and video interviews. The attackers also stole detailed records on how OpenAI chooses training data, how Meta builds its reinforcement learning systems, and how Anthropic implements safety labels for AI output.
The breached information also included data on Anthropic’s naming conventions for its tools. By reconstructing how those conventions worked, unauthorized users were able to invoke the Mythos model, which Anthropic previously stated was “too dangerous to release” due to its ability to identify zero-day vulnerabilities. In an unrelated incident earlier in April, Claude Code’s source code was also accidentally leaked through a source map file posted to a public npm registry.
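The mechanism behind that npm leak is worth noting: JavaScript source maps can carry the complete original source of the code they describe in an optional sourcesContent field, so publishing a .map file can expose far more than the minified output it accompanies. Below is a minimal sketch of how embedded sources can be recovered from a source map; the file name is hypothetical:

```python
import json

# Source map (v3) files are JSON. The optional "sourcesContent" array
# holds the full text of each original source file, index-aligned with
# the "sources" array of file paths.
# "claude-code.map" is a hypothetical file name for illustration.
with open("claude-code.map") as f:
    source_map = json.load(f)

sources = source_map.get("sources", [])
contents = source_map.get("sourcesContent") or []

# List each original source file embedded in the map.
for path, text in zip(sources, contents):
    if text is not None:
        print(f"--- {path} ({len(text)} chars of original source) ---")
```

Stripping sourcesContent from production builds, or excluding .map files from published packages, closes off this particular leak path.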
Lessons Learned from the Attack
The Mercor incident is the latest in a string of supply chain attacks targeting the AI sector. As Mercor noted, thousands of companies were affected by the LiteLLM breach, and many of them may not yet realize that their environments and credentials have been compromised.
Beyond the implications for Mercor and its contractors, the leak of OpenAI, Meta, and Anthropic data poses significant risks for those companies’ models. Knowledge of how OpenAI collects training data could allow attackers to poison that data more effectively. Access to the source code of Mythos and other Anthropic models could allow attackers to identify and exploit biases, or to train adversarial models for prompt injection.
While the long-term impacts of these incidents are unpredictable, the threat of the LiteLLM supply chain attack is very real. Organizations that use the library, or that rely on third-party vendors that do, should rotate potentially exposed credentials and assess the risks this supply chain threat poses to their own environments.
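As a first triage step, teams can inventory which environments have LiteLLM installed and compare the installed version against the compromised releases once an official advisory lists them. A minimal sketch follows, using a placeholder version set since no specific affected versions are given here:

```python
from importlib.metadata import PackageNotFoundError, version

# Placeholder set: substitute the compromised release list from the
# official LiteLLM advisory. No real version numbers are asserted here.
COMPROMISED_VERSIONS = {"0.0.0-example"}


def check_litellm() -> None:
    """Report whether the installed litellm version matches a known-bad release."""
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        print("litellm is not installed in this environment")
        return

    if installed in COMPROMISED_VERSIONS:
        print(f"litellm {installed} matches a compromised release: rotate credentials now")
    else:
        print(f"litellm {installed} is not in the known-compromised set; verify against the advisory")


if __name__ == "__main__":
    check_litellm()
```

Running this in each deployment environment gives a quick, scriptable answer; it does not replace credential rotation, since an infostealer may already have exfiltrated secrets before the package was cleaned up.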
