Meta freezes $10 billion partnership with AI startup Mercor following massive supply chain breach
- Marijan Hassan - Tech Journalist
- 12 minutes ago
- 2 min read
Meta Platforms has "indefinitely" suspended its collaboration with Mercor, a high-profile AI data startup valued at $10 billion, following a sophisticated supply chain attack that has potentially exposed the industry’s most guarded secrets: the proprietary methodologies used to train world-leading AI models.

The pause, confirmed on April 4, 2026, follows a breach at Mercor that originated from an open-source library. While Meta has declined to comment publicly, internal sources indicate the company is scrambling to assess whether its data selection criteria, labeling protocols, and specific training strategies, worth billions in R&D, have been leaked to competitors.
The LiteLLM "poisoning"
The breach was not a direct attack on Mercor’s servers but a supply chain compromise of LiteLLM, a popular Python library used by millions of AI developers to interface with various language models.
On March 27, a threat actor known as TeamPCP used compromised maintainer credentials to publish two malicious versions of LiteLLM (1.82.7 and 1.82.8) to the Python Package Index (PyPI). Although the tainted packages were removed within 40 minutes, they were downloaded tens of times by automated systems.
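For teams auditing their own pipelines, the first triage step after an incident like this is simply checking whether an installed dependency matches a known-bad release. The sketch below illustrates that check against the two LiteLLM versions reported in this article; the function name and message strings are illustrative, not from any official tooling.

```python
from importlib.metadata import PackageNotFoundError, version

# The two releases the article reports as malicious (TeamPCP uploads to PyPI).
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def check_package(name: str = "litellm") -> str:
    """Return a short verdict on whether the installed package
    matches a known-compromised release."""
    try:
        installed = version(name)
    except PackageNotFoundError:
        return f"{name} is not installed"
    if installed in COMPROMISED_VERSIONS:
        return f"WARNING: {name} {installed} is a known-malicious release"
    return f"{name} {installed} is not on the known-bad list"

if __name__ == "__main__":
    print(check_package())
```

Pinning exact versions with hash verification (e.g. pip's `--require-hashes` mode) would have blocked the tainted packages outright, since a newly published release cannot match a previously recorded hash.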
Mercor, which relied on the library for its internal data pipelines, was one of the primary downstream victims.
4 terabytes of stolen data
The fallout from the breach is staggering. The hacking group Lapsus$, reportedly collaborating with TeamPCP, claims to have exfiltrated 4 terabytes of data from Mercor’s environment, including:
939 GB of platform source code: Revealing the inner workings of Mercor's data-matching algorithms.
3 TB of video and identity data: Including interview recordings and Social Security numbers for over 40,000 contractors.
Proprietary training sets: Bespoke datasets and "ground truth" labeling strategies developed specifically for Meta, OpenAI, and Anthropic.
"Every major lab has billions of dollars of value at risk here," noted one Silicon Valley venture capitalist. "If these training methodologies end up in the hands of rival state-sponsored labs, it’s a national security issue as much as a corporate one."
Impact on the "ghost work" force
The suspension has left thousands of specialized AI trainers in limbo. Mercor operates as a massive "expert network," hiring thousands of software engineers, lawyers, and writers to provide high-quality human feedback for AI models.
Following Meta's decision to freeze all projects, contractors assigned to Meta-specific tasks reported they were suddenly unable to log hours or access their dashboards. Internal communications suggest Meta is conducting a deep forensic audit of all data handled by Mercor before deciding whether to resume the partnership.
A structural risk for big tech
The incident highlights a growing vulnerability in the AI ecosystem: concentrated vendor risk. Because Meta, OpenAI, and Anthropic all rely on the same small pool of specialized data vendors like Mercor, a single vulnerability in a shared open-source tool like LiteLLM can trigger a "cascading compromise" across the entire industry.
While OpenAI confirmed it is also investigating the breach, it has not yet followed Meta’s lead in pausing its projects. However, the incident has already triggered a class-action lawsuit against Mercor, filed on April 1, alleging the startup failed to maintain adequate security protocols for its vast contractor network.