DeepSeek may have used Google’s Gemini to train its latest model

Last week, Chinese lab DeepSeek released an updated version of its R1 reasoning AI model that performs well on a number of math and coding benchmarks. The company didn’t reveal the source of the data it used to train the model, but some AI researchers speculate that at least a portion of it came from Google’s Gemini family of models.
Sam Paech, a Melbourne-based developer who creates “emotional intelligence” evaluations for AI, published what he claims is evidence that DeepSeek’s latest model was trained on outputs from Gemini. In an X post, Paech said DeepSeek’s updated model, called R1-0528, prefers words and expressions similar to those Google’s Gemini 2.5 Pro favors.
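Paech’s claim rests on comparing the lexical habits of the two models. As a hypothetical illustration of that kind of fingerprinting (not Paech’s actual tooling), the sketch below collects responses from each model to the same prompts and compares their word-frequency profiles; the corpora and names here are placeholders.

```python
from collections import Counter
import math

def word_profile(samples):
    # Aggregate lowercase word counts across a corpus of model outputs.
    counts = Counter()
    for text in samples:
        counts.update(text.lower().split())
    return counts

def cosine_similarity(a, b):
    # Cosine similarity between two word-frequency vectors;
    # closer to 1.0 means more similar vocabularies.
    words = set(a) | set(b)
    dot = sum(a[w] * b[w] for w in words)
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b)

# Placeholder corpora: in practice these would be many responses
# sampled from each model on an identical set of prompts.
r1_outputs = ["example response sampled from R1-0528"]
gemini_outputs = ["example response sampled from Gemini 2.5 Pro"]

score = cosine_similarity(word_profile(r1_outputs),
                          word_profile(gemini_outputs))
print(f"lexical similarity: {score:.3f}")
```

Real stylometric evaluations typically weigh distinctive phrases and n-grams rather than raw word counts, but the principle is the same.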
That’s not a smoking gun. But another developer, the pseudonymous creator of a “free speech eval” for AI called SpeechMap, noted the DeepSeek model’s traces — the “thoughts” the model generates as it works toward a conclusion — “read like Gemini traces.”
DeepSeek has been accused of training on data from rival AI models before. In December, developers observed that DeepSeek’s V3 model often identified itself as ChatGPT, OpenAI’s AI-powered chatbot platform, suggesting it may have been trained on ChatGPT chat logs.
Earlier this year, OpenAI told the Financial Times it had found evidence linking DeepSeek to distillation, a technique for training AI models on the outputs of larger, more capable ones. According to Bloomberg, Microsoft, a close OpenAI collaborator and investor, detected large amounts of data being exfiltrated through OpenAI developer accounts in late 2024, accounts OpenAI believes are affiliated with DeepSeek.
Distillation isn’t an uncommon practice, but OpenAI’s terms of service prohibit customers from using the company’s model outputs to build competing AI.
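In its synthetic-data form, distillation amounts to little more than supervised fine-tuning on a teacher model’s outputs. Here’s a minimal sketch, assuming the OpenAI Python SDK; the seed prompts and teacher model name are placeholders, not anything tied to DeepSeek specifically.

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical seed prompts covering the domains the student
# model should improve on (math, coding, and so on).
seed_prompts = [
    "Prove that the sum of two even integers is even.",
    "Write a Python function that reverses a linked list.",
]

pairs = []
for prompt in seed_prompts:
    # Query the stronger "teacher" model for a reference answer.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any strong API model
        messages=[{"role": "user", "content": prompt}],
    )
    pairs.append({
        "prompt": prompt,
        "completion": resp.choices[0].message.content,
    })

# The prompt/completion pairs become supervised fine-tuning
# data for the smaller "student" model.
with open("distill_data.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

Run at scale against a frontier model’s API, this is precisely the pattern that terms of service like OpenAI’s prohibit when the resulting data feeds a competing model.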
To be clear, many models misidentify themselves and converge on the same words and turns of phrase. That’s because the open web, where AI companies source the bulk of their training data, is becoming littered with AI slop. Content farms are using AI to create clickbait, and bots are flooding Reddit and X.
This “contamination,” if you will, has made it quite difficult to thoroughly filter AI outputs from training datasets.
Still, AI experts like Nathan Lambert, a researcher at the nonprofit AI research institute AI2, don’t think it’s out of the question that DeepSeek trained on data from Google’s Gemini.
“If I was DeepSeek, I would definitely create a ton of synthetic data from the best API model out there,” Lambert wrote in a post on X. “[DeepSeek is] short on GPUs and flush with cash. It’s literally effectively more compute for them.”
Partly in an effort to prevent distillation, AI companies have been ramping up security measures.
In April, OpenAI began requiring organizations to complete an ID verification process in order to access certain advanced models. The process requires a government-issued ID from one of the countries supported by OpenAI’s API; China isn’t on the list.
Elsewhere, Google recently began “summarizing” the traces generated by models available through its AI Studio developer platform, a step that makes it harder to train performant rival models on Gemini traces. Anthropic said in May that it would start summarizing its own models’ traces, citing a need to protect its “competitive advantages.”
We’ve reached out to Google for comment and will update this piece if we hear back.