DeepSeek Might Have Used Google's Gemini To Train Its Newest Model

In the ever-evolving world of artificial intelligence, where breakthroughs are as common as coffee breaks, Chinese AI lab DeepSeek has dropped a bombshell in the form of a model. The release of DeepSeek R1-0528, an updated version of its reasoning AI model, has impressed the tech world with strong performance on math and coding benchmarks, but it has also stirred up controversy. The model's origins, specifically the source of its training data, have become the subject of intense speculation. Some researchers, among them Melbourne-based developer Sam Paech, have claimed that DeepSeek's latest model may have been trained on outputs from Google's Gemini AI.

Let’s dive into this AI soap opera, where innovation meets intrigue, and where every parameter tells a story.