Watch: Google DeepMind CEO and AI Nobel winner Demis Hassabis on CBS’ ’60 Minutes’

A segment on CBS' weekly in-depth TV news program 60 Minutes last night (also shared on YouTube) offered an inside look at Google DeepMind and the vision of its co-founder and Nobel Prize-winning CEO, legendary AI researcher Demis Hassabis.
The interview traced DeepMind’s rapid progress in artificial intelligence and its ambition to achieve artificial general intelligence (AGI)—a machine intelligence with human-like versatility and superhuman scale.
Hassabis described today’s AI trajectory as being on an “exponential curve of improvement,” fueled by growing interest, talent, and resources entering the field.
Two years after a prior 60 Minutes interview heralded the chatbot era, Hassabis and DeepMind are now pursuing more capable systems designed to understand not only language but also the physical world around them.
The interview came after Google's Cloud Next 2025 conference earlier this month, at which the search giant introduced a host of new AI models and features centered on its Gemini 2.5 multimodal AI model family. Google emerged from that conference appearing to have taken the lead over rivals, including OpenAI, in providing powerful AI for enterprise use cases at the most affordable price points.
More details on Google DeepMind’s ‘Project Astra’
One of the segment’s focal points was Project Astra, DeepMind’s next-generation chatbot that goes beyond text. Astra is designed to interpret the visual world in real time.
In one demo, it identified paintings, inferred emotional states, and created a story around an Edward Hopper painting with the line: “Only the flow of ideas moving onward.”
When asked if it was growing bored, Astra replied thoughtfully, revealing a degree of sensitivity to tone and interpersonal nuance.
Product manager Bibbo Shu underscored Astra’s unique design: an AI that can “see, hear, and chat about anything”—a marked step toward embodied AI systems.
Gemini: Toward actionable AI
The broadcast also featured Gemini, DeepMind’s AI system being trained not only to interpret the world but also to act in it—completing tasks like booking tickets and shopping online.
Hassabis said Gemini is a step toward AGI: an AI with a human-like ability to navigate and operate in complex environments.
The 60 Minutes team tried out a prototype embedded in glasses, demonstrating real-time visual recognition and audio responses. Could it also hint at a return of Google Glass, the pioneering yet ultimately off-putting early augmented reality eyewear that debuted in 2012 before being retired in 2015?
While specific Gemini model versions like Gemini 2.5 Pro or Flash were not mentioned in the segment, Google has recently introduced those models for enterprise use across its broader AI ecosystem, which may reflect parallel development efforts.
These integrations support Google’s growing ambitions in applied AI, though they fall outside the scope of what was directly covered in the interview.
AGI as soon as 2030?
When asked for a timeline, Hassabis projected AGI could arrive as soon as 2030, with systems that understand their environments “in very nuanced and deep ways.” He suggested that such systems could be seamlessly embedded into everyday life, from wearables to home assistants.
The interview also addressed the possibility of self-awareness in AI. Hassabis said current systems are not conscious, but that future models could exhibit signs of self-understanding. Still, he emphasized the philosophical and biological divide: even if machines mimic conscious behavior, they are not made of the same “squishy carbon matter” as humans.
Hassabis also predicted major developments in robotics, saying breakthroughs could come in the next few years. The segment featured robots completing tasks with vague instructions—like identifying a green block formed by mixing yellow and blue—suggesting rising reasoning abilities in physical systems.
Accomplishments and safety concerns
The segment revisited DeepMind’s landmark achievement with AlphaFold, the AI model that predicted the structure of over 200 million proteins.
Hassabis and colleague John Jumper were awarded the 2024 Nobel Prize in Chemistry for this work. Hassabis emphasized that this advance could accelerate drug development, potentially shrinking timelines from a decade to just weeks. “I think one day maybe we can cure all disease with the help of AI,” he said.
Despite the optimism, Hassabis voiced clear concerns. He cited two major risks: the misuse of AI by bad actors and the growing autonomy of systems beyond human control. He emphasized the importance of building in guardrails and value systems—teaching AI as one might teach a child. He also called for international cooperation, noting that AI’s influence will touch every country and culture.
“One of my big worries,” he said, “is that the race for AI dominance could become a race to the bottom for safety.” He stressed the need for leading players and nation-states to coordinate on ethical development and oversight.
The segment ended with a meditation on the future: a world where AI tools could transform almost every human endeavor—and eventually reshape how we think about knowledge, consciousness, and even the meaning of life. As Hassabis put it, “We need new great philosophers to come about… to understand the implications of this system.”