In October 2024, the Nobel Committee awarded the Physics prize to a man who never studied physics — for building something he now thinks might destroy us.
That man had spent thirty years being told he was wrong. Most of that time, he believed everyone else was.
That man is Geoffrey Hinton, often called the godfather of AI.
Geoffrey Hinton was born in London in 1947. His great-great-grandfather was George Boole — the mathematician whose logic sits at the foundation of every computer ever built. Hinton studied psychology at Cambridge, got a PhD in AI at Edinburgh, and arrived in the field with one conviction: machines should learn the way brains do, not by following rules someone wrote for them.
The idea he pursued — the artificial neural network — had been around since the 1940s but was largely abandoned by the time Hinton arrived. A damning 1969 critique, Marvin Minsky and Seymour Papert's book Perceptrons, had convinced most of the field to move on. Funding dried up. Careers pivoted. Hinton read the critique and thought it was incomplete. He kept working.
In 1986, he co-authored a paper with David Rumelhart and Ronald Williams introducing backpropagation — an algorithm that let networks learn from errors by adjusting their internal connections, layer by layer, across thousands of examples. It was technically sound. It was also almost entirely ignored.
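The core mechanism can be sketched in a few lines. This is an illustrative toy, not the 1986 paper's code: a one-input, one-hidden-unit network trained by gradient descent, where the chain rule carries the output error backward to update each layer's weight. The network shape, target function, and learning rate here are arbitrary choices for the demonstration.

```python
import math

# Toy network: y = w2 * tanh(w1 * x), trained to fit the target t(x) = 0.5 * x.

def train(samples, lr=0.1, epochs=200):
    w1, w2 = 0.3, 0.2  # small fixed initial weights
    for _ in range(epochs):
        for x, t in samples:
            # forward pass
            h = math.tanh(w1 * x)
            y = w2 * h
            # backward pass: the chain rule propagates the error
            # from the output layer back toward the input layer
            dy = y - t                  # dLoss/dy for loss = 0.5 * (y - t)^2
            dw2 = dy * h                # gradient of the output-layer weight
            dh = dy * w2                # error sent back to the hidden unit
            dw1 = dh * (1 - h * h) * x  # gradient of the hidden-layer weight
            # nudge each connection against its gradient
            w1 -= lr * dw1
            w2 -= lr * dw2
    return w1, w2

def loss(samples, w1, w2):
    return sum(0.5 * (w2 * math.tanh(w1 * x) - t) ** 2 for x, t in samples)

samples = [(x / 10, 0.5 * x / 10) for x in range(-10, 11)]
before = loss(samples, 0.3, 0.2)
w1, w2 = train(samples)
after = loss(samples, w1, w2)
print(after < before)  # training reduces the error
```

Each update moves a weight slightly in the direction that shrinks the error on one example; repeated over thousands of examples, the whole network settles toward weights that fit the data.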
Computers were too slow. Data was too scarce. Through two “AI winters,” when funding collapsed and the mainstream moved elsewhere, Hinton stayed at the University of Toronto and kept going. His circle was small — a quiet group of students and fellow believers who absorbed both his ideas and his stubbornness.
By the late 2000s, three things had changed: graphics processors made training dramatically faster, the internet had produced datasets of unprecedented scale, and new techniques had fixed the deeper architectural problems. The world found out what that meant in 2012.
That year, Hinton’s student Alex Krizhevsky entered ImageNet — a global image recognition competition. The best systems were hitting error rates around 26%. Krizhevsky’s, built on Hinton’s ideas, hit 15.3%. It wasn’t a mere improvement. It was a rupture. Within months, every serious research group had pivoted to deep learning. Within five years, the technology was running inside products used by billions — voice assistants, photo recognition, translation tools, recommendation engines.
In 2013, Google acquired Hinton’s startup for around $44 million. He joined Google Brain and kept building. Then, in May 2023, he resigned.
He said he wanted to speak freely. What followed was striking precisely because it came from someone who understood exactly what had been built. He warned about AI-generated misinformation making shared reality impossible, about autonomous weapons, about mass job displacement, and — most seriously — about AI systems developing goals misaligned with human survival.
Eighteen months later, the Nobel Committee gave him the Physics prize for the foundational work he had done in Toronto, in the years when almost no one was paying attention.
Hinton spent thirty years being wrong in the eyes of the field, and then spent the next twenty watching everything he had predicted come true faster than he had expected. He built the foundations of a technology now embedded in nearly every corner of modern life. Then he walked away from the most powerful AI lab in the world to warn anyone who would listen about where it was heading. Where the story ends, nobody — including Hinton — knows yet.