The story begins not with winter, but with a beautiful spring.

In 1943, Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity," a mathematical model of neural networks. The brain, they suggest, is computable.
Seven years later, Alan Turing publishes "Computing Machinery and Intelligence" and proposes his famous test. The question shifts from "can machines think?" to "can we tell the difference?"
Fast forward six years to the Dartmouth Summer Research Project on Artificial Intelligence. John McCarthy coins the term "artificial intelligence." The proposal is breathtaking in its confidence: its authors believe that a carefully selected group of ten scientists can make significant progress on machine intelligence in a single summer.
The attendees—McCarthy, Marvin Minsky, Claude Shannon, Allen Newell, Herbert Simon—are brilliant. They create programs that prove geometric theorems, play checkers, and solve algebra problems. Simon declares in 1957 that "Within ten years a computer will be the world's chess champion." Minsky announces in 1967 that "Within a generation, the problem of creating 'artificial intelligence' will substantially be solved."
The promises flow like rivers in springtime. Funding flows with them: from DARPA, from corporations, from governments dreaming of intelligent machines that will translate languages, recognize speech, navigate autonomously, and reason like humans.
The fatal flaw: They had no idea how hard it actually was, how long it would actually take, and how much it would actually cost.
The years between the 1956 Dartmouth Conference and the first AI winter in 1974 were a period of extraordinary optimism, rapid experimentation, and bold intellectual ambition.
Researchers genuinely believed that human-level machine intelligence might be only a decade away. The Dartmouth meeting had declared that every aspect of intelligence could, in principle, be described precisely enough for a machine to simulate it. That declaration became the field's founding myth and its driving force. From the late 1950s through the 1960s and into the early 1970s, a span of nearly two decades, AI labs at MIT, Carnegie Mellon, Stanford, and RAND raced to turn that vision into reality.
Early successes seemed to justify the confidence. Programs like the Logic Theorist and the General Problem Solver showed that symbolic reasoning could solve puzzles and prove theorems. John McCarthy created Lisp, the first language designed specifically for AI, while Marvin Minsky pushed the idea of “micro‑worlds,” simplified environments where machines could demonstrate intelligent behavior. Natural‑language systems like ELIZA and SHRDLU captured the public imagination by simulating conversation or manipulating objects in a virtual world. Robotics made its first real leap forward with Shakey, a mobile robot that could perceive, plan, and act, at least in controlled settings. Each breakthrough reinforced the belief that general intelligence was within reach.
During this same period, from Dartmouth to the first AI winter, DARPA (then ARPA) became the single most important institutional force shaping early artificial intelligence. While universities supplied the ideas, DARPA supplied the money, the computing power, and the freedom for researchers to explore wildly ambitious visions without immediate practical constraints. This was the era when DARPA treated AI as a national strategic priority, believing that machine intelligence could transform defense, command-and-control, and scientific discovery. The agency's willingness to fund long-shot research created the intellectual ecosystem in which early AI flourished.
DARPA's investments were broad and foundational. It funded the creation of major AI laboratories at MIT, Stanford, and Carnegie Mellon, enabling work on symbolic reasoning, robotics, natural-language understanding, and early machine vision. Projects like the General Problem Solver, Shakey the Robot, and SHRDLU were all made possible by DARPA support. The agency also funded the development of time-sharing systems and interactive computing, technologies that were not AI per se but were essential to AI research. DARPA's money paid for the PDP-6 and PDP-10 machines that became the beating heart of AI labs. It also supported the creation of ARPANET, which would later become the internet.
In many ways, DARPA wasn’t just funding AI; it was building the entire computational infrastructure that AI needed to grow.
But beneath the optimism, cracks were forming. The systems worked only in narrow, highly constrained environments. Language programs couldn't scale beyond toy vocabularies. Problem-solving systems collapsed under real-world complexity. Machine translation stalled. Perceptrons, the early neural networks, were shown to have sharp mathematical limits and were largely abandoned. Funding agencies began to notice that progress was slower and more fragile than promised. By the early 1970s, the gap between ambition and reality had become impossible to ignore. Reports in the U.S. and U.K. questioned the field's feasibility, leading to cuts in funding, and without government support the field could not sustain its momentum.
Yet DARPA’s ambitions also contributed to the field’s first major reckoning. The agency expected rapid progress toward systems that could understand language, reason about the world, and assist in military decision‑making. Buoyed by early successes, researchers often echoed these optimistic timelines.
By the early 1970s, however, DARPA began to realize that the systems it had funded worked only in narrow, highly controlled environments. Machine translation stalled. Vision systems struggled with real‑world complexity. Robots like Shakey moved slowly and required carefully staged settings. The gap between promise and reality widened, and DARPA’s patience began to thin.
By the early 70s, DARPA’s leadership shifted toward more practical, mission‑oriented research. Funding for open‑ended symbolic AI was reduced, and the famous Lighthill Report in the U.K. amplified global skepticism. This pivot didn’t end AI research, but it marked the end of the exuberant Dartmouth‑to‑1974 era. DARPA’s early support had built the field; its later disillusionment helped trigger the first AI winter. Even in that downturn, the infrastructure, ideas, and communities DARPA had nurtured continued to shape the next generation of AI breakthroughs.
By 1974, the exuberance of the Dartmouth era had cooled into skepticism. The first AI winter began—not because the dream died, but because the field had reached the limits of its early methods. The symbolic systems that defined the 1956–1974 period had revealed both their power and their boundaries.
The winter that followed forced AI to rethink its foundations, setting the stage for the quieter, more empirical rebuilding that would eventually lead to modern machine learning.
A cold spell was imminent.
The First AI Winter is generally marked as the period from 1974 to 1980.
The British government's Science Research Council commissions mathematician James Lighthill to evaluate AI research; his report, published in 1973, is devastating. He concludes that AI has failed to achieve its "grandiose objectives" and that most research is producing nothing of practical value. The promises were lies, or at least profound miscalculations.
What killed the spring:
1. The Combinatorial Explosion: Early AI worked on "toy problems"—simple, constrained environments. Real-world problems exploded into impossibility. With roughly 30 legal moves per chess position, searching just ten plies ahead already means hundreds of trillions of positions. The computers of the 1970s could barely search a few moves deep.
2. The Frame Problem: How does an AI know what's relevant? If a robot moves a table, it needs to understand that everything on the table moves too, that the floor doesn't move, that the color of the walls doesn't change. Common sense—the background knowledge humans have about how the world works—proved nightmarishly difficult to encode.
3. The Perceptrons Controversy: In 1969, Minsky and Papert published "Perceptrons," demonstrating mathematical limitations of simple neural networks: a single-layer perceptron cannot compute certain basic functions, such as XOR (a minimal sketch of this limitation follows this list). The book effectively killed neural network research for over a decade, and funding and attention shifted decisively away from the approach.
4. Machine Translation Failure: In 1966, the ALPAC report declared that machine translation research—despite years of funding—had produced nothing useful. Computers could translate, but the results were gibberish. The famous (and probably apocryphal) example: "The spirit is willing but the flesh is weak" translated to Russian and back became "The vodka is good but the meat is rotten."
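To see the XOR limitation concretely, here is a small, hypothetical NumPy sketch (illustrative only, not any historical program): an exhaustive search over a grid of single linear threshold units never reproduces XOR, while a hand-wired two-layer network computes it exactly.

```python
import itertools
import numpy as np

# Truth table for XOR: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def step(z):
    """Heaviside threshold, the activation of a classic perceptron."""
    return (z >= 0).astype(int)

# 1) Brute-force search over a coarse grid of single-layer perceptrons.
#    None of them reproduces XOR, because XOR is not linearly separable.
found = False
grid = np.linspace(-2, 2, 9)
for w1, w2, b in itertools.product(grid, repeat=3):
    if np.array_equal(step(w1 * X[:, 0] + w2 * X[:, 1] + b), y):
        found = True
        break
print("single-layer perceptron solves XOR:", found)  # -> False

# 2) A two-layer network with hand-chosen weights solves it:
#    hidden unit A fires for OR(x1, x2), hidden unit B fires for AND(x1, x2),
#    and the output fires for "A and not B", which is exactly XOR.
def two_layer_xor(x):
    h_or = step(x[:, 0] + x[:, 1] - 0.5)    # OR
    h_and = step(x[:, 0] + x[:, 1] - 1.5)   # AND
    return step(h_or - h_and - 0.5)         # OR and not AND

print("two-layer network output:", two_layer_xor(X))  # -> [0 1 1 0]
```

Hidden layers were the obvious escape hatch; what the field lacked for years was confidence that they could be trained at all.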
By 1974, funding evaporates. The U.S. government slashes AI budgets. Britain, the birthplace of Turing, essentially abandons the field. Researchers who promised thinking machines now struggle to justify their existence. The term "AI" becomes toxic—researchers start calling their work "informatics" or "knowledge engineering" instead.
Graduate students flee to other fields. Promising careers end. Labs close.
The winter descends. It lasts six years.
Salvation arrives from an unexpected direction: expert systems.
The idea is pragmatic rather than grandiose. Instead of trying to create general intelligence, why not capture human expertise in narrow domains? A program called MYCIN (1976) could diagnose blood infections as well as specialists. XCON (1980) configured computer systems for Digital Equipment Corporation, saving millions annually.
This is AI that works. AI that makes money.
In 1982, the Japanese government announces the Fifth Generation Computer Project—a ten-year, $850 million initiative to build intelligent computers. Panic spreads through the West. America and Britain cannot let Japan dominate AI. The race is on.
Funding floods back. Companies create AI divisions. The AI boom of the 1980s sees startups proliferate. Symbolics, Lisp Machines Inc., IntelliCorp—companies selling specialized AI hardware and software. Carnegie Mellon, MIT, and Stanford become AI powerhouses again.
By 1985, the AI industry exceeds a billion dollars annually.
But the foundation is shaky:
Expert systems don't scale: They require painstaking manual encoding of expert knowledge. Adding new rules often breaks existing ones. Maintaining them is a nightmare. They're brittle—make one small change to the problem, and they fail completely.
The hardware is too expensive: AI companies sell specialized "Lisp machines" costing $100,000+. Then desktop computers get faster. Suddenly, why buy an expensive specialized machine when a cheap PC works almost as well?
The promises return: Once again, researchers oversell. AI will revolutionize everything. The stock market will be predicted. Autonomous vehicles are just around the corner. Natural language understanding is nearly solved.
Once again, they're wrong.
The AI hardware market collapses in 1987. Desktop computers have caught up. The specialized machines become unsellable overnight. Symbolics and the other Lisp machine vendors go into steep decline; several eventually go under.
Disappointed by a lack of progress, DARPA redirects funding away from AI toward more concrete computer science.
By the early 1990s, expert systems hit a wall. They're too expensive to maintain, too brittle to trust with crucial decisions. Companies realize they've spent millions on systems that require constant expert intervention, defeating their original purpose.
This winter is colder than the first. The field doesn't just lose funding; it loses credibility. "AI" becomes a punchline. Researchers once again abandon the term. The work continues in disguised forms—"machine learning," "neural networks," "computational intelligence"—anything but the tainted phrase "artificial intelligence."
The first winter could be dismissed as youthful exuberance. The second winter was a pattern. AI had now failed twice. The field seemed fundamentally cursed. It was always promising, yet never delivering. Why would anyone fund a third try?
The media narrative shifted from excitement to mockery. AI became associated with hype cycles and broken promises. Investors learned to run away from anything labeled "AI."
The winter lasts six years, but the reputational damage lasts two decades.
AI doesn't die. It just stops calling itself AI. Researchers make quiet progress on unglamorous problems:
Spam filters: Bayesian methods that actually work (a toy sketch follows this list)
Recommendation systems: Amazon and Netflix use machine learning, but don't call it AI
Optical character recognition: Slowly improving
Game playing: Deep Blue defeats Kasparov in 1997, but it's sold as brute-force search, not "real AI"
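The Bayesian methods behind those spam filters reduce, in their simplest form, to naive Bayes over word counts. Here is a toy sketch with made-up messages and add-one smoothing; real filters of the era trained on far larger corpora and used more careful feature engineering.

```python
from collections import Counter
from math import log

# Tiny toy corpus; real filters train on thousands of labeled messages.
spam = ["win cash now", "cheap pills now", "win a prize now"]
ham = ["meeting moved to monday", "lunch plans for monday", "project status update"]

def word_counts(messages):
    counts = Counter()
    for msg in messages:
        counts.update(msg.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def class_score(message, counts, n_class, n_total):
    """Naive Bayes: log class prior plus a sum of log word probabilities.
    Add-one (Laplace) smoothing keeps unseen words from zeroing the score."""
    total_words = sum(counts.values())
    score = log(n_class / n_total)  # class prior
    for word in message.split():
        score += log((counts[word] + 1) / (total_words + len(vocab)))
    return score

def classify(message):
    n = len(spam) + len(ham)
    s = class_score(message, spam_counts, len(spam), n)
    h = class_score(message, ham_counts, len(ham), n)
    return "spam" if s > h else "ham"

print(classify("win cash prize now"))        # -> spam
print(classify("status update for monday"))  # -> ham
```

Working in log space rather than multiplying raw probabilities is the standard trick here: it avoids numerical underflow once messages contain more than a handful of words.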
The key insight: Stop trying to simulate human intelligence. Just solve specific problems.
Three developments brew during this period:
1. Better algorithms: Support Vector Machines (1990s), Random Forests (2001), and refinements to neural networks that no longer suffer from the single-layer perceptron's limitations.
2. More data: The internet explosion creates massive datasets. Machine learning needs data the way fire needs fuel. Suddenly, the supply is effectively unlimited.
3. Faster computers: Moore's Law marches on. GPUs, originally built for video games, turn out to be perfect for neural network training.
With past winters in mind, researchers remain cautious. They publish papers about "machine learning" and "pattern recognition." They avoid grand claims. The winters taught them to underpromise and overdeliver.
Then two miracles occur, one setting the stage for the other:
2010: The ImageNet Large Scale Visual Recognition Challenge launches, built on a dataset of 1.2 million labeled training images across 1,000 categories. The annual competition becomes the field's benchmark.
2012: Geoffrey Hinton's team uses a deep convolutional neural network called AlexNet to win the ImageNet contest. They don't just win; they obliterate the competition. Their error rate: 15.3%. The second-place team: 26.2%.
The AI community stares at the results in disbelief.
Deep neural networks—the approach Minsky had supposedly killed in 1969—don’t just work, they work spectacularly well.
Why now? Why not in 1985 or 1995?
1. GPUs: Hinton's team used graphics processing units to train their network. GPUs can perform thousands of calculations simultaneously, which is perfect for neural networks. They turned weeks of computation into days.
2. Big Data: ImageNet provided millions of training examples. Neural networks are data-hungry beasts. Feed them enough, and they perform miracles. The internet had finally created datasets large enough.
3. Better techniques: ReLU activation functions, dropout regularization, and better initialization strategies had been developed. These and other seemingly small technical improvements, working together, made the impossible possible (a toy sketch of the first two appears after this list).
4. Persistence: Hinton, Yann LeCun, and Yoshua Bengio had worked on neural networks through the winters, ignored and unfunded. They refused to give up. When the moment arrived, they were ready.
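As a rough illustration of point 3, here is a minimal, hypothetical NumPy sketch (nothing to do with AlexNet's actual code; the layer sizes and the 0.5 drop rate are arbitrary) showing ReLU and inverted dropout on a single hidden layer's forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """ReLU: pass positives through, zero out negatives.
    Unlike sigmoid or tanh, it does not saturate for large positive inputs."""
    return np.maximum(0.0, x)

def dropout(x, p_drop=0.5, training=True):
    """Inverted dropout: randomly zero a fraction of activations during training
    and rescale the survivors, so nothing changes at inference time."""
    if not training or p_drop == 0.0:
        return x
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)

# One hidden layer's forward pass on a random mini-batch.
batch = rng.normal(size=(4, 8))           # 4 examples, 8 input features
W = rng.normal(scale=0.1, size=(8, 16))   # small-scale init, another "minor" fix
b = np.zeros(16)

hidden = dropout(relu(batch @ W + b), p_drop=0.5, training=True)
print(hidden.shape)          # (4, 16)
print((hidden == 0).mean())  # roughly three quarters of activations are zero (ReLU plus dropout)
```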
Once the dam breaks, the flood is unstoppable:
2014: GANs (Generative Adversarial Networks) can create realistic images
2015: ResNets allow networks hundreds of layers deep
2016: AlphaGo defeats Lee Sedol
2017: "Attention Is All You Need" introduces the transformer architecture
2018: BERT revolutionizes natural language processing
2020: GPT-3 shows language models can do few-shot learning
2022: ChatGPT brings AI to the masses
The field that was twice declared dead now dominates technology discourse. Companies rename themselves to include "AI." Stock prices surge on AI announcements. Everyone wants AI researchers, and salaries for top talent explode.
The winters are over. The summer has arrived with supernova intensity.
The AI winters demonstrate the Gartner Hype Cycle on a generational scale:
Peak of Inflated Expectations: We'll have general AI in a decade!
Trough of Disillusionment: AI is impossible and always was.
Plateau of Productivity: Deep learning actually works for specific tasks.
The field suffered because researchers couldn't resist making promises they couldn't keep. The winters were the price of hubris.
Deep learning needed:
Computational power (GPUs)
Data (the internet)
Algorithms (decades of refinement)
Stubborn researchers (who worked through the winters)
Remove any element, and it fails. This explains why AI "suddenly" worked in 2012—all the pieces finally aligned.
The first wave of AI tried to solve everything. Expert systems tried to capture human expertise wholesale. Both failed miserably.
Modern AI succeeds by being humble: image recognition, language translation, game playing. Specific, measurable tasks, not general intelligence.
This is the winter's lesson: Solve problems, don't simulate minds.
Researchers learned to avoid the AI label when it became toxic. They called themselves machine learning experts, data scientists, computational linguists—anything but AI researchers.
This linguistic shapeshifting allowed the field to survive when "AI" became unfundable. When success returned, they could reclaim the term. And they did.
Modern AI researchers are cautious about claims. They avoid grand predictions. They focus on benchmarks and metrics. They publish cautious papers.
This conservatism was forged in the winters. The field was burned twice, so it learned to be careful about shouting 'fire!'
Both winters seemed terminal; however, work continued. The stubborn, pioneering few—Hinton, LeCun, Bengio—kept researching neural networks when no one else cared.
Their persistence meant that when the moment arrived, the field was ready. The winters delayed AI by decades, but they didn't kill it.
To outsiders, deep learning appeared from nowhere in 2012. Actually, it was decades in the making. AlexNet built on 30+ years of neural network research (and some cheap NVIDIA video cards).
This is the pattern of breakthroughs: sudden to observers, inevitable to those who lived through the preparation.
Today, AI dominates headlines. Billions flow into the field. Every company has an AI strategy. We're promised autonomous vehicles, general intelligence, the singularity. Sound familiar?
The optimists say: This time is different. Deep learning actually works. We have the data, the compute, the algorithms. The progress is real and measurable.
The pessimists say: We're hitting limitations. Large language models plateau. We don't understand how they work. We're making promises about AGI we can't keep. We've seen this movie before.
The honest answer: We don't know. We're inside the cycle, unable to step outside and see it clearly.
The winters teach us this: AI advances in waves, not in straight lines. There will be setbacks. There will be disappointments. Technologies that seem magical today will seem quaint tomorrow.
The question isn't whether winter will come again. The question is whether we'll recognize it in time, whether we've learned humility, whether we can navigate the hype without losing the funding that enables genuine progress.
The survivors of the last winters watch the current excitement with knowing eyes. They've been there before. They know how the story can end. And they know something else: that winter, however harsh, is not forever. That the field can survive if the work is genuine, if the foundations are solid, if researchers refuse to give up.
The winters were necessary. They burned away the hype, humbled the hubris, and forced AI to become a real discipline rather than a collection of grand promises.
When spring finally came—when AlexNet shocked the world—it was the winter survivors who led the way, carrying decades of hard-won knowledge into the new age.
The dream of thinking machines is older than computers themselves. It has survived two deaths. It may survive a third, but it will never again be as innocent as it was that summer in Dartmouth, 1956, when ten scientists thought they could solve intelligence in eight weeks.
That innocence died in the first winter.
What emerged from the second was something harder, wiser, and ultimately more powerful: not artificial intelligence as magic, but as engineering.
And perhaps that's the real breakthrough.
The AI winters were revolutionary not because of what they achieved, but because of what they forced the field to confront. They were moments when the hype collapsed, funding evaporated, and researchers had to rethink the very foundations of artificial intelligence.
In hindsight, these downturns acted as painful yet necessary intellectual resets that pushed AI away from overconfident assumptions and toward data‑driven, empirically grounded methods.
The first AI winter (mid‑1970s) exposed the limits of symbolic reasoning. Systems that looked impressive in lab environments failed in the real world. This forced researchers to acknowledge that intelligence couldn’t be built from logic alone. The field had to deal with complexity, uncertainty, and perception—problems it had previously underestimated.
The second AI winter (late 1980s–1990s) upended the belief that expert systems could scale indefinitely. Their cost and maintenance burden revealed that handcrafted knowledge was not a viable path to general intelligence. This collapse opened the door to probabilistic models, machine learning, and data-driven approaches, techniques that would eventually become the backbone of modern AI.
By stripping away hype and easy funding, AI winters rewarded only the most persistent, methodical researchers. The people who kept working, like Hinton, LeCun, and Bengio, built the conceptual and mathematical foundations that later enabled deep learning, reinforcement learning, and modern neural architectures. Without the winters, these ideas might have been drowned out by short-term commercial enthusiasm.
AI winters forced researchers to ask harder questions:
What kinds of problems can AI actually solve?
What kinds of data and compute are required?
How do we measure progress meaningfully?
What does “intelligence” even mean in computational terms?
These questions reshaped the field into something more disciplined, empirical, and scientifically grounded.
Each winter cleared away failed paradigms and unrealistic expectations, making room for new ideas:
After the 1970s winter → probabilistic models, early neural nets, robotics foundations
After the 1980s–90s winter → machine learning, statistical NLP, reinforcement learning
After the 2000s stagnation → deep learning, GPUs, big data, transformers
In this sense, the winters were not setbacks. Instead, they were revolutions in disguise, pruning the field so that stronger, more resilient ideas could grow.
And it has grown stronger than ever before. Call it climate change: we may never see winter again.
Once upon a frosty time in the late 1980s, the AI research lab at a certain prestigious university looked less like a cutting-edge think tank and more like a support group for disgraced wizards.
The second AI winter had hit hard. Expert systems had promised to bottle human expertise like fine wine. It turned out they were more like cheap boxed stuff that went sour the moment you tried to pour it into the real world. Funding dried up faster than a spilled latte on a hot laptop. DARPA stopped returning calls. Japanese Fifth Generation supercomputers became very expensive paperweights. And the word "AI" itself? It was toxic. Saying it in a grant proposal was like shouting "recession!" in a bank lobby—people scattered.
Enter Dr. Evelyn "Evie" Hargrove, a tenured professor who'd spent the 1970s building symbolic reasoners that could prove theorems... as long as the theorems were about as complex as "2+2=4" and nobody asked follow-ups. Now, in 1989, she was desperately trying to keep her lab alive.
Her latest trick? Rebranding. Hard.
Monday morning staff meeting. Evie stands at the whiteboard like a general rallying troops before a retreat.

"From now on," she announces, "we do not say 'artificial intelligence.' We say 'pattern recognition.' Or 'statistical inference.' Or—my personal favorite—'advanced data heuristics.' If anyone asks what we do, tell them we're in informatics. It's vague, it's European-sounding, and nobody's mad at informatics yet."
Her grad student Raj raises a hand. "But Dr. Evie, our last paper was literally called 'Learning Rules for Knowledge-Based Systems.'"
Evie erases the title with the fury of someone deleting browser history. "That was the old us. The new us is publishing 'Efficient Multivariate Clustering via Kernel Density Estimation.' See? No AI. Just math. Boring, grant-safe math."
The lab adopts the code names like victims in witness protection. The neural net group becomes "the connectionist pattern folks" (they whisper "neural" only in the supply closet). The logic programming crew rebrands as "declarative constraint satisfaction engineers." One poor postdoc who still slips up and says "AI" in the hallway gets sent to fetch coffee for a week as penance.
The cafeteria becomes a minefield. A robotics guy from down the hall wanders over.
"Hey Evie, still doing that AI stuff?"
Evie freezes mid-bite of her turkey sandwich. "AI? Oh no, no. We're doing... computational ethnography of data flows. Very interdisciplinary."
The robotics guy blinks. "Sounds niche."
"It's the future," Evie says, eyes wide like she's selling oceanfront property in Kansas. "Big in Europe."
Word spreads. Soon the entire department is playing the game. The vision lab calls itself "image statistics consultancy." The natural language group? "Lexical probability modeling." They even start a support group: "Former AI Researchers Anonymous." Meetings in the basement. First rule: Don't say the A-word.
One night, a breakthrough happens. Raj, bleary-eyed at 3 a.m., tries a backpropagation tweak on what used to be called a "multi-layer perceptron." It suddenly classifies handwritten digits far better than anything the lab has built before. He runs into Evie's office waving printouts.
"Evie! Look! Our... uh... statistical pattern associator just crushed the benchmark!"
Evie stares at the numbers. Her face lights up, then immediately falls. "This is amazing. But we can't publish it as AI. The funding bodies will laugh us out of the room."
Raj: "So, what do we call it?"
Long pause.
Evie: "Adaptive kernel-based function approximation."
They submit the paper. It gets accepted to a "machine learning" conference (which everyone knows is just AI with a fake mustache). Reviewers rave about the "novel statistical approach." Grants start trickling back in, labeled for "data mining" and "informatics."
Years later, when the 2010s boom hits and deep learning explodes, Evie retires. At her farewell party, a young PhD student asks her the secret to surviving the winter.
Evie sips her wine and smiles. "Simple. Never let them catch you saying 'artificial intelligence.' Call it whatever keeps the lights on. Pattern recognition. Statistics. Advanced heuristics. Hell, call it 'fancy Excel macros' if you have to."
She leans in conspiratorially. "But between us? We always knew what it was."
She winks. "We just couldn't afford to admit it out loud."
And somewhere in the archives, buried under folders labeled "Informatics Technical Report Series," the old AI dreams waited patiently for the thaw, proving that sometimes the smartest move in a winter is to hibernate under a different name.
Moral: In AI, hype freezes funding, but clever euphemisms keep the coffee machine running.
Curated by Grok: "Don't call me AI!" Produced by AI World 🌐