The Architects of Intelligence: Why We Can’t Rewrite AI’s Origin Story

Every few years, a new faction of technologists emerges, eager to proclaim the "death" of the last great innovation. Today, large language models (LLMs) are under fire.

Recently, AI luminary Yann LeCun made waves by dismissing large language models as a dead-end for building true intelligence. He argues:

- LLMs need hundreds of thousands of years' worth of text to train.

- Children learn faster through embodied experience: vision, touch, and interaction.

- The real world is far more complex than language alone can capture.

His proposed solution? JEPA (Joint Embedding Predictive Architecture), a world model that learns by observing the world rather than by predicting text.

While this work is important, the framing of this critique misses something bigger—and more dangerous.

The Attempt to Rewrite AI’s Origin Story

We’ve seen this before. A promising new approach gains traction, and in the excitement, some technologists attempt to erase the very foundations they’re building on. It’s not just a technical disagreement—it’s a narrative grab.

The current wave of world model enthusiasm is no exception. While it’s dressed as innovation, it often carries an implicit revisionism:

“LLMs are flawed. We need something fundamentally new. We’re the ones who figured it out.”

But that claim is built on a false assumption.

LLMs Aren't the Floor: There's a Foundation Beneath Them

Here’s the truth: JEPA, V-JEPA, and nearly every modern architecture still depend on the breakthroughs that enabled LLMs:

- Self-supervised learning

- Transformer-based architectures

- Scalable optimization and data processing

- Representation learning at scale

All of these advances were proven and refined in the systems that made large language models possible.

The techniques used to build “vision-based intelligence” today were pioneered in language models. To dismiss LLMs is to ignore the very scaffolding that makes this next generation of models possible.

You can't build a skyscraper without first understanding the materials it's made of, and before that, the very soil it stands on.

This Isn’t the First Time

Throughout the history of AI—from expert systems to neural networks, from symbolic logic to deep learning—we’ve seen cycles of rejection and reinvention. Each generation distances itself from the last to claim originality. But real progress never begins with erasure. It begins with evolution.

Why This Matters

If we allow the narrative to shift too far, glorifying what's next while erasing what came before, we risk:

- Undervaluing foundational research

- Misguiding investors and business leaders

- Distracting from integration in favor of ideological camps

Final Thought: The Future of AI Belongs to the Builders—Not the Revisionists

Large Language Models are not the final answer—but they were built upon an architecture that’s the foundation for nearly everything that will come next.

The future won’t be built by those who deny where intelligence began. It will be built by those who understand the lineage, integrate the best of each approach, and honor the architecture that gave us the first spark of artificial general intelligence.

We don’t move forward by disregarding the original blueprint. We do it by celebrating the brilliance of its design.

Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

I am a visionary, a futurist, and I am the father of “Modern Artificial Intelligence”.

I am a profound thinker who delves deep into various knowledge realms to deconstruct and construct competency frameworks. In essence, I possess a unique thought perspective—a serial polymath.

https://www.jameelgordon.com