Why GPT-5 disappoints: the myth of artificial superintelligence (ASI)
Samuel Fernández Lorenzo
Sep 10
Updated: Oct 19
If you believe that artificial superintelligence, capable of surpassing us in everything we do, is just around the corner, you might be wrong. Inspired by the evolution from Paleolithic to Neolithic art, I realized that true advances don't come from doing more of the same, but from seeing the world in a completely new way. What if superintelligence isn't about being the best at everything, but about revealing connections that humans cannot yet imagine?
The evolution of human understanding
Thousands of years ago, humans made a revolutionary leap: they went from painting what they saw in Paleolithic caves to creating geometric art in the Neolithic, a change that wasn't merely aesthetic but a reflection of a new way of understanding the world. As Arnold Hauser points out in The Social History of Art:
The Paleolithic artist still paints what he is actually seeing. He paints nothing more than what he can capture in a specific moment and in a single glance.
With the arrival of the Neolithic period, artistic work evolved from a faithful, meticulous representation of the object into "a conceptual representation, not just an image from memory but also an allegory". This change, Hauser argues, reflects how humans began to perceive intelligent forces beyond their control, forces that influenced the activities of newly sedentary humans, such as agriculture and animal husbandry, placing them within a worldview closer to the divine than to the magical.
Geometric art testifies to an awakening of our capacity to model and understand the world beyond the visible. Geometrism not only organized forms but symbolized a connection with a "beyond" that Paleolithic humans could not imagine. This leap led me to wonder: could artificial superintelligence be a similar change, not just in capability but in perspective?
The horizon of superintelligence
Today, tech giants like OpenAI and Meta speak of artificial superintelligence (ASI) as an imminent goal. Sam Altman stated in early 2025 that OpenAI was "confident in knowing how to build AGI" and aimed at a "superintelligence in the true sense." Mark Zuckerberg, for his part, stated recently:
Over the last few months we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.
However, the results of models like GPT-5 have generated skepticism. Users quickly identified serious failures, such as basic errors when solving first-degree equations.
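To appreciate how elementary these problems are: a first-degree equation involves a single unknown with no powers. The worked example below is a hypothetical illustration of the kind of exercise involved, not one of the specific failures users reported:

\[
3x + 7 = 22 \;\Longrightarrow\; 3x = 15 \;\Longrightarrow\; x = 5
\]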

Is it really possible to build an ASI that surpasses humans in all domains, as defined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies (2014)? Bostrom describes ASI as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." But is this the right path for a generalist model?
The inherent limits of artificial intelligence
The dream of a superintelligence that does everything better than us collides with a reality: dominance in any domain of physical or mental competence normally arises from deliberate specialization. And specializing in one area necessarily means neglecting others. There is an inevitable trade-off between building a generalist machine that is competent at any task and one that is extraordinarily good in a specific domain (the aforementioned superintelligence).
To use an analogy, think of a specialist Olympic athlete compared to a pentathlete. A specialized swimmer will always outperform the pentathlete in the pool, but cannot compete in fencing. The pentathlete can compete in five different disciplines at an acceptable level, but will never achieve the excellence of a specialist in each one. A toy model of this budget-splitting trade-off is sketched below.
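Here is a minimal sketch of that trade-off in code, under the purely illustrative assumption that skill grows with the square root of training effort (diminishing returns); the budget figure and the skill curve are hypothetical, chosen only to make the point visible:

```python
import math

def skill(effort: float) -> float:
    """Toy diminishing-returns model: skill grows as the square root of effort."""
    return math.sqrt(effort)

TOTAL_BUDGET = 100.0  # hypothetical units of training effort

# The specialist pours the entire budget into a single discipline.
specialist = skill(TOTAL_BUDGET)

# The pentathlete splits the same budget evenly across five disciplines.
pentathlete = skill(TOTAL_BUDGET / 5)

print(f"Specialist (swimming only):   {specialist:.2f}")   # -> 10.00
print(f"Pentathlete (per discipline): {pentathlete:.2f}")  # -> 4.47
```

Any concave skill curve gives the same qualitative result: the generalist is acceptable everywhere and excellent nowhere.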
This resonates with the famous "no free lunch" theorem, which establishes that no optimization algorithm is universally superior across all problems. GPT-5 is, in a way, a meta-algorithm that faces the same intrinsic limitations as any other algorithm, since it aims to give us the best solution among many possibilities. Consequently, the most likely output of this entire race for superintelligence is a model that continues to feel "mediocre". This raises a critical question: are we pursuing the wrong ideal? Instead of seeking a machine that surpasses us by our own metrics, shouldn't we aspire to something deeper?
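For readers who want the formal version: the statement from Wolpert and Macready (1997) says that, averaged over all possible objective functions f, any two search algorithms a1 and a2 observe the same distribution of results (notation follows their paper; this is background, not something the post's argument depends on formally):

\[
\sum_{f} P\!\left(d_m^y \mid f, m, a_1\right) \;=\; \sum_{f} P\!\left(d_m^y \mid f, m, a_2\right)
\]

where $d_m^y$ is the sequence of objective values observed after $m$ evaluations. Superiority on one class of problems is always paid for on another.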
An alternative vision of superintelligence
Imagine a superintelligence that isn't limited to being "better" at what we already do, but transcends our understanding, just as Neolithic art transcended simple visual representation. What I long to see is an ASI that acts as an autonomous agent of understanding, as I express in Everything I Can Imagine:
The second major set of technologies to dream about would be autonomous agents of understanding. Is it possible to build an artificial intelligence that mimics human understanding? Or, put another way, is it possible to mechanize the process of comprehending the world? Here we're talking about artificial intelligences that create their own maps of understanding. They would be machines capable of asking themselves "why?", or at least of appearing to do so, and then seeking an answer that we, from the outside, are able to interpret. Unlike a personal assistant, this class of intelligence would not necessarily seek to accompany a person on their particular journey through the cycle of understanding, but would be 'free' to explore its own path driven by its 'curiosity', and thus build its own atlas of understanding. Note that this does not necessarily imply that these machines would have consciousness, but simply that they would simulate a search for explanations interpretable to an external observer.
We don't need a superintelligence that is the best at everything; we need one that is simply better than the average human at finding global and profound connections.
Conclusion
Art has the value of reminding us that advances don't necessarily come from doing the same thing more quickly, but from seeing the world in a radically different way.
Instead of obsessing over creating machines that surpass us by our own metrics, perhaps we should aspire to develop intelligences that complement ours, that see what we cannot see and understand what is incomprehensible to us because it is too vast; that help us glimpse what still remains invisible to us even though it's right before our eyes.
If you are interested in exploring this topic further, I invite you to read Part III of Everything I Can Imagine, "The Algorithm of Understanding", where I reflect on artificial understanding and its limits.