A few days ago, while I was browsing YouTube, I stumbled upon another video—or maybe it was a Short—about AI, GPT-4 and AGI. My feed has been full of them lately. But this one stood out because it expressed something I had been thinking about in the back of my mind, though I hadn’t yet put it into words. In the video, or Short, a software developer was voicing her discontent with the AI hype, saying that soon enough, someone will decide what AGI means, and just like that, we’ll have achieved AGI overnight.
And honestly, when you look at the current definitions of AGI, it’s easy to see how that could happen within the next few months.
Some people even argue that GPT-4 is already an AGI.
What are the current definitions of AGI?
After some research, I found five definitions of AGI: from prominent thinkers like Nick Bostrom, Ben Goertzel, and Marcus Hutter, and from organizations such as IBM and OpenAI.
Nick Bostrom, in his book Superintelligence, defines three levels of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). AGI, as he defines it, refers to a machine with the ability to understand, learn, and apply intelligence across a broad range of tasks, similar to human cognitive abilities. It is not limited to specific tasks but can handle any intellectual challenge a human can, learning and adapting from experience.
However, Bostrom does not provide a precise, formal definition of the term ‘task’ in Superintelligence. His focus is more on AGI’s capacity to handle complex tasks and the potential challenges of ensuring its goals align with human intent and interests.
Depending on how we define ‘task’ and how AI learns to perform new tasks, we could argue that we have already achieved AGI.
Ben Goertzel, who popularized the term AGI, emphasizes learning, self-improvement, and adaptability as key components of AGI. He defines AGI as a system that can generalize knowledge across domains, learn to perform new tasks in a way similar to humans, and transfer learning from one domain to another.
Similarly, depending on how we interpret the way AI learns to perform new tasks, we could argue that this definition of AGI has already been met.
Marcus Hutter offers a more theoretical, formal, and unbounded mathematical definition of intelligence, centered on performance and reward maximization over time. Hutter’s model focuses on reward maximization without directly addressing how specific tasks are defined or learned.
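For readers who want the flavor of that formalism, here is the Legg–Hutter ‘universal intelligence’ measure that grew out of Hutter’s framework (a sketch of the idea, not a quotation from any of the sources above):

$$
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
$$

Here π is an agent, E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected cumulative reward the agent earns in μ. In plain terms: intelligence is expected performance across all possible environments, weighted toward the simpler ones. Notice that nothing in the formula says what a ‘task’ is or how the agent must learn it.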
If we consider supervised learning as a valid form of task learning under any of these definitions, one could argue that we’ve already achieved AGI.
Lastly, both IBM and OpenAI define AGI by traits such as general intelligence, understanding, reasoning, and non-specialization—traits that modern large language models (LLMs) already exhibit to varying degrees, depending on how we interpret task learning.
The only distinction between IBM’s and OpenAI’s definitions is the extent to which AGI surpasses human abilities in certain areas, such as processing speed, data analysis, and problem-solving in complex environments. Again, that is something even ANI already achieves.
So yes, it’s only a matter of time before someone declares that AGI has been achieved, and few will contest it. Why? Because it will likely be an authoritative figure making the claim, and in that moment, many will agree. AGI will be ‘born,’ even though ‘AGI’ will remain a fuzzy, ever-shifting term, shaped by how we choose to define it.
It might feel a bit unsettling, but what a fascinating time to be alive, right?