
Is GPT-4 Already an AGI?

By Marcos Cooper, 2024-10-22

A few days ago, while I was browsing YouTube, I stumbled upon another video—or maybe it was a Short—about AI, GPT-4 and AGI. My feed has been full of them lately. But this one stood out because it expressed something I had been thinking about in the back of my mind, though I hadn’t yet put it into words. In the video, or Short, a software developer was voicing her discontent with the AI hype, saying that soon enough, someone will decide what AGI means, and just like that, we’ll have achieved AGI overnight.

And honestly, when you look at the current definitions of AGI, it’s easy to see how that could happen within the next few months.

Some people even argue that GPT-4 is already an AGI.

What are the current definitions of AGI?

After some research, I found five definitions of AGI from prominent thinkers like Nick Bostrom, Ben Goertzel, Marcus Hutter, and organizations such as IBM and OpenAI.

Nick Bostrom, in his book Superintelligence, defines three levels of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). AGI, as he defines it, refers to a machine with the ability to understand, learn, and apply intelligence across a broad range of tasks, similar to human cognitive abilities. It is not limited to specific tasks but can handle any intellectual challenge a human can, learning and adapting from experience.

However, Bostrom does not provide a precise, formal definition of the term ‘task’ in Superintelligence. His focus is more on AGI’s capacity to handle complex tasks and the potential challenges of ensuring its goals align with human intent and interests.

Depending on how we define ‘task’ and how AI learns to perform new tasks, we could argue that we have already achieved AGI.

Ben Goertzel, who coined the term AGI, emphasizes learning, self-improvement, and adaptability as key components of AGI. He defines AGI as a system that can generalize knowledge across domains, learn to perform new tasks in a way similar to humans, and transfer learning from one domain to another.

Similarly, based on how AI learns to perform new tasks, we could argue that we have already achieved this definition of AGI.

Marcus Hutter offers a more theoretical, formal, and unbounded mathematical definition of intelligence, centered on performance and reward maximization over time. Hutter’s model focuses on reward maximization without directly addressing how specific tasks are defined or learned.
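For the curious, Hutter's notion of intelligence (developed with Shane Legg in their work on universal intelligence) is often summarized by a single measure. The following is a sketch of that standard formulation, not a quote from any one of the definitions surveyed here:

```latex
% Legg–Hutter universal intelligence measure (sketch):
% the intelligence of an agent \pi is its complexity-weighted
% expected reward across all computable environments.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here \(\pi\) is the agent, \(E\) is the set of computable reward-bearing environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\) (so simpler environments carry more weight), and \(V^{\pi}_{\mu}\) is the expected cumulative reward that \(\pi\) achieves in \(\mu\). Notice what is absent: there is no notion of a "task" at all, only reward, which is exactly why the definition sidesteps the task-definition question raised above.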

If we consider supervised learning as a valid form of task learning under any of these definitions, one could argue that we’ve already achieved AGI.

Lastly, both IBM and OpenAI define AGI by traits such as general intelligence, understanding, reasoning, and non-specialization—traits that modern large language models (LLMs) already exhibit to varying degrees, depending on how we interpret task learning.

The only distinction between IBM's and OpenAI's definitions is the extent to which AGI surpasses human abilities in certain areas, such as processing speed, data analysis, and problem-solving in complex environments. And again, these are things that even ANI already achieves.

So yes, it’s only a matter of time before someone declares that AGI has been achieved, and few will contest it. Why? Because it will likely be an authoritative figure making the claim, and in that moment, many will agree. AGI will be ‘born,’ even as the definition of AGI remains a fuzzy, ever-shifting term, shaped by how we choose to define it.

It might feel a bit unsettling, but what a fascinating time to be alive, right?
