A few weeks ago, Sam Altman, the CEO of OpenAI, made a bold claim on Reddit that got the tech world buzzing: artificial general intelligence (AGI) is achievable with current hardware. Futurism covered it here. In a nutshell, according to Altman, we don't need to wait for some futuristic breakthrough or a massive leap in computing power to create an AI that surpasses human intelligence. We can do it with what we've got right now.
But what does this claim really mean? And is it just a bunch of hype to get investors excited and the media talking? In this article, we'll delve into the world of AI and AGI, explore the possibilities and challenges of creating such a powerful AI, and evaluate Altman's claim. We'll also take a look at the history of bold AGI predictions and see if Altman's statement is just another example of tech industry optimism or a genuine assessment of the current state of AI research.
Who’s Sam Altman? What’s OpenAI?
In 2015, Sam Altman co-founded the American company OpenAI alongside notable figures such as Elon Musk, Jessica Livingston, and Peter Thiel. In 2019, Altman became its CEO. OpenAI is at the forefront of current AI developments and is most famous for its AI chatbot ChatGPT, which became an instant hit in late 2022.
After Google researchers introduced the foundational transformer architecture in 2017, OpenAI built on it to develop the Generative Pre-trained Transformer (GPT) series. OpenAI's key innovation was the recipe of "pretraining" a large transformer model on broad text data and then "fine-tuning" it for specific tasks. In 2018, OpenAI introduced the first model in this series, known as GPT-1. ChatGPT is based on a later version called GPT-3.5, and GPT-4 followed in 2023.
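To make that two-stage recipe concrete, here is a minimal sketch in Python using the open-source Hugging Face Transformers library, with the publicly released GPT-2 weights standing in for a pretrained model. The model name, toy corpus, and hyperparameters are illustrative assumptions on my part, not OpenAI's actual training setup.

```python
# A minimal sketch of "pretrain, then fine-tune", assuming the Hugging Face
# Transformers and PyTorch libraries are installed. GPT-2 stands in for a
# pretrained model; the toy corpus and hyperparameters are illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Stage 1: load weights already pretrained on a large general text corpus
# (the expensive step, done once on web-scale data).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Stage 2: fine-tune the same weights on a small task-specific dataset.
corpus = [
    "Q: What does AGI stand for? A: Artificial general intelligence.",
    "Q: Who co-founded OpenAI in 2015? A: Sam Altman, among others.",
]
batch = tokenizer(corpus, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a handful of steps, purely to illustrate the loop
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=labels)  # standard causal language-modeling loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {out.loss.item():.3f}")
```

The point of the design is economic as much as technical: the costly pretraining happens once on broad data, and the comparatively cheap fine-tuning stage adapts the resulting model to many downstream tasks.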
As for future developments, GPT-5 is in active development and is expected to be released sometime in late 2024 or early 2025. Meanwhile, competition in the AI landscape is intensifying, with several companies developing their own advanced language models and AI systems to challenge OpenAI's dominance.
AGI
So, what is AGI, exactly? Artificial general intelligence refers to a type of AI that can perform any intellectual task that a human can. It's like the ultimate Swiss Army knife of AI - it can learn, reason, and apply knowledge across a wide range of domains, from science and engineering to art and entertainment.
The term gained currency in 2007, when computer scientist Ben Goertzel and AI researcher Cassio Pennachin used it as the title of an influential edited volume. But don't let that fool you: people have been speculating about human-like machines for much longer. In 1980, philosopher John Searle introduced the term "strong AI" in the context of his argument distinguishing between "strong AI" and "weak AI." Strong AI refers to the hypothesis that a machine can truly understand and possess a mind, whereas weak AI holds that machines can only simulate human-like understanding and behavior without genuine comprehension. And the closely related concept of the "singularity" was attributed to John von Neumann as early as 1958.
AGI is often considered the holy grail of AI research, and its significance in the industry cannot be overstated. Imagine having a machine that can learn and adapt at an exponential rate, solving complex problems that have stumped humans for centuries. It's a tantalizing prospect, and one that has captured the imagination of scientists, entrepreneurs, and science fiction writers alike.
But here's the thing: despite all the hype, there's still a lot of debate about what AGI actually means and when we can expect to achieve it. Some experts define AGI as a machine that can pass the Turing Test, a benchmark for measuring a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Others argue that AGI requires a more nuanced definition, one that takes into account the complexities of human intelligence and the many different ways that we think and learn.
As for the timeline, well, that's a whole different story. Some experts predict that AGI is just a few years away, while others argue that it's at least decades off. The truth is, we just don't know yet. And that's what makes the debate so fascinating - and so frustrating.
The hype explained
So, what's driving the AGI hype? One reason is the massive investments being made in AI infrastructure. Companies like Google, Microsoft, and Altman’s OpenAI are pouring billions of dollars into AI research and development, and governments are starting to take notice too. The stakes are high, and the potential rewards are enormous. If AGI can be achieved, it could revolutionize industries from healthcare to finance, and create new opportunities for economic growth and innovation.
But there's also a sense of urgency driving the AGI debate. As AI becomes more pervasive in our lives, there are growing concerns about its impact on society. Will AGI create new jobs or displace existing ones? Will it exacerbate existing social inequalities or help to address them? These are just a few of the questions that are being asked, and they're not easy to answer.
In the next section, we'll take a closer look at the challenges and possibilities of achieving AGI, and explore what it might take to get there.
The Long and Winding Road to AGI
So, where are we on the road to AGI? The current state of AI research and development is a mixed bag. On the one hand, we've made tremendous progress in areas like machine learning, natural language processing, and computer vision. We've got AI systems that can recognize faces, understand speech, and even beat humans at complex games like Go.
But on the other hand, we're still a long way from true AGI. Creating a machine that can learn, reason, and apply knowledge across a wide range of domains is a daunting task. It requires significant advances in areas like natural language understanding, computer vision, and decision-making. Current AI models still stumble on the kind of robust, multi-step logical reasoning that humans perform effortlessly.
One of the biggest challenges is creating AI systems that can understand and navigate the complexities of human language. We've got AI that can recognize and respond to simple commands, but we're still far from creating machines that can engage in nuanced and context-dependent conversations.
Another challenge is creating AI systems that can make decisions in complex and uncertain environments. Sure, there are systems that can optimize processes and make predictions, but we're still far from creating machines that can make decisions that require creativity, empathy, and common sense.
Despite these challenges, the potential benefits of achieving AGI are enormous. Imagine having machines that can help us solve some of the world's most pressing problems, from climate change to disease diagnosis. Imagine having machines that can help us explore space, create new forms of art and entertainment, and improve our daily lives in countless ways.
Risky business
But there are also risks to consider. What if AGI systems become uncontrollable or unpredictable? What if they're used for malicious purposes, like hacking or cyber warfare? What if they exacerbate existing social inequalities or create new ones?
These are just a few of the questions that are being asked, and they're not easy to answer. But one thing is clear: achieving AGI will require a fundamental shift in how we think about intelligence, creativity, and decision-making. It will require us to rethink our assumptions about what it means to be human, and to consider the potential consequences of creating machines that are smarter and more capable than we are.
Evaluating Altman's AGI Claim
With all that out of the way, let’s revisit Sam Altman's claim that AGI is achievable with current hardware, and evaluate the evidence for and against it.
As the CEO of OpenAI, Altman has a vested interest in promoting the idea that AGI is within reach. After all, OpenAI's mission is to develop and deploy AGI in a way that benefits humanity, and Altman's statement is likely intended to reassure investors and the public that the company is on track to achieve its goals.
But Altman's claim is not without precedent. Other tech entrepreneurs, like Elon Musk, have made similar predictions about the future of AI and automation. Musk, for example, promised that Tesla's full self-driving cars would be available by the end of 2023, despite numerous delays and setbacks. His predictions have been met with skepticism by some, but they've also helped generate excitement and enthusiasm for Tesla's technology. When the end of 2023 came around, Tesla's 'Full Self-Driving' technology was still not fully autonomous and continued to require driver supervision, with plans to launch in Europe and China by early 2025 pending regulatory approval.
So, what's driving Altman's claim? We can only speculate. One possibility is that he's trying to maintain investor enthusiasm and company valuation. OpenAI has received significant funding from investors like Microsoft and Reid Hoffman, and Altman's statement may be intended to reassure them that the company is making progress towards its goals.
Another possibility is that Altman is trying to create a sense of urgency around AGI development. By claiming that AGI is achievable with current hardware, Altman may be trying to encourage other researchers and developers to focus on AGI development, rather than waiting for future breakthroughs.
But there's also a risk that Altman's claim could be seen as overly optimistic or even misleading. If OpenAI fails to deliver on its promises, it could damage the company's reputation and undermine trust in the AI industry as a whole.
A Brief History of Bold AGI Claims
Sam Altman's claim that AGI is achievable with current hardware is just the latest in a long line of bold predictions and claims made about AGI. Here's a selection of some of the most notable ones:
1950: Computer scientist Alan Turing suggests that machines may be able to pass the Turing Test within 50 years.
1965: Computer scientist Herbert A. Simon predicts that "machines will be capable, within 20 years, of doing any work a man can do".
1970: Computer scientist Marvin Minsky predicts that AGI will be achieved within 3 to 8 years.
1979: AI pioneer John McCarthy believes human-level AI will be accomplished (while refraining from giving a timeline), and argues that machines already have beliefs and other mental qualities.
2005: Futurist and inventor Ray Kurzweil predicts that human-level AI will arrive by 2029, with the "singularity" following by 2045.
2011: Shane Legg, co-founder of DeepMind, states that there is a 50-50 chance of achieving AGI by 2028.
2023: Tech executive Sam Altman claims that AGI could be achieved soon, possibly by 2025.
Early 2024: Tech executive Elon Musk claims that AGI will be achieved soon, possibly by 2025.
Late 2024: Altman claims that AGI is achievable with current hardware.
How accurate have these predictions been? Well, it's safe to say that most of them have been overly optimistic. Minsky's 1970 prediction was famously way off the mark. While we've made significant progress in AI research since then, we're still nowhere near achieving AGI.
The early 2020s saw growing optimism around AI capabilities, with significant advances in machine learning, natural language processing, and AI applications across various industries, accompanied by a surge of investment and interest in AI research. The final three entries on the list reflect this trend.
The Verdict: AGI with Current Hardware?
Altman's recent prediction is certainly optimistic. It’s also vague, as the truth of Altman's claim may depend on the details. What exactly does he mean by "current hardware"? Is he referring to the latest generation of GPUs, or something more exotic? And what kind of AGI is he talking about - a narrow, specialized AI, or a more general-purpose intelligence?
So, is Sam Altman's claim that AGI is achievable with current hardware credible? I'd say it's a stretch. While OpenAI has made significant progress in AI research, we're still far from achieving true AGI.
But what if Altman is right? What if we can achieve AGI with current hardware? The implications would be profound. On the one hand, AGI could bring about tremendous benefits, from solving complex problems to improving our daily lives. On the other hand, it could also pose significant risks, from job displacement to existential threats.
Where are we now on the road to AGI? Still in the early stages, I'd say. AGI remains a long shot, but it's not theoretically impossible; with continued investment and innovation, we might get there in the coming decades. For now, we'll just have to wait and see.
What’s your prediction on AGI?
