How an intelligence explosion might lift us from conventional AI to the vaunted AGI.
In today’s column, I continue my special series covering the anticipated pathways that will get us from conventional AI to the revered hoped-for AGI (artificial general intelligence). The focus here is an analytically speculative deep dive into the detailed aspects of a so-called intelligence explosion during the journey to AGI. I’ve previously outlined that there are seven major paths for advancing AI to reach AGI (see the link here) – one of those paths consists of the improbable moonshot path, whereby there is a hypothesized breakthrough such as an intelligence explosion that suddenly and somewhat miraculously spurs AGI to arise.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For those readers who have been following along on my special series about AGI pathways, please note that I provide similar background aspects at the start of this piece as I did previously, setting the stage for new readers.
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the farther-reaching possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI, or whether AGI might only become achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
AI Experts Consensus On AGI Date
Right now, efforts to forecast when AGI is going to be attained consist principally of two paths.
First, there are highly vocal AI luminaries making individualized brazen predictions. Their headiness garners outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.
Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus?
Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused.
The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one. For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date.
Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.
Seven Major Pathways
As mentioned, in a previous posting I identified seven major pathways that AI is going to advance to become AGI (see the link here). Here’s my list of all seven major pathways getting us from contemporary AI to the treasured AGI:
- (1) Linear path (slow-and-steady): This AGI path captures the gradualist view, whereby AI advancement accumulates a step at a time via scaling, engineering, and iteration, ultimately arriving at AGI.
- (2) S-curve path (plateau and resurgence): This AGI path reflects historical trends in the advancement of AI (e.g., early AI winters), and allows for leveling-up via breakthroughs after stagnation.
- (3) Hockey stick path (slow start, then rapid growth): This AGI path emphasizes the impact of a momentous key inflection point that reimagines and redirects AI advancements, possibly arising via theorized emergent capabilities of AI.
- (4) Rambling path (erratic fluctuations): This AGI path accounts for heightened uncertainty in advancing AI, including overhype-disillusionment cycles, and could also be punctuated by externally impactful disruptions (technical, political, social).
- (5) Moonshot path (sudden leap): This AGI path encompasses a radical and unanticipated discontinuity in the advancement of AI, such as the famed envisioned intelligence explosion or similar grand convergence that spontaneously and nearly instantaneously arrives at AGI (for my in-depth discussion on the intelligence explosion, see the link here).
- (6) Never-ending path (perpetual muddling): This represents the harshly skeptical view that AGI may be unreachable by humankind, but we keep trying anyway, plugging away with an enduring hope and belief that AGI is around the next corner.
- (7) Dead-end path (AGI can’t seem to be attained): This indicates that there is a chance that humans might arrive at a dead-end in the pursuit of AGI, which might be a temporary impasse or could be a permanent one such that AGI will never be attained no matter what we do.
You can apply those seven possible pathways to whatever AGI timeline that you want to come up with.
Futures Forecasting
Let’s undertake a handy divide-and-conquer approach to identify what must presumably happen to get from current AI to AGI.
We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That’s essentially fifteen years of elapsed time. The idea is to map out the next fifteen years and speculate what will happen with AI during that journey.
This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting.
Is this kind of a forecast of the future ironclad?
Nope.
If anyone could precisely lay out the next fifteen years of what will happen in AI, they probably would be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever.
All in all, this strawman that I show here is primarily meant to get the juices flowing on how we can be future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial.
I went ahead and used the fifteen-year horizon of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey will play out over twenty-five years. The timeline and mapping would then have twenty-five years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed.
Intelligence Explosion On The Way To AGI
The moonshot path entails a sudden and generally unexpected radical breakthrough that swiftly transforms conventional AI into AGI. Wild speculation abounds about what such a breakthrough might consist of; see my discussion at the link here.
One of the most famous postulated breakthroughs would be the advent of an intelligence explosion. The idea is that once an intelligence explosion occurs, assuming that such a phenomenon ever happens, AI will in rapid-fire progression proceed to accelerate into becoming AGI.
This type of path is in stark contrast to a linear pathway. In a linear pathway, the progression of AI toward AGI is relatively equal each year and consists of a gradual incremental climb from conventional AI to AGI. I laid out the details of the linear path in a prior posting, see the link here.
When would the intelligence explosion occur?
Since we are assuming a timeline of fifteen years and the prediction is that AGI will be attained in 2040, the logical place for an intelligence explosion to occur is right up against the 2040 date, perhaps happening in 2038 or 2039. That stands to reason: if the intelligence explosion happened sooner, we would presumably reach AGI sooner. For example, suppose the intelligence explosion occurs in 2032. If indeed the intelligence explosion garners us AGI, we would declare 2032 or 2033 as the AGI date rather than 2040.
Let’s use this as our postulated timeline in this context:
- Years 2025-2038: AI is advancing to some degree but not yet at AGI.
- Years 2038-2039: An intelligence explosion happens.
- Years 2039-2040: AGI is reached.
Defining An Intelligence Explosion
You might be curious what an intelligence explosion would consist of and why it would necessarily seem to achieve AGI.
The best way to conceive of an intelligence explosion is to first reflect on chain reactions such as what occurs in an atomic bomb or nuclear reactor. We all nowadays know that a splitting atom releases particles that in turn split other atoms, the reaction cascading faster and faster until a massive explosion or burst of energy results. This is generally taught in school as a fundamental physics principle, and many blockbuster movies have dramatically showcased this activity (such as Christopher Nolan’s famous Oppenheimer film).
A theory in the AI community is that intelligence can do likewise. It goes like this. You bring together a whole bunch of intelligence and get that intelligence to feed off the collection in hand. Almost like catching fire, at some point, the intelligence will mix with and essentially fuel the creation of additional intelligence. Intelligence gets amassed in rapid succession.
Boom, an intelligence chain reaction occurs, which is coined as an intelligence explosion.
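To make that feedback-loop intuition a bit more concrete, below is a minimal sketch in Python. It is purely illustrative: the AGI_THRESHOLD, yearly_gain, and feedback_strength values are made-up assumptions, not measurements of real AI progress. The point is simply to contrast a steady linear climb with a compounding loop in which intelligence begets more intelligence.

```python
# Illustrative sketch only: toy numbers, not a claim about any real AI system.
# Compares a steady linear path with a self-reinforcing feedback loop in which
# each year's capability gain is proportional to the capability already amassed.

AGI_THRESHOLD = 100.0   # hypothetical capability level we choose to label "AGI"
START_YEAR = 2025
YEARS = 15              # the strawman 2025-to-2040 horizon

def linear_path(start=1.0, yearly_gain=2.0):
    """Capability grows by a fixed amount each year (the linear pathway)."""
    capability = start
    for year in range(START_YEAR, START_YEAR + YEARS):
        capability += yearly_gain
        yield year, capability

def feedback_path(start=1.0, feedback_strength=0.5):
    """Each year's gain is proportional to current capability, a crude
    stand-in for 'intelligence feeding on intelligence' (compounding growth)."""
    capability = start
    for year in range(START_YEAR, START_YEAR + YEARS):
        capability += feedback_strength * capability
        yield year, capability

for label, path in (("linear", linear_path()), ("feedback loop", feedback_path())):
    for year, capability in path:
        if capability >= AGI_THRESHOLD:
            print(f"{label}: crosses the hypothetical AGI threshold in {year}")
            break
    else:
        print(f"{label}: never crosses the threshold within {YEARS} years")
```

Notice that the outcome hinges entirely on the assumed feedback_strength: crank it up and the runaway looks like an overnight event, dial it down and the so-called explosion stretches across decades, which is precisely the pace debate discussed later in this piece.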
The AI community tends to attribute the initially formulated idea of an AI intelligence explosion to a research paper published in 1965 by Irving John Good entitled “Speculations Concerning The First Ultraintelligent Machine” (Advances in Computers, Volume 6). Good made this prediction in his article:
- “It is more probable than not that, within the twentieth century, an ultra-intelligent machine will be built and that it will be the last invention that man need make since it will lead to an ‘intelligence explosion.’ This will transform society in an unimaginable way.”
Controversies About Intelligence Explosions
Let’s consider some of the noteworthy controversies about intelligence explosions.
First, we have no credible evidence that an intelligence explosion per se is an actual phenomenon. To clarify, yes, it is perhaps readily apparent that if you have some collected intelligence and combine it with other collected intelligence, the odds are that you will have more intelligence collected than you had to start with. There is a potential synergy of intelligence fueling more intelligence.
But the conception that intelligence will run free with other intelligence in some computing environments and spark a boatload of intelligence, well, this is an interesting theory, and we have yet to see this happen on any meaningful scale. I’m not saying it can’t happen. Never say never.
Second, the pace of an intelligence explosion is also a matter of great debate. The prevailing viewpoint is that once intelligence begins feeding off other intelligence, a rapid chain reaction will arise. Intelligence suddenly and with immense fury overflows into massive torrents of additional intelligence.
One belief is that this will occur in the blink of an eye. Humans won’t be able to see it happen and instead will merely be after-the-fact witnesses to the amazing result. Not everyone goes along with that instantaneous intelligence explosion conjecture. Some say it might take minutes, hours, days, weeks, or maybe months. Others say it could take years, decades, or centuries.
Nobody knows.
Starting And Stopping An Intelligence Explosion
There are additional controversies in this worrisome basket.
How can we start an intelligence explosion?
In other words, assume that humans want to have an intelligence explosion arise. The method of getting this to occur is unknown. Something must somehow spark the intelligence to mix with the other intelligence. What algorithm gets this to happen?
One viewpoint is that humans won’t find a way to make it happen, and instead, it will just naturally occur. Imagine that we have tossed tons of intelligence into some kind of computing system. To our surprise, out of the blue, the intelligence starts mixing with the other intelligence. Exciting.
This brings us to another perhaps obvious question, namely how will we stop an intelligence explosion?
Maybe we can’t stop it, and the intelligence will grow endlessly. Is that a good outcome or a bad outcome? Perhaps we can stop it, but then we can’t reignite it. Oops, if we stop the intelligence explosion too soon, we might have shot ourselves in the foot since we didn’t get as much new intelligence as we could have garnered.
A popular saga that gets a lot of media play is that an intelligence explosion will run amok. The scenario goes like this. A bunch of AI developers are sitting around toying with conventional AI when suddenly an intelligence explosion is spurred (the AI developers didn’t make it happen; they were merely bystanders). The AI rapidly becomes AGI. Great. But the intelligence explosion keeps going, and we don’t know how to stop it.
Next thing we know, ASI has been reached.
The qualm is that ASI will then decide it doesn’t need humans around, or that ASI might as well enslave us. You see, we accidentally slipped past AGI and inadvertently landed at ASI. The existential risk of ASI arises, ASI clobbers us, and we are caught completely flat-footed.
Timeline To AGI With Intelligence Explosion
Now that I’ve laid out the crux of what an intelligence explosion is, let’s assume that we get lucky and have a relatively safe intelligence explosion that transforms conventional AI into AGI. We will set aside the slipping and sliding into ASI.
Fortunately, just like in Goldilocks, the porridge won’t be too hot or too cold. The intelligence explosion will take us straight to the right amount of intelligence that suffices for AGI. Period, end of story. Here then is a strawman futures forecast roadmap from 2025 to 2040 that encompasses an intelligence explosion that gets us to AGI:
Years 2025-2038 (Before the intelligence explosion):
- AI multi-modal models finally become robust and fully integrated into LLMs. Significant improvements in AI real-time reasoning, sensorimotor integration, and grounded language understanding occur.
- Agentic AI starts to blossom and become practical and widespread. AI systems with memory and planning capabilities achieve competence in open-ended tasks in simulation environments.
- The use of AI large-scale world models spurs substantially improved AI capabilities. AI can now computationally improve from fewer examples via advancements in AI meta-learning.
- AI agents gradually gain wide acceptance and are capable of executing multi-step tasks semi-autonomously in digital and physical domains, including robotics.
- AI is advanced sufficiently to have a generalized understanding of physical causality and real-world constraints through embodied learning.
- But no semblance of AGI seems to be in sight, and many people inside and outside of AI are wringing their hands that AI is not going to become AGI.
Years 2038-2039 (Intelligence explosion):
- An AI intelligence explosion produces massive amounts of new intelligence.
- Self-improving AI systems begin modifying their own code under controlled conditions.
- AI agents achieve human-level performance across all cognitive benchmarks, including abstraction, theory of mind (ToM), and cross-domain learning.
- AI systems exhibit bona fide signs of self-reflection, not just routinized mimicry or parroting. Advances in AI showcase human-like situational adaptability and innovation.
- AI systems now embody persistent identities, able to reflect on experiences across time. Some of the last barriers to acceptance of AI as being AGI are overcome when AI demonstrates creativity, emotional nuance, and abstract reasoning in diverse contexts.
Years 2039-2040 (AGI is attained):
- There is widespread agreement in 2040 that AGI has now been attained, though it is still the early days of AGI. The intelligence explosion brought us AGI.
Contemplating The Timeline
I’d ask you to contemplate the strawman timeline and consider where you will be and what you will be doing if an intelligence explosion happens in 2038 or 2039. You must admit, it would be quite a magical occurrence, hopefully with an upbeat societal result rather than a gloomy one.
The Dalai Lama made this famous remark: “It is important to direct our intelligence with good intentions. Without intelligence, we cannot accomplish very much. Without good intentions, the way we exercise our intelligence may have destructive results.”
You have a potential role in guiding where we go if the above timeline plays out. Will AGI be imbued with good intentions? Will we be able to work hand-in-hand with AGI and accomplish good intentions? It’s up to you. Please consider doing whatever you can to leverage a treasured intelligence explosion to benefit humankind.