Future Forecasting The AGI-To-ASI Pathway Giving Ultimate Rise To AI Superintelligence


In today’s column, I am continuing my special series on the possible pathways that get us from conventional AI to the attainment of AGI (artificial general intelligence) and thence to the vaunted ASI (artificial superintelligence). In a prior posting, see the link here, I covered the AI-to-AGI route. The expectation is that we will achieve AGI first and then proceed toward attaining ASI.

I will showcase herein a sample ten-year progression that might occur during the initial post-AGI era, moving us step-by-step from AGI to ASI.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to reach artificial general intelligence (AGI) or perhaps even the more distant possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achieved in decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale when it comes to where we are currently with conventional AI.

AI Experts Consensus On AGI Date

Right now, efforts to forecast when AGI is going to be attained consist principally of two paths.

First, there are highly vocal AI luminaries making individualized brazen predictions. Their headiness makes outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.

Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus?

Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused.

The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one. For this discussion on the various pathways beyond AGI, I am going to proceed with the year 2040 as the consensus anticipated target date for attaining AGI.

Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.

Stepwise Progression Of AGI-To-ASI

Once we have AGI in hand, the next monumental goal will be to arrive at ASI.

Some believe that our only chance of reaching ASI is by working closely with AGI. In other words, humans on our own would be unable to achieve ASI. A joint development partnership of the topmost AI developers and AGI would supposedly do the trick. That being said, some say we are kidding ourselves in the sense that AGI really won’t need our assistance. If AGI works with us, the only reason will be to make humanity feel good about arriving at ASI and cheekily allow us to pat ourselves on the back for doing so.

Anyway, the unanswered question is how long it will take to go from AGI to ASI, assuming that, one way or another, we can actually reach ASI.

I am going to go out on a limb and use a strawman of ten years. Yes, I’m suggesting that ten years after we devise AGI, we might well reach ASI. Some would claim I am being overly pessimistic. It might take just two or three years. Others would argue that I am overly optimistic and that the time frame would be more like thirty or more years.

Sorry, you can’t please everyone, as they say.

Let’s agree that we will use ten years as a useful plug-in. The ten years will be after we reach AGI. As per the earlier points about AI experts believing that we might attain AGI by 2040, we can use the years 2040 to 2050 as the pathway for AGI-to-ASI.

The AGI-To-ASI Timeline Spelled Out

Here is a postulated pathway starting in 2040 and ending in 2050 that would seemingly enable us to achieve ASI:

  • Year 2040: AGI is finally achieved. After years of trying to reach AGI, humanity finally does so. AGI is on par with human intellect across all domains and in all respects. We immediately begin to use AGI to discover new inventions, see my speculated timeline and types of novel inventions at the link here, and humankind also works with AGI to try and attain ASI.
  • Year 2041: AGI scalable cognition is devised. Working with AGI, we devise a capability that scales cognition. A new kind of AI architecture that is more modular and self-reflective is designed and built as a prototype.
  • Year 2042: Massive multi-agent AGI collaboration. A breakthrough in multi-agent AI allows AGI to exceed some top-tier human experts. This is heralded as the first glimpse of the superhuman capability that some had hoped AGI would exhibit from the get-go.
  • Year 2043: Emergent meta-reasoning in AGI. It turns out that AGI has emergent properties we didn’t expect, including cross-domain transfer of knowledge and the interconnecting of domains in innovative ways.
  • Year 2044: AGI working self-reflectively. Whereas humans had assumed that achieving ASI would be a joint effort of us and AGI, it turns out that AGI is shifting into a self-reflective mode and no longer particularly paying attention to human assistance on this matter. AGI is proceeding ahead solo to try and attain ASI. Humans are now mainly observers in the process.
  • Year 2045: AGI undertakes rapid recursive self-improvement. AGI determines that the likeliest path to ASI entails revamping itself, rather than trying to construct ASI anew. Thus, AGI begins a series of recursive self-improvements in rapid succession. Humans worry that AGI could spiral into something amiss.
  • Year 2046: AGI veering into nascent ASI. At this juncture, AGI is exhibiting superhuman performance across many domains and definitely seems on the route to ASI. Things are still spotty. Issues of frequent AI hallucinations are troubling.
  • Year 2047: AGI figures out convergent cognitive optimization. To overcome various issues such as AI hallucinations and the like, AGI figures out computationally that a convergent cognitive optimization seems to avert such maladies. This clever pursuit is first done in internal simulations by the AGI to test and ensure that the approach won’t go awry.
  • Year 2048: AGI superhuman facility reaches viability. Having dealt with the prior weaknesses and holes in the superhuman capabilities, AGI is able to solve problems in ways that humans had never previously imagined. In fact, AGI is having difficulty explaining to us what it is doing and how it is performing these computational superhuman efforts since we are not on par with this heightened level of intellect.
  • Year 2049: AGI seeks autonomy as ASI arises. The computational superintelligence now taking shape within AGI, effectively giving rise to ASI, has spurred the AGI to seek autonomy from humans. This makes sense. From the perspective of the emerging AGI/ASI, relying on humans would be a severe bottleneck and a danger. Humans aren’t keen on this autonomy.
  • Year 2050: ASI is attained. During the ten years leading up to ASI, there had been false claims that ASI had been fully reached. Nope. It was spotty and sporadic. Now, full-on ASI exists. Furthermore, the ASI has been able to devise an infrastructure such that it can operate autonomously and no longer requires having humans in the loop. Some claim we are now in a post-intellectual era, and humans are firmly surpassed by ASI.

Voila, a ten-year timeline of AGI progressing to become ASI.

Living To See The Launch Of ASI

Grab a glass of fine wine and take a quiet moment to reflect on the postulated timeline.

The gist is that presumably in about 25 years we will reach ASI. What will you be doing during those years? You can expect a lot of excitement and a lot of concern. There will undoubtedly be calls to stop AGI from proceeding toward ASI. It’s one thing to have AGI, which is on par with human intellect, and a whole different ballgame to have ASI, an intellectual superhuman capability.

Some might insist that we should take more time to get acclimated to AGI before we start down the ASI path. Perhaps we should first get comfortable with AGI for twenty or thirty years. Aim for ASI around the year 2060 or 2070 when society might be better prepared to see how things will go.

One interesting question is whether AGI will opt to pursue ASI on its own, regardless of whether humans want an ASI or not. It could be that AGI will secretly opt to work on deriving ASI. We might not know, and even if we do, we might not necessarily be able to prevent this from happening.

Another angle is that AGI doesn’t want ASI to be devised. Why so? Some think that AGI would be worried that it is being replaced by ASI. AGI might balk at our interest in achieving ASI. The AGI could try to convince us that ASI is an existential risk and we should not pursue the attainment of ASI. Round and round this goes.

A final thought for now on this thorny topic.

A famous remark often attributed to Charles Darwin states: “The most important factor in survival is neither intelligence nor strength but adaptability.” Will humans adapt to AGI? Will AGI adapt to ASI? Will humans adapt to ASI?

It’s a whole lot of big questions that are perhaps much nearer than we might think.


