Are We Close To AGI?

#39

There has been a lot of talk this week about AGI and how close we are to achieving it. Many people are uncertain of the terms, so this week, as well as looking at what has been said, we’ll also do a general unpacking of the differences between ANI (Artificial Narrow Intelligence), AGI (Artificial General Intelligence), ASI (Artificial Super Intelligence) and beyond. Let’s dive in…

This week has seen the global elite descend on Davos for the annual World Economic Forum (WEF). Top of the WEF risk list for 2024/25 (amongst all those surveyed) was not climate change or war, as you might expect, but misinformation and election interference by AI-enhanced bad actors. As AI gets better, this threat gets worse. That said, it has put all things AI front and centre at the Forum. Amongst those taking part in the discussions, Sam Altman, CEO of OpenAI, had a lot to say about the advent and impact of AGI. The video below is quite long and technical, but the first few minutes feature Altman talking about AGI.

To be clear, AGI is not what we have now. Rather, we are on a journey towards it: an AI significantly more powerful than the current iterations. In the infographic below, we currently sit somewhere between ANI and AGI.

AGI, especially when embedded in humanoid robots, is a very big deal and will likely revolutionise the world as we know it. The famed futurist Ray Kurzweil predicted in 2005 that human-level intelligence (AGI) would be achieved by 2029 and that the so-called Singularity (ASI in the infographic above) would be reached by 2045. In truth, many of the brightest minds working in AI suggest that AGI will happen before 2029; many say by 2025. There is certainly a huge push towards driving this forward. Earlier this week, Mark Zuckerberg, CEO of Meta (formerly Facebook), explained how Meta was going all in on open-source AGI development. Here’s what he had to say:

All sounds great, but what will AGI mean for you and me? Well, I asked that very question of ChatGPT and, after a bit of back and forth, this is what it summarised:

“Your readers and the public at large could experience transformative changes in healthcare, education, the economy, and more. However, addressing job displacement, ethical concerns, and inequality will be crucial. Preparing for these changes involves education, policy-making, public engagement, and collaboration across sectors, with a focus on human-centric design to align AGI with societal values and needs.”

All of which is an interesting way of saying that AGI will bring both good and bad outcomes, and knowing that means we need to prepare now for what is coming down the road. This is no longer a possibility; it is inevitable and will affect all of us. Writing this newsletter is my small contribution to the debate, and I hope it makes some of you more aware of the opportunities and threats facing us all in the coming months and years.

As for ASI (Artificial Super Intelligence), that’s for another day!