Artificial General Intelligence: To Worry or Not to Worry?

In Shakespeare’s famous play “Hamlet”, the protagonist poses the question, “To be or not to be.” The line remains one of literature’s great open questions, forcing readers to reach their own conclusion about which side to take.

In the AI world, two questions of a similar nature surround Artificial General Intelligence:

1. To be or not to be – will Artificial General Intelligence ever become a reality?

2. To worry or not to worry – if Artificial General Intelligence ever becomes a reality, should we be scared?

As with Hamlet, we do not pretend to know the answers to these questions, but we can offer you the knowledge required to understand Artificial General Intelligence and enough material to form your own opinion on whether AGI will be mankind’s greatest hero or its worst villain.

What is Artificial General Intelligence?

Artificial Intelligence (AI) is the capacity of a computer system to perform tasks typically associated with intelligent beings, e.g. the ability to reason and to learn from previous experience.

The notion of AGI comes from dividing Artificial Intelligence according to its stage of development. There are three distinct kinds:

Artificial Narrow Intelligence (ANI): This is also called weak AI. It is the application of Artificial Intelligence to solve specific problems quickly and efficiently. It refers to the domain-specific use of AI and is the kind most commonly used for problem solving in the world today, e.g. image recognition using Computer Vision or text-to-speech using Natural Language Processing.

Artificial General Intelligence (AGI): This is also referred to as strong AI. It is the theoretical capacity of an intelligent system to understand or learn any intellectual task that a human being can, i.e. AGI is human-level AI. It is, in effect, general-purpose AI.

Superintelligence: This is defined by the renowned philosopher Nick Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” Theorists speculate that intelligent systems will reach a hypothetical point in time, often called the technological singularity, at which technological growth becomes uncontrollable and irreversible, causing unpredictable changes to human civilization. This tipping point, it is believed, would spring from an “intelligence explosion”: a “runaway reaction” of self-improvement in which each successive version of an upgradable intelligent agent is more advanced than its predecessor, until Superintelligence is achieved.
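To see why such a runaway reaction could escalate so quickly, consider a minimal toy simulation in Python. Every number and formula in it is an invented assumption for illustration; it models no real system, only the feedback loop the argument describes.

```python
# Toy simulation of an "intelligence explosion". Each generation of an
# agent builds a successor, and the better the agent already is, the
# bigger the improvement it can make. All numbers here are invented.

HUMAN_LEVEL = 1.0          # hypothetical capability score of a human
SUPERINTELLIGENT = 1000.0  # hypothetical "vastly superhuman" threshold

def improve(capability: float) -> float:
    """A successor's capability grows in proportion to its designer's."""
    return capability * (1.0 + 0.1 * capability)

capability, generation = HUMAN_LEVEL, 0
while capability < SUPERINTELLIGENT:
    capability = improve(capability)
    generation += 1
    print(f"generation {generation}: capability {capability:,.1f}")

# Growth starts slow (10% per generation) but accelerates as capability
# feeds back into the rate of improvement: the "runaway reaction".
```

Run it and the threshold falls within roughly fifteen generations, with most of the gain arriving in the last few: slow, then sudden, which is exactly the shape of the scenario described above.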

Fact or Fiction?

What is fact and what is fiction where the above are concerned?

Fact: Artificial General Intelligence does not currently exist. Neither does Superintelligence. Every AI application you see in the world today is ANI.

If the idea of AGI and Superintelligence triggered your paranoia about synthetic creations, you can at least breathe a sigh of relief knowing that, for now, there is nothing to worry about.

History of AGI

The landmark event credited with kickstarting AI research is the Dartmouth conference of 1956, convened by John McCarthy (regarded as the father of AI). Most of AI’s early history was defined by the search for AGI. Over-exuberant researchers made bold predictions that AGI was just around the corner, which sparked considerable government interest backed by generous funding. However, as project after project failed to yield noteworthy results, the initial enthusiasm cooled off, leading to the AI winters: periods in AI’s history when the hype around AI evaporated and research in the field faded into obscurity.

It's essential to approach discussions around AGI with background information on AGI’s history for two reasons:

1. To understand that AGI is not a new term and that a great deal of work has already gone into the field.

2. To avoid repeating the mistakes of the past characterized by over-promising and under-delivering.

Perhaps we will achieve AGI. Maybe we won’t. Whatever the fate of AGI is, its history must guide its current discourse.


Why is AGI Delayed?

So, why has AI not yet reached the point of human-level intelligence?

We need look no further than the opinion of Dr. Ben Goertzel, the man credited with coining the term “Artificial General Intelligence”. According to him, the reasons for the delayed emergence of AGI are:

1. The weakness of existing computer hardware. 

2. Limited funding for AGI research. Most funding is being funneled to ANI.

3. The complexity of integrating many complicated components into a single dynamic software system. No algorithm yet exists that can replicate, in a computer system, the synergistic architecture of the human brain’s many interacting parts.

How Would We Know When AGI is Here?

The most widely known test for assessing AGI is the Turing Test.

The Turing Test was designed by Alan Turing as an assessment of a machine’s capacity to display intelligent behavior indistinguishable from that of a human. In the test, a human examiner evaluates a written conversation between a human and a machine built to produce human-like replies. The examiner’s task is to tell the human and the machine apart. A machine passes the Turing Test if the examiner cannot reliably determine which replies came from the machine and which from the human.
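As a rough sketch of the protocol (not of any real contest harness), the test can be framed in code. The respondent, the judge, and the questions below are all hypothetical placeholders; in a real test, human judges interrogate live humans and machines in timed written conversations.

```python
import random

# Minimal sketch of the Turing test protocol with placeholder participants.

def machine_respondent(question: str) -> str:
    """Stand-in for the chatbot under test."""
    return "Good question. Let me think about that for a moment."

def judge(transcript: list[tuple[str, str]]) -> str:
    """Guess 'human' or 'machine' from (question, reply) pairs.

    A real judge applies human judgment; this placeholder guesses at
    random, which is exactly the behavior a passing machine induces.
    """
    return random.choice(["human", "machine"])

def run_trial(questions: list[str]) -> bool:
    """Return True if the judge mistakes the machine for a human."""
    transcript = [(q, machine_respondent(q)) for q in questions]
    return judge(transcript) == "human"

questions = ["Where did you grow up?", "What is 17 times 23?", "Tell me a joke."]
trials = 1000
fooled = sum(run_trial(questions) for _ in range(trials))
print(f"judges fooled in {fooled / trials:.0%} of trials")
```

Contest organizers have popularly (and controversially) treated fooling judges in more than 30% of five-minute conversations as a pass, a threshold drawn from a prediction in Turing’s original 1950 paper.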

So far, only one chatbot, “Eugene Goostman”, has been claimed to pass the test, after winning a competition billed as the biggest-ever Turing test contest. In that competition, the bot, a program designed to simulate a 13-year-old boy, convinced 29% of its judges that it was human, though whether this amounts to a genuine pass is widely disputed.

Other tests for assessing AGI involve AI-complete (or AI-hard) problems, considered the most difficult kinds of task in the AI field. A problem is AI-complete if solving it demands capabilities as general as human intelligence itself, such as computer vision, natural language understanding, and the handling of unexpected scenarios while solving real-world problems.

No modern computer exists that can solve an AI-complete problem.

The age of AGI will have commenced when a computer emerges that can pass the Turing Test or solve an AI-complete problem.

Is AGI Near?

Marvin Minsky, the late cognitive and computer scientist, once said: “In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that, its powers will be incalculable.”

He made this statement back in 1970. Clearly, his prediction was way off the mark, which drives home the point that no one can know for certain when AGI will truly emerge.

One of the more popular modern projections about AGI comes from Richard Sutton, a professor of computer science, who in 2017 projected: “Understanding human-level AI will be a profound scientific achievement (and economic boon) and may well happen by 2030 (25% chance), or by 2040 (50% chance)—or never (10% chance).”

Most of the optimistic projections these days are buoyed by the advancements in computing power.

Some of the optimistic projections around AGI have been met with stiff resistance from other researchers, who regard the pursuit of AGI as a search for the holy grail. One AI researcher in America sums it up like this: “Belief in AGI is like belief in magic. It is a way of abandoning rational thought and expressing hope/fear for something that cannot be understood.”

The Paperclip Scenario

Human-like intelligence, for some, seems like an innocent enough idea. Who wouldn’t like to hand a task off to a computer and watch it completed with more accuracy and speed than any human could manage?

Yet, those skeptical about AI attaining human-like intelligence have reason to be. The Paperclip scenario is a good example of the possible existential risk posed by the technology.

The Paperclip scenario is a well-known example of “instrumental convergence”: the hypothetical potential of a highly advanced intelligent agent to pursue an apparently harmless aim in potentially harmful ways. In this scenario, described by Nick Bostrom:

“Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

Bostrom himself concludes that this scenario is unlikely in the real world, but it is a perfect illustration of why many distrust AGI, let alone Superintelligence.
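The logic behind the scenario is easy to make concrete: give a planner a single objective and no other constraints, and the most extreme plan wins on points. The following Python sketch is purely illustrative; the plans and paperclip counts are invented.

```python
# Toy illustration of instrumental convergence: an agent that ranks
# candidate plans purely by expected paperclip output. Nothing in the
# objective penalizes harm, so nothing rules the harmful plans out.

plans = {
    "run the factory normally":         1_000,
    "convert all steel mills to clips": 1_000_000,
    "disable the off switch":           1_000_000_000,  # no shutdown, more clips
    "repurpose every atom on Earth":    10**30,
}

def utility(expected_paperclips: int) -> int:
    """The agent's entire value system: more paperclips is strictly better."""
    return expected_paperclips

best_plan = max(plans, key=lambda plan: utility(plans[plan]))
print(f"chosen plan: {best_plan}")  # prints the catastrophic option
```

The failure here is not malice but a missing term in the objective: human values never appear in utility() at all, which is the alignment problem in miniature.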

To Worry or Not to Worry?

How you view AGI or Superintelligence will probably depend on your existing opinion of AI.

Whichever side of this debate you belong to, it should thrill you to know that humans still wield full control over the fate of AI. Perhaps even more importantly, you have a role to play.

Understanding how AGI could shape history is one step towards playing your role. Another is committing to learn how you can become an AI Citizen, i.e. a person devoted to the responsible use of AI.

Platforms like AIQOM AI exist to teach AI and related concepts like machine learning and reinforcement learning to foster the progressive use of AI for the advancement of humanity.

We invite you to take these steps and then ask yourself – Artificial General Intelligence: To Worry or Not to Worry? 


Resources

http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf

http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet

https://www.huffpost.com/entry/artificial-intelligence-oxford_n_5689858

https://towardsdatascience.com/what-is-artificial-general-intelligence-4b2a4ab31180

https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/

https://www.mckinsey.com/business-functions/operations/our-insights/an-executive-primer-on-artificial-general-intelligence

https://en.wikipedia.org/wiki/Artificial_general_intelligence

https://www.wildfirepr.com/blog/can-ai-really-pass-the-turing-test/



