The Evolution of AI
1936: Alan Turing proposed the idea of a Universal Turing Machine.
1943: Warren McCulloch and Walter Pitts published a seminal paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity”, which laid the foundation for artificial neural networks.
1950: Alan Turing devised a test to determine whether a machine is intelligent. This test is known as the “Turing Test”.
1956: The term “Artificial Intelligence” was coined by John McCarthy at a summer conference at Dartmouth College.
1966: The first chatbot, a natural language processing program, was built by Joseph Weizenbaum.
1969: The idea of “back-propagation” was introduced. Perhaps the most significant algorithm in AI, “backprop”, as it is called, enables a neural network to learn from its mistakes by propagating errors backward through its layers and adjusting its weights accordingly.
1972: WABOT-1, the first intelligent humanoid robot, was built in Japan.
1973: Interest in and funding for artificial intelligence research went into a general decline. This period is known as the first “AI winter”.
1981: Renewed interest in AI ended the “AI winter”. The first commercially available expert system was put into use at Digital Equipment Corporation.
1986: German researchers at Bundeswehr University conducted a test of the first self-driving car, which used cameras and sensors to drive successfully.
1987: The second AI winter began, lasting until 1993. That year, an eight-legged robot named Dante, controlled from the United States of America, attempted to explore the Mt. Erebus volcano in Antarctica.
1997: Deep Blue, a chess-playing computer built by IBM, defeated world chess champion Garry Kasparov in a match.
2011: IBM’s Watson won the Jeopardy! game show by defeating two of the game’s all-time best players.
2011: Apple introduced Siri, a voice-activated personal assistant with natural language processing, on the iPhone 4S; it could understand spoken requests and respond to them.
2014: Alexa, a virtual assistant developed by Amazon, was introduced.
2014: Eugene Goostman, a chatbot developed in Russia, was claimed to have passed the Turing test, but upon further investigation the claim was found not to hold up.
2016: DeepMind’s AlphaGo defeated world champion Lee Sedol at the game of Go.
2018: For the first time, a person was killed by a self-driving car, in Arizona, USA, raising doubts about the safety of self-driving cars.
The Birth of "Artificial Intelligence"
As far as the history of AI is concerned, two names stand head and shoulders above all other contributors to the field: Alan Turing and John McCarthy.
Turing was inspired by the question of whether machines could become intelligent, and this question became the foundation of the Turing test. In 1936, Turing also invented the Turing machine, which he initially called an a-machine (the automatic machine). His theoretical model described an abstract machine capable of carrying out any computation that could be expressed as an algorithm.
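Turing's abstract model is simple enough to sketch in a few lines: the machine reads the symbol under its head, looks up its current state and that symbol in a rule table, writes a symbol, moves left or right, and changes state. The transition table below is an illustrative toy (a bit-flipping machine), not any machine Turing himself described:

```python
# A minimal sketch of Turing's abstract machine model.
# rules maps (state, symbol) -> (symbol_to_write, move, next_state).

def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=1000):
    """Run a transition table over a tape; return the final tape contents."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")           # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: flip every bit, then halt at the first blank cell.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_rules, "1011"))  # 0100
```

The point of the model is not this particular table but that one fixed mechanism (read, look up, write, move) suffices to execute any rule table, which is what makes the machine "universal" in Turing's sense.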
Turing was central to the work at Bletchley Park during World War II. At Bletchley Park, Turing was able to drive efforts towards the successful development of an electromechanical machine called the bombe which was used to decrypt German encoded messages done on the Enigma machine. Turing’s work was pivotal to shortening the duration of World War II saving countless lives in the process.
The Dartmouth conference of 1956, organized by John McCarthy, is famed as the seminar that established the field of AI. It was there that McCarthy coined the term “Artificial Intelligence”. He also developed the programming language Lisp in 1960, which most AI applications of the time were written in.
The effort to build AI capable of understanding and responding to natural language (the language humans use when speaking and writing) can be traced back to ELIZA, one of the first chatbot programs ever built.
ELIZA was built in 1966 by Joseph Weizenbaum, a German American scientist inspired by the Turing test. Using pattern matching and a script called DOCTOR, Weizenbaum enabled ELIZA to interact with people, to a limited extent, in the way a therapist might. So a person might say, “I worry a lot,” and ELIZA would reply, “What do you worry about?”
ELIZA’s pattern matching worked by recognizing keywords and phrases in the user’s input and looking them up among the scripted replies it had been given in advance. When it found a match, it produced the corresponding response, sometimes reusing fragments of the user’s own words.
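The mechanism can be sketched in a few lines. The tiny rule set below is an invented illustration; the real DOCTOR script was far larger and more carefully crafted. Each rule pairs a keyword pattern with reply templates, and `{0}` reuses the fragment of the user's input captured by the pattern:

```python
import random
import re

# A toy sketch of ELIZA-style pattern matching with a hand-written script.
# Rules are tried in order; the first matching pattern supplies the reply.
RULES = [
    (re.compile(r"i worry about (.*)", re.I), ["Why do you worry about {0}?"]),
    (re.compile(r"i worry(.*)", re.I), ["What do you worry about?"]),
    (re.compile(r"i am (.*)", re.I), ["How long have you been {0}?",
                                      "Why do you say you are {0}?"]),
]
DEFAULT = ["Please tell me more."]

def respond(text):
    """Return a scripted reply for the first rule whose pattern matches."""
    for pattern, replies in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT)

print(respond("I worry a lot"))  # What do you worry about?
print(respond("I am sad"))       # e.g. "How long have you been sad?"
```

As the fallback rule shows, ELIZA never truly understood anything: when no keyword matched, it simply deflected with a generic prompt, which is part of why the illusion of understanding was so striking at the time.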
Machine Learning, Deep Learning & Neural Nets
Machine learning began to take off in the 1990s. To achieve it, AI scientists fed computer systems large amounts of data and then left it to the systems themselves to work out how to make sense of that data.
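A minimal illustration of that idea: instead of hand-coding rules, we give the system labeled examples and let it generalize from them. The nearest-neighbor classifier below is one of the simplest such methods; the data points are invented for illustration:

```python
# Learning from examples rather than explicit rules: classify a new point
# by the label of the closest example the system has already seen.

def nearest_neighbor(examples, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], point))[1]

# (height_cm, weight_kg) -> species label; values are made up.
examples = [((20, 4), "cat"), ((25, 5), "cat"),
            ((60, 25), "dog"), ((70, 30), "dog")]
print(nearest_neighbor(examples, (22, 4)))   # cat
print(nearest_neighbor(examples, (65, 28)))  # dog
```

No rule for "cat" or "dog" is ever written down; the behavior comes entirely from the data, which is the shift in approach the paragraph above describes.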
[Image: Walter Pitts (left) and Warren McCulloch (right), 1949. Published in BioSystems, 2007.]
The discussion of artificial neural networks began in 1943, when two scientists, Warren McCulloch and Walter Pitts, developed a model for neural networks.
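Their model can be sketched as a simple threshold unit: a neuron fires (outputs 1) when enough of its binary inputs are active. The thresholds below are illustrative choices, not values from the 1943 paper:

```python
# A sketch of the McCulloch-Pitts neuron: a binary threshold unit.

def mp_neuron(inputs, threshold):
    """Fire (1) iff the number of active binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With the right threshold, a single unit computes basic logic gates,
# which is how McCulloch and Pitts linked neurons to logical calculus.
AND = lambda a, b: mp_neuron([a, b], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

The title of their paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity", reflects exactly this correspondence between networks of threshold neurons and logical operations.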
In 2006, Geoffrey Hinton introduced the term “deep learning”, which refers to neural networks with many layers of learning. Along with Yoshua Bengio and Yann LeCun, the British-Canadian computer scientist is regarded as a “Godfather of Deep Learning”. Hinton’s impact extended to his students: one of them, Alex Krizhevsky, designed AlexNet for the 2012 ImageNet challenge. AlexNet revolutionized the field of computer vision, and its paper, cited over 61,000 times, is considered one of the most influential ever published in the field.
In 2018, Hinton, alongside Yoshua Bengio and Yann LeCun, was awarded the Turing Award for their work on deep learning.
The neural networks of the 1960s and the expert systems of the 1980s each fueled a great deal of hype about AI in their day. The gains recorded in these areas attracted funding from both public and private stakeholders, and many researchers jumped on the AI bandwagon to gain access to these grants.
Many of these researchers promised a great deal and offered unrealistic projections of what their work could deliver. It soon became clear, however, that there were serious technical hurdles that most research efforts could not overcome. These shortcomings bred impatience among the stakeholders funding AI, and, giving up hope that the situation would improve, many pulled out. This paved the way for the cold and barren AI winters, which were marked by little or no development in AI.