The discussion surrounding AI predates this century. It extends back even before Alan Turing asked the question, “What if machines could think?”
The “AI buzz” is certainly not new. If, however, one were to examine the modern world and attempt to pinpoint the factors responsible for AI’s meteoric rise, the following would stand out: faster and cheaper computing power, big data, advances in algorithms (especially deep learning), open-source libraries, and open research publication.
Now, both experts and keen observers of the field watch in eager expectation of what AI can become in the near and distant future.
In the early days, limitations in computing power restricted AI to the companies that could afford it, so the playing field was dominated by top players such as Amazon, Google, and Microsoft. The hardware and software needed to run AI workloads were simply too expensive or complex for smaller players to implement. This was the picture until cloud computing came into the mix. Cloud computing helped to level the playing field by opening up the world of AI to small and medium enterprises on a scale previously available only to large corporations and governments.
There is also now what is termed Artificial Intelligence as a Service (AI-as-a-Service), an option offered by cloud providers. Through REST-based APIs, AI-as-a-Service can work seamlessly with a company’s internal applications, allowing software developers to put an AI model into production with minimal friction. The economic benefit is inherent in the fact that customers pay only for the time spent on the platform.
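To make the REST integration concrete, here is a minimal sketch of how an internal application might call such a service. The endpoint URL, the `input`/`label` payload shape, and the bearer-token header are all hypothetical, invented for illustration; every real provider defines its own schema.

```python
import json
import urllib.request


def build_request(endpoint: str, api_key: str, text: str) -> urllib.request.Request:
    """Build an HTTP POST request for a hypothetical text-classification endpoint."""
    payload = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
        },
        method="POST",
    )


def parse_response(body: bytes) -> str:
    """Extract the predicted label from a JSON response body."""
    return json.loads(body)["label"]


# Constructing and inspecting a request; no network call is made here.
req = build_request("https://api.example.com/v1/classify", "MY_KEY", "great product!")
print(req.get_method())
print(parse_response(b'{"label": "positive"}'))
```

In a real deployment the request would be sent with `urllib.request.urlopen(req)` (or an HTTP client library), and the provider would meter the calls for billing.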
Advances in computing power will only continue as time progresses. Moore’s law puts it well: “The number of transistors on a microchip doubles every two years, though the cost of computers is halved.” Not only will computing power keep improving, it will also get cheaper to use!
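The compounding effect of that doubling is easy to underestimate. A short sketch of the arithmetic, assuming a clean doubling every two years:

```python
def transistors_after(years: float, start: float = 1.0) -> float:
    """Project relative transistor count under Moore's law:
    a doubling every two years from a starting count."""
    return start * 2 ** (years / 2)


# Five doublings in a decade means a 32x increase:
print(transistors_after(10))   # 32.0
print(transistors_after(20))   # 1024.0
```

Even over a single decade, a modest-sounding “doubles every two years” compounds into a 32-fold increase.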
To get a sense of how much the world has grown in terms of data, consider this: roughly 90% of the world’s data was created in the last two years. Data creation gets easier by the minute. Today, over 2.5 quintillion bytes of data are created daily, and in 2020 about 1.7MB of data was created per second per person. These figures come from the sixth edition of a Domo report.
This can be attributed to the rapid creation and availability of data, thanks to reduced costs and new data-generation sources. The successful deployment of AI depends on how much data the system can be fed.
We are in the Big Data age, where a variety of platforms and devices such as cloud computing, smartphones, cameras, and sensors serve as sources of data that AI can leverage for problem-solving. As AI can gather data from all of these sources, its ability to solve specific problems across fields such as agriculture, finance, education, and medicine will grow by leaps and bounds.
Advances in data storage and management technology, such as NoSQL databases, have boosted the use of large datasets to train AI models. Training has also been helped by open databases such as ImageNet, a collection of over 10 million hand-tagged images. Other datasets used to train deep learning models include MNIST, STL, Open Images, Visual Question Answering, and SVHN.
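A hand-tagged dataset like those above is, at its simplest, a collection of labeled records that gets split into training and validation portions before model training. The toy records below (IDs, fake pixel lists, “cat”/“dog” labels) are invented for illustration and loosely shaped like documents in a NoSQL store:

```python
import random

# Hypothetical hand-tagged records, shaped like documents in a document store.
records = [
    {"id": i, "pixels": [0] * 4, "label": "cat" if i % 2 == 0 else "dog"}
    for i in range(10)
]


def train_val_split(data, val_fraction=0.2, seed=42):
    """Shuffle labeled records and split them into training and validation sets."""
    shuffled = data[:]                     # copy so the original order is kept
    random.Random(seed).shuffle(shuffled)  # seeded for a reproducible split
    cut = int(len(shuffled) * (1 - val_fraction))
    return shuffled[:cut], shuffled[cut:]


train, val = train_val_split(records)
print(len(train), len(val))   # 8 2
```

Real pipelines do the same thing at a vastly larger scale, streaming millions of such records out of storage and into the model during training.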
The story of AI’s improvement is completed by advances in algorithms (such as deep learning), open-source libraries, and open research publications.
Deep learning (or deep neural networks) is a type of machine learning technique based on artificial neural networks, which are loosely patterned after biological ones. Raw data is processed through several layers, from the lowest to the highest, until a meaningful output is formed. To work, deep learning needs a great deal of data to process, so the side-by-side growth of computing power and Big Data has helped to improve its capabilities. As advancements emerge in deep learning and data gathering, increasingly complex neural networks can be built, improving on their current use in finding patterns in speech, images, and more.
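The layer-by-layer processing can be sketched in a few lines of plain Python. The network below is a toy: three raw inputs pass through one hidden layer and a single output unit. The weights are made up for illustration; in real deep learning they are learned from data, and libraries handle the layers far more efficiently.

```python
import math


def dense(inputs, weights, biases):
    """One fully connected layer: each unit takes a weighted sum of the
    inputs plus a bias term."""
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]


def relu(values):
    """Common hidden-layer activation: zero out negative values."""
    return [max(0.0, v) for v in values]


def sigmoid(v):
    """Squash the final output into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))


# Raw data flows upward: 3 inputs -> hidden layer of 2 units -> 1 output.
x = [0.5, -1.0, 2.0]
h = relu(dense(x, weights=[[0.1, 0.2, 0.3], [-0.4, 0.5, 0.6]], biases=[0.0, 0.1]))
y = sigmoid(dense(h, weights=[[0.7, -0.8]], biases=[0.2])[0])
print(round(y, 3))
```

Stacking many such layers, with millions of learned weights, is what turns this simple pattern into the networks that find structure in speech and images.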
The creation of open-source frameworks such as TensorFlow, Keras, and PyTorch has also played its role in improving access to machine learning research and allowing flexibility in the construction of neural networks.
In much the same way, the open research approach and the wealth of freely available publications help researchers stay up to date on the latest thinking in AI without the peer-review delays associated with other scientific fields.