To understand where AI is headed and what will enable it in the future, it is important to understand its genesis. The term Artificial Intelligence (AI) was coined at the Dartmouth Summer Research Project in 1956 for a branch of study dedicated to finding out how to make computers perform intelligent tasks. The shared vision of the proposal’s authors was to enable computers to solve real-world problems unassisted.
According to the proposal: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Fast forward to 2020, where remarkable advances in Machine Learning (ML), Natural Language Processing (NLP) and Computer Vision have effectively mainstreamed AI. Autonomous vehicles, AI drones and AI-assisted drug discovery are some of the more popular applications of AI, but to make these ideas practical and scalable, breakthroughs are required in the domains discussed below.
Among cognitive abilities, learning is at the core of determining whether a person or a system can be called ‘intelligent.’ Machine Learning, a subset of AI, is made up of three predominant learning models: supervised learning, unsupervised learning and reinforcement learning.
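To make the distinction concrete, here is an illustrative sketch of the first two paradigms using scikit-learn (assumed installed; the data is made up): supervised learning fits a model to labeled examples, while unsupervised learning finds structure in unlabeled data. Reinforcement learning, the third paradigm, is sketched after the AlphaGo discussion below.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1], [0.2], [0.9], [1.0]])  # toy one-feature dataset

# Supervised learning: every example comes with a label the model learns to predict.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15], [0.95]]))  # expected to recover the two classes

# Unsupervised learning: no labels; the algorithm groups similar points on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)
```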
Deep Learning is a subset of ML that utilizes artificial neural networks to mimic human cognitive functions. When you heard in the news a few years ago that a deep reinforcement learning system called AlphaGo beat the world champion of the ancient abstract strategy board game Go, you probably didn’t realize that its neural networks were refined through self-play guided by rewards and penalties; a later version, AlphaGo Zero, started with no knowledge of the game beyond its rules. It is phenomenal that a self-taught algorithm learned to make decisions through rewards and penalties and eventually became good enough at the strategy game to beat a human, and a world champion at that.
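The core idea of learning from rewards and penalties can be shown in a few lines. The sketch below is tabular Q-learning on a made-up one-dimensional corridor; it is illustrative only and bears no resemblance to AlphaGo’s actual architecture, which combined deep neural networks with tree search.

```python
import random

N_STATES = 6          # states 0..5; reaching state 5 ends the episode with a reward
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01  # small penalty per step
        # Q-learning update rule: nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should point right in every non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```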
The ability of artificial neural networks to process unstructured data will determine our success in utilizing Deep Reinforcement Learning models to finally enable a complete and irreversible transition from legacy methods to an AI-integrated society where:
Research from Strategy Analytics estimates that the installed base of connected devices will reach nearly 40 billion by 2025. Add to this the inevitable global rollout of 5G, the new telecom standard that dramatically reduces latency. Combined, they are powerful drivers for AI, but streaming data to the cloud and running AI algorithms there present their own set of challenges.
To put this in perspective, let us understand how the current generation of IoT systems works. An IoT system made up of devices and sensors generates data and connects to the internet to stream that data to the cloud, where AI algorithms are run to make ‘inferences’. As you can imagine, transmitting data from the source to the cloud and sending inferences back to the source introduces latency. Plus, there is the ongoing debate around the privacy and security of data in the cloud.
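To make the round trip explicit, here is a minimal sketch of that flow. The endpoint URL and payload are hypothetical stand-ins for a real cloud inference service; the point is simply that every prediction costs a network round trip.

```python
import time

import requests  # assumes the requests package is installed

# Hypothetical cloud inference endpoint; a real deployment would use its own URL.
CLOUD_ENDPOINT = "https://example.com/api/v1/infer"

reading = {"device_id": "sensor-42", "temperature_c": 21.7}  # made-up sensor payload

start = time.perf_counter()
response = requests.post(CLOUD_ENDPOINT, json=reading, timeout=5)
latency_ms = (time.perf_counter() - start) * 1000

# The device only gets its 'inference' back after the full round trip.
print(f"status: {response.status_code}, round-trip latency: {latency_ms:.1f} ms")
```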
Enter Edge Computing, where computing power is decentralized and data is processed at the edge, that is, on the IoT endpoint device itself. According to Gartner, by 2025, 75% of enterprise-generated data will be created and processed outside a centralized data center or cloud. Tech giants such as Intel have already started partnering with educational organizations to train developers to build Edge AI applications that run AI algorithms directly on the local device.
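By contrast, here is a minimal sketch of on-device inference, assuming a pre-trained TensorFlow Lite model file (model.tflite) and the tflite_runtime package are available on the endpoint device; no data leaves the device and no network round trip is involved.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load the (assumed) pre-trained model directly on the edge device.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in sensor reading shaped to match whatever input the model expects.
reading = np.array(np.random.random_sample(input_details[0]["shape"]), dtype=np.float32)

interpreter.set_tensor(input_details[0]["index"], reading)
interpreter.invoke()  # inference runs locally; nothing is streamed to the cloud
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```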
The key challenges to mitigate in making Edge AI practical are:
Let us look at some of the recent announcements in the field of Quantum Computing, where claims of “Quantum Supremacy” have been made:
As you can already imagine, the volume of data and the speed with which it can be processed by a Quantum ML (QML) algorithm cannot be matched by classical ML algorithms. The experience of an end user interacting with a voice assistant powered by QML, as predicted by Intel, will be similar to interacting with a person in terms of the speed and relevance of the responses. Furthermore, there are computational models that require immense computing power to run, such as simulations of global weather. This makes running such models not only prohibitively expensive but also time-consuming, even on a supercomputer. Breakthroughs in QML and quantum computers promise to bring the time taken to run such models down to nanoseconds.
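The intuition behind such claims is that the state space a quantum machine works in grows exponentially with the number of qubits. The sketch below is illustrative only, not a QML algorithm; it assumes Qiskit is installed, puts n qubits into superposition, and shows that the resulting state is described by 2^n amplitudes at once.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

for n in (2, 4, 8, 16):
    qc = QuantumCircuit(n)
    for q in range(n):
        qc.h(q)  # put each qubit into an equal superposition of 0 and 1
    state = Statevector.from_instruction(qc)
    # n qubits in superposition are described by 2**n complex amplitudes.
    print(f"{n} qubits -> {len(state.data)} amplitudes")
```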