“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” ― Alan Turing
The quote from Dr. Turing highlights a future we are steadily moving toward. The discovery of the power of data and its numerous applications sparked the revolution we now call the age of artificial intelligence (AI). The human brain, the center of our intellectual identity and reasoning, has fascinated neuroscientists for decades; years of dedicated research have unraveled much of its complexity, and many are optimistic about recreating aspects of it for AI. One way to define deep learning is as an approach that loosely replicates the activity of neurons in the neocortex, the part of the brain where thinking occurs. It involves building and training artificial neural networks to recognize patterns in sound, images, speech, and other digital information. A breakthrough in utilizing data at scale and in developing mathematical models for training deep neural networks was not achieved until recently. Artificial neural networks are inspired by the biological model proposed by Nobel laureates David H. Hubel and Torsten Wiesel, who in 1959 found two types of cells in the primary visual cortex: simple cells and complex cells. Many artificial neural networks can be viewed as cascading models of cell types adapted from these biological observations.
Deep learning refers to a class of machine learning algorithms, developed largely since 2006, in which many stages of nonlinear information processing in hierarchical architectures are exploited for pattern classification and feature learning. In the more recent literature, it is also connected to representation learning, which involves a hierarchy of features or concepts where higher-level concepts are defined in terms of lower-level ones. The generic view of neural networks has evolved through experimentation and the application of new data to deep learning architectures such as convolutional neural networks, deep belief networks, long short-term memory networks, deep Boltzmann machines, and more, all of which aim to train an AI with the help of feature extraction and machine learning techniques.
By using feature extraction, these models can build derived variables from an initial set of data and facilitate subsequent learning by generalizing steps that produce better, more human-interpretable representations. Each of these architectures is specialized for training an AI on a particular task such as image processing, pattern recognition, or speech. Convolutional neural networks (CNNs) are the most popular deep learning architecture for processing visual and other two-dimensional data. CNNs are comparatively easy to train because they require fewer parameters, and they have yielded strong results in image and speech recognition. They have a stacked-layer construction, with each module consisting of convolution (a mathematical operation on functions) and pooling layers. A famous application of CNNs is Google's DeepDream, a computer vision program that uses a CNN to find and enhance patterns in images via algorithmic pareidolia, creating a dreamlike, hallucinogenic appearance in over-processed images.
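The convolution-then-pooling structure described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not a real CNN implementation: the toy 6×6 image, the 2×2 edge-detecting kernel, and the helper names `conv2d` and `max_pool` are all hypothetical choices made for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the largest value in each size x size block."""
    out = np.zeros((x.shape[0] // size, x.shape[1] // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

# Toy 6x6 image: bright left half, dark right half (a single vertical edge).
image = np.zeros((6, 6))
image[:, :3] = 1.0

edge_kernel = np.array([[1., -1.],
                        [1., -1.]])          # crude vertical-edge detector
features = np.maximum(conv2d(image, edge_kernel), 0.0)  # ReLU nonlinearity
pooled = max_pool(features)                  # downsample the feature map
print(pooled)  # the nonzero column marks where the edge was found
```

Stacking several such convolution/pooling modules, with learned rather than hand-written kernels, is what gives a CNN its hierarchy of increasingly abstract features.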
A deep learning system that Google developed was shown around 10 million images taken from YouTube videos in order to develop image recognition, and it succeeded in identifying objects such as cats, yellow flowers, and human faces without any prior knowledge of those objects. Microsoft's Chief Research Officer, Rick Rashid, demonstrated speech recognition software at a lecture in China that transcribed his spoken English with roughly a 7% error rate, translated it into Mandarin, and simultaneously rendered the translation in a simulation of his own voice.
IBM’s Watson computer uses deep learning techniques and sophisticated analytics software to assist doctors with question-answering research, enabling them to make better decisions. Microsoft has also deployed deep learning software in its Windows Phone and Bing voice search services. Of course, these examples only scratch the surface of deep learning's potential. Extensive development is still needed to expand its applications beyond speech and image recognition, perhaps to the point where AIs can finally learn to think for themselves!