Getting to the Roots - Disciplines that Make up Today's AI
Posted by Tirthankar RayChaudhuri on Jan 12, 2024
As a consequence of the endeavors of researchers studying the human mind and thought processes over a number of decades, the following popular disciplines have emerged today:
- Cognitive science
- Machine learning
- Deep learning
- Natural language processing
Each of these has seen significant advances contributed by numerous researchers.
In the following sections, these disciplines are described in more detail.
Cognitive Science
Cognitive Science is an interdisciplinary study of the human mind with an emphasis on acquisition and processing of knowledge and information.
The contributing disciplines to Cognitive Science are:
- Conventional Artificial Intelligence, also known as Expert Systems
- Linguistics
- Neuroscience
- Cognitive Psychology
- Philosophy
We describe these disciplines of Cognitive Science briefly in the following subsections.
Conventional Artificial Intelligence: Expert Systems
An expert system is a computer system that emulates the decision-making ability of a human expert.
The first expert systems were created in the 1970s and then proliferated in the 1980s.
Expert systems were among the first truly successful forms of AI software.
An expert system is divided into two sub-systems:
- The Knowledge Base. The knowledge base represents facts and rules.
- The Inference Engine. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging capabilities.
Expert systems are designed to solve complex problems by reasoning about knowledge represented primarily as if-then rules rather than through conventional procedural code.
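To make the two sub-systems concrete, here is a minimal illustrative Python sketch of a toy forward-chaining inference engine: the knowledge base holds facts and if-then rules, and the engine repeatedly applies the rules to the known facts to deduce new ones. The facts, rules and names used are invented purely for illustration and are not drawn from any real expert system.

```python
# Toy expert system: a knowledge base of facts and if-then rules,
# plus a forward-chaining inference engine that derives new facts.

facts = {"has_fever", "has_cough"}          # known facts (illustrative only)
rules = [
    ({"has_fever", "has_cough"}, "may_have_flu"),   # IF fever AND cough THEN may have flu
    ({"may_have_flu"}, "recommend_rest"),           # IF may have flu THEN recommend rest
]

def infer(facts, rules):
    """Apply the rules to the known facts until no new facts can be deduced."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))
# {'has_fever', 'has_cough', 'may_have_flu', 'recommend_rest'}
```

Real expert system shells add much richer rule languages, conflict resolution and the explanation facilities mentioned above, but the basic deduction loop is the same.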
Linguistics
Linguistics is the scientific study of language.
One of the main objectives of Linguistics is to elucidate the mental and physical processes underlying the perception and production of language.
Noam Chomsky, who is known as the father of modern linguistics, considers language an essential mental faculty universally endowed to all human beings.
He postulated in 1965 that linguistic structures are partly innate, reflecting an underlying similarity, a Universal Grammar, in all human languages.
There are three major aspects of linguistic study:
- language form (grammar, morphology, phonetics),
- language meaning (semantics), and
- more subtle use of language such as context, intent and implicature (pragmatics).
A more detailed blog on Linguistics is available later in this series.
Neuroscience
Neuroscience is the scientific study of the nervous system. The scope of neuroscience has broadened to include the different approaches used to study the molecular, cellular, developmental, structural, functional, evolutionary, computational, and medical aspects of the nervous system.
Neurons (or nerve cells) are one of the main constituents of the brain. The brain’s neocortex is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought and language. Recent theoretical advances in neuroscience have also been aided by the study of neural networks (sub-symbolic models of the neocortex).
Cognitive Psychology
Cognitive psychology is the study of mental processes such as attention, language use, memory, perception, problem solving, creativity and thinking.
Much of the work derived from cognitive psychology has been integrated into various other modern disciplines of psychological study, including educational psychology, social psychology, personality psychology, abnormal psychology, developmental psychology, and consumer psychology.
Philosophy
Philosophy is the study of general and fundamental problems, such as those connected with reality, existence, knowledge, values, reason, mind and language.
The ancient Greek word φιλοσοφία (philosophia) was probably coined by Pythagoras and literally means "love of wisdom" or "friend of wisdom."
Philosophy has been divided into many sub-fields:
• chronologically (e.g., ancient and modern);
• by topic (the major topics being epistemology, logic, metaphysics, ethics, and aesthetics); and
• by style (e.g., analytic philosophy).
Machine Learning
Machine Learning is a field of computer science.
It evolved from the study of pattern recognition and computational learning theory.
Machine Learning explores the construction and study of algorithms that can learn from data and make predictions and/or recommendations. Common examples of Machine Learning applications are spam filtering, optical character recognition (OCR), search engines and computer vision.
Examples of Machine Learning algorithms are decision tree learning, association rule learning, artificial neural networks, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, and genetic algorithms.
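As a small illustration of "learning from data and making predictions", the sketch below trains a decision tree (one of the algorithms listed above) on a handful of made-up spam-filtering examples. It assumes the scikit-learn library, which is not mentioned above, and the feature values and labels are invented purely for illustration.

```python
# Illustrative decision tree learning with scikit-learn.
# Each example is [number_of_links, number_of_spam_words]; label 1 = spam, 0 = not spam.
from sklearn.tree import DecisionTreeClassifier

X_train = [[0, 0], [1, 0], [5, 8], [7, 6], [0, 1], [6, 9]]
y_train = [0, 0, 1, 1, 0, 1]

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)        # learn decision rules from the labelled data

print(clf.predict([[4, 7]]))     # predict for a new, unseen example -> [1] (spam)
```

The same fit/predict pattern applies, with different algorithms and features, to most of the applications listed below.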
A longer list of Machine Learning applications is as follows: affective computing, adaptive websites, bioinformatics, brain-machine interfaces, cheminformatics, classifying DNA sequences, computational finance, computer vision, credit card fraud detection, game playing, information retrieval, internet fraud detection, machine perception, medical diagnosis, natural language processing, object recognition, process optimization, recommender systems, robot locomotion, search engines, sentiment analysis, sequence mining, speech and handwriting recognition, stock market analysis, structural health monitoring, and syntactic pattern recognition.
A list of highly advanced, and in some cases dangerous, automations that employ Machine Learning in varying degrees is given below (refer also to the last section on Risks/Challenges):
- Self-driving vehicles
- Robots in general
- Unmanned aircraft (drones)
- Smart weapons, e.g., guided missiles
- Guided rockets
- Unmanned spacecraft
- Artificial satellites (e.g., for communication and weather data)
Deep Learning
Deep learning (also called deep machine learning, deep structured learning, hierarchical learning, or simply DL) is formally defined as a branch of Machine Learning based on a set of algorithms that attempt to model high-level abstractions in data using model architectures composed of multiple non-linear transformations.
The term "deep learning" gained traction in the mid-2000s after a publication by
Geoffrey Hinton and Ruslan Salakhutdinov showed how a many-layered feed forward
neural network could be effectively pre-trained one layer at a time, treating each layer
in turn as an unsupervised restricted Boltzmann machine, then using supervised back propagation for fine-tuning. The use of many layers in the learning model led to the creation of the term ‘deep learning’. Deep Learning therefore is a rebranding term signifying machine learning with the use of Artificial Neural Networks with manifold ‘hidden layers’.
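To make the idea of "many hidden layers" concrete, here is a minimal illustrative sketch of a small feedforward network with several non-linear hidden layers, written using PyTorch (the platform mentioned below). The layer sizes and activation choices are arbitrary assumptions for illustration, not taken from any published model, and the sketch omits the layer-wise pre-training described above, simply stacking the layers directly.

```python
import torch
import torch.nn as nn

# A small "deep" feedforward network: several stacked hidden layers,
# each followed by a non-linear activation (ReLU).
# Layer sizes are arbitrary and chosen only for illustration.
model = nn.Sequential(
    nn.Linear(784, 256),  # input layer (e.g., a flattened 28x28 image)
    nn.ReLU(),
    nn.Linear(256, 128),  # hidden layer 1
    nn.ReLU(),
    nn.Linear(128, 64),   # hidden layer 2
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer (e.g., 10 class scores)
)

# One forward pass on a random batch of 32 inputs.
x = torch.randn(32, 784)
scores = model(x)
print(scores.shape)  # torch.Size([32, 10])
```

In practice such networks are trained end-to-end with backpropagation on labelled data; the stack of non-linear layers is what makes the model "deep".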
The real impact of deep learning in industry commenced with large-scale speech recognition around 2010, following a project conducted at Microsoft Research by Geoff Hinton and Li Deng.
In March 2013, Geoff Hinton and two of his graduate students, Alex Krizhevsky and Ilya Sutskever, were hired by Google. Their work focused both on improving existing machine learning products at Google and on helping to deal with the growing amount of data at Google. Google also purchased Hinton's company DNNresearch. In the last decade this research initiative has grown significantly, and today Google AI and Google DeepMind (which incorporates the earlier Google Brain team) are leading industrial R&D groups in the field of AI. Google has made numerous contributions to AI research, including the highly popular application development platform TensorFlow (2015) and the design of the Transformer architecture (2017), which is the basis of the now extremely popular LLM product ChatGPT. In early May 2023, Geoff Hinton, affectionately called "the godfather of AI", announced his retirement at age 75 and officially resigned from Google.
Towards the end of 2013, Yann LeCun of New York University, another Deep Learning guru, was appointed head of the newly created AI Research Lab at Facebook. Today this research group is called Meta AI. It is known for its significant contribution of developing the well-known platform PyTorch (2016) for building AI/ML applications.
In 2014, Microsoft established The Deep Learning Technology Center in its MSR division, amassing deep learning experts for application-focused activities.
Natural Language Processing
Natural Language Processing (or NLP) involves:
• Natural language understanding, that is, enabling computers to derive meaning from human or natural language input, and also
• Natural language generation.
Up to the 1980s, most NLP systems were based on complex sets of hand-written rules.
Modern NLP algorithms are based on machine learning, especially statistical machine learning.
NLP is closely related to Computational Linguistics, a field concerned with the statistical or rule-based modelling of natural language from a computational perspective.
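As a minimal illustration of the statistical, machine-learning-based approach to NLP described above (as opposed to hand-written rules), the sketch below trains a naive Bayes sentiment classifier on a bag-of-words representation. It assumes the scikit-learn library, which is not mentioned above, and the example sentences and labels are invented purely for illustration.

```python
# Statistical NLP sketch: bag-of-words features + naive Bayes classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["great product, works well", "terrible, broke in a day",
         "really happy with it", "awful quality, do not buy"]
labels = [1, 0, 1, 0]                      # 1 = positive sentiment, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)        # word-count (bag-of-words) features

clf = MultinomialNB()
clf.fit(X, labels)                         # learn word statistics for each class

print(clf.predict(vectorizer.transform(["works great, very happy"])))  # -> [1]
```

The model deduces nothing about grammar or meaning; it simply learns which words are statistically associated with each label, which is the essence of the statistical approach that displaced hand-written rules.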
A powerful application of NLP, notably developed by Google, is semantic search: improving search accuracy by understanding the searcher's intent and the context of the query in order to return more relevant results.
For some years, Geoff Hinton's deep learning techniques were employed to solve NLP problems. However, there were challenges with training recurrent neural networks on sequential (time-series) data, and this roadblock was eventually addressed by the emergence of the Transformer architecture from the Google Brain team in 2017. Today, large language models (LLMs) such as ChatGPT (GPT stands for Generative Pre-trained Transformer) from OpenAI, together with the PaLM 2 (the model behind Bard) and Lumiere products from Google, are some of the latest developments in this exciting field.
A more detailed blog on LLMs and Generative AI will be available later in this series.