
Introduction
The terms Artificial Intelligence (AI) and Cognitive Computing (CC) are often used interchangeably, but the approaches and objectives of each differ.
AI and Cognitive Computing are “based on the ability of machines to sense, reason, act and adapt based on learned experience” (Brian Krzanich, CEO, Intel).
The two topics are closely aligned and not mutually exclusive, yet each has distinctive purposes and applications, shaped by its practical, industrial and commercial appeal as well as by its particular challenges within the academic, engineering and research communities.
This topic is explored further in my book, “Understanding the Role of AI and its Future Social Impact”. Check out one of my other LinkedIn articles for further information.
For reference, my new book is available via:
- https://lnkd.in/gbk-zba (Amazon)
- https://bit.ly/34cfJVf (IGI Global)
Definitions
Before we get into too much detail, let’s start by defining each of these terms. We will then explore what they have in common, their differences and typical uses.
Artificial Intelligence (an “umbrella term”)
AI has been studied for decades and, despite threatening to disrupt everything human, remains one of the least understood subjects in computer science. We use it every day without even noticing: Google Maps uses it to provide directions, Gmail uses it to filter spam, and Spotify, Netflix and others use it to power automated customer service via automatic response systems.
As the popularity of AI grows, there remains a misunderstanding of the technical jargon that comes with it.
In simple terms, AI is an umbrella term covering a diverse array of sub-topics, perhaps best visualised as a mind map (Mills, 2016).

In the words of John McCarthy, the person who coined the term artificial intelligence, AI is “the science and engineering of making intelligent machines”.
In layman’s terms, AI describes machines that interpret, mine and learn from external data in ways that functionally imitate the cognitive processes of a human. These processes include learning from constantly changing data, reasoning to make sense of that data, and related self-correction mechanisms. Human intelligence is rooted in sensing the environment, learning from it and processing its information.
Thus, AI includes
- A simulation of human senses: sight, hearing, smell, taste and touch
- A simulation of learning and processing: deep learning, ML, etc.
- Simulations of human responses: robotics
AI applications include problem-solving, game playing, natural language processing (NLP), speech recognition, image processing, automatic programming and robotics.
Cognitive Computing
Cognitive Computing (CC) refers to the development of computer systems that mimic the human brain: a science developed to train computers to think by analysing, interpreting, reasoning and learning without constant human involvement. CC represents the third era of computing.
In the first era (the 19th century), Charles Babbage, the ‘father of the computer’, introduced the concept of a programmable machine; designed for navigational calculation, his engine tabulated polynomial functions. The second era (the mid-20th century) produced electronic programmable computers such as ENIAC (see End Notes) and ushered in the era of modern computing and programmable systems.
CC utilises deep-learning algorithms and big-data analytics to provide insights.
A cognitive system
- Understands natural language and human interactions
- Generates and evaluates evidence-based hypotheses
- Adapts and learns from user selections and responses
The “brain” of a cognitive system is a neural network, the fundamental concept behind deep learning. A neural network is a system of hardware and software modelled on the human central nervous system that estimates functions depending on a huge number of unknown or learned inputs. By the 1980s, two trends, the advent of computing and the rise of the cognitive sciences, were changing how experts and researchers unpacked the ‘black box’ of neural approaches to studying, thinking and learning.
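To make this concrete, below is a minimal sketch, in Python with NumPy, of a tiny feedforward neural network trained by gradient descent; the XOR data, layer sizes and learning rate are illustrative assumptions rather than part of any particular cognitive system.

```python
import numpy as np

# Illustrative toy data (XOR); a real system would learn from far more inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights (sizes are assumptions)
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, chosen only for this toy example
for step in range(5000):
    # Forward pass: the network estimates a function of its inputs
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust weights to reduce the squared error
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]] as the network learns
```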
Thus, CC refers to
- Understanding and simulating reasoning
- Understanding and simulating human behaviour
CC systems help humans make better decisions at work. Applications include speech recognition, sentiment analysis, face detection, risk assessment and fraud detection.
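As one illustration of the sentiment-analysis application, here is a minimal sketch using scikit-learn; the tiny training corpus and labels are invented for illustration, and a real system would need far more data.

```python
# A minimal sentiment-analysis sketch; the training texts are assumptions
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love this product", "Excellent service, very happy",
    "Terrible experience", "I hate the new update",
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Bag-of-words features feeding a logistic-regression classifier
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["very happy with the service"]))  # likely [1]
```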
The Differences
Augmentation
- AI augments human thinking to solve complex problems. It focuses on accurately reflecting reality and providing accurate results.
- CC tries to replicate how humans solve problems, whereas AI seeks to create new ways to solve problems potentially better than humans.
Mimicry
- CC focuses on mimicking human behaviour and reasoning to solve complex problems.
- AI is not intended to mimic human thoughts and processes but rather to solve problems using the best possible algorithms.
Decision-making
- CC systems are not responsible for making decisions for humans. They simply provide intelligent information that humans can use to make better decisions.
- AI systems are responsible for making decisions on their own, minimising the role of humans.
The Similarities
Technologies
- The technologies behind CC are similar to those behind AI, including ML, deep learning, NLP, neural networks etc.
- In the real world, however, applications for CC are often different from those for AI.
Industrial Use
- AI is important for service-oriented industries, such as healthcare, manufacturing and customer service.
- CC is important in analysis intensive industries, such as finance, marketing, government and healthcare.
Human decision-making
- People do not fear CC, because it simply supplements human decision-making.
- People fear that AI systems will displace human decision-making when used in conjunction with CC.
- Humans remain the middle-man, still making the final decisions. Do we need to cut out the middle-man and replace them with AI to facilitate optimal decision-making?
Observations
Calling CC a form of AI is not wrong, but it misses a fundamental distinction that is important to understand.
When we talk about AI, we are most often talking about an incredibly sophisticated algorithm that includes some form of complex decision tree. This is how autonomous vehicles work: they take a starting point and a destination as input and navigate between the two points through a mind-bogglingly long sequence of ‘if-then-else’ statements.
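As a deliberately simplified caricature of that decision-tree idea, here is a minimal rule-based sketch; the sensor fields, thresholds and actions are invented for illustration and bear no resemblance to a real autonomous-driving stack.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    # Hypothetical sensor summary; fields and units are assumptions
    obstacle_ahead_m: float   # distance to nearest obstacle, metres
    lane_clear_left: bool
    at_destination: bool

def next_action(p: Perception) -> str:
    # A tiny slice of the 'long sequence of if-then-else statements'
    if p.at_destination:
        return "stop"
    if p.obstacle_ahead_m < 10.0:
        return "change_lane_left" if p.lane_clear_left else "brake"
    return "continue"

print(next_action(Perception(obstacle_ahead_m=5.0,
                             lane_clear_left=True,
                             at_destination=False)))  # change_lane_left
```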
AI enables computers to do intelligent things. The possible applications for AI are quite extensive and are already fully embedded in our daily routines. For example, AI and fully autonomous vehicles are an inseparable part of the future: the AI is trained on countless hours of driving footage and learns variables that enable it to identify lanes, other cars and pedestrians, and then to make decisions nearly instantly.
CC, while a handy marketing term, helps solve problems by augmenting human intelligence and decision-making, not by replacing it. It includes several AI fundamentals, such as ML, neural networks, NLP, contextual awareness and sentiment analysis, to augment the problem-solving that humans constantly perform. This is why IBM defines CC as ‘systems that learn at scale, reason with purpose and interact with humans naturally’.
The main driver and common thread across AI and CC is data; without these technologies, there is not much we can do with it. Hence the renewed push in advanced analytics, giving rise to solutions that improve predictability where data silos exist and that support decision-making via visualised dashboards drawing on real-time and historical data, all made possible by improved handling of unstructured data.
Additionally, deep learning, a form of ML, accelerates progress in these areas. AI, ML and NLP, with technologies such as NoSQL, Hadoop, Elasticsearch, Kafka, Spark, Kubernetes, etc., form part of a larger cognitive system. Solutions should be capable of handling dynamic real-time and static historical data. Enterprises looking to adopt cognitive solutions should start with specific business segments that have strong business rules to guide the algorithms and large volumes of data to train the machines.
Instead of debating the utility and applicability of CC and AI and forcing competition between the respective experts and research communities, our view is that we should expend our collective energy on creating a future in which the benefits of both AI and CC are combined within a single system, operating from the same sets of data and the same real-time variables to enrich humans, society and our world.
Concluding Thoughts
To summarise, AI empowers computer systems to be smart (and perhaps smarter than humans). Conversely, CC includes individual technologies that perform specific tasks that facilitate and augment human intelligence. When the benefits of both AI and CC are combined within a single system, operating from the same sets of data and the same real-time variables, they have the potential to enrich humans, society, and our world.
In 2019, the second meeting of the International Conference on Cognitive Computing (ICCC) was held. Its aim was to combine technical aspects of CC with service computing and sensory intelligence, building on the study of traditional human senses of sight, smell, hearing and taste to develop enhanced scientific and business platforms and applications. These encompass ML, reasoning, NLP, speech and vision, human-computer interaction, dialogue and narrative generation.
Working with and supporting organisations like ICCC, future researchers should continue to explore how best to leverage the combination of CC and AI with other emerging technologies, such as blockchain, bioinformatics, internet of things, big data, cloud computing and 5G digital cellular networks and wireless communications.
Although countries such as China may be ‘leading the race’ in many areas related to AI, combining emerging technologies with CC and AI, if done ethically with social good as the focus, could lead to many societal benefits that empower individuals, communities, institutions, businesses and governments throughout the world while driving competition, research and development.
It is undeniable that Covid-19 has transformed the lives of humans everywhere, forcing us to adapt quickly to its ‘new norms’.
On a positive note, we have all learnt valuable life lessons and become more resilient.
Let’s work together to create a digitally-driven civil society underpinned by socially minded technology.
End Notes
- ENIAC was the first electronic general-purpose computer. It was “Turing complete”, digital and able to solve “a large class of numerical problems” through reprogramming.
References
Mills, M. (2016). Artificial Intelligence in Law: The State of Play 2016 (Part 1). Neota Logic (neotalogic.com).