“Smarter” Digital Banking

Introduction

In every town across the UK, our high street is changing.

Many “bricks and mortar” businesses — including our high street banks — are closing. The COVID-19 pandemic has devastated small businesses: walk down any high street and you will see shops and stores that are empty, boarded up or imminently winding up, with “closing down” signs plastered across their storefronts. The retreat extends to the high street banks themselves, who have declined to sign up to a pledge to ensure there is at least one branch in every town.

It’s not a phenomenon limited to the UK either; all over the world, traditional banks are gradually disappearing, replaced by more efficient digital services.

Post-COVID-19 recovery has been hampered by a financial crisis that is putting enormous pressure on business owners, forcing them to migrate, at pace, from their previous business models to digital models built on e-commerce technology — often underpinned by Artificial Intelligence (AI), Machine Learning (ML) algorithms and other emerging technologies such as Blockchain and Augmented/Virtual Reality (AR/VR).

Automation technologies such as Robotic Process Automation (RPA) are on the rise and, at the same time, consumers are themselves busy adapting their lifestyles, which are fast becoming a hybrid of “work from home”, “work from office” and “work from anywhere” options.

Traditional Banking is dying

Traditionally, customers chose their bank partly based on location; people didn’t join a bank that didn’t have a local branch. Plenty of services had to be conducted in-branch, from paying in a cheque to applying for a loan, and everything in between.

However, as technology has evolved, more customers have taken advantage of the online services that banks offer. In turn, the number of customers visiting branches has shrunk; some branches report seeing fewer than 10 customers per day.


One-third of UK bank branches closed in the past five years, led by RBS, which cut its network by 56% between January 2015 and August 2019. These large-scale closures are driven by the need to slash some of the massive costs associated with operating physical infrastructure — freeing up funds to invest in building out digital tools. The branch closures are going to continue — TSB is closing 80 branches this year, Lloyds is closing 56, and HSBC is closing 27 — and will be a primary factor behind the decline in branch penetration. Some banks — like RBS and Barclays — have turned to non-traditional branch formats, such as counters in grocery stores or post offices, to maintain a wide geographic footprint without investing in added infrastructure.

While it’s true that a minority of customers found this to be an inconvenience, for the majority of people it simply wasn’t an issue.

Banking, as we know it, is dying. Banks, as we know them, will either vanish or mutate. 

The truth is that banking habits have simply changed, with the majority of transactions and services now available online. Many millennials have never visited a bank, nor are they likely to need to do so in the near future. Perhaps it’s time to accept that traditional banking isn’t a significant part of the financial future?

Looking Ahead: Customer Centric Digital Ecosystems

A majority of global banking executives don’t see a future for the branch-based model.

According to survey data collected by The Economist Intelligence Unit (EIU) on behalf of Temenos, and reported by eMarketer (July 2021), 65% of banking executives worldwide expect the branch-based model to be “dead” within the next five years. Temenos, a Swiss-based banking software provider, is meanwhile rolling out an explainable AI (XAI) product, “Temenos Virtual COO”, globally; it aims to show how AI can streamline a bank’s back-office functions, aggregating data to give overviews of financial health while cutting administrative workloads.

There are many other examples, including Santander Consumer Bank, which has been piloting Robotic Process Automation (RPA) in the Nordic region since 2019 to process account activity. The Spanish-based banking giant reported in July 2020 that the pilot saved 30,000 man-hours and more than $2 million in 2019. The pilot used 150 intelligent bots, supplied by Automation Anywhere, to automate tasks such as loan processing.

The Bigger Picture


Insider Intelligence carried out an insightful study into UK branch penetration forecasts. “Penetration” is defined as bank account holders aged 18+ who visit a bank, credit union, or brokerage branch at least once per year.

Here are some of the key findings (source: eMarketer, July 2021):

  • Branch penetration in the country will decline from 65.3% in 2019 to between 60% and 62% in 2024. Temporary branch closures during the pandemic accelerated a trend of UK banks scaling back their physical footprints to rein in operating costs. Avoiding branches during lockdown is the most common reason (40%) why consumers are using digital banking more frequently, per Virgin Money UK.
  • Digital banking penetration will climb to between 75.8% and 78% by 2024, up from 68.7% in 2019. Banks will bolster their digital channels by prioritising security measures and ensuring their platforms can accommodate heavier volume — an issue that was exposed during the pandemic, when overwhelming volume led to outages at some major banks. They will also provide educational resources to streamline the transition for new digital users, making the onboarding experience and navigation of digital platforms frictionless to encourage repeat usage.
  • Smartphone banking penetration will spike from 37.1% in 2019 to reach between 46% and 59% in 2024. The immediate impacts of the pandemic and lockdown measures will drive up smartphone banking penetration as new users turn to these channels out of convenience. And the growing presence of neobanks — digital-only banks that don’t operate branches — is also driving up penetration, while pressuring incumbents to improve their mobile banking products. They have gained popularity through their competitive offerings and slick digital user experience — their growth has been fuelled by branch closures, Open Banking regulations, and consumer desire for convenient banking options. Incumbents will thrive with a two-pronged approach to their mobile strategy. First, they’ll add innovative features to reach parity with upstarts and attract younger consumers, and second, they’ll simplify their interface to engage hesitant digital users, including older consumers.
  • ATM penetration will drop sharply from 84.7% in 2019 to land between 74.2% and 76.2% in 2024. There has been a major years-long shift away from cash in the UK, and as digital channel penetration increases and functions like depositing cheques and transferring funds between accounts can be done remotely, consumers will be even less reliant on ATMs. Banks have begun leveraging their ATMs to bridge the gap between in-branch and digital capabilities, while still serving those customers who need cash access. Enhancing ATMs with branch-like capabilities such as video calling with bank staff can allow banks to cut costs, streamline processes, and accommodate customers who prefer in-branch banking or have not yet adapted to digital banking.
  • Call centre penetration will tick up to 34% in 2020 but will return to pre-pandemic levels of around 26% by 2024. Despite advancements in digital channels, consumer preference for human assistance has sustained the need for call centres, which served as the front line of communication between customers and banks during the onset of the coronavirus pandemic. This channel will retract to pre-pandemic levels as the COVID-19 crisis abates, and banks are likely to streamline in two ways: (i) taking a targeted approach to call centres, with dedicated lines for different inquiry categories (such as mortgages) or demographics (such as the elderly); and (ii) expanding customer service onto additional channels such as iMessage, diverting volume to cut costs and make customer service conversations more efficient.

Closing Thoughts

Even with a partial lifting of COVID-19 lockdown measures, the coronavirus continues to limit movement of people — and this has hit the UK high street hard. From retailers with a high dependency on physical stores to restaurants and coffee shops without delivery facilities, the obstacles have proven insurmountable for some. For others, the longer-term question is, “Will the UK high street be able to recover when (and if) normalcy returns?”

One thing is certain: high street banks are dying, rapidly being replaced by digital platforms that will typically be enhanced using AI and emerging technology.

As banks look to enhance their digital channels and drive adoption, they will focus on the following three areas:

  1. Banks can offer educational resources to first-time digital banking users to encourage adoption of digital platforms and increase awareness of the available tools. The pandemic has created an opportunity for new users to discover digital banking platforms.
  2. Investments in AI will allow banks to deepen engagement through personalisation. Through AI-powered features like real-time analysis of spending habits, banks can deliver insights that help customers manage their specific financial situations. AI solutions to improve the customer experience will be a key opportunity. Banks paused some of their investments to curb costs amid the economic downturn, especially since AI programs don’t generate an immediate return on investment. But banks will resume their investments in AI once the worst of the pandemic passes.
  3. Designing platforms with an emphasis on security could be integral in encouraging adoption. As new users come onto digital platforms, it’s imperative for banks to ensure the security of their customers’ most sensitive information and communicate that priority to customers — especially given the size and frequency of data breaches globally. This could cement loyalty beyond the initial adoption period, as a platform that’s not perceived as secure could lead to customer distrust or turnover.
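The AI-powered personalisation described in point 2 — real-time analysis of spending habits — can be sketched very simply. The example below is a hypothetical, minimal illustration (the function name, categories and budget figures are invented for demonstration, not any bank's actual system):

```python
from collections import defaultdict

def spending_insights(transactions, budget_by_category):
    """Summarise spend per category and flag any category over budget.

    `transactions` is a list of (category, amount) pairs;
    `budget_by_category` maps a category name to a monthly budget.
    Both inputs are purely illustrative.
    """
    totals = defaultdict(float)
    for category, amount in transactions:
        totals[category] += amount

    alerts = []
    for category, total in totals.items():
        budget = budget_by_category.get(category)
        if budget is not None and total > budget:
            alerts.append(f"{category}: spent {total:.2f} of {budget:.2f} budget")
    return dict(totals), alerts

# Toy data -- not real customer transactions.
txns = [("groceries", 62.10), ("coffee", 3.20), ("coffee", 2.80), ("travel", 45.00)]
budgets = {"coffee": 5.00, "groceries": 250.00}
totals, alerts = spending_insights(txns, budgets)
print(totals)  # per-category spend
print(alerts)  # only "coffee" exceeds its budget here
```

A production system would of course learn categories and budgets from data rather than hard-code them, but the core idea — aggregate, compare, surface an insight — is the same.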

Additionally, the combination of digital transformation and emerging technologies will give rise to:

  • Democratisation: giving users easier access to technology without any formal training, providing people with the expertise they seek without significant investment.
  • Multi-experience: providing the customer with an immersive experience using Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR) — a new trend offering a multichannel human-machine interface. In some cases, this might extend to “human augmentation”, which involves changing people’s inherent physical capabilities by implanting technology.
  • Transparency and Traceability: as technology advances, customers are increasingly aware of their personal information and how valuable it is. This is the domain of topics such as #responsibleai and #ethicalai, focusing on six areas: ethics, integrity, openness, accountability, competence and consistency.

Transparency and Traceability will typically focus on six areas: Ethics, Integrity, Openness, Accountability, Competence and Consistency

  • Autonomous devices/solutions/services: will lead to the automation of tasks previously performed by people, increasing productivity. In some cases, “mundane” and “administrative” tasks done by humans will become obsolete (or at the very least, offer an option of “human versus machine”). Typically, this will be underpinned by collaborative and augmented intelligence using AI and Machine Learning, much of which already exists in robots, drones, vehicles, and home appliances.
  • AI security: as the volume and speed of data being generated continues to grow, the deployment of AI and emerging technologies must be predictable and safe. This extends to data privacy, data integrity and the rights of the consumer.

While much of this article has outlined changes to traditional banking, it is the rise of AI and emerging technologies that will ultimately make noticeable changes to our lifestyles and day-to-day choices.

Whether we are using AI and emerging technologies to automatically streamline and enhance businesses processes, develop a “digital twin” or deploy smart solutions across a business, community or country, we all have a responsibility to be future ready.

After all, a future which is built on “AI for Social Good” and “AI for All” is one that we all should be invested in — for everyone’s sake.

Tackling AI bias … using AI!

www.nytimes.com/2021/06/30/technology/artificial-intelligence-bias.html

Great article in NYTimes.com showcasing a different approach to tackling the issues of #bias in #artificialintelligence and #machinelearningalgorithms.

Parity is one of many organisations, including more than a dozen start-ups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from A.I. systems.

While other start-ups, like Fiddler and Weights & Biases, offer tools for monitoring A.I. services and identifying potentially biased behavior, Parity’s technology aims to analyse the data, technologies and methods a business uses to build its services and then pinpoint areas of risk and suggest changes.
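Parity’s and Fiddler’s actual methods are proprietary, but the kind of check such tools run can be illustrated with one standard fairness metric, the demographic parity difference: the gap in favourable-outcome rates between protected groups. The sketch below uses invented data and a hypothetical function name, purely for illustration:

```python
def demographic_parity_difference(outcomes, groups):
    """Gap in positive-outcome rates between groups.

    `outcomes` holds 0/1 model decisions (1 = favourable, e.g. loan
    approved); `groups` gives the protected-group label per decision.
    A value near 0 suggests parity; a large gap flags potential bias.
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    values = sorted(rates.values())
    return values[-1] - values[0], rates

# Toy example: group "b" is approved far less often than group "a".
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
gap, rates = demographic_parity_difference(outcomes, groups)
print(rates)  # per-group approval rates
print(gap)    # ~0.6 -- a large, suspicious gap
```

Real auditing toolkits (e.g. the open-source fairlearn library) compute this and many other metrics, and — as the article notes — the interpretation still needs human judgement.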

Rumman Chowdhury, PhD

Liz O’Sullivan

The Algorithmic Justice League

The tool uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of A.I.

Tools that can identify bias in A.I. are imperfect, just as A.I. is imperfect.

However, the goal is to use the tools to create a wider dialogue among people with a broad range of views … and bring more diversity to solving the problem of A.I. bias.

#aibias #diversityandinclusion #ethicalai

AI and Future Society

Introduction

Human advancement has come hand-in-hand with advances in technology. As computer science, underpinned by emerging technologies including cloud, mobile and the Internet of Things (IoT), progresses exponentially, Artificial Intelligence (AI) increasingly stands out as the transformational technology of our age.

Few will debate that we are now standing at the threshold of a new age that will usher society through yet another pivotal transformation. Human society has reached a crossroads that will confront it with the task of choosing which “Society 5.0” path to follow.

Some doors to the future may lead to a society in which robots and AI create a dystopian disaster for humanity. Others may lead to a society in which all personal information is under centralised control by the government and state.

However, we should remain focused on opening a door (or doors) that leads toward a better, more inclusive world for humanity, a world in which each and every individual may find true happiness, safety and opportunity. In this future world, technology and data will be used to balance economic development with the solution of social challenges; people will finally be liberated from all restrictions that the information society proved unable to remove; and we will witness the formation of a human-centred society in which everyone is able to shine and flourish — reflected by #diversityequityandinclusion.

Society 5.0

Society 5.0 is the vision for the next stage of human society, a “Super Smart Society”, which is an evolution of previous stages of “a hunter-gatherer” society (Society 1.0), “agrarian” society (Society 2.0), “industrial” society (Society 3.0), and “information” society (Society 4.0).

No alt text provided for this image

It was initially proposed by Keidanren (translated as “Japan Business Federation”) and incorporated into the 5th Science and Technology Basic Plan in Japan as a concept for the future society to which we should aspire.

Society 5.0 envisions a future where people are liberated from constraints that previous societal evolutions could not overcome, resulting in an age where everyone has the freedom to pursue diverse lifestyles and values.

However, Society 5.0 requires active collaborations between all people across governments, businesses, industry competitors, universities, research institutions, local communities, and a vast array of other stakeholders, all uniting in a common vision of open innovation.

With this in mind, Society 5.0 is founded on the following ideals, underpinned by AI and Emerging Technologies (such as blockchain, deep learning, and more):

  • People will be liberated from a narrow focus on efficiency. The emphasis will instead be placed on satisfying individual needs, solving problems and creating value, for the betterment of society.
  • People will be able to live, learn and work, free from suppressive influences on individuality, such as discrimination by gender, race, nationality, etc. and alienation because of their values and ways of thinking.
  • People will be liberated from the disparity caused by the concentration of wealth and information, and anyone will be able to get opportunities to play a part any time, anywhere.
  • People will be liberated from anxiety about terrorism, disasters and cyberattacks, and will live with security, with strengthened safety nets for unemployment and poverty. This might be supported by concepts such as universal basic income (UBI) built using blockchain technology. Check out this excellent TED Talk by Hilde Latour
  • People will be liberated from resources and environmental constraints, and able to live sustainable lives in any region.

All of the above are underpinned by the Sustainable Development Goals (SDGs), which are part of the 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015.

There are NINE YEARS TO GO to achieve the SDGs: we have much more work to do!


In short, Society 5.0 promises a world in which anyone can create value any time, anywhere, with security and in harmony with nature.

With AI and Emerging Technologies, we have an opportunity to create this future world where all humans have agency; enabling a society that is #peoplecentric.

Naturally, technology alone will not suffice.

We will need to come together boldly to re-imagine our politics, laws, communications, borders, ownership of assets and resources, and wealth distribution based on new approaches to international relations (through some combination of multilateralism, bilateralism, and unilateralism).

Only through a new world-wide ecosystem underpinned by economic, environmental and social considerations can we bring about SDGs and restore our planet. Without this, we will continue to witness over-exploitation of natural resources (allowed by our present day economies, governance systems and corruption) threatening the well-being of future generations.

It won’t be quick or easy, and it will take deep changes. Perhaps the most important thing is that everyone has a role to play, i.e. all people across all parts of society, not just Big Tech or the elite, ruling and governing classes.

Closing Thoughts

When I reflect on the role and importance of #AI (and #EmergingTech) and its impact on society, I realise that the vision people hold of the world to come is but a reflection, with predictable wishful distortions, of the world in which they live. 

Today, we each live in a labyrinth of attitudes fused with #cancelculture #fakenews #selfinterest #selfgratification.

And these attitudes, furthermore, though the person is usually unaware of it (and arguably, unaware of so much!), are historical and public attitudes. They do not relate to the present any more than they relate to the person.

Anything in the world today that we want to change, totally and forever, should be done with one eye on the past … a willingness to learn from and reflect on the true meaning of our history. 

We human beings now have the power of universal extinction, a terrifying ability to exterminate ourselves; regardless of how far we have come, or believe we have come, this seems to be the entire sum of our achievement.

Everything now, we must assume, is in our hands; we have no right to assume otherwise. If we genuinely want a future which is based on #diversityequityandinclusion then we must let go of the countless atrocities, acts of violence and divisive philosophies that are rooted in our collective past. 

Only then can we truly change the world — despite our history — and have a chance of creating #newnormal that provides the foundation for #tomorrowspeople #fairness #accountability #transparency #equity #equality.

Otherwise, we will fail to realise, perhaps our last chance, to #reimaginerecreaterestore a world endowed with #artificialintelligence #augmentedintelligence #automationanywhere #tech4good #aiforsocialgood that is above all #peoplecentric without the social, economic and political ills of today’s reality.

About the Author #aboutme

Over the past 25 years, Salim has built a career in consulting, working on both the client and supplier side as an interim CIO/CTO and a Business Change / Transformation Consultant.

Salim has engaged in, and led, digital and technology transformations and programmes involving rescue & recovery (“turnaround”), process optimisation & improvement and organisational change — globally across the UK, Central Europe, Nordics, Turkey, UAE, US, Asia and Australia.

Salim is an Oxford University alumnus and an author in the field of Artificial Intelligence. Key interests include the role of AI in the betterment of people and society.

Check out Salim’s latest book:

“Understanding the Role of Artificial Intelligence and Its Future Social Impact”

(1) https://lnkd.in/gbk-zba (Amazon)

(2) https://bit.ly/34cfJVf (IGI Global)

This work has been endorsed by academia and industry thought leaders (https://lnkd.in/dWHBXBE).

‘Humanistic’ AI-powered Business

Prelude – The ‘S’ Curve of Business

Every company — regardless of business type or business model (e.g. product, SaaS, service, media) — will go through a highly predictable cycle of growth and maturity called the “S Curve of Business”, summarised well by the infographic below (ref: RocketSource).

The “S” Curve of Business. Courtesy of RocketSource.

As we’ve seen time and again, impressive growth in the short term may be achievable for many companies, but sustaining that momentum without facing moments where growth stalls simply doesn’t happen. The market fluctuates. Competitors make adjustments to stay relevant and spearhead new campaigns targeting and drawing on your customer base. Your ground-breaking product is no longer as impressive and unique as it used to be.

Put another way, from the point of view of transformation and business change,

  • A project typically starts with high energy and limited resources. Think Tuckman’s “forming–storming–norming–performing” model. 
  • Every investment for growth reduces profitability until anticipated benefits are realised.
  • Lots of projects and companies fail because they can’t afford to fund the dip. This often includes sustaining backfill resources to compensate for people seconded from the business to help support change, typically by hiring contractors. Cross-training employees or having an employee referral platform in place are also options; regardless of which model is chosen, there is a cost.
  • Eventually the organisation reaches peak performance and then requires the next level of investment to go to the level beyond.
  • Add another project … and add another ‘S’ Curve. And the cycle continues. For some companies, this may feel like a “boom and bust” cycle, with alternating periods of high and low profit and loss.
  • According to InspiredCEOs, 96.4% of businesses NEVER get past £1m in revenue and 10 staff.
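The shape behind all of this can be sketched with a logistic function: slow early growth, rapid acceleration, then a plateau where the next investment (and the next curve) is needed. The parameters below are illustrative, not RocketSource’s actual model:

```python
import math

def s_curve(t, capacity=100.0, growth=1.0, midpoint=5.0):
    """Logistic function: slow start, rapid growth, then a plateau."""
    return capacity / (1.0 + math.exp(-growth * (t - midpoint)))

# Growth per period peaks mid-curve and then stalls -- the "dip"
# between one S-curve and the next is where fresh investment is needed.
values = [s_curve(t) for t in range(11)]
gains = [round(b - a, 1) for a, b in zip(values, values[1:])]
print(gains)  # gains rise, peak around the midpoint, then shrink
```

The per-period gains trace exactly the pattern described above: momentum builds, peaks, and inevitably stalls unless a new curve is started.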

So why spend so much of the beginning of this article talking about the above?

In short, embracing AI will require navigating the ‘S’ Curve while sustaining funding, executive sponsorship and Board-level support. Each of these presents its own hurdles, and together they will distract most companies from topics relating to fairness, accountability, transparency and ethics, and perhaps other social issues such as availability, affordability, equity and bias.

Taken together, all of these ultimately affect the overall human experience of both

  1. employees, engaging in AI projects to create products, services and solutions, and
  2. customers, expecting a positive “human” experience when using AI-powered products & services.

The biggest mistake a business can make is jumping straight into a major change initiative that is akin to forcing “a square peg into a round hole”, which can quickly lead to significant failure. Take note of this Forbes article, which catalogues lessons learnt from companies that failed at Digital Transformation.

Introduction

Artificial Intelligence (AI) — along with one of its most popular sub-components, Machine Learning (ML) — is a key technology that is radically transforming our world. Let’s pause to reflect on the differences between AI and ML (picture courtesy of Ronald Van Loon).

Machine Learning is NOT AI

We also experience the convenience offered by AI and ML in our daily lives, for example in the form of intelligent search engines, translation algorithms, navigation systems, on-line e-commerce stores, chatbots that automatically respond to questions and complaints, and algorithms that make recommendations to us or even develop tailor-made products that meet our personal needs. AI (and ML) can also be combined with robotics or unmanned systems, for example in the manufacturing industry. AI owes its huge societal and economic potential to the fact that this key technology can be applied in almost all domains and sectors.
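One way to see the AI/ML distinction mentioned above: in classic rule-based AI a human writes the decision logic, while in machine learning the logic is inferred from labelled data. The toy spam filter below is a hypothetical sketch of that contrast (invented data, not a production classifier):

```python
# Hand-coded "AI": a rule written by a human.
def rule_based_spam(message):
    return "win a prize" in message.lower()

# Machine learning: the rule is *learned* from labelled examples.
# A tiny word-vote scorer, trained from scratch on toy data.
def train_word_scores(examples):
    scores = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            spam, ham = scores.get(word, (0, 0))
            scores[word] = (spam + is_spam, ham + (1 - is_spam))
    return scores

def learned_spam(message, scores):
    spam_votes = ham_votes = 0
    for word in message.lower().split():
        spam, ham = scores.get(word, (0, 0))
        spam_votes += spam
        ham_votes += ham
    return spam_votes > ham_votes

examples = [("win a prize now", 1), ("claim your prize", 1),
            ("meeting at noon", 0), ("lunch at noon", 0)]
scores = train_word_scores(examples)
print(learned_spam("prize now", scores))       # True -- learned from data
print(learned_spam("see you at noon", scores)) # False
```

The hand-written rule never improves; the learned one gets better as more labelled examples arrive — which is precisely why ML is a sub-component of AI rather than a synonym for it.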

From start-ups and scale-ups to small and medium-sized enterprises (SMEs) and large corporations, AI initiatives are crucial to sustaining innovative and competitive power.

That said, businesses should consider taking a “humanistic” approach to AI adoption by

  • working together in public-private partnerships to realise societal and economic opportunities of AI.
  • focusing on “Explainable” and “Responsible” AI — to uphold fairness, transparency and equity.
  • focusing on equitable hiring which supports diversity and embraces remote working.
  • establishing AI applications that serve the interests of people and society — whilst avoiding #codedbias and other situations that lead to division.
  • opting for an inclusive approach that puts people first whilst we strive for reliable AI.

But this is not a straightforward task for any business. There are some key areas that every company should consider when embarking on their AI/ML journey, namely:

  1. Investing in an AI-enabled Operating Model
  2. Building (and sustaining) an “AI” Talent Pool
  3. Establishing collaboration structures to support “AI” Product & Service implementation.

In short, there are no shortcuts. Additionally, it is not just about technology. You also need to develop strategies that relate to how your organisation will manage data (to support analytics), people (to encourage collaboration) and processes (to support governance).

Without introducing strategies that address the areas listed above, your company may end up trying to fit “a square peg in a round hole” — which will ultimately affect your bottom line and allow your competition to “leap frog” ahead of you.

AI-enabled Operating Model

For any company, the “Operating Model” is HOW a business runs itself. It is how it uses its people, processes, data and technology to execute its “Mission and Vision” and bring value to its customers which may extend to its employees and workforce.

In contrast, the “Business Model” is a strategy that a company sets to drive how it will create and grow value, and WHAT it shall do to achieve it. It is a model used in strategy and planning. It fails when it doesn’t achieve its projected targets such as a particular revenue stream or profitability.

In today’s world of big data, large corporates have access to a wealth of information about their customers, as well as about how they do business and deliver value to their customers. By using AI and ML to build technology that can model and predict these customer and user journeys, or logistical challenges, companies can really leverage their data to find new sources of value.

Building new technology that streamlines a company’s logistics and powers its Operating Model is valuable. When a company has AI assets that can be licensed and sold to other businesses whilst also serving its own challenges, the value (i.e. revenue and profit) of the company has the potential to increase.

Types of Operating Models

When embarking on your AI journey, there are essentially THREE types of Operating Models that should be considered.

Each type needs to be underpinned with a data capability that most likely will need to be established in your organisation; this is very different from building a digital capability. Many organisations are mistakenly shoehorning AI initiatives into digital transformation — which, in my view, is short-sighted given they are very distinct topics.

1) Centralised model

Centralised AI Operating Model

This model will underpin your analytics strategy. The AI and analytics talent is unified and located in one centralised department that behaves like an internal consulting firm for the rest of the organisation on topics related to AI and ML.

2) Decentralised model

De-centralised AI Operating Model

This model embeds the analytics team in individual lines of business (LoBs), working very closely with Subject Matter Experts (SMEs) or potentially acting as SMEs themselves. There might be a central IT (or R&D) department working with them, or the IT team might also sit inside the business unit.

3) Hub-and-Spoke model

Hub-and-Spoke AI Operating Model

This model aims to provide the best of both worlds, looking for a way to get the benefits of having a centralised team, while keeping analytics talent embedded within the business. It has one central team working in coordination with the AI and analytics talent that is purposely “scattered” throughout and across the organisation.

AI initiatives need more than a “Competency Centre” (CC) or “Centre of Excellence” (CoE), both of which require new skills, organisational design structures, and executive support. Adopting AI in a business starts with choosing the operating model coupled with clear business objectives, budgets and real-world use cases — and, dare I say it, upholding “humanistic” values and priorities as part of the operating model so that it is stitched into the fabric of new ways of working.

Whether your operating model is centred within a line of business (LoB) or centred at the heart of the organisation, the people charged with its success must ensure AI and analytics-powered products and services are created with “humanistic” values embedded throughout the design and development process rather than bolted on as an after-thought.

The “Humanistic” AI Team

Globally, companies are adopting artificial intelligence to solve their business problems and become AI-driven enterprises. However, the route is full of obstacles, such as:

  • No “pipeline” of business-driven use cases to feed pilots or prototypes
  • Lack of data science resources
  • Losing valuable time with manual processes
  • Inability to trust artificial intelligence processes

Whether an AI team is an outgrowth of an existing analytics team or an entirely new group, there are many different activities that it can and should pursue. Some of these — like developing AI models and systems, working closely with vendors, and building a technical infrastructure — can be done in collaboration with an IT organisation; others will involve working closely with business leaders.

Building “Humanistic” AI Talent

Demand for skilled AI professionals is a major priority for countries such as the US, China, India, Israel, Germany, Switzerland, Canada, France, Spain and Singapore, in that order.

The US, at 100% penetration of artificial intelligence skills, is the benchmark for other countries; China follows with about 92% of AI talent. India, Israel and Germany come third, fourth and fifth, with 84%, 54% and 45% of AI talent found in these countries.

The US stands as the clear winner in terms of skilled AI professionals across various categories including research, followed closely by China in second place and the UK in third. Australia and Canada are the other two countries with the highest penetration of AI talent, at numbers four and five respectively.

One of the most critical factors for a business looking to acquire and develop an AI Competency Centre (CC) is recruiting, attracting, or building talent. It is no secret that leading-edge AI engineers and data scientists are difficult to hire — even in Silicon Valley. Most organisations will require a few people with the ability to develop and implement AI algorithms—say, a Ph.D. in AI or computer science. But many of the business-focused tasks of a CC can be carried out by graduates and MBA-level analysts who are familiar with AI capabilities and who can use automated machine learning tools. Alternatively, to get started, businesses may consider hiring consultants or vendors to engage in the early stages of AI projects. In this case, these resources should be combined with internal employees to enable a level of “shadowing” and “on the job training”.

Companies should start now in building and nurturing AI talent. This should not be limited to quantitatively-oriented employees but rather extended to all employees who express interest in “career pivot” opportunities; check out this link to the excellent work that Sudha Jamthe from Stanford Continuing Studies and BusinessSchoolofAI is leading.

Additionally, more and more companies are partnering with universities to provision employees with access to modular “block programmes” in AI and Data Science disciplines leading to certification as part of Continuing Professional Development (CPD) initiatives. Read more about this in one of my recent postings available here.

Organisational Re-design .. Enabling Better Collaboration

Since AI talent is scarce, it is difficult to develop critical mass if it is scattered around the organisation. Regardless of which operating model our business settles on, organisational structures should incentivise (and reward) cross-functional collaboration that is baked into roles and responsibilities.

To avoid excessive bureaucracy, a centralised group should embed or assign its staff — at least some of them — to business units or functions where AI is expected to be common. That way the centre staff can become familiar with the unit’s business issues and problems, and develop relationships with key executives. Rotational programmes across business units can improve knowledge growth and transfer. As AI starts to become pervasive, these embedded staff may move their primary organisational reporting line to business units or functions.

As with many technologies today, AI projects are best conducted in an “agile” fashion, with many short-term deliverables and frequent meetings with stakeholders. If there needs to be substantial system development or integration, more traditional project management approaches may come into play.

At a minimum, the following key roles should be included in every AI Team :-

AI Engineer Roles

  • Focus on engineering and problem-solving skills (not “research” driven);
  • Create technology and data architectures that scale;
  • Create AI applications, products and services;
  • Practice “Humanistic” values (i.e. ethics, bias, equity, etc.) when designing, developing and testing algorithms, applications, products and services.

AI Data Governance Roles

  • Adapt existing Governance mechanisms, policies and processes to be “AI” friendly;
  • Engage in the selection of data sources used to address an AI question or problem;
  • Ensure data sources are representative and uphold diversity and inclusivity principles;
  • Monitor how data is used in algorithms in “humanistic” ways;
  • Assess data quality (and subsequently oversee data cleansing initiatives);
  • Review and approve data usage by all technical / development / operational teams.

AI Translator Roles (i.e. “business partners”)

  • Have an excellent understanding of the organisation, its strategy and customers;
  • Sit within a business function, e.g. Finance, IT, HR, Legal, External Relations, acting as internal liaisons or translators linking functions and business units (at all levels, including the “C-suite”);
  • Help a business area to deliver its product and/or service strategy by bringing together the right expertise;
  • Scan AI solutions in the market and those used by competitors;
  • Question and challenge others to ensure “humanistic” values are upheld when designing, developing and deploying AI applications, products and services.

Business Leader Roles

  • Work closely with AI Ethicists
  • Ensure the business’ values, principles, codes and culture are aligned with an ethically and socially responsible business operation;
  • Help protect the brand and reputation of the business by working to prevent potentially unethical designs and bias in products, and consumer backlash resulting from the design and application;
  • Manage any negative legal consequence and liability as a result of biased algorithms — in a “humanistic” way by taking proactive steps to learn lessons and adjust any internal policies and governance frameworks;
  • Prevent any negative financial impact on the business due to one of the above and a resulting decline in market share. 

AI Ethicist Roles

  • Work closely with all of the above roles
  • Advocate against algorithmic bias and unethical behaviours that impact AI products & services;
  • Protect against unintended consequences of AI (via policies & processes);
  • Advise on the impact on consumers of AI applications, products and services;
  • Establish AI frameworks & policies that uphold company standards and codes of ethics.

Ultimately, all members of the AI Team should think and act as “stewards” of AI algorithms, applications, products and services.

Concluding Thoughts

Most artificial intelligence focuses on mimicking or exceeding the cognitive intelligence of humans, often lacking any notion of emotional intelligence or “humanistic” values.

Typically, ethical and legal dilemmas arise because of the dominance of digital platforms, combined with their ownership by a few big companies. These platforms have assumed essential social and community functions and now set norms in society. From Trump’s screeds on Twitter, to Greta Thunberg’s warnings on Facebook, to YouTube performances by the poet Amanda Gorman, social media is the primary platform for information, free speech and artistic expression.

If your business wants to embrace AI and embark on its own journey towards intelligent systems and automation, you need to create a vision for AI in your company. This will inevitably require new business models and strategies. Otherwise it may be sub-optimal and impact your business, your culture, profitability, and future growth.

Starting your AI journey requires significant commitment – embarking on a change programme, disrupting “ways of working”, migrating to new operating models all while developing skills and initiatives that build long term value.

To build a great house you start with an architect. You imagine what’s possible and then you make plans. You don’t dig a trench and pour concrete into a hole and then think about where the walls should go, what kind of windows you like, what kind of roof you like.

To do it properly, you imagine it. Design it. Calculate the materials. Map out the plans. Hire the best people for the project etc etc. There is an order to success and much of it can be expressed in numbers that help you work it out.

It’s the same with your company when initiating your own personal AI journey.

Start with the end in mind (ref: Stephen Covey) and plan backwards to build forwards.

Key Take-Aways

  • Be specific about what kind of AI capability you want to build or harness.
  • Be clear about what you want to achieve and when.
  • Take your time. Hurried implementation will result in pitfalls that may ultimately cost you more time, money and effort.
  • Companies don’t have to do this on their own – learn to delegate (internally AND externally), and get people in to do stuff you are not good at.
  • Focus on moving beyond the “here and now”.

Above all .. enjoy your personal journey building a “humanistic” AI-powered business.

Reach out to me if you would like some advice, guidance and/or support to get started.

About the Author #aboutme

Over the past 25 years, Salim has built a career in consulting, working both client and supplier side as an interim CIO/CTO and a Business Change / Transformation Consultant.

Salim has engaged in, and led, digital and technology transformations and programmes involving rescue & recovery (“turnaround”), process optimisation & improvement and organisational change — globally across the UK, Central Europe, Nordics, Turkey, UAE, US, Asia and Australia.

Salim is an Oxford University alumnus and an author in the field of Artificial Intelligence. Key interests include the role of AI for the betterment of people and society.

Quantum Computing and AI

Introduction

Quantum computing is the area of study focused on developing computer technology based on the principles of “quantum mechanics”, which explains the nature and behaviour of energy and matter on the quantum (atomic and sub-atomic) scale.

Wikipedia describes “quantum computing” as

“use of quantum-mechanical phenomena such as superposition and entanglement to perform computation”.

While a standard computer handles data in an exclusive binary state of 0s and 1s, quantum computers use quantum bits or “qubits”, which can exist in a superposition of 0 and 1. And if you “entangle” the qubits, you can solve problems that classical computers cannot. A future quantum computer could, for example, break the public-key security systems in common use today, such as RSA encryption, in a matter of hours. Even the best supercomputer today would take millions of years to do the same job.

Entanglement is a property of qubits that makes them interdependent, so that a change in the state of one qubit results in an immediate change in the state of the other. Superposition means that a qubit can hold both the 0 and 1 states at the same time during computation.
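Both properties can be seen in a few lines of linear algebra. The sketch below (plain NumPy, for illustration only; it simulates the maths classically rather than running on quantum hardware) builds a superposition with a Hadamard gate and an entangled Bell pair with a CNOT gate:

```python
import numpy as np

# Single-qubit states as 2-vectors of complex amplitudes: |0> = [1,0], |1> = [0,1]
zero = np.array([1, 0], dtype=complex)

# Superposition: the Hadamard gate puts a qubit into an equal mix of |0> and |1>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ zero
print(np.abs(plus) ** 2)  # [0.5 0.5] -> 50/50 chance of measuring 0 or 1

# Entanglement: CNOT applied to (H|0>) ⊗ |0> gives the Bell state (|00> + |11>)/sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, zero)
print(np.abs(bell) ** 2)  # [0.5 0 0 0.5] -> only outcomes 00 and 11 are possible,
                          # so measuring one qubit instantly fixes the other
```

The measurement probabilities show exactly the behaviour described above: the first qubit alone is a 50/50 superposition, yet once entangled the pair can only ever be read as 00 or 11.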

According to Shohini Ghose, Professor of Quantum Physics and Computer Science, at Wilfrid Laurier University in Waterloo, Canada

“Quantum computers are not just a faster version of our current computers. They operate on the laws of quantum physics. It’s just like a light bulb compared to a candle.”

Quantum computing and Artificial Intelligence (AI) are both transformational technologies. Today, AI using classical computing enables “Artificial Narrow Intelligence” (or ANI). Quantum computing will significantly accelerate the journey towards “Artificial General Intelligence” (or AGI) imitating how the human brain functions and, perhaps, pave the way towards “Artificial Super Intelligence” (or ASI) which may surpass the human brain and mimic levels of self-awareness and self-consciousness.

Quantum Computing Timeline

It was the unorthodox theories of quantum mechanics, born out of the 20th Century, which were later to spawn quantum computing. The concept of using quantum entities to process data and solve complex problems, much like a classical computer, can be traced back to the 1980s – the era of the “Godfathers” of Quantum Computing.

1980 – Paul Benioff described the first quantum mechanical model of a computer, showing that quantum computers are theoretically possible. His idea of a quantum computer was based on Alan Turing’s famous paper tape computer described in his 1936 paper.

1981 – The next year, physicist Richard Feynman argued that quantum systems cannot be efficiently simulated on a classical computer. His argument hinged on Bell’s theorem, written in 1964. In a 1984 lecture, Feynman proposed how a quantum computer might be able to simulate any quantum system, including the physical world. His concept borrowed from Benioff’s quantum Turing computer.

1985 – David Deutsch, a physicist, published a paper describing the world’s first universal quantum computer: a way to mathematically understand what is possible on a quantum computer. He showed how such a quantum machine could reproduce any realisable physical system. What’s more, it could do this by finite means and much faster than a classical computer. He was the first to set down the mathematical concepts of a quantum Turing machine, one which could model a quantum system.

1994 – Peter Shor developed “Shor’s algorithm”, which would allow a quantum computer to factor large numbers much faster than the best classical algorithm.
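The quantum speed-up in Shor’s algorithm comes from a single step: finding the order r of a number a modulo N. Everything else is classical number theory. The sketch below (plain Python, with the order found by slow brute force rather than by the quantum Fourier transform a real quantum computer would use) shows how a known order yields a factor:

```python
from math import gcd

def classical_order(a, N):
    """Brute-force the order r of a modulo N: the smallest r with a^r = 1 (mod N).
    This is the step a quantum computer speeds up exponentially."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Use the order of a mod N to extract a non-trivial factor of N.
    Succeeds when r is even and a^(r/2) is not -1 mod N; otherwise retry with a new a."""
    if gcd(a, N) != 1:
        return gcd(a, N)     # lucky guess: a already shares a factor with N
    r = classical_order(a, N)
    if r % 2 != 0:
        return None          # odd order: pick another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None          # trivial square root: pick another a
    return gcd(y - 1, N)

print(shor_factor(15, 7))  # 7 has order 4 mod 15, yielding the factor 3
```

For N = 15 and a = 7 the order is 4, and gcd(7² − 1, 15) = 3 recovers a prime factor; on a quantum computer the order-finding step runs in polynomial time even for numbers hundreds of digits long.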

The timeline below summarises what has happened since and imagines the future.


Key Players

Quantum technology is still at an early stage of development. The first commercial devices have started to emerge in recent years, capable of performing a few hundred operations with tens of qubits. This early hardware was already sufficient to demonstrate quantum supremacy by solving a specific problem intractable for classical supercomputers.

Google, IBM, and a handful of start-ups are competing to create the next generation of supercomputers. The emergence of quantum computing might help solve problems, such as modelling complex chemical processes that the existing computers cannot handle.

D-Wave Systems Inc., a Canadian company, became the first to sell quantum computers in 2011, although the usefulness of quantum computers is limited to certain kinds of math problems. IBM, Google, Intel, and Rigetti Computing, a start-up in Berkeley, California, have each created working quantum computers for businesses and researchers.

Intel has started shipping a superconducting quantum chip to researchers. It has also created a much smaller, but so far less powerful, quantum computer that runs on a silicon chip not all that different from those found in normal computers.

Microsoft initiated a well-funded programme to build a quantum computer using an unusual design that might make it more practical for commercial applications. Airbus Group also established a team in 2015 to tackle quantum computing at its site in Newport, Wales. The Airbus Defence and Space unit’s main objective was to study all technologies related to quantum mechanics, ranging from cryptography to computation.

The infographic below highlights the role of collaboration to support advancement of Quantum Computing technology.


UK-based Innovation

Whilst the US and China may be dominating, the UK and Europe are not far behind.

France can justifiably claim to be one of Europe’s leading lights in quantum computing, with national plans building upon a January 2020 report entitled « Quantique : le virage technologique que la France ne ratera pas » (“Quantum: the technological shift France will not miss”), aiming to make France a global leader.

The German government has set aside two billion euros in a stimulus package, and a panel of experts from research and industry recently presented a roadmap to quantum computing.

In the UK, £1bn has been set aside, which includes a 10-year investment by the UK’s National Quantum Technologies Programme, launched by the UK government in 2013. This has resulted in more than 30 quantum start-ups, as well as a national network of quantum technology hubs covering quantum sensors and metrology (Birmingham), quantum communications (York), quantum-enhanced imaging (Glasgow), and quantum IT (Oxford).

Thanks entirely to a £93m investment from UK Research and Innovation (UKRI), the new National Quantum Computing Centre (NQCC) is being built at the Harwell lab of the Science and Technology Facilities Council in Oxfordshire. When it opens in late 2022, the NQCC will bring together academia, business and government with the aim of delivering 100+ qubit user platforms by 2025, thereby allowing UK firms to tap fully into this technology’s potential.

Another major achievement is the launch of the world’s first cloud-based Quantum Random Number Generation (QRNG) service, envisioned by Cambridge Quantum Computing (CQC) and built using an IBM quantum computer – something impossible with classical computing.

It’s great to see such co-ordinated and visionary programmes in the UK right now.

Quantum AI

Quantum technology has immense power. It will allow us to perform computing tasks that are beyond the reach of even the best computers today. Artificial intelligence, which is designed to analyse huge amounts of data, could benefit from this, as could materials and pharmaceutical research.

The term “quantum AI” refers to the use of quantum computing to run machine learning algorithms, taking advantage of the computational superiority of quantum computing to achieve results that are not possible with classical computers. The following are some of the applications of this super mix of quantum computing and AI.

This computational power allows industrial and academic researchers to perform simulations for solving ever-more complex design and optimisation problems, ultimately leading to the development of better products and services. Still, many economically, technologically, and scientifically relevant problems (e.g. computational chemistry, drug design, biological processes, route optimisation) remain out of reach for modern and even future supercomputers, assuming computing power continues to grow at the present rate. As a result, countless approximate methods have been developed over the years, characterised by various trade-offs between accuracy and computational cost.

Benefits of Quantum Computing

The following are some of the advantages of Quantum Computing that make it so desirable for our world.

  • Quantum computers will deliver enormous speed-ups for specific problems. Researchers are working to build algorithms to identify and solve the problems suited to quantum speed-ups.
  • The speed of quantum computers will improve many technologies that need immense computational power, such as Machine Learning, 5G (and even faster internet speeds), bullet trains (and many other transport methods), and more.
  • Quantum computing is important in the current age of Big Data, as we need efficient computers to process the huge amounts of data we produce daily.

Applications of Quantum Computing

The following are some of the fields to which quantum computing can be applied to make them more efficient than ever.

Artificial Intelligence

Artificial Intelligence (AI) is a key application, and one of the best fits, for quantum computing. AI is based on the concept of learning from experience: it becomes more accurate with feedback until the computer program begins to show “intelligence”. This feedback is based on estimating the probabilities of many possible choices, which makes AI an ideal candidate for quantum computing. AI aims to change many industries, from cars to medicine; in the future, AI will be what electricity was in the twentieth and twenty-first centuries.

Machine Learning Algorithms

Machine Learning (ML) and AI technologies are the two key areas of research in the application of quantum computing algorithms, giving rise to a new discipline that’s been dubbed Quantum Machine Learning (QML).

Currently, most industrial applications of artificial intelligence come from the so-called “supervised learning”, used in tasks such as image recognition or consumption forecasting. With quantum computing, we are likely to start seeing acceleration – which, in some cases, could be exponential.
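For readers less familiar with the term, “supervised learning” simply means learning a mapping from labelled examples. Here is a deliberately tiny, purely classical illustration (plain Python, a hypothetical nearest-centroid classifier; no quantum acceleration involved) of the idea that quantum hardware would speed up at scale:

```python
from statistics import mean

def fit_centroids(samples):
    """Supervised learning at its simplest: average the labelled training
    examples of each class into a per-class centroid."""
    centroids = {}
    for label, points in samples.items():
        dims = len(points[0])
        centroids[label] = tuple(mean(p[i] for p in points) for i in range(dims))
    return centroids

def predict(centroids, point):
    """Predict the label whose centroid is nearest (squared Euclidean distance)."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, point))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Toy training data: two labelled classes of 2-D points
train = {"cat": [(1, 1), (2, 1), (1, 2)], "dog": [(8, 9), (9, 8), (9, 9)]}
model = fit_centroids(train)
print(predict(model, (2, 2)))  # cat
print(predict(model, (8, 8)))  # dog
```

Real image-recognition systems learn from millions of labelled examples rather than six points, which is exactly where the quantum acceleration mentioned above would matter.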

Hardware and Software Error Simulation

Large software programs with millions of lines of code, or hardware systems with billions of transistors, can be difficult and expensive to verify for correctness. Billions or trillions of different states can exist, and it is impractical for a classical computer to check and simulate every single one. Not only do we need to understand what happens when the system operates normally, but we also want to know what happens if an error occurs. Can our device identify it, and does it have a coping mechanism to reduce any potential problems? By using quantum computing to assist with these simulations, one can hope to achieve much better simulation coverage in less time.

Cryptography

Most online security systems nowadays depend on the complexity of factoring large numbers into primes. While this is possible using digital computers to scan through every possible factor, the enormous amount of time needed makes it expensive and impractical to “crack the code”. Quantum computers can compute these factors far more efficiently than digital computers, which means such methods of security will soon become obsolete. There are also innovative methods of quantum encryption based on the one-way nature of quantum interdependence. Networks across cities have already been deployed in various countries.
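To make the asymmetry concrete, here is a minimal sketch of classical factoring by trial division (plain Python, toy numbers only): the work grows roughly with the square root of n, i.e. exponentially in the number of digits, which is exactly what today’s factoring-based security relies on and what quantum factoring would undermine.

```python
def trial_factor(n):
    """Classical factoring by trial division. The number of steps grows
    roughly with sqrt(n), i.e. exponentially in the digit-length of n."""
    f, steps = 2, 0
    while f * f <= n:
        steps += 1
        if n % f == 0:
            return f, steps
        f += 1
    return n, steps  # no divisor found: n itself is prime

# A toy semiprime built from two known primes (real RSA moduli are ~600 digits)
factor, steps = trial_factor(104723 * 104729)
print(factor, steps)  # the factor 104723 is found only after 104,722 trial divisions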

Data Analytics

Quantum computing has the ability to solve problems at impressive scales by engaging with complex material that might otherwise be ignored. A particular field of study called “topological analysis”, which identifies how certain geometric shapes behave in specific ways, involves computations that are more or less impossible on conventional computers.

With the introduction of a topological quantum computer, such calculations become tractable, making the process that much easier.

Nanotechnology

Through the introduction of quantum dots, researchers hope to further improve their standards of nanotechnology. The ultimate goal is to improve health conditions in developing nations, while also introducing purification processes for various industries.

While scientists are already researching this field, there still exists a wide gap that could be closed by the introduction of quantum algorithms, which can ease the research process and also speed up results.

Digital Security

In today’s digital world where almost every individual has massive amounts of personal data uploaded onto the cloud, there exists a growing need to improve security standards in an attempt to help make the data more secure.

According to Shohini Ghose,

“Quantum offers a way to encrypt information that can never be hacked, no matter how good the hackers are.”

Quantum Key Distribution (or QKD), which implements a cryptographic protocol involving components of quantum mechanics, is being put forward as a secure mechanism to tackle this issue, helping users encrypt data while sharing it with a limited number of parties. Not only are messages and data kept secure, they can also be distributed securely among personnel.
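As an illustration of the idea behind QKD, the sketch below is a toy, purely classical simulation of the BB84 protocol (plain Python; real QKD encodes bits in photon polarisations, not random numbers). Alice and Bob keep only bits measured in matching bases, and an eavesdropper who measures in random bases corrupts roughly 25% of that sifted key, revealing her presence:

```python
import random

def bb84(n_bits=2000, eavesdrop=False):
    """Toy BB84: Alice sends random bits encoded in random bases ('+' or 'x');
    Bob measures in random bases; they keep only positions where bases matched."""
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]
    channel = list(alice_bits)

    if eavesdrop:
        # Eve measures and resends: a wrong-basis measurement randomises the bit
        for i in range(n_bits):
            if random.choice("+x") != alice_bases[i]:
                channel[i] = random.randint(0, 1)

    bob_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bits = [channel[i] if bob_bases[i] == alice_bases[i] else random.randint(0, 1)
                for i in range(n_bits)]

    # Sifting: keep positions where Alice and Bob happened to use the same basis
    kept = [i for i in range(n_bits) if bob_bases[i] == alice_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
    return errors / len(kept)

random.seed(0)
print(f"error rate without Eve: {bb84():.1%}")                 # 0.0%
print(f"error rate with Eve:    {bb84(eavesdrop=True):.1%}")   # roughly 25%
```

By sacrificing a random subset of the sifted key and comparing it in public, Alice and Bob can measure this error rate and abort if it is suspiciously high; that is the sense in which the quote above says the channel “can never be hacked” undetected.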

Future Applications of Quantum Computing

Quantum computing is a promising technology which will change our lives in many ways. As research gets more attention from government, industry, and academia, more uses are expected to be found.


Source: Futurebridge

Concluding Thoughts

Quantum computing is no longer just for physicists and computer scientists, but also for information system researchers.

According to a report published by Inside Quantum Technology (IQT), the quantum computing market will reach $2.2 billion, and the number of installed quantum computers will reach around 180 in 2026, with about 45 machines produced in that year. These include machines installed at the quantum computing companies themselves and accessed via cloud services, as well as machines on customer premises. The report is available here.

Cloud access revenues will likely dominate as a revenue source for quantum computing companies, in the form of Quantum Computing as a Service (QCaaS) offerings, which will account for 75 percent of all quantum computing revenues in 2026. Although in the long run quantum computers may be more widely purchased, today potential end users are more inclined to do quantum computing over the cloud rather than make technologically risky and expensive investments in quantum computing equipment.

Today, none of the financial institutions using quantum computing have it as part of day-to-day operations, though some appear very close and are hiring staff at a level that suggests they are on the verge. IQT Research expects revenues from cloud access to reach circa $410 million by 2026, making financial institutions the largest single end-user segment of the quantum cloud access market.

In a parallel track, quantum software applications, developer tools and the number of quantum engineers and experts will grow as the infrastructure develops over the next five years. This will make it possible for more organisations to harness the power of two transformational technologies, quantum computing and AI, and encourage many universities to add quantum computing as an essential part of their curriculum.

Artificial Intelligence (AI) and Machine Learning (ML) are today’s latest buzzwords, and when you mix that with ‘quantum’, these terms become a “mega-buzzword”. This lends itself to dystopian fears such as those previously raised by the late Stephen Hawking.

“The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” — Stephen Hawking (2014)

There’s little doubt that quantum computing could help develop revolutionary AI systems. However, there is much more that needs to be done before it becomes mainstream.

Switching to a more positive note, I’d like to quote Brian Solis, Global Innovation Evangelist at Salesforce and a world-renowned keynote speaker.

“Now is the time to start building the vision, the expertise, dedicating teams and resources for quantum computing. The stepping stones to get there are building a Center of Excellence (CoE) around AI.”

This will help make AI the focal point of an organisation’s efforts to become more agile and innovative.

Additionally, Solis adds “it forces you to get better data, clean the data, and build expertise and key capabilities around the data. Complement that with a smaller set of resources and a Center of Excellence for quantum computing”.

Inspired by Solis, I will end with this empowering quote from F. Scott Fitzgerald.


Further Reading

Be sure to check out the video “Quantum Computing Expert Explains One Concept in 5 Levels of Difficulty”, hosted by IBM’s Dr. Talia Gershon, who explains quantum computing to five different people: a child, a teen, a college student, a grad student and a professional.


Evolve into the REAL you

In a reflective mood this evening.. sharing some #foodforthought 🤔

We all need to adapt to an ever changing world especially one ravaged and disrupted by #covid19pandemic.

At the same time, one should be mindful of allowing ‘life’ to determine (or even hijack) one’s aspirations and fate. Worse still, one should be careful about pursuing power and wealth at the expense of all else or becoming a ‘parasite’ living off (and exploiting) others.

Take a moment to view the video I’ve included in this posting (click on the YouTube icon below) .. something I recommend everyone watches and reflects on.

PS The Godfather remains my all time #number1 movie and Al Pacino is my hero 🤩❤️🎉

Consider striving daily to become the best you can be while remaining congruent and true to yourself.

Don’t let life pass you by .. grab every opportunity that presents itself .. remember, every challenge is an opportunity to grow and evolve into something stronger and more remarkable.

Don’t live life with #fearofmissingout .. rather invest in yourself and #beextraordinary .. for YOU not others who are mere spectators .. blind to and ignorant of YOU-R GREATness.


Audio Podcast is now LIVE!

It’s been a busy weekend, culminating in a cautious step into the world of podcasting. I’ve been procrastinating for a while (!) and realised I had several bucket loads of content which I could be publishing to the world.

And .. so after a bit of inspiration and lots of coffee, I am pleased to announce my new podcast about “Artificial Intelligence and Emerging Tech”.

Over the next few weeks, I plan on uploading additional episodes.

In the meantime, if you would be so kind, take a look at what I have already.

Episode Title                                      | Duration (mm:ss) | Date Posted
AI for Social Good                                 | 10:49            | 20 March 2021
Are AI and Cognitive Computing the SAME thing?     | 10:42            | 20 March 2021
Artificial Intelligence and Immersive Technologies | 09:30            | 20 March 2021
Towards an “AI enabled” Society                    | 09:42            | 21 March 2021

The podcast is available on the following platforms, and episodes may be downloaded to mobile, tablet and computer.

Hope you enjoy what I’ve created so far. Thanks for taking the time to read this posting and for listening to my podcasts.

Are AI and Cognitive Computing the SAME thing?

Introduction

The terms Artificial Intelligence (AI) and Cognitive Computing (CC) are often used interchangeably, but the approaches and objectives of each differ.

AI and Cognitive Computing are “based on the ability of machines to sense, reason, act and adapt based on learned experience” (Brian Krzanich, CEO, Intel).

The two topics are closely aligned. While not mutually exclusive, each has distinctive purposes and applications, shaped by its practical, industrial and commercial appeal as well as by the challenges it presents to the academic, engineering and research communities.

This topic is explored further in my book, “Understanding the Role of AI and its Future Social Impact”. Check out one of my other LinkedIn articles for further information.

For reference, my new book is available via :-

  1. https://lnkd.in/gbk-zba (Amazon)
  2. https://bit.ly/34cfJVf (IGI Global)

Definitions

Before we get into too much detail, let’s start by defining each of these terms. We will then explore what they have in common, their differences and typical uses.

Artificial Intelligence (an “umbrella term”)

AI has been studied for decades and, despite threatening to disrupt everything human, remains one of the least understood subjects in computer science. We use it every day without even noticing it. Google Maps applies it to provide directions. Gmail applies it to locate spam. Spotify, Netflix and others apply intelligent customer service via automatic response systems.

As the popularity of AI grows, there remains a misunderstanding of the technical jargon that comes with it.

In simple terms, AI is an umbrella term that includes a diverse array of sub-topics which may be best described by the following mind map (Mills, 2016).

In the words of the person who coined the term artificial intelligence, John McCarthy, AI is “the science and engineering of making intelligent machines”.

In layman’s terms, AI is understanding achieved by machines that interpret, mine and learn from external data in ways that functionally imitate the cognitive processes of a human. These processes include learning from constantly changing data, reasoning to make sense of that data, and related self-correction mechanisms. Human intelligence is rooted in sensing the environment, learning from it and processing its information.

Thus, AI includes

  • A simulation of human senses: sight, hearing, smell, taste and touch
  • A simulation of learning and processing: deep learning, ML, etc.
  • Simulations of human responses: robotics

AI applications include problem-solving, game playing, natural language processing (NLP), speech recognition, image processing, automatic programming and robotics.

Cognitive Computing

Cognitive Computing (CC) refers to the development of computer systems based on mimicking human brains. It is a science that was developed to train computers to think by analysing, interpreting, reasoning and learning without constant human involvement. CC represents the third era of computing.

In the first era (19th century), Charles Babbage, the ‘father of the computer’, introduced the concept of a programmable machine. Used for navigational calculation, his machine tabulated polynomial functions. The second era (1950s) produced digital programmable computers such as ENIAC (see End Notes) and ushered in the era of modern computing and programmable systems.

CC utilises deep-learning algorithms and big-data analytics to provide insights.

A cognitive system

  1. Understands natural language and human interactions
  2. Generates and evaluates evidence-based hypotheses
  3. Adapts and learns from user selections and responses

The “brain” of a cognitive system is a neural network: the fundamental concept behind deep learning. A neural network is a system of hardware and software, modelled on the human central nervous system, that estimates functions which depend on a huge number of unknown or learned inputs. By the 1980s, two developments, the advent of computing and of the cognitive sciences, had begun to change the way experts and researchers unpacked ‘the black box’ of neural approaches to studying, thinking and learning.
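
The description above, of a network that estimates a function from many learned inputs, can be made concrete with a deliberately tiny sketch. The following one-hidden-layer network, written from scratch in NumPy, learns the XOR function by gradient descent. It is purely illustrative; nothing about its size or training setup reflects a real cognitive system.

```python
import numpy as np

# Minimal one-hidden-layer neural network trained by gradient descent
# to approximate XOR. Purely illustrative: real cognitive systems use
# vastly larger networks and datasets.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(inputs):
    hidden = sigmoid(inputs @ W1 + b1)
    return hidden, sigmoid(hidden @ W2 + b2)

_, out = forward(X)
initial_mse = float(np.mean((out - y) ** 2))

for _ in range(5000):
    h, out = forward(X)
    # backpropagate the mean-squared-error gradient through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

final_mse = float(np.mean((out - y) ** 2))
print(initial_mse, final_mse)
```

The network “learns” only in the narrow sense of adjusting weights to reduce prediction error, which is exactly the function-estimation idea described above.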

Thus, CC refers to

  • Understanding and simulating reasoning
  • Understanding and simulating human behaviour

Using CC systems, we can make better human decisions at work. Applications include speech recognition, sentiment analysis, face detection, risk assessment and fraud detection.
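
The “provide information, leave the decision to humans” philosophy can be illustrated with a toy sentiment scorer. This is an assumed, simplified example; real CC systems use learned models and large lexicons, not two hand-written word lists.

```python
# Toy rule-based sentiment scorer: it surfaces a signal for a human
# reviewer rather than making the decision itself (the CC philosophy).
# The word lists are illustrative assumptions, not a real lexicon.
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "angry"}

def sentiment_signal(text: str) -> dict:
    words = text.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    # Return evidence for a human decision-maker, not a verdict.
    return {"positive_hits": pos, "negative_hits": neg, "score": pos - neg}

print(sentiment_signal("Great service, but the wait was terrible!"))
```

Note that the function deliberately stops at a score; deciding what to do about a negative review remains with the human, which is the distinction drawn in this section.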

The Differences

Augmentation

  • AI augments human thinking to solve complex problems. It focuses on accurately reflecting reality and providing accurate results.
  • CC tries to replicate how humans solve problems, whereas AI seeks to create new ways to solve problems potentially better than humans.

Mimicry

  • CC focuses on mimicking human behaviour and reasoning to solve complex problems.
  • AI is not intended to mimic human thoughts and processes but instead to solve problems using the best possible algorithms.

Decision-making

  • CC systems do not make decisions on behalf of humans; they simply provide intelligent information for humans to use to make better decisions.
  • AI systems are responsible for making decisions on their own, minimising the role of humans.

The Similarities

Technologies

  • The technologies behind CC are similar to those behind AI, including ML, deep learning, NLP, neural networks etc.
  • In the real world, applications for CC are often different from those for AI.

Industrial Use

  • AI is important for service-oriented industries, such as healthcare, manufacturing and customer service.
  • CC is important in analysis intensive industries, such as finance, marketing, government and healthcare.

Human decision-making

  • People do not fear CC, because it simply supplements human decision-making.
  • People fear that AI systems will displace human decision-making when used in conjunction with CC.
  • Humans remain the middle-man, still making the final decisions. Do we need to cut out the middle-man and replace him/her with AI to facilitate optimal decision-making?

Observations

Calling CC a form of AI is not wrong, but it misses a fundamental distinction that is important to understand.

When we talk about AI, we are most often talking about an incredibly sophisticated algorithm that includes some form of complex decision tree. This is, loosely speaking, how autonomous vehicles work: they take a starting point and a destination as input and navigate between the two points through a mind-bogglingly long sequence of decisions, often caricatured as ‘if-then-else’ statements.
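
To make the ‘decision tree’ characterisation concrete, here is a deliberately oversimplified, hypothetical rule set for a single driving decision. Real autonomous vehicles rely on learned perception and planning models rather than hand-written rules like these.

```python
# Deliberately oversimplified rule-based driving decision, illustrating
# the 'complex decision tree' characterisation above. Real autonomous
# vehicles combine learned perception models with planners, not a
# hand-written rule list; all thresholds here are invented.
def next_action(obstacle_ahead: bool, distance_m: float, speed_kmh: float) -> str:
    if obstacle_ahead:
        if distance_m < 10:
            return "emergency_brake"
        elif distance_m < 50:
            return "slow_down"
        else:
            return "maintain_speed"
    elif speed_kmh < 50:
        return "accelerate"
    else:
        return "maintain_speed"

print(next_action(True, 8.0, 60.0))   # emergency_brake
```

Scaling this style of logic to every situation a car can encounter is exactly why the real systems replace hand-written rules with learned models.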

AI enables computers to do intelligent things. The possible applications for AI are quite extensive and are already fully embedded in our daily routines. For example, AI and fully autonomous vehicles are an inseparable part of the future. The ‘AI’ watches countless hours of driving footage for training and is assigned variables that enable it to identify lanes, other cars and pedestrians, and then to deliver decision results nearly instantly.

CC, while a handy marketing term, helps solve problems by augmenting human intelligence and decision-making, not by replacing it. Several AI fundamentals are included, such as ML, neural networks, NLP, contextual awareness and sentiment analysis, to augment the problem-solving that humans constantly need. This is why IBM defines CC as ‘systems that learn at scale, reason with purpose and interact with humans naturally’.

The main driver and common thread across AI and CC is ‘data’. Without these technologies, there is not much we can do with data. Hence the renewed push in areas of advanced analytics, giving rise to solutions that improve predictability where data silos exist and that support decision-making via visualised dashboards drawing upon real-time and historical data, all made possible by the improved handling of unstructured data.

Additionally, deep learning, a form of ML, accelerates progress in these areas. AI, ML and NLP, with technologies such as NoSQL, Hadoop, Elasticsearch, Kafka, Spark, Kubernetes, etc., form part of a larger cognitive system. Solutions should be capable of handling dynamic real-time and static historical data. Enterprises looking to adopt cognitive solutions should start with specific business segments that have strong business rules to guide the algorithms and large volumes of data to train the machines.

Instead of debating the utility and applicability of CC and AI and forcing competition between the respective experts and research communities, our view is that we should expend our collective energy on creating a future in which the benefits of both AI and CC are combined within a single system, operating from the same sets of data and the same real-time variables to enrich humans, society and our world.

Concluding Thoughts

To summarise, AI empowers computer systems to be smart (and perhaps smarter than humans). Conversely, CC includes individual technologies that perform specific tasks that facilitate and augment human intelligence. When the benefits of both AI and CC are combined within a single system, operating from the same sets of data and the same real-time variables, they have the potential to enrich humans, society, and our world.

In 2019, the second meeting of the International Conference on Cognitive Computing (ICCC) was held. Its aim was to combine technical aspects of CC with service computing and sensory intelligence, building on the study of traditional human senses of sight, smell, hearing and taste to develop enhanced scientific and business platforms and applications. These encompass ML, reasoning, NLP, speech and vision, human-computer interaction, dialogue and narrative generation.

Working with and supporting organisations like ICCC, future researchers should continue to explore how best to leverage the combination of CC and AI with other emerging technologies, such as blockchain, bioinformatics, internet of things, big data, cloud computing and 5G digital cellular networks and wireless communications.

Although countries such as China may be ‘leading the race’ in many areas related to AI, the question of combining emerging technologies with CC and AI is one that, if done ethically with social good as the focus, could lead to many societal benefits that empower individuals, communities, institutions, businesses and governments throughout the world while driving competition, research and development.

It is undeniable that Covid-19 has transformed the lives of humans everywhere. We have been forced to quickly adapt to these ‘new norms’ due to the pandemic.

On a positive note, we have all learnt valuable life lessons and become more resilient.

Let’s work together to create a digitally-driven civil society underpinned by socially minded technology.

End Notes

  1. ENIAC was the first electronic general-purpose computer. It was “Turing complete”, digital and able to solve “a large class of numerical problems” through reprogramming.

References

Mills, M. (2016). Artificial Intelligence in Law: The State of Play 2016 (Part 1). Artificial Intelligence in Law: The State of Play 2016 (neotalogic.com)

AI & Immersive Technologies

Augmented, Virtual, Mixed and Extended Realities

Background

Virtual Reality (VR) was first imagined in science fiction, and emerged in real life via an immersive film-viewing cabinet created in the 1950s. It wasn’t until 2010 that the Oculus Rift VR headset was first shown, and by 2014 the company had been purchased by Facebook for $2 billion, valued largely for its consumer gaming applications. Since then other major manufacturers, including Sony and HTC, have developed VR technology, and several hundred companies have been launched to develop content and applications and to provide systems and services.

If you’re among the millions of those who’ve used their smartphones to chase Pokémon in the real world, then you already know how amazing augmented reality (AR) can be, although you may not be aware that it has moved beyond gaming and entertainment to wider applications such as advertising, retail, training – and more.

This technology continues to grow rapidly, and it could soon become bigger than VR. The reason for this is simple. Unlike VR, which requires expensive equipment, AR can be deployed on smartphones or tablets, which makes it more accessible and far cheaper. Plus, AR is more realistic. While VR puts the user into a completely different, immersive environment, AR uses our existing environment and enriches it with virtual objects.

Ultimately, immersive technologies are on the rise with new forms emerging as part of online and mobile games, cinematic experiences, exhibitions and events, and news media, which all aim to engage audiences and place them at the centre of the action or experience.

Definitions

There is a whole spectrum of immersive technologies out there. Before we progress further, let’s pause and ensure we’re all on the same page regarding what these terms actually mean.

  • Virtual reality (VR) immerses users in a fully artificial digital environment.
  • Augmented reality (AR) overlays virtual objects on the real-world environment.
  • Mixed reality (MR) not only overlays virtual objects but anchors them to the real world, i.e. it blends the physical and digital worlds. This is sometimes referred to as XR, which stands for Cross Reality or Extended Reality, and which HoloLens inventor Alex Kipman describes as “the world of atoms and the world of bits”.

These immersive technologies are not about one application; rather, they represent the evolution of personal and mobile computing, disrupting and revolutionising the way humans interact with machines and one another.

Perhaps the most exciting potential for immersive technology is the impact on learning and experiential learning.

Few would argue that it’s much easier to learn something by “doing it yourself” (DIY) rather than by watching, reading, or being told what to do. Imagine a future where you can immerse yourself in various scenarios so that the experiences and actions within it feel naturally like your own – within a hybrid physical and digital world.

Immersive Technology Improves Efficiency

The UK Government is so convinced that immersive technologies (AR, VR, MR) are the future that it has set aside significant investment as part of its Industrial Strategy Challenge Fund. The fund aims to enable businesses and researchers to work together in creating the technologies which will entertain and educate the audiences of the future.

It is expected that pushing immersive technologies into the public consciousness will spark a wider uptake of the technology and shape perceptions of, and behaviour towards, virtual and augmented reality.

A report commissioned by Immerse UK concluded that immersive technologies have the potential to help drive the UK economy. In the wake of Brexit, UK companies will look to AR/VR to overcome the productivity gap.

Advantages of VR

Virtual Reality environments have been used in learning simulators for years. They were not originally conceived as games, and their usage was oriented more towards adults. The technology’s evolution, and the appearance of new immersive hardware such as the Samsung Gear VR, the Oculus Rift and other low-cost solutions, has made VR more accessible to the general public. It also makes possible the development of a wide variety of VR applications with many different purposes, from leisure to education, and for all ages.

Advantages of AR

If VR seems to be a powerful development tool for training or teaching, what happens if the information comes to you in a real world environment? The possibilities of AR applications are incredible because the knowledge can be offered in the space where the learning, training or coaching needs occur.

The real environment is augmented with information which can help the user learn or train for specific skills. In this case, AR is supporting the real world rather than replacing it with virtual environments. Its everyday usage is more often in games, tourism, leisure, training and education.

COVID19 and Immersive Technologies

In the pre-pandemic era, companies across different industries, including entertainment, healthcare, gaming and manufacturing, were already harnessing AR/VR technologies to reach customers and fuel growth. Tech giants including Facebook, Google, Samsung and HTC have been investing billions of dollars into developing AR/VR products and services over the past few years.

However, the #covid19pandemic has given a significant impetus to the development and adoption of new and emerging technologies such as robotics, augmented reality (AR), virtual reality (VR), and mixed reality (MR).

As a result of government-enforced lockdowns in countries throughout the world, physical channels and in-person meetings came to a grinding halt, forcing industries such as education, business, healthcare and entertainment to transition to digital online experiences. Many businesses have sadly been destroyed and our high streets and towns have emptied significantly.

AR/VR technologies aren’t just hype. They have proven real-world use cases.

Remote working

As more and more businesses adapt to the new normal of work, i.e. remote working, the demand for AR and VR solutions to keep people connected and boost productivity is also surging.

Facebook announced the availability of Oculus for Business, an enterprise solution for streamlining and expanding virtual reality in the workplace to help organisations meet the early demand for VR-powered training and collaboration to keep up with the tech-driven future of work.

“Platforms and devices like Workplace, Portal, and Oculus were built for a time when the economic opportunity might no longer depend on geography, a time when what you do could matter more than where you are. That time starts now”, says Facebook CEO Mark Zuckerberg.

Product visualisation

In times of social distancing, online shopping is becoming more and more prevalent; supplemented by home delivery and courier services.

AR/VR technologies are helping businesses interact with their customers, allowing them to see products virtually and in as much detail as in reality. The use of visualisation technology in online shopping reduces the time and physical effort of visiting “bricks and mortar” stores and improves customer satisfaction and, subsequently, consumer-brand interaction.

Education / Online learning

Around the world many educational institutions (schools, colleges, universities) have been forced to close temporarily. They have all transitioned to virtual learning to curb the spread of COVID-19 whilst preserving the educational experience for their students, some better than others. While this in itself deserves praise, societal inequalities have surfaced, highlighting economic and class differences between schools, families and communities.

Healthcare

Healthcare is one of the most promising sectors for the growth of AR/VR. These immersive technologies have allowed researchers and health experts to better analyse viruses or diseases like COVID-19.

Combined with Artificial Intelligence (AI), AR/VR technologies have aided in the COVID-19 drug discovery efforts by enabling scientists to gain insights into the molecular mechanics of the novel coronavirus in virtual reality.

This has also extended to telehealth services for coronavirus patients, including remote medical care delivered via VR headsets and VR therapy.

Virtual tourism

Travel and tourism businesses have arguably suffered the most due to COVID-19-induced travel restrictions. According to the latest research from the United Nations World Tourism Organization (UNWTO), 83 percent of destinations in Europe have completely closed their borders to international tourism. Amidst this challenge, Virtual Reality has a major role to play in making a real difference in the sector.

As the COVID-19 pandemic has limited travel and public gatherings, the Tower of David Museum has created a VR project that lets visitors immerse themselves in Jerusalemʼs Old City through a transcendent stereoscopic 360-degree Virtual Reality documentary. The wide-ranging initiative, dubbed “The Holy City”, features immersive experiences of the Holy Fire ceremony at the Church of the Holy Sepulchre, the priestly blessing (“Birkat Kohanim”) at the Western Wall, and Ramadan prayers in the Al-Aqsa Mosque compound.

“Each one of us approached the different institutions in Jerusalem and [made clear] that they are not going to be misrepresented by anyone”, explained Nimrod Shanit, executive director of Holy City VR. “It was a really interesting way to work, kind of like an interfaith project”.

While VR may never replace in-person travel, it could offer a time- and cost-saving substitute to adapt to the COVID-19 induced ‘new normal’ within the travel and tourism industry, and also help reduce the industry’s environmental and carbon footprint.

AR/VR post COVID-19

The rapid shift to remote work enabled by the internet and digital channels (such as Zoom and Teams) has also fuelled interest in immersive (and other emerging) technologies which will most likely continue even after the pandemic subsides.

We know that the road ahead is bumpy for all businesses, regardless of the industry. The bottom line is that attention-grabbing immersive technologies including AR/VR have huge potential to remotely enhance collaboration, educational and workplace productivity but given the economic downturn and the immature state of the technologies, it will take some more time before they go mainstream.

One thing is certain: AR/VR is no longer just about the technology; it’s about defining how we want to live in the real world with these new immersive technologies and how we will design experiences that are meaningful and can enrich humanity.

AI & Immersive Tech

Artificial Intelligence (AI) is the capability of a machine to imitate intelligent human behaviour. When coupled with AR/VR, AI has the capacity to take immersive experiences to another level.

For example, in a learning, training and skills-development capacity, AI can generate numerous situations that occur randomly and learn from a student’s behaviour. As the student gets better, the system presents increasingly difficult situations, personalising the education.
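
A minimal sketch of that adaptive idea is below; the thresholds, level cap and function name are chosen purely for illustration and are not taken from any real tutoring system.

```python
# Minimal adaptive-difficulty rule: raise the level when the student's
# recent success rate is high, lower it when low. All thresholds and
# the 1-10 level range are illustrative assumptions.
def adjust_difficulty(level: int, recent_results: list[bool]) -> int:
    if not recent_results:
        return level
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate > 0.8:
        return min(level + 1, 10)   # harder, capped at level 10
    if success_rate < 0.4:
        return max(level - 1, 1)    # easier, floored at level 1
    return level

print(adjust_difficulty(3, [True, True, True, True, True]))  # 4
```

A real system would of course learn these thresholds from data rather than hard-code them, but the feedback loop (observe performance, adjust challenge) is the same.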

The implementation of AI for AR/VR is expected to make immersive technology increasingly personalised. Based on previous behaviours within environments, AI has the potential to offer relevant options to the user or predict required outcomes before they occur.

The trick with VR will be for AI to anticipate what the viewer wants to see and prepare it as it streams out to their headset. Game developers are already doing this, incorporating AI alongside canned responses to player actions. This way, when a player does something out of the norm, a new reaction is “dreamed up” by the AI engine.

One of the more common implementations of AI and AR is managing and recognising real-world items in the context of an augmented world.

An example of this is the face filters used in apps like Snapchat. AI is used to detect the face and determine how it is orientated. The technology is also used to continuously track a person’s face and facial expressions.

Without the help of AI, AR would be unable to detect the proper orientation of a person’s face. Instead, the movement of your device would determine both the vertical and horizontal axis of your filter.
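
One small piece of that orientation problem can be sketched in plain Python: given two eye landmarks (which, in a real app, an ML model would supply), the roll or tilt of the face follows from basic trigonometry. This is a simplified illustration, not how Snapchat’s filters are actually implemented.

```python
import math

# Simplified sketch: estimate the roll (tilt) of a face from two eye
# landmarks, assuming an ML detector has already supplied the (x, y)
# pixel coordinates. Real AR filters use dense landmark models.
def face_roll_degrees(left_eye: tuple, right_eye: tuple) -> float:
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

print(face_roll_degrees((100, 120), (160, 120)))  # 0.0 for a level face
```

The point of the sketch is the division of labour: the learned model does the hard perceptual work of finding the landmarks, after which placing and rotating a filter is ordinary geometry.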

Perhaps one of the most exciting announcements this year is the debut of Microsoft’s Mesh technology, a new collaborative mixed-reality platform allowing users to experience a combination of AR and VR together, bringing people into your world, and vice versa.

Think of it a bit like the holographic messages we’ve seen in Star Wars and other science fiction stories.

Microsoft Technical Fellow Alex Kipman, the man behind the HoloLens and Kinect, indicated that Mesh is powered by Azure and “all of its AI and compute capabilities, working seamlessly together whether companies are accessing resources in the cloud or at the edge of the network”.

Likewise, Facebook has also been making headlines. The company is heavily investing in an augmented reality (AR) future. Today, Facebook Reality Labs (FRL) has given another teasing look into its vision, which revolves around a contextually-aware, AI-powered interface for its AR glasses.

This could work hand-in-hand with soft, wearable input systems like CTRL-Labs’ wristband – the company Facebook acquired in 2019 – where the AI would offer suggestions that you could say yes or no to. CEO Mark Zuckerberg said, “I think neural interfaces are going to be fundamentally intertwined with VR and AR work in terms of how the input works as well”.

Facebook also mentions that next week it’ll be unveiling more of its research to do with “wrist-based input combined with usable but limited contextualised AI”, followed later in the year by its work on all-day wearable devices and haptic gloves.

There are already a lot of companies trying to be the pathway to a global interconnected metaverse of VR and AR – powered by AI. Microsoft could be one of the first to make it all work, but how well Microsoft Mesh ends up playing with all of Facebook’s collaborative VR/AR tools, and eventually Google and Apple’s, is the big unknown.

In closing, a natural consequence of AI adoption is the increased automation of routine tasks to eradicate internal inefficiencies, increase operational efficiency and improve profitability.

Imagine for a moment .. the countless possibilities .. perhaps likened to a “sixth sense” beyond our “normal” human senses of :-

  1. Sight
  2. Hearing
  3. Touch
  4. Smell
  5. Taste

Nonetheless, AI and immersive technologies are all cool technologies that need to find practical uses. When combined, just like the invention of the internet and smartphones, it’s hard to predict where exactly this transformative new tech can take us. 

Concluding Thoughts

Today, we are at the beginning of a Fourth Industrial Revolution. Developments in genetics, artificial intelligence, robotics, nanotechnology, 3D printing and biotechnology, to name just a few, are all building on and amplifying one another. This will lay the foundation for a revolution more comprehensive and all-encompassing than anything we have ever seen.

Additionally, from enabling online learning to opening access to cultural events and experiences, applications of AR and VR can help us overcome the isolation of COVID-19 lockdowns.

Below are four key strategies to expand AR/VR initiatives across cultural institutions, schools and workplaces:

  1. Put a centralised governance model in place and build AR/VR awareness.
  2. Invest in upgrading talent to gear up for future adoption.
  3. Focus on the right use cases that provide lasting value and support employees.
  4. Prepare technology infrastructure to integrate AR/VR.

Seeing, hearing, and touching possible realities through the power of AR/VR can stir our collective willingness to welcome and activate positive change in a world that will be forever changed due to COVID19. Combined with AI, the possibilities appear to be endless.

Let’s make it our collective goal and commitment to design for the best of humanity.