Before We Begin …
This article seeks to promote an understanding of the potentially transformative impacts and consequences of Artificial Intelligence (AI) on people and society. Each section will be published in parts – like a Netflix series!

Introduction
AI — which is actually an “umbrella term” encompassing automation, machine learning, robotics, computer vision and natural language processing — impacts every aspect of our lives, from customer service and retail to education, healthcare, autonomous vehicles, and industrial automation. It has become increasingly integrated into our society, automating tasks and accelerating computational and data-driven analysis, whilst also assisting (and in some cases displacing) humans in decision-making.
For some, AI is synonymous with terms like ‘the fourth industrial revolution’ (or “4IR”). Previous industrial revolutions brought about huge social and economic change, giving rise to greater financial opportunity and more time for leisure.
At the same time, AI is also fuelling anxieties and ethical concerns. There are questions about the trustworthiness of AI systems, including the dangers of codifying and reinforcing existing biases, such as those related to gender and race, or of infringing on human rights and values, such as privacy. Concerns are growing about AI systems exacerbating inequality, climate change, market concentration and the digital divide.
AI, like many new technologies, promises and delivers great benefit to portions of society while harming certain groups. Such harms are often referred to as “unintended consequences”.
The potential benefits of AI to society will be blunted if human biases find their way into code. Hence, engineers tasked with designing AI algorithms and developing “intelligent systems” should accept greater responsibility for considering the potential unintended consequences of their work.
A start in this direction would be to integrate the social sciences into engineering and computer science curricula. The key word here is “integration”. Students from both disciplines would greatly benefit by learning from each other, as would the faculty members who assemble the course syllabus and deliver lectures and seminars. Students majoring in the social sciences might discover interesting technological issues of societal importance while engaged in projects with engineering students. Likewise, engineering students would learn to value the social dimensions of innovation and gain a heightened awareness of the potential unintended consequences of their work on both society and the environment.
We should be well past the days when the development of technology is separated from human needs, desires, and behaviour. This is why engineers should engage with the social sciences, and vice versa.
One important question we all need to consider seriously is: “How can society safely regulate the way AI (and emerging technology) alters, augments, and enhances our lives?”
The answer is not simple; it requires new paradigms, language, and regulatory frameworks that promote the idea of artificial social intelligence. Hence, I have coined the term “Societal AI” to represent AI as a domain underpinned by principles and laws that govern social interactions between humans and AI.
“Societal AI” is about incorporating human-centred perspectives and humane requirements (including constraints) when designing AI algorithms, agents, and systems. This means considering not only the capabilities of AI technology but, more importantly, what we do with it, meshed with the behaviours, attitudes, intentions, feelings, personalities, and expectations of people.
At the same time, we cannot afford to leave important decisions and principles affecting fairness, accountability, transparency, and ethics (FATE) solely to businesses, governments, and policy writers. Instead, as citizens, we must influence how AI is leveraged, helping to shape “AI for Social Good” for the benefit of society.