I’m in a reflective mood today .. or perhaps I’m suffering from #heatstroke whilst enjoying the glorious sunshine 💭🤔
We are experimenting with building #sentientai #robotsforhumans and #intelligentsystems and more .. all crafted by a mix of #computerscience #datascience #algorithms #ai #ml #quantumtechnologies #bioengineering and more .. by people who don’t share common beliefs (be they political, social, economic or religious) or a common framework of morals or ethics.
Governments across the globe are competing to be the first to realise #artificialgeneralintelligence .. working against one another rather than together.
Who amongst these agencies truly cares for the social needs of the masses?
Who is pausing to consider intended and unintended consequences?
Why in fact are we building these types of capabilities? And for what purpose? To enslave or to free or something else?
In our past, when scientists came together on the #manhattanproject to create nuclear weapons .. it was only after they succeeded that there was an admission that this capability would devastate human lives.
In fact, the scientist who led the effort (J. Robert Oppenheimer) was humiliated and (dare I say it) ‘cancelled’ (yes, this phenomenon is NOT new!) in an effort to silence anyone who questioned the morality, ethics and real purpose of nuclear weapons.
Will we repeat this part of our history .. with #artificialintelligence? 😔
Will we simply continue to watch and follow historians such as Yuval Noah Harari (and similar) rather than collectively subscribe to a moral compass that binds us to one another? 🙄
Or will we just continue to be consumers of #emergingtechnologies that could cause more harm than good? Interested instead in #abundance #materialism #indulgence 🤔
This article seeks to promote an understanding of the potentially transformative impacts and consequences of Artificial Intelligence (AI) on people and society. Each section will be published in parts – like a Netflix series!
AI — which is actually an “umbrella term” encompassing automation, machine learning, robotics, computer vision and natural language processing — impacts every aspect of our lives, from customer service, retail, education and healthcare to autonomous cars and industrial automation. It has become increasingly integrated into our society, automating tasks and accelerating computational and data analytics-based solutions, whilst also assisting (and in some cases displacing) humans in decision making.
For some, AI is synonymous with terms like ‘the fourth industrial revolution’ (or “4IR”). Interestingly, previous industrial revolutions brought about huge social and economic change, giving rise to greater financial opportunity and more time for leisure.
At the same time, AI is also fuelling anxieties and ethical concerns. There are questions about the trustworthiness of AI systems, including the dangers of codifying and reinforcing existing biases, such as those related to gender and race, or of infringing on human rights and values, such as privacy. Concerns are growing about AI systems exacerbating inequality, climate change, market concentration and the digital divide.
Like many new technologies, AI promises and delivers great benefit to some portions of society whilst harming others. These harms are often referred to as “unintended consequences”.
The potential benefits of AI to society will be blunted if human biases find their way into code. Hence, engineers tasked with designing AI algorithms and developing “intelligent systems”, and the like, should accept more responsibility for considering the potential unintended consequences of their work.
A start in this direction would be to integrate the social sciences into engineering and computer science curricula. The key word here is “integration”. Students from both disciplines would greatly benefit by learning from each other, as would the faculty members who assemble the course syllabus and deliver lectures and seminars. Students majoring in the social sciences might discover interesting technological issues of societal importance while engaged in projects with engineering students. Likewise, engineering students would learn to value the social dimensions of innovation and gain a heightened awareness of the potential unintended consequences of their work on both society and our environment.
We should be well past the days when the development of technology is separated from human needs, desires, and behaviour. This is why engineers should engage with the social sciences, and vice versa.
One important question we all need to seriously consider is “How can society regulate the way AI (and emerging technology) alters, augments, and enhances our lives – safely?”.
The answer is not simple; it requires new paradigms, language, and regulatory frameworks that promote the idea of Artificial Social Intelligence. Hence, I have coined the term “Societal AI” to represent AI as a domain underpinned by principles and laws that govern social interactions between humans and AI.
“Societal AI” is about incorporating human-centred perspectives and humane requirements (including constraints) when designing AI algorithms, agents, and systems. It concerns not only the capabilities of AI technology but, more importantly, what we are doing with it, potentially meshed with the behaviours, attitudes, intentions, feelings, personalities, and expectations of people.
At the same time, we cannot afford to leave important decisions and principles that affect fairness, accountability, transparency, and ethics (FATE) to businesses, governments, and policy writers alone. Instead, as citizens, we must influence how AI is leveraged, helping to shape “AI for Social Good” – for the benefit of society.