I personally find the topic of “AI and Society” fascinating 💭🤔
While many ‘pundits’, self-proclaimed experts and social media postings advocate the need for #digitaltransformation #ethicalai #responsibletech #wellbeing and so forth, I personally feel that there is something missing ..
Specifically, the distinct lack of a #worldview and the rejection of the human soul — especially where #ai and #machineintelligence are concerned.
Let’s not forget that BOTH #technology and #society are co-related and co-dependent, each influencing the other.
AI and technology impact society, driving both progress and decline, with consequences that can be beneficial or harmful.
With this backdrop in mind –
I will be releasing more content over the next few months that raises important questions that we should ALL be asking.
Click ‘Follow’ for more posts and articles on #AI and #EmergingTech —
By Cade Metz, who reported this story in Toronto.
May 1, 2023, updated 3:07 p.m. ET
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are a key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways, but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
Students at Pascal Education in Cyprus created “AInstein” (named after Albert Einstein) as part of an AI project to design and develop a robot that could communicate with humans in a natural and intuitive way.
AInstein is powered by a state-of-the-art natural language processing AI model developed by OpenAI.
AInstein’s hardware includes a Raspberry Pi, a camera, a microphone, and a speaker, which are used to detect human presence and enable communication.
AInstein is programmed to learn and improve over time through a combination of #supervisedlearning and #unsupervisedlearning.
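As a rough sketch of how a robot like AInstein might wire these pieces together, consider the loop below. The helper names (`detect_person`, `listen`, `think`, `speak`) are hypothetical stand-ins, not the actual AInstein code: on a real Raspberry Pi they would wrap the camera, microphone, language-model API, and speaker respectively.

```python
# Minimal sketch of a conversational-robot control loop (assumed design,
# not the actual AInstein implementation). The four callables are
# hypothetical stand-ins for camera, microphone, NLP model, and speaker.

def conversation_loop(detect_person, listen, think, speak, max_turns=3):
    """Run up to max_turns of detect -> listen -> think -> speak."""
    transcript = []
    for _ in range(max_turns):
        if not detect_person():      # e.g. camera-based presence check
            break
        heard = listen()             # e.g. microphone speech-to-text
        reply = think(heard)         # e.g. query a language model
        speak(reply)                 # e.g. text-to-speech on the speaker
        transcript.append((heard, reply))
    return transcript

# Example run with fake sensors, so the logic can be tried off-robot:
log = conversation_loop(
    detect_person=lambda: True,
    listen=lambda: "hello",
    think=lambda text: f"You said: {text}",
    speak=print,
    max_turns=2,
)
```

Injecting the sensors as plain functions keeps the loop testable on a laptop before it ever touches the robot hardware.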
The project enabled students to develop and use their creativity, critical thinking, and communication skills, as well as a sense of teamwork.
To learn more about AI and Social Robots, get in touch with me directly via LinkedIn or book a meeting via the following link – https://ite.ai/book-a-meeting
Students in India have unveiled a rechargeable, battery-powered robot that offers an alternative to human rickshaw pullers
Four students from Surat, India, made a robot that can walk like a human and even pull rickshaws, a passenger cart used for transportation.
It took them 25 days and thousands of dollars to build the robot. Student Maurya Shivam says the robot is powered by a battery and can be recharged.
“This is our prototype, which we have tested on the road. It is not completed yet; work still remains to be done on its legs, hands, head and face,” said Shivam. “We have tried to create it to walk the same way a normal human walks,” said another student.
Today, humanity around the world is waking up to the challenges and opportunities presented by #artificialintelligence.
Rather than give in to dystopian fears, we should remember that when we work together for our collective good, the future is always bright and full of possibilities.
After all .. as humans, we genuinely have the answers inside us .. we simply often need a helping hand or a friendly ear to listen to, to help dispel fear and instead embrace hope.
This is an opportunity for people of every profession to think about what lies ahead and the role humans will play versus machines.
AI truly is for everyone .. to make this possible, will no doubt require change .. in our minds, our education system, our culture, our governments, how we use technology to augment our society, and more .. for us and our children and future generations.
Machines are NOT our competitors or masters .. they do not have a conscience .. they are not sentient or ‘living’ .. nor do they possess a soul .. unlike humans who have adapted, endured, and thrived for centuries .. from one civilisation to the next.
– Salim Sheikh
If you are struggling to make sense of AI either for yourself, your team, or your business, get in touch.
Happy to meet for a coffee (real or virtual!) and help make sense of things ☕️😊
AI is increasingly being compared to the atomic bomb developed under Robert Oppenheimer (1904-1967), director of the ‘Manhattan Project’, the R&D programme that produced the first nuclear weapons during World War II.
Like the atomic bomb, there are fears of AI being misused, weaponised, and worse.
But perhaps it’s too late to turn back the clock .. too late to ‘pause’ development (for a measly 6 months!) .. as the ‘horse has bolted from the stable’ and disappeared over the horizon 👀💬
What is the common theme here?
We (humans) always have the capability to use anything we design, build, manufacture, produce, distribute, and sell ..
for GOOD 😇 or for BAD 😈
The real question is what will we choose? ⏳
Will we forfeit this choice to the billionaires and tech giants OR will we use our democratic voice to influence and shape our future – for the betterment of all of humanity, equitably, responsibly, and peacefully? 👀💬
Don’t give in to media hype, dystopian fears, or doomsayers who are encouraging the potential decline and end of humanity.
Rather .. if you can’t see good in the world .. be the one who shows others what ‘good’ looks like .. inspire others to follow, echo, and amplify ‘good’ and quash the ‘bad’.
Otherwise, we are at risk of becoming part of the ‘problem’ rather than the ‘solution’ 😱
Why are we so ready to accept machine and ‘artificial’ intelligence as an alternative to (and even a replacement for) humans?
Surely these are just machines.. albeit faster, more efficient, more productive, and so forth.
Why are we so quick to accept dystopian ideas of AI and Robots?
Why are we not planning for and reimagining a future world that is augmented by emerging technologies?
Why are HR professionals and CXOs not collectively brainstorming NOW on what jobs of tomorrow and the ‘day after next’ may look like .. and preparing roadmaps, career pathways, etc. to respond to fears of ‘AI and Robots’ taking over ?!?
If you are an HR professional and/or a CXO, reach out .. let’s work together to explore answers to important questions .. to protect people and their livelihoods.
Get in touch by DM’ing me or book time by clicking the link below
Keep calm .. don’t panic about AI .. and celebrate being human 🤩
In an op-ed for The New York Times, the highly respected linguist and professor Noam Chomsky said that although the current spate of AI chatbots such as OpenAI’s ChatGPT and Microsoft’s Bing AI “have been hailed as the first glimmers on the horizon of artificial general intelligence (AGI)” — the point at which AIs are able to think and act in ways superior to humans — we are absolutely not anywhere near that level yet 👀💬
There’s no way that machine learning as it is today could compete with the human mind.
While currently available AI chatbots may seem to mimic human creativity and ingenuity, they are doing so only based on statistical probability, and not as a result of the kind of deeper knowledge and understanding that is inherent in all human thought processes.
In fact, today’s ‘AIs’ are “stuck in a prehuman or nonhuman phase of cognitive evolution,” Chomsky argued.
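To make the “statistical probability” point concrete, here is a deliberately tiny toy model: a bigram predictor that chooses the next word purely from word-pair counts, with no understanding of meaning. (Real chatbots use vastly larger neural networks, but the predict-the-next-token principle Chomsky is criticising is the same; this toy is an illustration, not how any production system works.)

```python
# Toy bigram model: predict the next word from raw counts alone.
# Illustrates prediction-by-statistics with zero "understanding".
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across a list of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat": the most frequent follower
print(predict_next(model, "sat"))  # "on"
```

The model “knows” that “cat” usually follows “the” in its data, but it has no concept of what a cat is, which is exactly the gap Chomsky describes.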
So .. don’t believe the hype and those seeking to push a dystopian agenda. Instead, pay attention to what’s real and what is not.
If you’re concerned about AI and want to understand how you can leverage it in your personal and/or professional life .. reach out, and let’s discuss.
We live in a world today where vast amounts of information and knowledge are available .. shared amongst millions of people at the touch of a button or a voice prompt — assisted by #AI and by the naïveté of many people who seek ‘influencer’ status and followers .. like false prophets.
But what is the point of knowledge without understanding? 💬🤔
This is “inert knowledge” — information that a person knows but doesn’t fully understand, which means that they can only recognise, express, or use it in very limited ways.
Likewise, we’ve become a society that encourages “rote learning”, which by itself is an ineffective tool for mastering any complex subject at an advanced level, used simply to graduate from education into a workplace that serves systems which (eventually) enslave us.
If we are to truly benefit from the “Information Age”, we need to look at the future differently which includes how we use and leverage AI – which will certainly become more prevalent and embedded in every part of our lives.
If we continue to act as people who don’t invest in ‘understanding’ knowledge, we will miss our opportunity to create a better world for ourselves, our families, our children, and our communities 😱
Worse still, we will eventually allow the world to become a place where ignorance, incomprehension, insensitivity, misapprehension, misunderstanding, and worse, are behaviours that are normalised.
For the past few months, I’ve been quietly observing “normal” people around the world slowly waking up to the impact and (intended and “unintended”) consequences of Artificial Intelligence (AI) on society.
As I had anticipated, the general public is grossly unprepared for a “tidal wave of shocks”: we will be met with “tsunamis” of complaints, protests, industrial strikes, violence, arrests, indignation, and outrage as the world transitions from the “industrial” to the “information” age.
These “shocks” will be amplified by the rapid evolution of emerging technologies such as “AI”, quantum computing, the Internet of Things (IoT), the metaverse (enriched by progress with augmented, virtual, and mixed reality), and more – permeating every part of society regardless of race, creed, or culture.
Awareness & Education
I’ve been attempting to raise awareness about all of these things via articles, blogs, and books about “AI and Society” since 2020, with research stretching as far back as 2013 — a decade ago!
In 2021, when the world was gripped by the COVID-19 pandemic, I published an article on LinkedIn — available here — where I highlighted the following:
This crisis highlights something that has always been true about AI: it is a tool, and the value of its use in any situation is determined by the humans who design it and use it.
Ultimately, in the current crisis, human action and innovation will determine how far AI is leveraged – across all parts of society.
Land of Confusion
Returning to the present, I feel it is even more important to revisit my previous reflections and advice, especially since OpenAI’s release of ChatGPT, which has (from my perspective) sadly generated a lot of fear, confusion, and misinformation.
There are reports of “AI” being sentient (able to perceive or feel things) and that ChatGPT is “proof” that we are on the verge of “Artificial General Intelligence” (any AI system that exhibits human-level intelligence) and we are all doomed!
Why are we so quick to accept dystopian ideas of AI (and Robots) typically shaped by science fiction, our imagination, and fear (of extinction, of ..)?
Why are we so ready to accept machine and ‘artificial’ intelligence as an alternative to (and even a replacement for) humans?
Why are we allowing “digitalisation” to impact our natural human disposition for social interaction, relationships, and community?
Why do we expect “machines” to answer profound questions about “life”, “love”, “God”, the universe, and more? Are we ready to give up trusting other humans such as our own parents, grandparents, teachers, doctors, priests, and so forth?
Social Implications of AI
In Chapter 8 of my book, “Understanding the Role of Artificial Intelligence and Its Future Social Impact” (available here), I described the “Social Implications of AI” as “anything that affects an individual, a community or the wider society”.
I also predicted that “many social implications have been and will continue to be surprising”.
I outlined a series of questions which are even more relevant today, namely,
Will people simply become consumers served by intelligent systems that respond to our every whim?
Are we reaching a tipping point between convenience and dependency?
How will AI affect social issues relating to housing, finance, privacy, poverty and so on?
Do we want a society in which machines supplement or augment humans?
I stressed then and repeat the same message now – that it is important to be as clear as possible about the social implications to truly understand the benefits and risks.
Otherwise, we will be drowned in misinformation, and worse, which will only lead to more fear, powerlessness, and extreme mental health issues.
Impact of AI on Jobs
AI and Robots will not cause job losses. On the contrary, jobs will simply change. It is up to business leaders and professionals involved in job creation, talent management, learning & development, and recruitment, to ACT NOW, to REIMAGINE JOB ROLES, to CREATE NEW CAREER PATHWAYS, and DEVELOP STRATEGIES that embrace a future underpinned by “Humans + AI”.
This presumes that we accept AI will amplify and enhance our creativity with the benefit of significant improvement in our productivity and output in much shorter timescales. Thus, we will be “gifted” time which we don’t currently have to create better quality products, more innovative solutions, and diverse services which would have been impossible without AI augmenting our human potential.
Stop .. Pause .. Breathe!
All of the above assumes that we will continue operating an “industrial” model (underpinned by manufacturing, “factory”, and distribution processes) primarily focused on “producers” and “consumers”.
There must be a better way .. what happened to the promise of #newnormal?
What if we challenged ourselves to create a future that responds to the “call to action” by all countries — poor, rich and middle-income — to promote prosperity while protecting the planet and safeguarding a future for our children and grandchildren?
Wasn’t this at the core of the seventeen (17) Sustainable Development Goals (SDGs) supported unanimously by 193 heads of state and other top leaders?
No wonder the younger generation regularly protest about corruption amongst governments, politicians, and the ruling classes.
2030 Agenda for Sustainable Development
The SDGs recognise that ending poverty must go hand-in-hand with strategies that build economic growth and address a range of social needs including education, health, social protection, and job opportunities, while tackling climate change and environmental protection.
Isn’t it about time we recognise the emergence of a more multipolar international system, not beholden to American or Western dominated institutions, that serves as the backbone of a reimagined future built for the “information age”?
Celebrate Being Human
Rather than allow your mind to be drowned by fear of being replaced or displaced by “AI”, focus instead on your strengths, skills, and expertise gained when you started out in your career or vocation.
Take the time to learn about the “humanities” to help us learn from the past and gain crucial insights into the human behaviour that will shape the future.
Consider the implications of new scientific and technological developments: the effect a new cancer treatment might have on individuals, predicting how increases in digital communication will alter human interaction, or understanding what climate change might mean for where and how we live. For all of these, an understanding of behaviour, community and morality is vital.
However, the humanities and social sciences are first and foremost about understanding, questioning, fulfilment, culture and identity. They also infuse our economy and our public and cultural life.
Our innate human qualities and “inner self” inform our understanding of ourselves and our communities. As the COVID-19 pandemic showed, together we can tackle any challenge faced by society across borders, cultures, and generations.
In a recent “opinion” essay for The New York Times, the linguist and professor Noam Chomsky said that although the current spate of AI chatbots such as OpenAI’s ChatGPT and Microsoft’s Bing AI “have been hailed as the first glimmers on the horizon of artificial general intelligence (AGI)” — the point at which AIs are able to think and act in ways superior to humans — we are absolutely not anywhere near that level yet.
While currently available AI chatbots may seem to *mimic* human creativity and ingenuity, they are doing so only based on statistical probability, and not as a result of the kind of deeper knowledge and understanding that is inherent in all human thought processes.
There’s no way that machine learning as it is today could compete with the human mind. In fact, today’s ‘AIs’ are “stuck in a prehuman or nonhuman phase of cognitive evolution,” Chomsky argued.
I share this viewpoint and encourage readers of this article to stop supporting the dystopian agenda surrounding AI.
Instead, pay attention to what’s real and what is not.
Connect & Follow Me
If you’re concerned about AI and want to discuss this topic further, get in touch either via #DMs or book a meeting by clicking this link.
Please follow me and subscribe for future articles relating to “AI and Society”.
Most importantly, be the change you want to see in the world. Together, we can shape a better, more equitable, and sustainable future – for EVERYONE!
Until next time. Stay Healthy – mentally, physically, spiritually.