Elon Musk, the famous tech entrepreneur, has said that "AI scares the hell out of me," calling it our biggest existential threat. The renowned scientist Stephen Hawking warned in 2014 that the development of full artificial intelligence could spell the end of the human race.
Someone reading this might be wondering: wait, what? These concerns, however, are not without grounds. Many prominent voices in and around AI, including Hawking and Musk, have worried that the reckless deployment of very advanced AI systems could irreversibly sever human civilization from a promising future.
The discussion around AI is rife with confusion, misinformation, and people talking past one another, largely because "AI" refers to many different things. So here we will examine the bigger picture of why AI might pose a catastrophic threat.
Why Research AI Safety?
Some people have doubts that powerful AI can ever be produced, while others claim that developing such systems could only benefit society. The tech experts at Aspired acknowledge both of these possibilities as well as the potential for an AI system to intentionally or inadvertently cause significant harm. Future-proofing our ability to reap the rewards of AI without suffering its drawbacks requires current-day research that can help us anticipate and avert such unintended consequences.
AI Experts Fear Their Creation
Ilya Sutskever, a cofounder of OpenAI, said, "If you want to get the best results on many hard problems, you must use deep learning." He noted that "they are scalable."
In this context, the meaning of "scalable" is simple and significant: put more compute and data into your neural network (increase its size, train it longer, and feed it more data) and it will perform better and better.
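To make that idea concrete, here is a minimal Python sketch of how this "bigger is better" behavior is often described: a toy power-law curve where loss keeps falling as training compute grows. The function name, constant, and exponent below are made-up illustrative assumptions, not figures from any real model or published scaling law.

```python
# A minimal sketch of the "more scale, better performance" idea, assuming a
# toy power-law relationship between training compute and model loss.
# The constant and exponent are arbitrary illustrative values.

def toy_loss(compute: float, constant: float = 100.0, exponent: float = 0.05) -> float:
    """Hypothetical loss that falls as training compute grows (arbitrary units)."""
    return constant * compute ** -exponent

if __name__ == "__main__":
    for budget in (1e3, 1e6, 1e9, 1e12):
        # More compute -> lower loss, with steady (if diminishing) returns.
        print(f"compute={budget:.0e} units -> loss={toy_loss(budget):.2f}")
```

The point of the sketch is only that, under an assumption like this, every extra order of magnitude of investment keeps buying measurable improvement, which is exactly the incentive driving ever-larger training runs.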
Large tech corporations today routinely conduct eye-popping, multimillion-dollar training runs for their systems. The more you invest, the more you get in return. This is what fuels the frantic energy that currently surrounds so much of AI: it's not just about what these systems can do today, but about where they're heading.
When it comes to text generation, GPT-4 routinely improves on GPT-2 and GPT-3, and it has been trained to provide answers that are more useful to people. There have been shrewd discoveries and novel methods along the way, but generally speaking, the main thing we've done to make these systems smarter is to make them bigger.
As these systems grow, it becomes harder to understand their behavior and to ensure that AI models work toward our aims rather than their own. And as the techniques become more advanced, this challenge will evolve from an intellectual conundrum into a profound existential worry.
A Work of Fiction is Shaping Our Reality
An unnecessary tragedy may occur if we allow fiction to determine our reality. But what options do we have when we cannot distinguish between genuine and fake in the digital world?
Imagine a nightmare scenario: deepfakes (fake images, video, audio, and text generated using advanced machine-learning tools) could one day force national-security decision-makers to take real-world action based on false information, resulting in a major crisis or perhaps even a war. AI-enabled systems can now generate disinformation at massive scale.
A Skinner Box with Human Subjects
Users of social media sites are like lab rats trapped in human Skinner boxes, faces jammed against their phones, compelled to give ever more of their limited attention to platforms that make money off of it. As Malcolm Murdock puts it, when they dispense likes, comments, and follows, "the algorithms short-circuit the way our brain works, making our next bit of participation appealing." As a result, the more time we spend on these platforms, the less time we spend pursuing positive, productive, and satisfying lives.
The Demise of Free Will and Privacy
Andrew Lohn, a senior scholar at Georgetown University's Center for Security and Emerging Technologies (CSET), stated,
“We are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.”
As data is collected and analyzed, its usefulness goes beyond mere monitoring and surveillance to precise command and management. AI-enabled systems today predict the things we buy, the shows we watch, and the websites we click on. And when these platforms know us better than we know ourselves, it becomes easy to miss the subtle ways in which we gradually cede control of our lives to external forces.
What's the worst that could happen?
What distinguishes AI from other emerging technologies, such as biotechnology, which might cause terrible pandemics, or nuclear weapons, which could destroy the world?
The difference is that those potentially devastating technologies remain largely under our control. If they cause a calamity, it will be because we deliberately used them or failed to prevent their misuse by malicious or irresponsible people. AI is frightening precisely because there may come a time when it is no longer under our control at all.
The issue is that AI advances have come incredibly rapidly, leaving regulators behind the eight ball. The restriction that might be most valuable, slowing the creation of enormously powerful new systems, would be highly unpopular with Big Tech, and it is unclear what the best alternative regulations would be.
IN A RECENT SURVEY, ABOUT HALF OF AI RESEARCHERS ESTIMATED THAT THERE IS A 10% CHANCE THAT THEIR WORK WILL LEAD TO THE END OF HUMANITY.
If nearly half of researchers believe there is a 10% possibility that their work could result in the extinction of the human race, why is the field practically unregulated?
A technology corporation cannot independently develop nuclear weapons. Yet private corporations are building systems that they themselves admit will likely become far more lethal than nuclear bombs.
At the same time, many in Washington worry that slowing US AI progress could allow China to get there first. This Cold War mentality is not unjustified: China is pursuing robust AI systems, and its leadership is actively engaged in human rights abuses. But it puts us at grave risk of rushing systems into production that pursue their own goals without our knowledge.
What are we doing to prevent an AI apocalypse?
There is no government regulation of artificial general intelligence. Technical work on promising approaches is underway, but there is far too little policy preparation, international collaboration, or public-private partnership. In reality, most of the effort comes from a handful of organizations, and recent research estimates that only around 400 people worldwide work full-time on technical AI safety.
Only some firms with large AI divisions have a safety team at all, and some of those teams focus mainly on algorithmic fairness rather than on the threats posed by advanced systems.
No one has delved deeply into the many unanswered questions, some of which could make AI look significantly more, or less, frightening.
AI’s Complex Controversies
Finally, the debates surrounding the evolution of artificial intelligence are both fascinating and challenging. It is critical to focus on the genuine, thought-provoking conversations taking place among scientists and society at large. What sort of future do we want to build? How do we address worries about lethal uses of AI? What does it mean to be human in the age of AI, and how can we ensure that our values and ethics are reflected in that future? These and other questions demand serious analysis and open communication among stakeholders.
We know that the future is not predetermined and that we can shape it according to our values and aspirations. Joining the discourse and cooperating toward a better future is therefore critical if we are to realize the full promise of artificial intelligence while limiting its hazards and ensuring its ethical use.