AI expert’s dire warning: Uncontrollable robot intelligence could wipe out humans

‘The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.’

LOUISVILLE, Ky. — Uncontrollable artificial intelligence may lead to the extinction of the human race. That’s the warning from one top researcher, whose new review finds a stark absence of evidence that AI, particularly in its most advanced forms, can be controlled safely.

Dr. Roman V. Yampolskiy, an AI safety expert and an associate professor of computer science and engineering at the University of Louisville, argues that the challenge of ensuring AI safety is not only paramount but also largely overlooked and under-researched.

“We are facing an almost guaranteed event with potential to cause an existential catastrophe. No wonder many consider this to be the most important problem humanity has ever faced,” says Dr. Yampolskiy in a media release. “The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

The Uncontrollable Superintelligence Dilemma

Superintelligence refers to a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. Dr. Yampolskiy points out a troubling gap in the AI field: the lack of proof that such a powerful form of intelligence can be controlled. Despite advancements in AI, the ability to fully understand, predict, and manage these systems remains elusive.

The crux of the problem, as Dr. Yampolskiy sees it, is the inherent complexity and unpredictability of AI systems. These systems can learn, adapt, and make decisions in ways that humans cannot always foresee or understand. This makes the task of ensuring their safety infinitely complex and potentially unachievable.

“Why do so many researchers assume that AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable,” explains Dr. Yampolskiy.

(Image: Ameca AI robot. “Ameca Generation 1” by Willy Jackson is licensed under CC BY-SA 4.0.)

“This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, show we should be supporting a significant AI safety effort.”

One of the most significant hurdles to making AI safe is the technology’s capacity for autonomous decision-making and learning. As AI systems become more advanced, they encounter an effectively endless array of possible decisions and failure modes, far more than humans can anticipate or test for in advance.

“If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,” notes Dr. Yampolskiy.

Moreover, the opacity of AI decision-making processes adds another layer of complexity. Many AI systems operate as “black boxes,” making decisions that humans cannot readily interpret or understand. This opacity not only challenges our ability to ensure AI systems are bias-free but also raises concerns about blindly trusting AI recommendations without comprehending their basis.

Call for Caution and Control

Dr. Yampolskiy advocates for a cautious approach to AI development, suggesting that the pursuit of advanced AI should be contingent upon demonstrating its controllability. He calls for a significant ramp-up in AI safety research, emphasizing the need to balance the potential benefits of AI with the risks it poses.

“Humanity is facing a choice: do we become like babies, taken care of but not in control, or do we reject having a helpful guardian but remain in charge and free?” says Dr. Yampolskiy.

The idea of finding a middle ground, where some degree of AI capability is sacrificed for greater control and safety, is posited as a potential solution. Yet, this approach also entails challenges, particularly in aligning AI actions with human values without imposing biases that could skew its decision-making.

(Image: Artificial intelligence and a robot solving an equation. Photo by Phonlamai Photo on Shutterstock.)

To mitigate the risks associated with AI, Dr. Yampolskiy suggests several strategies, including making AI systems modifiable, transparent, and understandable in human terms. He also advocates for classifying AI technologies based on their controllability and exploring the possibility of moratoriums or bans on certain AI applications.

“Most AI safety researchers are looking for a way to align future superintelligence to values of humanity. Value-aligned AI will be biased by definition, pro-human bias, good or bad is still a bias,” says Dr. Yampolskiy. “The paradox of value-aligned AI is that a person explicitly ordering an AI system to do something may get a ‘no’ while the system tries to do what the person actually wants. Humanity is either protected or respected, but not both.”

While Dr. Yampolskiy acknowledges that achieving 100 percent safe AI may be an unattainable goal, he stresses the importance of concerted efforts to enhance AI safety. The message is clear: the development of AI must proceed with caution, with a prioritized focus on safety and security research. Only through rigorous examination and responsible innovation can humanity hope to harness the benefits of AI while avoiding the perils of an uncontrollable digital future.

Dr. Yampolskiy’s book, “AI: Unexplainable, Unpredictable, Uncontrollable,” is published by the Taylor & Francis Group.



About the Author

StudyFinds Staff


