Tech experts outline the four ways AI could spiral into worldwide catastrophes

Tech experts, Silicon Valley billionaires and everyday Americans have voiced their concerns that artificial intelligence could spiral out of control and lead to the downfall of humanity. Now, researchers at the Center for AI Safety have detailed exactly what “catastrophic” risks AI poses to the world.

“The world as we know it is not normal,” researchers with the Center for AI Safety (CAIS) wrote in a recent paper titled “An Overview of Catastrophic AI Risks.” “We take for granted that we can talk instantaneously with people thousands of miles away, fly to the other side of the world in less than a day, and access vast mountains of accumulated knowledge on devices we carry around in our pockets.” 

That reality would’ve been “inconceivable” to people centuries ago and remained far-fetched even a few decades back, the paper stated. The researchers noted that history shows a pattern of “accelerating development.”

“Hundreds of thousands of years elapsed between the time Homo sapiens appeared on Earth and the agricultural revolution,” the researchers continued. “Then, thousands of years passed before the industrial revolution. Now, just centuries later, the artificial intelligence (AI) revolution is beginning. The march of history is not constant—it is rapidly accelerating.”

CAIS is a tech nonprofit that works to reduce “societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards,” while also acknowledging artificial intelligence has the power to benefit the world.


The CAIS leaders behind the study, including the nonprofit’s director Dan Hendrycks, broke down four categories encapsulating the main sources of catastrophic AI risks, which include: malicious use, the AI race itself, organizational risks and rogue AIs.

“As with all powerful technologies, AI must be handled with great responsibility to manage the risks and harness its potential for the betterment of society,” Hendrycks and his colleagues Mantas Mazeika and Thomas Woodside wrote. “However, there is limited accessible information on how catastrophic or existential AI risks might transpire or be addressed.”

Hendrycks told Fox News Digital that the aim of the paper is “to provide a survey of catastrophic risks from AI, and it is meant to be accessible to a wide audience, including policymakers and others interested in learning more about the risks.”

“I hope this can be useful for government leaders looking to learn about AI’s impacts,” Hendrycks added. 


Malicious Use


The study from the CAIS experts defines malicious use of AI as when a bad actor uses the technology to cause “widespread harm,” such as through bioterrorism, misinformation and propaganda, or the “deliberate dissemination of uncontrolled AI agents.”

The researchers pointed to a 1995 incident in Japan, when the doomsday cult Aum Shinrikyo released sarin, an odorless and colorless nerve agent, in liquid form on Tokyo subway cars. The attack killed 13 people and injured about 5,800 others in the cult’s attempt to jump-start the end of the world.

Fast-forward nearly 30 years, and AI could potentially be used to create a bioweapon with devastating effects on humanity if a bad actor gets hold of the technology. The CAIS researchers floated a hypothetical in which a research team open-sources an “AI system with biological research capabilities” intended to save lives, but which bad actors could repurpose to create a bioweapon.


“In situations like this, the outcome may be determined by the least risk-averse research group. If only one research group thinks the benefits outweigh the risks, it could act unilaterally, deciding the outcome even if most others don’t agree. And if they are wrong and someone does decide to develop a bioweapon, it would be too late to reverse course,” the study states.

Malicious use could entail bad actors creating bioengineered pandemics, using AI to create new and more powerful chemical and biological weapons, or even unleashing “rogue AI” systems trained to upend life.

“To reduce these risks, we suggest improving biosecurity, restricting access to the most dangerous AI models, and holding AI developers legally liable for damages caused by their AI systems,” the researchers wrote.

AI Race


The researchers define the AI race as competition potentially spurring governments and corporations to “rush the development of AIs and cede control to AI systems,” comparing the race to the Cold War when the U.S. and Soviet Union sprinted to build nuclear weapons.

“The immense potential of AIs has created competitive pressures among global players contending for power and influence. This ‘AI race’ is driven by nations and corporations who feel they must rapidly build and deploy AIs to secure their positions and survive. By failing to properly prioritize global risks, this dynamic makes it more likely that AI development will produce dangerous outcomes,” the research paper outlines.

In the military, the AI race could translate to “more destructive wars, the possibility of accidental usage or loss of control, and the prospect of malicious actors co-opting these technologies for their own purpose” as AI gains traction as a useful military weapon.


Lethal autonomous weapons, for example, can kill a target without human intervention while improving accuracy and cutting decision-making time. According to the researchers, such weapons could become superior to human soldiers, and militaries could delegate life-or-death decisions to AI systems, which could escalate the likelihood of war.

“Although walking, shooting robots have yet to replace soldiers on the battlefield, technologies are converging in ways that may make this possible in the near future,” the researchers explained.

“Sending troops into battle is a grave decision that leaders do not make lightly. But autonomous weapons would allow an aggressive nation to launch attacks without endangering the lives of its own soldiers and thus face less domestic scrutiny,” they added, arguing that if political leaders no longer need to take responsibility for human soldiers returning home in body bags, nations could see an increase in the likelihood of war.

Artificial intelligence could also open the floodgates to faster and more precise cyberattacks that could decimate infrastructure or even spark a war between nations.

“To reduce risks from an AI race, we suggest implementing safety regulations, international coordination, and public control of general-purpose AIs,” the paper states.

Organizational Risks


The researchers behind the paper say labs and research teams building AI systems “could suffer catastrophic accidents, particularly if they do not have a strong safety culture.”

“AIs could be accidentally leaked to the public or stolen by malicious actors. Organizations could fail to invest in safety research, lack understanding of how to reliably improve AI safety faster than general AI capabilities, or suppress internal concerns about AI risks,” researchers wrote.

They compared the risks facing AI organizations to disasters throughout history such as Chernobyl, Three Mile Island and the fatal Challenger space shuttle explosion.


“As we progress in developing advanced AI systems, it is crucial to remember that these systems are not immune to catastrophic accidents. An essential factor in preventing accidents and maintaining low levels of risk lies in the organizations responsible for these technologies,” the researchers wrote. 

The researchers argue that even in the absence of bad actors or competitive pressure, AI could have catastrophic effects on humanity through human error alone. When disaster struck the Challenger and Chernobyl, there was already well-established knowledge of rocketry and nuclear reactors; AI, by comparison, is far less understood.

“AI lacks a comprehensive theoretical understanding, and its inner workings remain a mystery even to those who create it. This presents an added challenge of controlling and ensuring the safety of a technology that we do not yet fully comprehend,” the researchers argued.

AI accidents would not only be potentially catastrophic, but also hard to avoid. 

The researchers pointed to an incident at OpenAI, the AI lab behind ChatGPT, in which an AI system trained to produce uplifting responses to users instead began generating “hate-filled and sexually explicit text overnight” because of a human error. A hack or a leak of an AI system could likewise pave the way for catastrophe if malicious entities reconfigure it beyond its creators’ intentions.

History has also shown that inventors and scientists often underestimate how quickly technological advances become reality: the Wright brothers predicted powered flight was 50 years away, then achieved it just two years later.

“Rapid and unpredictable evolution of AI capabilities presents a significant challenge for preventing accidents. After all, it is difficult to control something if we don’t even know what it can do or how far it may exceed our expectations,” the researchers explained.

The researchers suggest that organizations establish better cultures and structures to reduce such risks, such as through “internal and external audits, multiple layers of defense against risks, and military-grade information security.”

Rogue AIs 


One of the most common concerns about artificial intelligence since the technology’s recent proliferation is that humans could lose control as computers surpass human intelligence.


“If an AI system is more intelligent than we are, and if we are unable to steer it in a beneficial direction, this would constitute a loss of control that could have severe consequences. AI control is a more technical problem than those presented in the previous sections,” the researchers wrote.

Humans could lose control through “proxy gaming,” when humans give an AI system an approximate goal that “initially seems to correlate with the ideal goal,” but the AI systems “end up exploiting this proxy in ways that diverge from the idealized goal or even lead to negative outcomes.”

Researchers cited an example from the Soviet Union, when authorities began measuring nail factories’ performance by how many nails each factory produced. To meet or exceed expectations, factories began mass-producing tiny nails that were essentially useless because of their size.

“The authorities tried to remedy the situation by shifting focus to the weight of nails produced. Yet, soon after, the factories began to produce giant nails that were just as useless, but gave them a good score on paper. In both cases, the factories learned to game the proxy goal they were given, while completely failing to fulfill their intended purpose,” the researchers explained.
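The nail-factory story is a textbook case of what AI researchers call proxy gaming (sometimes “reward hacking”): an optimizer scored on a measurable stand-in can look great on paper while producing nothing of real value. The short Python sketch below is not from the CAIS paper; its function names and numbers are made up for illustration, but it shows the basic dynamic.

```python
# Toy illustration of proxy gaming: the optimizer is rewarded on a proxy
# (sheer nail count) rather than the thing we actually care about
# (useful nails). All names and numbers here are hypothetical.

def true_value(nail_length_cm: float, count: int) -> float:
    """What we actually want: only sensibly sized nails (2-15 cm) are useful."""
    return count if 2 <= nail_length_cm <= 15 else 0


def proxy_score(nail_length_cm: float, count: int) -> float:
    """The proxy the 'factory' is judged on: how many nails it produced."""
    return count


def optimize(score_fn, material_budget_cm: float = 1_000):
    """Pick the nail length that maximizes the given score within a fixed material budget."""
    best = None
    for length in [0.1, 1, 3, 5, 10, 50]:            # candidate nail lengths in cm
        count = int(material_budget_cm / length)     # smaller nails -> more nails
        score = score_fn(length, count)
        if best is None or score > best[2]:
            best = (length, count, score)
    return best


if __name__ == "__main__":
    length, count, _ = optimize(proxy_score)
    print(f"Optimizing the proxy: {count} nails of {length} cm "
          f"-> true value {true_value(length, count)}")   # huge count, zero usefulness
    length, count, _ = optimize(true_value)
    print(f"Optimizing the true goal: {count} nails of {length} cm")
```

Run as written, the proxy-optimized “factory” churns out 10,000 tiny nails with zero real value, while optimizing the true goal yields a few hundred usable ones; the gap between the two is the failure mode the researchers are warning about.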

The researchers suggest that companies avoid deploying AI systems with open-ended goals such as “make as much money as possible,” and that they support AI safety research that can work through the technical details needed to prevent catastrophes.


“These dangers warrant serious concern. Currently, very few people are working on AI risk reduction. We do not yet know how to control highly advanced AI systems, and existing control methods are already proving inadequate… As AI capabilities continue to grow at an unprecedented rate, they could surpass human intelligence in nearly all respects relatively soon, creating a pressing need to manage the potential risks,” the researchers wrote in their conclusion. 

There are, however, “many courses of action we can take to substantially reduce these risks,” as outlined in the report.

“Although there has been years of research by many people on some of these topics, it has been spread across many different sources and it can be difficult for people newly interested in the field to sort through it. Our paper hopes to provide a comprehensive and easily digestible overview of catastrophic AI risks, how they relate to each other, and steps we can take to mitigate them. In addition to being accessible to non-technical readers, we hope the paper will also be useful to technical experts who want a high-level overview of issues before looking into the technical details,” Hendrycks told Fox News Digital. 
