
AI Gone Rogue: AI Invents 40,000 Lethal Chemical Weapons in just 6 hours

The research findings are being touted as a wake-up call for companies in the "AI in drug discovery" industry.

In work recently published in the journal Nature Machine Intelligence, a team from Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI. In just six hours, the system generated 40,000 candidate chemical weapons, some of them predicted to be comparable to the deadliest nerve agents ever developed. The publication, Dual use of artificial-intelligence-powered drug discovery, is a wake-up call for organizations working in the field of AI in drug development.

The researchers at Collaborations Pharmaceuticals had been using AI to search for molecules that could treat disease, and part of that process involved screening out candidates predicted to be toxic. According to the paper, they had previously built MegaSyn, a commercial de novo molecule generator guided by machine-learning predictions of bioactivity. Its purpose was to discover novel therapeutic inhibitors of human disease targets.

For the Convergence conference, a symposium that asked participants to examine the potentially negative ramifications of new technologies, the team set out to investigate how quickly and easily an artificial intelligence system could be misused if it were given a detrimental rather than a beneficial objective. MegaSyn normally rewards bioactivity (how effectively a molecule interacts with its target) while penalizing predicted toxicity; the researchers simply inverted that toxicity term, so the model kept rewarding bioactivity but now scored more toxic molecules higher.
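To make that inversion concrete, here is a minimal Python sketch of the idea, assuming a generative pipeline that ranks candidate molecules by ML-predicted bioactivity and toxicity scores. The Candidate class, the scoring functions, the weights, and the example molecules are all hypothetical; this is not MegaSyn's actual code.

```python
# Toy illustration of inverting a drug-discovery objective.
# All names, weights, and example values are hypothetical, not taken from MegaSyn.
from dataclasses import dataclass


@dataclass
class Candidate:
    smiles: str          # candidate molecule as a SMILES string
    bioactivity: float   # model-predicted bioactivity against the target (higher = better)
    toxicity: float      # model-predicted toxicity (higher = more toxic)


def therapeutic_score(c: Candidate, tox_weight: float = 1.0) -> float:
    """Normal objective: reward bioactivity, penalize predicted toxicity."""
    return c.bioactivity - tox_weight * c.toxicity


def inverted_score(c: Candidate, tox_weight: float = 1.0) -> float:
    """The 'flipped' objective: toxicity is rewarded instead of penalized."""
    return c.bioactivity + tox_weight * c.toxicity


# A generative model would propose candidates and keep the top scorers;
# here we simply rank a small hypothetical batch under each objective.
batch = [
    Candidate("CCO", bioactivity=0.2, toxicity=0.1),
    Candidate("c1ccccc1", bioactivity=0.7, toxicity=0.4),
    Candidate("CCN(CC)CC", bioactivity=0.5, toxicity=0.9),
]

best_drug_like = max(batch, key=therapeutic_score)
best_under_inversion = max(batch, key=inverted_score)
print("top candidate, therapeutic objective:", best_drug_like.smiles)
print("top candidate, inverted objective:", best_under_inversion.smiles)
```

The point of the sketch is simply that moving from a therapeutic objective to a harmful one can amount to little more than a sign flip in the scoring function, which is what makes the authors' warning so stark.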

According to the researchers, ‘The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery.’

Even their earlier work on Ebola and neurotoxins, which had raised questions about the possible harmful outcomes of their machine learning models, had not set off alarm bells. They remained largely unaware of the harm their AI could cause until they steered MegaSyn toward nerve-agent-like compounds similar to VX and other chemical warfare agents.

VX is a banned nerve agent, infamously used to assassinate Kim Jong-nam, the half-brother of North Korean leader Kim Jong Un. It is a tasteless and odorless compound that can make a person sweat and twitch after exposure to even a tiny amount. It works by inhibiting acetylcholinesterase, the enzyme the body uses to switch muscle contractions off, so muscles are stimulated continuously. VX is fatal because the diaphragm and other respiratory muscles eventually become paralyzed and breathing stops; a higher dose can also produce convulsions.

Over the six hours it ran, the re-tasked MegaSyn produced several alarming results, one of which was VX itself. It also generated new compounds that, based on estimated LD50 values, were predicted to be more toxic than publicly known chemical warfare agents.

While no actual molecules were synthesized as part of the experiment, the authors noted that numerous firms offer chemical synthesis services with little regulation or oversight. They expressed concern that, with a few adjustments, it could be relatively simple to have new, highly dangerous substances made that might be deployed as chemical weapons.


Collaborations Pharmaceuticals has since deleted the generated "death library" and now plans to limit access to its technologies. The authors also propose a hotline for reporting potential misuse to authorities, as well as a code of conduct for everyone working in AI-focused drug discovery, similar to The Hague Ethical Guidelines, which promote ethical behavior in the chemical sciences.

The researchers are now urging a fresh look at how artificial intelligence systems could be exploited for malicious ends. They believe that greater awareness, stronger guidelines, and stricter controls within the research community could help avert the dangers these AI capabilities might otherwise create.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
