
Researchers Find a Method to Introduce Malware into Neural Networks

A new study shows that malware can be hidden inside neural networks and go undetected

A team of researchers has recently demonstrated how neural networks can be embedded with malware that goes undetected.

In the paper, titled "EvilModel: Hiding Malware Inside of Neural Network Models" and posted on the arXiv preprint server last Monday, the team shows that malware hidden in a neural network can dupe automated detection tools. The malware is embedded directly into the artificial neurons in a way that has little or no impact on the network's performance while evading detection.

The team, led by researchers Zhi Wang, Chaoge Liu, and Xiang Cui, notes that as neural networks come into wider use, this method could become a popular vector for launching malware attacks.

Instead of following hand-coded rules, a neural network lets a computer learn by loosely emulating the neural structure of the brain. Neural networks are a subfield of machine learning, the branch of computer science concerned with fitting complex models to data. A network's artificial neurons are implemented in software running on ordinary digital computers. Each neuron takes numerous inputs and produces a single output: it computes a weighted sum of its inputs, passes that sum through a nonlinear function called an activation function, and sends the result on as an input to a number of other neurons.
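As a toy illustration of that computation, a single artificial neuron can be written in a few lines of Python; the inputs, weights, and sigmoid activation below are illustrative choices, not details from the paper.

```python
import numpy as np

def sigmoid(z):
    """A common choice of nonlinear activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then activation."""
    weighted_sum = np.dot(weights, inputs) + bias
    return sigmoid(weighted_sum)

# Illustrative values only
x = np.array([0.5, -1.2, 3.0])        # inputs arriving from upstream neurons
w = np.array([0.4, 0.7, -0.2])        # learned weights, one per input
print(neuron_output(x, w, bias=0.1))  # single output, fed to other neurons
```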

A typical neural network has three kinds of layers (a minimal sketch follows the list):

  1. Input layer: receives the data fed to the network
  2. Hidden layers: where the weighted computations take place
  3. Output layer: produces the results once the network is trained
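Here is that three-layer structure written in PyTorch, with arbitrary placeholder sizes (nothing below is taken from the study):

```python
import torch
import torch.nn as nn

# Input layer -> hidden layer -> output layer; sizes are placeholders.
model = nn.Sequential(
    nn.Linear(784, 128),  # weights connecting the input layer to the hidden layer
    nn.ReLU(),            # nonlinear activation inside the hidden layer
    nn.Linear(128, 10),   # weights connecting the hidden layer to the output layer
)

x = torch.randn(1, 784)  # one fake input sample
print(model(x).shape)    # torch.Size([1, 10]) -- the output layer's result
```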


The researchers found that, by keeping the structure of the hidden layers intact during the embedding process, changing some neurons has minimal effect on performance. In their experiment, the team replaced around 50% of the neurons in the AlexNet model and still kept its accuracy above 93.1%. According to the authors, a 178MB AlexNet model can carry up to 36.9MB of malware hidden in its structure at the cost of less than 1% accuracy. They also report that the embedded malware evaded all 58 antivirus engines tested on VirusTotal, confirming the feasibility of the method. This is an alarming application of steganography, the practice of concealing one message (here, malware) inside another.
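The paper's exact embedding procedure is more elaborate, but the core trick of weight steganography can be sketched roughly: overwriting the least-significant byte of a float32 parameter perturbs its value negligibly. The payload, tensor, and one-byte-per-parameter scheme below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def embed_bytes(weights, payload):
    """Hide payload bytes in the least-significant byte of float32 weights.

    Illustrative sketch: one payload byte per parameter. On little-endian
    machines, byte 0 of each 4-byte float is the lowest mantissa byte,
    so overwriting it barely changes the value.
    """
    assert len(payload) <= weights.size, "payload too large for this tensor"
    buf = bytearray(weights.astype(np.float32).tobytes())
    for i, b in enumerate(payload):
        buf[i * 4] = b
    return np.frombuffer(bytes(buf), dtype=np.float32).reshape(weights.shape)

w = np.random.randn(1000).astype(np.float32)  # stand-in for a weight tensor
secret = b"stand-in payload"                  # stand-in for malware bytes
w_stego = embed_bytes(w, secret)
print(np.max(np.abs(w - w_stego)))            # perturbation is tiny
```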

To carry out the attack, attackers first need a neural network to host the malware. They can design their own network, adding extra layers to make room for a larger payload, and train it on a suitable dataset to obtain a well-performing model; alternatively, they can use an existing model if it is suitable and well trained.

After training, the attacker selects the layer best suited for embedding and writes the malware into it, then assesses the model's performance to confirm that the accuracy loss is acceptable. If the loss exceeds the target value, the attacker retrains the model until it delivers the desired results.
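That accept-or-retrain decision amounts to a simple loop. In the hypothetical sketch below, embed(), evaluate(), and retrain() are placeholder callables standing in for an attacker's tooling, and the loss budget echoes the roughly 1% figure quoted earlier.

```python
def embed_within_budget(model, payload, baseline_acc,
                        embed, evaluate, retrain,
                        max_loss=0.01, max_rounds=5):
    """Embed the payload, retraining until the accuracy loss is acceptable.

    embed/evaluate/retrain are hypothetical callables; max_loss mirrors
    the ~1% accuracy-loss figure quoted above.
    """
    for _ in range(max_rounds):
        stego_model = embed(model, payload)
        if baseline_acc - evaluate(stego_model) <= max_loss:
            return stego_model      # loss is within the target value
        model = retrain(model)      # otherwise retrain and try again
    raise RuntimeError("could not stay within the accuracy-loss budget")
```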

While a larger neural network offers more room to hide malware, on the brighter side, the embedded malware cannot execute on its own. To run it, a separate piece of malicious software must extract the payload from the poisoned model and reassemble it into working form.
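Extraction would simply invert the hypothetical embed_bytes() sketch shown earlier: a separate loader reads the low-order bytes back out and reassembles them, along these lines.

```python
import numpy as np

def extract_bytes(weights, payload_len):
    """Recover payload bytes hidden by the illustrative embed_bytes() above."""
    buf = weights.astype(np.float32).tobytes()
    return bytes(buf[i * 4] for i in range(payload_len))

# Continuing the earlier example:
# assert extract_bytes(w_stego, len(secret)) == secret
```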

According to the study, the harm can still be prevented if the target device validates the model before loading it. Traditional approaches such as static and dynamic analysis can also be used to detect the threat.
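One simple form of such validation is comparing a model file's cryptographic hash against a trusted value before loading it. A minimal sketch, with a placeholder file name and digest:

```python
import hashlib

def verify_model(path, expected_sha256):
    """Return True only if the model file's SHA-256 matches a trusted digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Placeholders for illustration:
# if not verify_model("alexnet.pth", "<trusted sha256 hex digest>"):
#     raise RuntimeError("model failed integrity check; refusing to load")
```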


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
