
Attackers Use Artificial Intelligence-Generated Deepfakes for Phishing Campaigns

Cyber attackers use offensive artificial intelligence for impersonation, such as deepfakes, to run phishing campaigns. They also reverse engineer models to steal proprietary algorithms and amplify attacks to scale up their campaigns.

A study conducted by Microsoft, Purdue University, and Ben-Gurion University explored the threat of offensive artificial intelligence to organizations. The study identified the capabilities adversaries can use to strengthen cyberattacks, ranked each by severity, and provided insight into the adversaries behind them. Among these capabilities, artificial intelligence is used to poison machine learning models by corrupting their training data and to steal credentials through side-channel analysis.
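The study itself is conceptual, but a minimal sketch helps make the poisoning threat concrete. The snippet below is a hypothetical illustration using scikit-learn, not code from the study: it flips a fraction of training labels, one simple form of training-data corruption, and compares the poisoned model's accuracy against a clean baseline.

```python
# Minimal sketch of training-data poisoning via label flipping.
# Illustrative only; not from the cited study. Assumes scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips labels on 30% of the training set.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Even this crude attack measurably degrades the model; more targeted poisoning, of the kind the study warns about, can do so while being much harder to spot.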

The top artificial intelligence threat is the use of deepfakes for phishing campaigns. Because AI can automate processes, adversaries may shift from slow, covert campaigns to fast-paced ones that overwhelm defenders and increase the attacker's chances of success. According to research conducted by Attestiv, as bots gain the ability to make more realistic deepfakes, phishing campaigns will become more widespread. Over the next few years, offensive artificial intelligence is expected to grow in the areas of data collection, training, model development, and evaluation.

Read more: Facebook’s Artificial Intelligence Can Now Detect Deepfake

The researchers wrote, "With AI's rapid pace of development and open accessibility, we expect to see a noticeable shift in attack strategies on organizations." They also noted that artificial intelligence will enable adversaries to target more organizations, more frequently and in parallel, overwhelming defenders' response teams with thousands of attempts for the chance of a single success. As adversaries begin to use AI-enabled bots, defenders will be forced to field automated bots of their own.

The researchers suggested several ways to mitigate these threats: developing post-processing tools that can secure software from analysis after development, integrating security testing, and protecting and monitoring models with MLOps so that organizations can more easily maintain secure systems, as sketched below.
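The paper recommends MLOps monitoring in general terms rather than any specific tool. As one hedged illustration (the function and threshold here are ours, not the researchers'), the sketch below flags input features whose live distribution has drifted from the training-time baseline, the kind of check a model-monitoring pipeline might run to catch poisoning or abuse early.

```python
# Minimal sketch of input-drift monitoring for a deployed model.
# Illustrative only; production MLOps stacks offer richer checks.
# Uses a two-sample Kolmogorov-Smirnov test per feature.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, incoming: np.ndarray,
                p_threshold: float = 0.01) -> list[int]:
    """Return indices of features whose incoming distribution
    differs significantly from the training-time reference."""
    drifted = []
    for j in range(reference.shape[1]):
        _stat, p = ks_2samp(reference[:, j], incoming[:, j])
        if p < p_threshold:
            drifted.append(j)
    return drifted

# Example: feature 0 shifts in production traffic.
rng = np.random.default_rng(1)
ref = rng.normal(size=(5000, 5))   # snapshot of training data
live = rng.normal(size=(500, 5))   # recent production inputs
live[:, 0] += 2.0                  # simulated distribution shift
print("drifted features:", drift_alert(ref, live))  # -> [0]
```

An alert from a check like this would prompt the response team to inspect recent inputs before the model retrains on them.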


Manisha Kalagara
Manisha is a "new age" writer who loves travel, sustainability, and Led Zeppelin, is pretty decent at cracking dark jokes, and is great at complicating stuff!
