Sunday, March 7, 2021

Microsoft Releases Adversarial ML Threat Matrix To Help Defend Against Attacks on AI

Microsoft and MITRE, with contributions from 11 organizations including IBM and NVIDIA, have released the Adversarial ML Threat Matrix. The open-source framework will allow security analysts to identify, respond to, and mitigate threats against artificial intelligence-based solutions. According to Gartner’s Top 10 Strategic Technology Trends for 2020, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack ML solutions.

In a separate survey by Microsoft, 25 of the 28 organizations polled said they struggle to find the right tools or methodologies to address attacks on ML models. Such prevailing challenges have raised concerns about using AI in sensitive and highly regulated business operations.

Mikel Rodriguez, director of MITRE's computer vision research group, said that the ML community needs to focus on security even as it makes AI work for many applications. Unlike the Internet, he noted, which is riddled with flaws because security was ignored in the 1980s, with AI it is not too late.

As the adoption of AI is increasing, there is a need for a framework to assist organizations in delivering secure AI models by eliminating security shortcomings. As a result, with the Adversarial ML Threat Matrix, Microsoft strives to mitigate flaws in AI.

What Is the Adversarial ML Threat Matrix, and What Are Its Benefits?

“Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle,” Microsoft notes in a blog post.


Figure 1 – Adversarial ML Threat Matrix

The Adversarial ML Threat Matrix is a curated set of prevalent attacks such as model stealing, adversarial examples, and data poisoning. The above image (Figure 1) shows the categories of the framework, which mirror the ATT&CK Enterprise matrix. The benefit of adopting the blueprint of the existing ATT&CK framework is that security analysts do not have to change their workflows when working with AI models.
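To make one of these attack classes concrete, below is a minimal, self-contained sketch of training-data poisoning, one of the attacks the matrix catalogues. The toy nearest-centroid classifier and the data are illustrative constructs of ours, not part of the framework: an attacker injects a few mislabeled outliers into the training set, dragging the "pos" class centroid away so that a previously well-classified input is misclassified.

```python
# Toy illustration of training-data poisoning (label manipulation).
# The classifier and data are hypothetical; the Adversarial ML Threat
# Matrix only catalogues this class of attack, it does not prescribe code.

def centroid(points):
    """Mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    """Nearest-centroid classifier: data is a list of (features, label)."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Return the label whose centroid is closest to x."""
    return min(model, key=lambda y: sum((a - b) ** 2 for a, b in zip(x, model[y])))

clean = [((0.0, 0.0), "neg"), ((0.2, 0.1), "neg"),
         ((1.0, 1.0), "pos"), ((0.9, 1.1), "pos")]

# Attacker injects mislabeled outliers tagged "pos", dragging that
# class centroid far away from the genuine "pos" cluster.
poison = [((5.0, 5.0), "pos"), ((5.0, 5.0), "pos")]

clean_model = train(clean)
poisoned_model = train(clean + poison)

print(predict(clean_model, (1.0, 1.0)))     # -> "pos" (correct)
print(predict(poisoned_model, (1.0, 1.0)))  # -> "neg" (misclassified)
```

Real poisoning attacks are subtler (small, hard-to-detect perturbations rather than obvious outliers), but the mechanism is the same: corrupted training data shifts the decision boundary without touching the deployed model.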



Analytics Drift
Editorial team of Analytics Drift
