Microsoft's release of Counterfit allows the artificial intelligence community to quickly identify security flaws in machine learning-based business applications. As the adoption of AI applications proliferates in both business and consumer markets, so does the need to prevent personal information from leaking out of ML models.
According to a Microsoft survey, 25 out of 28 businesses do not have the right tools in place to secure their AI systems. Unlike other applications, AI-based software is prone to a wide range of security attacks, including adversarial attacks and data leaks. Such attacks not only damage an organization's brand but can also lead to monetary losses under the stringent data privacy laws now in place.
Since machine learning applications vary widely in the algorithms and architectures they use, companies must address each application's security shortcomings individually. To assist organizations with this, Microsoft has released Counterfit, which can be used with most machine learning systems.
Counterfit was born out of Microsoft's internal need to pinpoint vulnerabilities in its own AI systems. Over the years, the company enhanced Counterfit into a generic automation tool that can evaluate multiple AI systems at scale. Today, Counterfit is environment-, model-, and data-agnostic, making it an ideal tool for numerous use cases.
“Under the hood, Counterfit is a command-line tool that provides a generic automation layer for adversarial AI frameworks such as Adversarial Robustness Toolbox and TextAttack,” mentions Microsoft.
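To make that automation layer concrete, here is a minimal sketch of the kind of black-box evasion attack that such frameworks run, written directly against the Adversarial Robustness Toolbox (ART). The scikit-learn model and iris data are illustrative stand-ins, not part of Counterfit itself:

```python
# A minimal sketch, using an illustrative scikit-learn model, of the kind
# of black-box evasion attack Counterfit automates through ART.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

x, y = load_iris(return_X_y=True)
model = SVC(kernel="rbf", probability=True).fit(x, y)

# Wrap the trained model so ART attacks can query it as a black box.
classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))

# HopSkipJump is decision-based: it needs only the model's predictions,
# not its gradients, matching an outside attacker's view of a deployed API.
attack = HopSkipJump(classifier=classifier, max_iter=10, max_eval=500, verbose=False)
x_adv = attack.generate(x=x[:10])

flipped = np.sum(model.predict(x[:10]) != model.predict(x_adv))
print(f"{flipped}/10 predictions flipped by adversarial perturbations")
```

Counterfit's value is in wrapping attacks like this behind a single command-line interface, so the same workflow applies whether the target is a text model, an image model, or a tabular classifier.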
Users can leverage Counterfit for penetration testing and red teaming of AI systems, for vulnerability scanning, and for logging attacks against AI systems.
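As a hedged illustration of the scanning-and-logging use case, the loop below runs two decision-based attacks and records how often each flips the model's predictions. It continues from the sketch above, reusing `classifier`, `model`, and `x`; the attack selection and log format are assumptions for illustration, not Counterfit's actual output:

```python
# A sketch of a scan-and-log loop in the spirit of Counterfit's vulnerability
# scanning; reuses `classifier`, `model`, and `x` from the previous sketch.
import logging

from art.attacks.evasion import BoundaryAttack, HopSkipJump

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai_scan")

attacks = {
    "HopSkipJump": HopSkipJump(classifier=classifier, max_iter=10,
                               max_eval=500, verbose=False),
    "BoundaryAttack": BoundaryAttack(classifier, targeted=False,
                                     max_iter=200, verbose=False),
}

for name, attack in attacks.items():
    x_adv = attack.generate(x=x[:10])
    flipped = int((model.predict(x[:10]) != model.predict(x_adv)).sum())
    # Persisted records like this are what makes scans auditable over time.
    log.info("%s: %d/10 inputs misclassified", name, flipped)
```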
Microsoft relies heavily on Counterfit to make its artificial intelligence applications robust before shipping them to the market. The company is also piloting the tool during the AI development phase, to catch AI-specific vulnerabilities before models and applications hit production.