Fiddler Labs announced it has raised $32 million in its second round of funding. The funding will be used to broaden access to the company's Model Performance Management platform, powered by Explainable AI, and to enable the team to develop Responsible AI in production.
Private equity firm Insight Partners led the funding round, joined by Amazon and Global Ventures along with existing investors including Haystack Ventures, Lockheed Martin, The Alexa Fund, and Bloomberg Beta. As AI-powered decision-making has expanded into every sector, demand has grown for the processes, tools, and expertise needed to deploy machine learning models responsibly.
Fiddler said that it aims to build trust in artificial intelligence, as modern models are so complex that they resemble 'black boxes.' George Mathew, Managing Director at Insight Partners, said he believes every company will have to adopt AI in the near future. He added, "Through its unique MPM platform, Fiddler accelerates the march to an AI-first future while managing the ever present challenges of Explainability & Bias Detection."
The company's Explainable AI and ML Monitoring Platform is now setting a benchmark for machine learning engineers and data scientists as they implement their AI initiatives. Fiddler was founded in 2018 by Krishna Gade, who previously worked at Facebook, where he led a team that developed explainability tools for the machine learning models behind Facebook's News Feed.
The company earned a spot on Forbes' list of the top 50 AI companies in 2021 and was named a World Economic Forum Technology Pioneer in 2020. Gade, the company's CEO, said that since the launch of its AI platform, Fiddler has expanded to cover every stage of the artificial intelligence lifecycle, from development to production.
He added that with the new funding, the company will continue to enhance its Model Performance Management solution, help resolve issues such as data drift and bias, and educate people about 'Responsible AI.'