Facebook has released Captum v0.4.0, a library for explaining the decisions made by neural networks built with the deep learning framework PyTorch. Captum implements state-of-the-art interpretability algorithms such as Integrated Gradients, Conductance, and DeepLIFT.
Captum v0.4.0 lets researchers and developers design, develop, and debug advanced AI models quickly. They can also interpret decisions made in multimodal environments that combine images, text, and video, and compare the results with existing models in the library. The new version adds new attribution methods, tools for evaluating model robustness, and improvements to existing attribution methods.
Captum (“comprehension” in Latin), built on PyTorch, is an extensible, open-source library for model interpretability. As models grow more complex, understanding them has become both a focus of practical applications across industries that use machine learning and an active area of research.
The library provides state-of-the-art algorithms so that researchers, engineers, and developers can easily understand which features contribute to a model’s output.
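To give a flavor of how such attribution algorithms work, here is a minimal, framework-free sketch of the Integrated Gradients idea: attributions are obtained by accumulating gradients along the straight path from a baseline input to the actual input. This is a hypothetical toy example with a fixed linear scorer and numeric gradients; Captum’s real implementation (`captum.attr.IntegratedGradients`) operates on PyTorch models and uses autograd.

```python
def numeric_grad(f, x, i, eps=1e-6):
    """Central-difference estimate of dF/dx_i at point x."""
    up = list(x); up[i] += eps
    dn = list(x); dn[i] -= eps
    return (f(up) - f(dn)) / (2 * eps)

def integrated_gradients(f, x, baseline, steps=100):
    """Approximate IG_i = (x_i - x'_i) * integral of dF/dx_i along the
    straight path from the baseline x' to the input x (Riemann sum)."""
    n = len(x)
    grad_sums = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            grad_sums[i] += numeric_grad(f, point, i)
    return [(x[i] - baseline[i]) * grad_sums[i] / steps for i in range(n)]

# Toy "model": a fixed linear scorer, so the exact attributions are known.
def model(x):
    w = [0.5, -2.0, 1.0]
    return sum(wi * xi for wi, xi in zip(w, x))

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
attr = integrated_gradients(model, x, baseline)
# For a linear model, IG recovers w_i * x_i, and the attributions sum
# to model(x) - model(baseline) (the completeness axiom).
```

The completeness property in the final comment is what makes the attributions interpretable as a decomposition of the model’s output.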
With Captum v0.4.0, researchers and engineers can now assess how different user-defined concepts affect a model’s prediction through its Testing with Concept Activation Vectors (TCAV) feature. They can also use TCAV for fairness analysis and to check for algorithmic and label bias. Researchers have found that some networks can inadvertently introduce difficult-to-detect biases into a system.
With TCAV, researchers and engineers can also quantify the impact of high-level concepts such as race or gender on a model’s predictions. In Captum v0.4.0, TCAV is implemented generically, allowing users to define custom concepts with example inputs for different modalities, including vision and text. More information on these improvements can be found in the official release notes.
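The core mechanics of TCAV can be sketched in a few lines: learn a linear separator between activations of concept examples and random examples (its weight vector is the concept activation vector, or CAV), then score how often the model’s gradients point in the CAV direction. This is a hypothetical toy example with made-up 2D activations and gradients; Captum’s real implementation (`captum.concept.TCAV`) works on actual network layers.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_cav(concept_acts, random_acts, lr=0.1, epochs=50):
    """Learn a linear separator between concept and random activations
    with a simple perceptron; its weight vector is the CAV."""
    w = [0.0] * len(concept_acts[0])
    data = [(a, 1) for a in concept_acts] + [(a, -1) for a in random_acts]
    for _ in range(epochs):
        for a, label in data:
            if label * dot(w, a) <= 0:  # misclassified -> update
                w = [wi + lr * label * ai for wi, ai in zip(w, a)]
    return w

def tcav_score(cav, grads):
    """Fraction of examples whose output gradient (w.r.t. the layer's
    activations) points along the CAV: sensitivity to the concept."""
    return sum(1 for g in grads if dot(g, cav) > 0) / len(grads)

# Toy data: concept activations cluster along +x, random ones along -x.
concept_acts = [[1.0, 0.2], [0.9, -0.1], [1.1, 0.0]]
random_acts  = [[-1.0, 0.1], [-0.8, 0.3], [-1.2, -0.2]]
cav = train_cav(concept_acts, random_acts)

# Made-up per-example gradients of the model output w.r.t. the layer.
grads = [[0.7, 0.1], [0.5, -0.3], [-0.2, 0.4]]
score = tcav_score(cav, grads)  # share of examples sensitive to the concept
```

Because the concept is defined purely by example inputs, the same mechanism extends to any modality, which is what the generic implementation in v0.4.0 enables.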