Last week, Amazon unveiled its new open-source library, Fortuna, for uncertainty quantification of machine learning models. Fortuna offers calibration methods, such as conformal prediction, that can be applied to any trained neural network to obtain calibrated uncertainty estimates. The library also supports several Bayesian inference methods for deep neural networks written in Flax.
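To give a rough sense of the conformal prediction idea, the sketch below builds prediction intervals around any point predictor using a held-out calibration set. It is a generic, from-scratch illustration, not Fortuna's actual API; the function and parameter names (`conformal_interval`, `predict`, `alpha`) are assumptions made for the example.

```python
# Minimal sketch of split conformal prediction for regression.
# Generic illustration only -- this is not Fortuna's interface.
import numpy as np

def conformal_interval(predict, X_calib, y_calib, X_test, alpha=0.1):
    """Build (1 - alpha) prediction intervals from a held-out calibration set."""
    # Nonconformity scores: absolute residuals on the calibration split.
    scores = np.abs(y_calib - predict(X_calib))
    # Conformal quantile with the standard finite-sample correction.
    n = len(scores)
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    preds = predict(X_test)
    return preds - q, preds + q

# Toy usage with a trivial "model": the identity function on 1-D inputs.
rng = np.random.default_rng(0)
X_calib = rng.normal(size=200)
y_calib = X_calib + rng.normal(scale=0.1, size=200)
lo, hi = conformal_interval(lambda x: x, X_calib, y_calib, np.array([0.0, 1.0]))
print(lo, hi)  # intervals expected to cover ~90% of new targets
```

The appeal of the approach is that it wraps around any pretrained predictor and comes with coverage guarantees that hold without assumptions on the model itself.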
Accurate estimation of predictive uncertainty is vital for applications that involve critical decisions. Uncertainty estimates allow data scientists to evaluate the reliability of model predictions, defer to human decision-makers, or determine whether a model can be deployed safely. Fortuna makes it easy for them to run benchmarks and build robust, reliable AI models with advanced uncertainty quantification techniques.
Existing libraries and tools for uncertainty quantification have a limited scope and do not offer a breadth of techniques in a single place, which hinders the adoption of uncertainty quantification in production systems. To solve this problem, Amazon developed Fortuna, which brings together prominent methods from the literature, such as conformal prediction, Bayesian inference, and temperature scaling, behind a standardized interface.
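Temperature scaling, one of the calibration methods mentioned above, is simple to illustrate: a single scalar divides a trained classifier's logits and is chosen to minimize negative log-likelihood on held-out validation data. The sketch below is a generic implementation under that assumption (here using a grid search), not Fortuna's interface; the names `fit_temperature` and the toy data are illustrative.

```python
# Generic illustration of temperature scaling -- not Fortuna's API.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels under temperature-scaled logits.
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    # Pick the temperature that minimizes validation NLL.
    losses = [nll(val_logits, val_labels, T) for T in grid]
    return grid[int(np.argmin(losses))]

# Illustrative data: deliberately overconfident logits for a 3-class problem.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = 5.0 * (np.eye(3)[labels] + 0.5 * rng.normal(size=(500, 3)))
T = fit_temperature(logits, labels)
print(T)  # T > 1 indicates the raw logits were overconfident
```

Having such methods behind one standardized interface is what distinguishes Fortuna from ad hoc, per-method implementations.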
Fortuna is readily available on GitHub, with examples, documentation, and references. Fortuna was developed by a group of applied scientists — Gianluca Detommaso, Alberto Gasparin, Michele Donini, Matthias Seeger, and Cedric Archambeau.