Amazon Web Services (AWS) has introduced a novel method for evaluating facial recognition models and detecting biases. The proposed method does not rely on standard identity annotations; instead, it estimates the model's performance based on prior demographic data.
Artificial intelligence-based models often exhibit algorithmic bias, and the issue has become an active domain of study. The proposed method focuses on examining biases in facial recognition. A straightforward way to determine whether a facial recognition algorithm is biased is to test the model on a massive dataset that includes faces from several demographic groups. However, this requires identity annotations.
The method proposed by Amazon evaluates biases without identity annotations. While annotations are not required, the model must still have some way of determining which images belong to the same subject. Standard models map each face to a vector representation (an embedding) in a single space; under this method, two embeddings are treated as depicting the same subject when the distance between them is less than a predetermined cutoff.
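The same-subject decision described above can be sketched as a simple distance check between embedding vectors. The cosine distance and the 0.4 cutoff below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors (0 = identical direction)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def same_subject(emb_a, emb_b, threshold=0.4):
    """Treat two embeddings as the same subject if their distance is
    below a predetermined cutoff.

    The threshold value 0.4 is a hypothetical placeholder; the paper
    does not specify a particular distance metric or cutoff here.
    """
    return cosine_distance(emb_a, emb_b) < threshold
```

In practice the cutoff would be tuned on held-out data, since it directly trades off false matches against false non-matches.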
The researchers then hypothesized that these same-subject distances follow one distribution, while the remaining distances (between images of two different subjects) follow another. The model learns both distributions, and the difference between them provides a measure of the model's accuracy.
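The idea of comparing the two distance distributions can be illustrated with simulated data. The Kolmogorov-Smirnov statistic below is used purely as one example of a separation measure between distributions; the paper's own divergence measure may differ, and the simulated distance values are invented for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

def distribution_separation(genuine_dists, impostor_dists):
    """Measure how well-separated same-subject ("genuine") distances
    are from different-subject ("impostor") distances.

    Returns the two-sample Kolmogorov-Smirnov statistic:
    0 = indistinguishable distributions, 1 = perfectly separated.
    """
    statistic, _pvalue = ks_2samp(genuine_dists, impostor_dists)
    return statistic

# Simulated distances (hypothetical values, not from the paper):
rng = np.random.default_rng(0)
genuine = rng.normal(0.3, 0.05, size=1000)   # same-subject pairs cluster near 0.3
impostor = rng.normal(0.9, 0.05, size=1000)  # different-subject pairs near 0.9
sep = distribution_separation(genuine, impostor)
```

A model whose genuine and impostor distributions barely overlap would score near 1 here; computing this score per demographic group, without any identity labels, is what would expose a performance gap.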
The researchers are optimistic that the method will prove useful for AI, as the model shows appreciable results when compared to Bayesian calibration, as reported in the paper.