Microsoft has changed its artificial intelligence ethics policies, announcing that it will no longer allow other companies to use its AI facial recognition tool to perform functions such as interpreting gender, emotion, or age.
Microsoft has said that it aims to keep people and their goals at the center of system design decisions, as part of the company's new Responsible AI Standard. The high-level principles will lead to tangible changes in practice, with some features being altered and others withdrawn from sale, the company added.
For instance, Microsoft’s Azure Face service, a facial recognition tool, is used by companies such as Uber for identity verification. Under the new rules, however, a company wanting to use the facial recognition features will need to actively apply for access. It will be obliged to prove that it meets Microsoft’s AI ethics standards and that the features benefit the end user and society as a whole.
According to Microsoft, even companies already granted access to the tool will no longer be able to use the controversial features of Azure Face. The company will be retiring the facial analysis technology that infers emotions and attributes such as gender or age.
According to Sarah Bird, a product manager at Microsoft, the team collaborated with internal and external researchers to understand the shortcomings and potential advantages of the facial recognition tool and to navigate the tradeoffs.
In the case of emotion recognition specifically, the efforts raised important questions about privacy, she added. The lack of consensus on a definition of emotions, and the inability to generalize the link between emotional state and facial expression, also came under scrutiny.