Several US Federal agencies are considering how to develop trustworthy artificial intelligence (AI) and machine learning (ML) tools to prevent bias and inequity across IT systems and decision-making processes.
According to Defense Logistics Agency AI Strategic Officer Jesse Rowlands, AI can expose shortfalls across organizations if they lack quality data or if there are gaps in training algorithmic models. He added that AI could help organizations identify where to strengthen fairness and equity practices.
Department of Commerce Chief Data Scientist Chakib Chraibi said that AI is a powerful tool that can help compensate for human biases, provided the right policies and frameworks are in place.
Chraibi also noted that equity shortfalls and barriers currently exist across different business functions. While AI can amplify some of those problems, it can also help promote equity and fairness.
Several federal AI leaders are focusing their ethics efforts on use cases that involve human judgment, which is often susceptible to bias. These include human resources, purchasing, contractor selection, bidding, and finance, among others.
Donna Murphy, Deputy Comptroller for Compliance Risk and Community Affairs, said that her agency, along with the Federal Housing Finance Agency and the Consumer Financial Protection Bureau, is working on automated valuation model rulemaking to prevent potential discrimination in property valuations.