Financial regulators in the United Kingdom (UK) have warned banks that use artificial intelligence (AI) to approve loan requests.
According to the regulators, banks may only use the technology if they can demonstrate that it will not discriminate against minorities.
Multiple people familiar with the discussions said the regulatory agencies are increasingly pressing the country’s major banks on the safeguards they intend to put in place around the use of AI, the Financial Times reported.
The European Union’s banking regulators have also urged lawmakers to examine the use of data in AI and machine-learning (ML) models and any bias that might lead to discrimination and exclusion.
“The banks would quite like to get rid of the human decision-maker because they perceive, I think correctly, that is the potential source of bias,” said Simon Gleeson, a lawyer at Clifford Chance.
Banks have used machine-learning techniques to inform lending decisions, believing that artificial intelligence will not make subjective or biased judgements and will therefore help reduce racial prejudice. Regulators take a different view, warning that AI could instead entrench or amplify bias.
Sara Williams said, “If somebody is in a group which is already discriminated against, they will tend to often live in a postcode where there are other (similar) people … but living in that postcode doesn’t actually make you any more or less likely to default on your loan.”
She added that the more big data is shared around, the more the decision draws on information that is not directly related to the individual, creating a serious danger of perpetuating existing preconceptions.
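The proxy effect Williams describes can be illustrated with a minimal sketch on synthetic data. The feature names, numbers, and model below are hypothetical and are not drawn from any bank’s actual system; the point is simply that a “neutral” postcode-style feature, correlated with a protected group but not with an individual’s own repayment behaviour, can still shift a model’s credit score.

```python
# Purely illustrative sketch: synthetic data and made-up feature names,
# not any lender's real model or variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

# Protected group membership -- never passed to the model.
group = rng.integers(0, 2, n)

# Historical inequality: group 1 has a lower average true income (in £000s).
true_income = rng.normal(40 - 8 * group, 10)

# Residential segregation: postcode cluster strongly tracks group membership.
postcode = np.where(rng.random(n) < 0.85, group, 1 - group)

# Defaults depend only on true income, never on postcode or group directly.
default = (rng.random(n) < 1 / (1 + np.exp((true_income - 30) / 8))).astype(int)

# The lender only sees a noisy, self-declared income figure.
declared_income = true_income + rng.normal(0, 8, n)

X = np.column_stack([declared_income, postcode])
model = LogisticRegression(max_iter=1000).fit(X, default)

# Two applicants with identical declared income but different postcodes
# receive different predicted default risks.
same_income = np.array([[35, 0], [35, 1]])
print("predicted default risk:", model.predict_proba(same_income)[:, 1])
print("postcode coefficient:", model.coef_[0][1])
```

In this toy setup the postcode coefficient comes out non-zero even though the postcode has no causal effect on default, because it soaks up the group-level income gap the model cannot see directly; that is one concrete way historical preconceptions can be propagated by an otherwise “objective” model.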
A similar debate has already played out in the United States, where regulators were asked to ensure that artificial intelligence increased access to credit for low- and middle-income families and people of color.