The White House’s science advisers have proposed an AI “Bill of Rights” to limit the harms of artificial intelligence (AI), a first-of-its-kind initiative. On Friday, the White House Office of Science and Technology Policy launched a fact-finding mission focused on facial recognition software and other biometric technologies used to identify people and to gauge their mental and emotional states. The proposal is said to mirror the United States Bill of Rights, adopted in 1791.
The development of artificial intelligence has been accompanied by mounting evidence of algorithmic bias. Algorithms trained on real-world data sets mimic the bias inherent in the human decisions they are meant to replace: women have been passed over for programming jobs, and Black people have been wrongfully arrested for crimes they did not commit after faulty facial recognition matches. In other words, AI systems rely on data sets that are frequently skewed in ways that duplicate and magnify existing social prejudices.
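The mechanism described above can be made concrete with a minimal sketch. The data, groups, and hire rates below are entirely hypothetical; a simple frequency-counting "model" stands in for a real classifier trained on biased historical labels, and it faithfully reproduces the disparity in its training data:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired) pairs.
# The data encodes past human bias: group "B" was hired far less often.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 30 + [("B", False)] * 70
)

def train(records):
    """Learn per-group hire rates by frequency counting, a stand-in
    for any model fit on biased historical labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)
print(model["A"])  # 0.8 -- the model simply replays the historical disparity
print(model["B"])  # 0.3
```

Nothing in the code is malicious; the skew comes entirely from the training data, which is why auditing data sets matters as much as auditing algorithms.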
Eric Lander, President Biden’s chief science adviser, and Alondra Nelson, the deputy director for science and society, warned of the dangers presented by technology such as face recognition, automatic translators, and medical diagnosis algorithms. The two also raised concerns about the security and privacy dangers posed by internet-connected gadgets, such as smart speakers and webcams.
They also wrote an opinion piece for Wired magazine about the need for new safeguards against faulty and harmful AI that can discriminate against people unfairly or violate their privacy. According to their article, although the first aim is to “enumerate the rights,” they also hope to persuade the federal government to refuse to purchase technology and software that “fails to respect these rights.” Another option is to require government contractors that use such technology to adhere to this “bill of rights,” and to enact new rules to fill any gaps.
This isn’t the first time the Biden administration has expressed worry about AI’s potential for harm, but it is one of the clearest moves toward taking action.
Algorithms have become so powerful in recent years, thanks to breakthroughs in computer science, that their developers have pitched them as tools that can help humans make decisions more effectively and impartially. However, the notion that algorithms are impartial is a myth; they still reflect human prejudices. And as they grow more common, we must establish clear guidelines for what they can, and must not, be permitted to do.
While the COVID-19 pandemic triggered urgency to develop and use artificial intelligence technologies, it also highlighted the need to tackle deep-rooted bias to ensure transparency, explainability, and fairness.
The White House Office of Science and Technology Policy has issued a public call for information from AI specialists and others who work with AI technology. It is also encouraging anyone who wants to weigh in on the issue to submit comments by email.
A software trade association supported by corporations such as IBM, Microsoft, Oracle, and Salesforce applauded the White House’s decision. But it has also called for companies to conduct their own risk assessments of AI products and explain how they would mitigate those risks.
For now, the United States Bill of Rights, a 230-year-old, 652-word text, remains the subject of heated debate, according to the report’s summary.
European lawmakers have already taken steps to curb potentially harmful uses of AI: the European Parliament has passed a resolution calling for a ban on mass surveillance and predictive policing.
Meanwhile, the UK government has floated repealing or diluting Article 22 of the UK GDPR, which gives individuals the right to have automated decisions reviewed by a person. On September 10, 2021, it released a consultation document on proposals to overhaul the UK’s data protection framework; the deadline for responses is November 19, 2021.
Article 22 sets out the right to a human review of automated decisions, including profiling, such as whether to grant a loan or offer a job.
Revoking or amending Article 22 would very likely worsen algorithmic bias, disproportionately affecting minorities. Removing the right to review could hinder innovation rather than help it, leading to greater algorithmic disparities and a loss of public trust in AI.