
WHO Issues Guidelines for Ethical Use of Artificial Intelligence

The World Health Organization (WHO) recently published a report outlining six guiding principles for the ethical use of artificial intelligence in the healthcare industry.

The guidance follows two years of intensive research conducted by more than twenty scientists. The report highlights how doctors could use artificial intelligence to treat patients in underdeveloped regions of the world.

But it also points out that technology is not a quick fix for health challenges, especially in low- and middle-income countries, and it urges governments and regulators to carefully analyze where and how artificial intelligence is used in healthcare.


The World Health Organization said that it hopes the six principles can be the foundation for how governments, developers, and regulators approach the technology. 

The six principles outlined by the WHO in the document are:

  • Protect autonomy: Humans should have the final say on all health decisions. Decisions should not be made entirely by machines, and doctors should be able to override them at any time. Artificial intelligence should not be used to guide someone’s medical care without their consent.
  • Promote human safety: Developers should continuously monitor any artificial intelligence tools to ensure they’re working as intended and not causing harm (a minimal sketch of what such monitoring could look like follows this list).
  • Ensure transparency: Developers should publish information about the design of AI tools. One common criticism of these systems is that they’re “black boxes,” making it too hard for researchers and doctors to know how they reach their decisions. The WHO wants enough transparency for tools to be fully audited and understood by users and regulators.
  • Foster accountability: When something goes wrong with an AI technology, such as a decision made by a tool leading to patient harm, there should be mechanisms for determining who is responsible (such as manufacturers and clinical users).
  • Ensure equity: That means making sure tools are available in multiple languages and trained on diverse sets of data. In the past few years, scrutiny of common health algorithms has found that some have racial bias built in.
  • Promote sustainable artificial intelligence: Developers should regularly update their tools, and institutions should have ways to adjust if a tool seems ineffective. Institutions and companies should also only introduce tools that can be repaired and maintained, even in under-resourced health systems.
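
To make the human-safety point above a little more concrete, here is a minimal sketch of what such continuous monitoring could look like in practice. It is purely illustrative and not drawn from the WHO report: the rolling window, agreement threshold, and function names are hypothetical assumptions, and a real deployment would track far more than clinician overrides.

# Illustrative sketch only: one hypothetical way a team might monitor a deployed
# AI tool, not a method described in the WHO report. The window size, alert
# threshold, and function names are assumptions made for this example.
from collections import deque

REVIEW_WINDOW = 200    # most recent clinician-reviewed cases to consider
MIN_AGREEMENT = 0.90   # minimum acceptable clinician-AI agreement rate

recent_reviews = deque(maxlen=REVIEW_WINDOW)  # 1 = confirmed, 0 = overridden

def record_review(ai_output_confirmed: bool) -> None:
    """Log whether a clinician confirmed or overrode the tool's output."""
    recent_reviews.append(1 if ai_output_confirmed else 0)

def performance_ok() -> bool:
    """Check the rolling agreement rate and flag any degradation."""
    if len(recent_reviews) < REVIEW_WINDOW:
        return True  # not enough reviewed cases yet to draw a conclusion
    agreement = sum(recent_reviews) / len(recent_reviews)
    if agreement < MIN_AGREEMENT:
        # In practice this would notify the responsible team and could
        # suspend the tool pending review.
        print(f"ALERT: clinician agreement fell to {agreement:.1%}")
        return False
    return True

The point of the sketch is simply that a tool’s real-world performance should be checked continuously against human judgment rather than validated once and then left alone.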

There are numerous potential ways artificial intelligence can be used in the healthcare industry: applications in development that use it to screen medical images such as mammograms, devices that help people monitor their health, tools that scan patient health records to predict whether they might get sicker, and systems that help track disease outbreaks.

“The appeal of technological solutions and the promise of technology can lead to overestimation of the benefits and dismissal of the challenges and problems that new technologies such as artificial intelligence may introduce,” the report noted.


Dipayan Mitra
