Researchers at Stanford University and Harvard Medical School have developed an artificial intelligence model that detects abnormalities and diseases by learning from the natural-language text of clinical reports. The AI model does not rely on standard human annotations of X-rays to learn to predict diseases.
Using AI in medical imaging is not a new advancement. However, persistent challenges still limit it to a handful of clinical applications: training a standard disease-prediction model requires a massive amount of data and human annotation.
However, the model created by Harvard and Stanford, called CheXzero, has produced accurate results by learning from radiologists' free-text reports, processed with natural language processing (NLP), rather than from human annotations. The model is self-supervised, meaning it teaches itself from this unlabeled data, sidestepping the over-dependence on labeled datasets.
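How can a model learn from reports alone? Approaches of this kind typically train an image encoder and a text encoder jointly with a contrastive objective, pulling each X-ray's embedding toward the embedding of its own report and pushing it away from every other report in the batch. The PyTorch sketch below illustrates that objective in its simplest form; the encoder outputs, embedding size, and temperature value are illustrative assumptions, not CheXzero's published implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE-style) loss over paired embeddings.

    image_emb, text_emb: (batch, dim) tensors from two separate encoders.
    Matching X-ray/report pairs share a row index; all other rows in the
    batch serve as negatives. Hypothetical sketch, not the authors' code.
    """
    # L2-normalize so the dot product below is cosine similarity
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix: logits[i, j] = sim(image_i, report_j)
    logits = image_emb @ text_emb.t() / temperature

    # The correct "label" for row i is column i (its own paired report)
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy losses
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage: random tensors standing in for encoder outputs
images = torch.randn(8, 512)   # e.g., vision-encoder embeddings
reports = torch.randn(8, 512)  # e.g., text-encoder embeddings
print(contrastive_loss(images, reports))
```

No disease label appears anywhere in this loss; the report text itself is the supervision signal.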
Pranav Rajpurkar, assistant professor at HMS, said, “Up until now, most AI models have relied on manual annotation of huge amounts of data—to the tune of 100,000 images—to achieve a high performance. Our method needs no such disease-specific annotations.”
Researchers have used chest X-rays to demonstrate CheXzero's capabilities, but the approach can be generalized to a vast array of other medical settings that deal with unstructured data. The AI model helps bypass the large-scale labeling bottleneck that has been a long-standing challenge in medical machine learning.
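To see how such a model could then score a finding it was never explicitly labeled for, consider a zero-shot scheme: embed the X-ray, embed a positive and a negative text prompt for the finding, and softmax the two similarities into a probability. The sketch below is a hypothetical illustration of that idea, not the published pipeline; the prompt wording and embedding size are placeholders.

```python
import torch
import torch.nn.functional as F

def zero_shot_predict(image_emb, pos_prompt_emb, neg_prompt_emb):
    """Score one finding without any labeled training images.

    Compares an X-ray embedding against text embeddings of a positive
    prompt (e.g., "pneumonia") and a negative prompt (e.g., "no
    pneumonia"), then softmaxes the two similarities into a probability.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    pos = F.normalize(pos_prompt_emb, dim=-1)
    neg = F.normalize(neg_prompt_emb, dim=-1)

    # Cosine similarity to each prompt, softmaxed into two probabilities
    sims = torch.stack([image_emb @ pos, image_emb @ neg])
    probs = F.softmax(sims, dim=0)
    return probs[0]  # probability that the finding is present

# Toy usage with random stand-in embeddings
img = torch.randn(512)  # embedding of one chest X-ray
p = torch.randn(512)    # text embedding of the positive prompt
n = torch.randn(512)    # text embedding of the negative prompt
print(zero_shot_predict(img, p, n))
```

Because the prompts are just text, swapping in a new pair of prompts extends the model to a new finding with no additional annotation.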