Saturday, December 21, 2024

Study uses Explainable AI to detect Lung and Bronchus Cancer Mortality Rates

The aim of the study was to understand how known risk factors for lung and bronchus cancer mortality vary across the United States.

Researchers from the University at Buffalo used explainable artificial intelligence in a study to effectively predict lung and bronchus cancer (LBC) mortality rates. The system is capable of making high-level predictions about LBC mortality rates.

It is the first research to use ensemble machine learning with an explainable algorithm for visualizing and understanding spatial heterogeneity of the relationships between LBC mortality and risk factors. 

The new study was written by Zia U. Ahmed, Kang Sun, Michael Shelly, and Lina Mu, and it uses explainable artificial intelligence, or XAI, to identify key risk factors for LBC mortality.


Explainable artificial intelligence (XAI) was used with a stack-ensemble machine learning model framework to examine and display the spatial distribution of known risk factors’ contributions to lung and bronchus cancer (LBC) death rates across the United States. 

Researchers say that smoking prevalence, poverty, and a community’s elevation were most important in predicting LBC mortality rates among the risk factors studied. However, the risk factors and LBC mortality rates were found to vary geographically. 

The study is titled “Explainable artificial intelligence for exploring spatial variability of lung and bronchus cancer mortality rates in the contiguous USA.”

Researchers used five base learners, namely a generalized linear model (GLM), random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), and a deep neural network (DNN), to develop the stack-ensemble models. With more data and multiple models, AI algorithms perform better, making the stack-ensemble model more effective than any single model.
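The stacking idea described above can be sketched as follows. This is an illustrative example only, not the study's actual pipeline or data: it uses scikit-learn's `StackingRegressor` with base learners loosely mirroring the GLM, RF, GBM, and DNN mentioned in the article (XGBoost is omitted to keep the sketch dependency-free), fit on synthetic regression data.

```python
# Hedged sketch of a stack-ensemble: several base learners feed a
# meta-learner that combines their out-of-fold predictions.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data; the study used county-level risk factors instead.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners loosely corresponding to the study's GLM, RF, GBM, and DNN.
base_learners = [
    ("glm", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("gbm", GradientBoostingRegressor(random_state=0)),
    ("dnn", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                         random_state=0)),
]

# The meta-learner (here a simple linear model) weighs each base learner's
# predictions, which is why the stack can outperform any single member.
stack = StackingRegressor(estimators=base_learners,
                          final_estimator=LinearRegression())
stack.fit(X_train, y_train)
print(round(stack.score(X_test, y_test), 3))
```

In practice the stack's held-out score is at least as good as its weakest member because the meta-learner can simply down-weight poor base learners.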

“The results matter because the U.S. is a spatially heterogeneous environment. There is a wide variety in socioeconomic factors and education levels — essentially, one size does not fit all. Here local interpretation of machine learning models is more important than global interpretation,” said Ahmed. 


Dipayan Mitra
Dipayan is a news-savvy writer who does not leave a single page of a newspaper unturned. He is also a professional vocalist who enjoys ghazals. Building a dog shelter is his forever dream.
