
New AI model diagnoses diseases by drawing visual maps


The unique transparency of the model, described in the journal IEEE Transactions on Medical Imaging, allows doctors to easily follow its line of reasoning, double-check for accuracy, and explain the results to patients, the researchers said.


“The idea is to help catch cancer and disease in its earliest stages — like an X on a map — and understand how the decision was made,” said Sourya Sengupta, a graduate student at the Beckman Institute for Advanced Science and Technology in the US.


“Our model will help streamline that process and make it easier on doctors and patients alike,” said Sengupta, the study’s lead author.

The process of decoding medical images looks different in different regions of the world.

“In many developing countries, there is a scarcity of doctors and a long line of patients. AI can be helpful in these scenarios,” Sengupta said.

When time and talent are in high demand, automated medical image screening can be deployed as an assistive tool — in no way replacing the skill and expertise of doctors, Sengupta said.

Instead, an AI model can pre-scan medical images and flag those containing something unusual — like a tumour or early sign of disease, called a biomarker — for a doctor’s review. This method saves time and can even improve the performance of the person tasked with reading the scan.
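As a loose illustration of that triage idea (not the study’s actual pipeline), the sketch below flags scans whose model score crosses a review threshold. The scan IDs, scores, and threshold here are hypothetical placeholders.

```python
# Illustrative sketch only: a hypothetical pre-screening step that flags scans
# for a doctor's review. The IDs, scores, and threshold are invented examples,
# not data or settings from the study.
from typing import List, Tuple

def triage(scan_scores: List[Tuple[str, float]], threshold: float = 0.5) -> List[str]:
    """Return the IDs of scans whose model score meets or exceeds the review threshold."""
    return [scan_id for scan_id, score in scan_scores if score >= threshold]

# Example: scores a screening model might assign to four chest X-rays.
scores = [("xray_001", 0.12), ("xray_002", 0.81), ("xray_003", 0.44), ("xray_004", 0.93)]
print(triage(scores))  # -> ['xray_002', 'xray_004'] go to the radiologist first
```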

These models work well, but they leave much to be desired when, for example, a patient asks why an AI system flagged an image as containing (or not containing) a tumour.

The new AI model interprets itself every time, explaining each decision rather than blandly reporting the binary of “tumour” or “non-tumour,” Sengupta said.
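To give a flavour of what pairing a prediction with a visual map can look like, the sketch below applies a generic gradient-based saliency map to a toy network. This is a stand-in technique, not the self-interpreting architecture described in the paper, and the network and image are invented for illustration.

```python
# Illustrative sketch only. The paper describes a model that interprets itself
# by design; this snippet shows a *different*, generic technique (gradient-based
# saliency) purely to convey the idea of reporting a decision together with a
# map of which pixels drove it. The network and input are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy binary "tumour vs no tumour" classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1)
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)   # fake 64x64 scan
score = model(image)                                    # single "tumour" logit
score.backward()                                        # gradients w.r.t. the pixels

saliency = image.grad.abs().squeeze()       # 64x64 map of pixel influence
prediction = torch.sigmoid(score).item()    # probability reported alongside the map
print(prediction, saliency.shape)
```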

The researchers trained their model on three different disease diagnosis tasks including more than 20,000 images.

First, the model reviewed simulated mammograms and learned to flag early signs of tumours. Second, it analysed optical coherence tomography (OCT) images of the retina, where it practised identifying a buildup called drusen, which may be an early sign of macular degeneration.

OCT is a non-invasive imaging test that uses light waves to take cross-section pictures of the retina.

Third, the model studied chest X-rays and learned to detect cardiomegaly, a heart enlargement condition that can lead to disease.

Once the mapmaking model had been trained, the researchers compared its performance to existing AI systems — the ones without a self-interpretation setting.

The model performed comparably to its counterparts in all three categories, with accuracy rates of 77.8 per cent for mammograms, 99.1 per cent for retinal OCT images, and 83 per cent for chest X-rays, the researchers said.

These high accuracy rates are a product of the AI’s deep neural network, the non-linear layers of which mimic the nuance of human neurons in making decisions, they added.

