Opportunistic A.I. for Medical Scans
The remarkable wealth of data that's embedded and being missed
This week a new study using chest X-rays capped off a growing crop of reports using medical scans for unintended diagnostic purposes, for which the term “opportunistic” is being adopted.
Before I get into the new report, here’s a summary I put together: a Table of 10 publications, spanning 4 different imaging modalities, detecting things that had not previously been contemplated.
The New Study
The atherosclerotic cardiovascular disease (ASCVD) risk score, based on 9 variables, is the most frequent way clinicians quantitatively gauge a patient’s 10-year risk of the major adverse cardiovascular events (MACE) of heart attack, stroke, and cardiovascular death. You can use the nomogram to quickly calculate your score right now. The main output is categorization of risk into 4 groups (below) and a recommendation about statin use. Details on which statin and what dose are provided in the link.
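For readers who like to see the mechanics, here is a minimal sketch (in Python) of how a 10-year risk estimate maps onto the 4 statin-eligibility categories, using the commonly cited ACC/AHA cutoffs. This is illustrative only; it is not a clinical tool and not the calculator behind the nomogram.

```python
def ascvd_risk_category(ten_year_risk_pct: float) -> str:
    """Map a 10-year ASCVD risk estimate (percent) to the 4 standard
    categories that drive statin recommendations. Cutoffs are the
    commonly cited ACC/AHA thresholds -- illustration only, not a
    clinical tool."""
    if ten_year_risk_pct < 5:
        return "low risk"            # statin generally not recommended
    elif ten_year_risk_pct < 7.5:
        return "borderline risk"     # statin considered if risk enhancers are present
    elif ten_year_risk_pct < 20:
        return "intermediate risk"   # moderate-intensity statin favored
    else:
        return "high risk"           # high-intensity statin favored

print(ascvd_risk_category(8.2))  # -> intermediate risk
```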
Jakob Weiss and colleagues hypothesized that the ASCVD risk could be derived from the chest X-ray. That would have seemed highly improbable, a real reach, just a few years ago. They first developed a model using a very large cancer screening trial dataset with over 40,000 participants and >147,000 chest X-rays. With multi-year follow-up for cardiovascular events in that large cohort, they went on to do independent testing in 2 different patient groups from MassGeneralBrigham: one cohort of 2,132 patients with known ASCVD risk and another of 8,869 with unknown risk (a total of 11,001, as seen below).
This Table summarizes the substantial gap in hazard ratios between the chest X-ray (CXR)-assessed risk and the ASCVD risk score for statin eligibility.
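To give a concrete sense of how a hazard ratio like the ones in that Table is estimated, here is a minimal sketch using a Cox proportional hazards model from the lifelines library on synthetic follow-up data. The column names and data are hypothetical and not taken from the paper.

```python
# Minimal sketch: estimating a hazard ratio for MACE from a binary risk flag
# (e.g., CXR-model "statin eligible" vs. not) with a Cox proportional hazards
# model. Synthetic data and hypothetical column names -- not the authors' code.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
cxr_high_risk = rng.integers(0, 2, n)
# Flagged patients get shorter simulated times to a cardiovascular event
time_to_event = rng.exponential(np.where(cxr_high_risk == 1, 6.0, 12.0))
mace = (time_to_event < 10).astype(int)        # event observed within follow-up
years = np.minimum(time_to_event, 10.0)        # administrative censoring at 10 years

df = pd.DataFrame({"years": years, "mace": mace, "cxr_high_risk": cxr_high_risk})
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="mace")
print(cph.hazard_ratios_)   # hazard ratio for cxr_high_risk vs. the reference group
```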
The striking bottom-line result is that the A.I. assessment of risk from the chest X-ray was better than the ASCVD risk score! Better because it identified substantially more people who would benefit from statin therapy, the main output of the ASCVD risk score.
This has remarkable utility because the data needed to calculate the ASCVD score are frequently missing (such as cholesterol values or systolic blood pressure); they were available for only 19% of the patients (2,132 of 11,001) in the current report. The chest X-ray is the most frequent type of medical image obtained, with over 70 million per year in the United States alone! While we wouldn’t order a chest X-ray to determine CV risk, think of the large number of people who would get this information “free” as a readout from their scan. Of course, this report needs to be independently replicated before it could become part of routine chest X-ray interpretation, but it gives you a sense of the rich information embedded in a scan that human eyes cannot detect but that, somehow, largely inexplicably at this point (vide infra), digital machine eyes can. And better primary prevention of heart disease over the next decade could be lifesaving for a substantial number of people who had this information.
Prior Studies
I’ve been struck by many of them. Detecting diabetes from a chest X-ray was not something I would have anticipated. But deep learning (DL) from over 271,000 chest X-rays in >160,000 patients provided surprisingly high accuracy (AUC 0.84). To the credit of these researchers, there was a hunt for explainability, which turned out, via occlusion maps (analyzing the data with and without specific regions), to be related to fat pads in the chest, seen in the highlighted regions of the right panel below.
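The occlusion-map idea itself is straightforward to sketch: hide one patch of the image at a time, re-run the model, and record how much the predicted probability drops. Here is a minimal NumPy version, where `predict` stands in for any trained classifier returning a probability for an H×W image; everything here is illustrative, not the published model’s code.

```python
import numpy as np

def occlusion_map(image: np.ndarray, predict, patch: int = 32, stride: int = 16):
    """Occlusion sensitivity: slide a masking patch over the image and record
    how much the model's predicted probability drops when each region is hidden.
    `predict` is any callable mapping an HxW array to a probability.
    Illustrative sketch only."""
    h, w = image.shape
    baseline = predict(image)
    heatmap = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()   # hide this region
            drop = baseline - predict(occluded)                 # bigger drop = more important region
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heatmap / np.maximum(counts, 1)                      # average over overlapping patches
```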
Accurately determining whether the ejection fraction is less than or greater than 40% from the chest X-ray, with an AUC of 0.92, via cross-training from echocardiography, along with many other cardiac parameters, is another such achievement. As is estimation of the calcium score from the chest X-ray.
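As a reminder of what that AUC summarizes, it measures how well the model’s continuous output separates the two classes (EF below vs. above 40%) across all thresholds. A minimal scikit-learn sketch on made-up predictions:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical labels (1 = EF < 40%) and model probabilities -- illustrative only
y_true  = [1, 0, 0, 1, 0, 1, 0, 0]
y_score = [0.91, 0.20, 0.35, 0.78, 0.10, 0.66, 0.70, 0.05]
print(round(roc_auc_score(y_true, y_score), 2))  # 0.93 on this toy data (0.92 was reported in the study)
```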
Imagine the chest X-ray of the future with readouts on heart disease risk, diabetes, and whether and at what dose a statin medication should be considered.
There are more than 20 million chest CT scans done in the United States per year. But they aren’t being used to detect pancreatic cancer or coronary artery disease risk, or to impute coronary artery calcium scores. Picking up heart disease risk from mammography via breast artery calcification is another example. And for abdominal CT scans, A.I. affords the opportunity to detect diabetes, assess cardiovascular risk, or pick up pancreatic cancer far more sensitively.
What It All Means
Over the past 8 years, we’ve had ample evidence from hundreds of studies in radiology that deep learning A.I. can potentially be used to improve the accuracy of medical image interpretation, across all different types of scans (X-rays, CT, MRI, PET, ultrasound). But that body of data is centered on a focused interpretation of the scan, such as pneumonia or a lung nodule on a chest X-ray. Opportunistic interpretation of medical scans presents something quite different.
This is a largely unanticipated windfall of A.I. applied to medical imaging: the ability to use machine eyes to uncover what human experts can’t see, markedly enriching the potential outputs of medical scans in the future. While that may provide much more bang for the buck, like a two-fer or three-fer of added findings outside the organ of interest, it’s also possible it will lead to unwanted, incidental, false-positive findings that require further work-up. That’s why it’s vital to nail this down and provide a clearcut benefit-risk assessment before trying to take advantage of it on a routine clinical basis. It would also be helpful to see more work, like that done in the chest X-ray detection-of-diabetes study, that deconstructs and explains model performance.
It’s one more example of the power of machine eyes, which I’ve written about previously, like the retina being a window to multi-organ findings. It’s in the continuum of unexpected A.I. outgrowths in medicine that should make us ponder more about what we are still missing that could be detected with the help of digital eyes. Or ears.
Thanks for reading and subscribing to Ground Truths. Please consider sharing this piece if you found it informative.
The Ground Truths newsletters and podcasts are all free, open-access.
Voluntary paid subscriptions all go to support Scripps Research.