19 Comments
Apr 29 · Liked by Eric Topol

Appreciate your always balanced analysis.

A.I. is often conflated with LLMs or machine learning. Aggregation, synthesizing, and estimating I would not describe as A.I.


"For the overall trial there was s statistically significant 17% reduction of all-cause mortality." - there's a typo; it should be "a" statistically significant.

Apr 29 · Liked by Eric Topol

Thank you for your patient persistence in this investigation. Invaluable.

author

Much appreciated!

Apr 29 · Liked by Eric Topol

From someone who’s lost close family members to heart attacks, this is great news!


A thorough analysis and wonderful graphics.

author

Thank you!

Apr 29 · edited Apr 29

"However, the ability of LLMs to improve efficiency and reduce cognitive burden has not been established"

Oh, the dread "cognitive burden". You know. Thinking.

This is like when people get financial managers because they don't want to have to think about money. Then the money's gone, and it's a mystery why. It doesn't matter what you've delegated; you still have to know enough, and be able to think well enough, to evaluate what your designee is doing. You also have to be doing enough oversight to sense when things are sliding off the rails, and it's robots all the way down if you try to outsource this to robots.

The best use of AI is as a committee: a second, corporate-expert opinion. A good doc evaluates; the AI evaluates; compare the two; the doc takes the best of what they've missed from the AI assessment.

If the main problem is not enough good docs, this is solved in med schools and hospital-management systems, not robot factories. And if the barrier to doing that is greed, then this is a legislative matter.


It looks like the whole benefit is from ED patients who are under 65 without known CAD, with heart failure, without AFib, without diabetes but with HTN, and in the lowest possible Modified Early Warning Score group. So, other than the heart failure, this is almost the least-sick-appearing group of people who show up to the ED; it would be interesting to drill down and see what exactly, if anything, was being picked up on.


Anyone seen an objective analysis of the Hippocratic AI or Med-PaLM LLMs? The studies above were surprising to me only because of the hype those two models have generated, but the answer could very well be (and often is!) that they are good at marketing and nothing else. Would just love to see some studies/analysis similar to the above.


Agreed Hunter, I don't think the performance of the public GPTs is all that bad - especially without the context of fine-tuned medical LLMs (which are freely available on Hugging Face - hint hint).


MINDCUROLOGY is the crystallization of human wisdom.


I don’t have access to the source article, but I would like to know more about the participating physicians. It seems they weren’t very good at ECG interpretation.

author

Here's their info: https://static-content.springer.com/esm/art%3A10.1038%2Fs41591-024-02961-4/MediaObjects/41591_2024_2961_MOESM1_ESM.pdf

They weren't cardiologists, but that's the way medicine is practiced.


This keeps slipping into my mind. Can AI and quantum computing possibly predict cancer-causing mutations, and if so, how? Maybe this is already being done. Or I have no idea what I'm talking about. Tks.


"patients at 2 hospitals in Taiwan were randomly assigned to their physician getting an AI alert of their patient’s electrocardiogram compared to conventional care (single-blind design), with the primary endpoint of all-cause mortality at 90 days." 👏🏻


"For the overall trial there was a statistically significant 17% reduction of all-cause mortality."

Where does this % come from? Is it the percent difference in all-cause mortality between the intervention and control arms (3.6% in the intervention group and 4.3% in the control group, i.e., a % difference of 17.7%)? Just curious because I couldn't find it in the paper.
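For what it's worth, the two candidate calculations can be checked quickly. This is just a sketch of the arithmetic on the reported raw event rates (3.6% vs 4.3%); the trial's 17% figure may well come from a hazard ratio rather than from either of these:

```python
# Reported 90-day all-cause mortality rates (assumed from the comment above)
intervention, control = 0.036, 0.043

# Relative risk reduction: 1 - (intervention rate / control rate)
rrr = 1 - intervention / control

# Symmetric percent difference, using the mean of the two rates as denominator
pct_diff = (control - intervention) / ((control + intervention) / 2)

print(f"relative risk reduction: {rrr:.1%}")      # 16.3%
print(f"symmetric % difference:  {pct_diff:.1%}") # 17.7%
```

So the 17.7% in the comment is the symmetric percent difference, while the plain relative risk reduction from the raw rates is about 16.3%; neither lands exactly on 17%, which is why a time-to-event hazard ratio seems the likelier source.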


Sacrificial AI training grist is quite the ongoing moral horror show indeed.


A potential problem occurs when a computer is the sorter of who gets looked at most closely by docs (i.e., flagged so that docs examine the patient more closely). Granted, this wasn't the intent of this study. However, there is evidence that using AI simultaneously with doctors' own work is associated with doctors making more errors; they come to rely on the computer to catch what they miss. Until that problem is fixed, AI needs to be used with extreme caution.

Research of this type can easily lead to a computer determining what access to care is given. This is as bad as the problems that exist now when drug access is determined by a board of people who have never seen the patient or read their medical history.
