Appreciate your always balanced analysis.
A.I. is often conflated with LLMs or machine learning. Aggregation, synthesizing & estimating are not what I would describe as A.I.
"For the overall trial there was s statistically significant 17% reduction of all-cause mortality. " - there's a typo; "a" statistically significant.
Thank you for your patient persistence in this investigation. Invaluable.
Much appreciated!
From someone who’s lost close family members to heart attacks, this is great news!
A thorough analysis and wonderful graphics
Thank you!
It looks like the whole benefit is from ED patients who are <65 years old, without known CAD, with heart failure, without Afib, without diabetes but with HTN, and in the lowest possible Modified Early Warning Score group. That is, aside from the heart failure, almost the least sick-appearing group of people who show up to the ED, so it would be interesting to drill down and see what exactly, if anything, was being picked up on.
Has anyone seen an objective analysis of Hippocratic AI or the Med-PaLM LLMs? The studies above were surprising to me only because of the hype those two models have generated, but the answer could very well be (and often is!) that they are good at marketing and nothing else. Would just love to see some studies / analyses similar to the above.
Agreed, Hunter - I do think the performance of the public GPTs is all that bad, at least without the context of fine-tuned medical LLMs (which are freely available on Hugging Face - hint hint)
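For anyone who wants to try one of those fine-tuned medical LLMs, here's a minimal sketch using the Hugging Face transformers library; the model id below is a placeholder, not a specific recommendation - swap in whichever medical fine-tune you want to evaluate:

```python
# Minimal sketch: load a fine-tuned medical LLM from Hugging Face and prompt it.
# "some-org/medical-llm" is a hypothetical placeholder model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/medical-llm"  # placeholder - pick a real medical fine-tune
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "A 58-year-old presents to the ED with chest pain. What ECG findings suggest STEMI?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```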
MINDCUROLOGY is the crystallization of human wisdom.
I don’t have access to the source article, but would like to know more about the participating physicians. Seems that they weren’t very good at ECG interpretation.
Here's their info: https://static-content.springer.com/esm/art%3A10.1038%2Fs41591-024-02961-4/MediaObjects/41591_2024_2961_MOESM1_ESM.pdf
They weren't cardiologists, but that's the way medicine is practiced.
This keeps slipping into my mind: can AI and quantum computing possibly predict cancer-causing mutations? Maybe this is already being done. Or I have no idea what I'm talking about. Thanks.
"patients at 2 hospitals in Taiwan were randomly assigned to their physician getting an AI alert of their patient’s electrocardiogram compared to conventional care (single-blind design), with the primary endpoint of all-cause mortality at 90 days." 👏🏻
"For the overall trial there was a statistically significant 17% reduction of all-cause mortality."
Where does this % come from? Is it the percent difference in all-cause mortality between the intervention and control arms (3.6% in the intervention group and 4.3% in the control group, i.e., a % difference of 17.7%)? Just curious because I couldn't find it in the paper.
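For what it's worth, a quick check of the arithmetic using only the numbers quoted in this comment (3.6% vs. 4.3% all-cause mortality, not figures pulled from the paper) shows how different definitions of "reduction" land near 17%:

```python
# Rough arithmetic on the mortality figures quoted in the comment above
# (3.6% intervention vs. 4.3% control); not taken from the paper itself.
intervention, control = 0.036, 0.043

relative_risk_reduction = (control - intervention) / control                          # ~16.3%
symmetric_pct_difference = (control - intervention) / ((control + intervention) / 2)  # ~17.7%

print(f"Relative risk reduction: {relative_risk_reduction:.1%}")
print(f"Symmetric % difference:  {symmetric_pct_difference:.1%}")
```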
Sacrificial AI training grist is quite the ongoing moral horror show indeed.
A potential problem arises when a computer becomes the sorter of who gets looked at most closely by docs (i.e., which patients are flagged for closer attention). Granted, that wasn't the intent of this study. However, there is evidence that using AI alongside doctors' own work is associated with doctors making more errors - they come to rely on the computer to catch what they miss. Until that problem is fixed, AI needs to be used with extreme caution.
Research of this type can easily lead to a computer determining what access to care is given. This is as bad as the problems that exist now when drug access is determined by a board of people who have never seen the patient or read their medical history.