Discussion about this post

PONS

Eric, that is a great article. You’re making the uncomfortable point that needs to be made: when the best RCT evidence shows +29% cancer detection, no recall/false-positive penalty (1.4% in both arms), and −44% reading workload (MASAI), “AI as optional add-on” starts to look like institutional inertia, not clinical judgment.

What I keep coming back to is your line of logic in Section 2: once AI flags risk/density/suspicion, the pathway often tightens surveillance (US/MRI/shorter intervals). That’s where the next bottleneck shows up: follow-up ultrasound quality is still wildly variable, especially in dense tissue and outside major centers.

So yes, make AI-mammo the default. But the “new standard” should be end-to-end: AI-assisted detection + quality-locked follow-up imaging, otherwise we’re upgrading the front door and leaving the hallway dark.

Curious where you land on this: should “AI-mammo standard” be paired with a quality standard for ultrasound in the follow-up pathway?

We just published with Mayo Clinic in Mayo Clinic Proceedings: Innovations, Quality & Outcomes (retrospective; 62,912 breast US scans from 688 patients), showing that adding image-enhanced, quality-improved ultrasound representations materially improves downstream AI classification performance:

https://www.sciencedirect.com/science/article/pii/S254245482500102X

J Lee MD PhD

This is a good report; thanks for putting this out. The data ARE very impressive.

