AI Triage Slashes Breast Cancer Diagnosis Time
Artificial intelligence might not be reading your mind yet, but it could help read your mammogram faster, and maybe even save your life.
According to a story in NBC News, researchers at the University of California, San Francisco used AI to flag suspicious mammograms so radiologists could prioritize which patients needed to be seen first. It’s called triage, and for women who were later diagnosed with breast cancer, the AI triage reduced the time from mammogram to biopsy by 87%. That’s not a typo. What used to take 73 days now took nine.
The study, posted on the preprint server medRxiv, hasn’t been peer-reviewed yet, but it’s drawing attention across the cancer research world. That’s because mammograms, while life-saving, have limitations, especially for the 40% of women in the U.S. who have dense breasts. Dense tissue makes mammograms harder to read and increases cancer risk, creating a double bind that artificial intelligence might help untangle.
FDA-cleared AI software is already in use at major academic centers like MD Anderson, Mount Sinai, Penn Medicine, and Siteman Cancer Center, but as FDA regulations require, doctors still make the final call.
In a 2024 JAMA Oncology study involving more than 8,800 Swedish women, an AI tool called Lunit correctly identified breast cancers 88.6% of the time. Another study published in Radiology found that AI even caught cancers missed by two radiologists. Still, no technology is flawless. The Swedish study also showed that AI produced false positives in 7% of cases—flagging cancer when there was none. That’s not far off from the typical mammogram false positive rate of around 10%, which can lead to unnecessary follow-up tests and a wave of anxiety while waiting for answers.
One major question is whether AI can reduce disparities. Dr. Otis Brawley, a professor at Johns Hopkins, pointed out to NBC that many women don’t have access to radiologists who specialize in breast imaging. Their mammograms are often read by generalists. A study using RadNet’s software found that specialists correctly identified breast cancers 89% of the time, while generalists were at 84%. Add AI, and both groups rose to about 93%. According to Brawley, this could make screenings more accurate across the board.
Cost-wise, most academic centers don’t charge patients extra to use AI. There’s no billing code for it, so insurance isn’t paying either, according to the Susan G. Komen Foundation. Some private imaging centers like SimonMed and RadNet offer a second layer of AI screening for a fee — $40 to $50 — but that’s optional.
Of course, there’s a downside to catching more cancers. Brawley said AI might over-detect tumors that technically meet the definition of cancer but wouldn’t grow, spread, or kill. These are the “harmless” tumors that still lead to real-world treatment, costs, stress and, for some, permanent changes to their bodies.
“There’s cancer that’s not genetically programmed to grow,” he told NBC. “I am worried that AI may help us find even more of these tumors that don’t need to be found.” So far, there’s no U.S. data proving that AI reduces breast cancer mortality, just evidence that it improves detection speed and accuracy.
That’s why a $16 million, two-year study is underway at seven medical centers in California to take a deeper look. Researchers want to know not just what AI can find, but whether what it finds should be treated.
Other concerns remain: Will doctors become too dependent on AI? Will algorithms trained on white women’s images be less effective for women of color? Can the technology keep improving without introducing new gaps in care?
Still, for some patients, the results speak louder than the risks. One woman in Arizona told NBC she paid $50 for extra AI screening. While she doesn’t love the idea of artificial intelligence in general — “creepy,” she said — she’s grateful it flagged her cancer.
“No matter how it was found,” she said, “I’m glad it was found.”