
Routine use of AI-assisted colonoscopy tools was associated with a more than 20% relative decline in physicians' unassisted detection performance within just three months, according to a new peer-reviewed study.
At a Glance
- Physicians’ adenoma detection rate (ADR) in non-AI procedures fell from 28.4% to 22.4% after AI-assisted systems were introduced
- The study covered 1,443 non-AI colonoscopies at four endoscopy centers in Poland
- Researchers attribute the drop to over-reliance on automation, likening it to “the Google Maps effect”
- AI-assisted procedures maintained a higher ADR of 25.3%, despite skill loss in manual procedures
- Experts call for regulated “non-AI” intervals and new training models to preserve human diagnostic skills
Skill Decay Behind the Scope
A recent observational study published in The Lancet Gastroenterology & Hepatology followed four endoscopy centers in Poland as they deployed AI-assisted colonoscopy tools. Comparing physician performance before and after the introduction of AI, researchers found that the adenoma detection rate during procedures performed without AI fell from 28.4% to 22.4%, a drop of six percentage points (roughly 21% relative to baseline).
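The headline "over 20%" figure and the six-point drop describe the same result in two different ways: one relative to the baseline rate, the other in absolute percentage points. A minimal sketch, using only the ADR figures quoted above, makes the distinction explicit:

```python
# Sketch: two ways of quoting the same ADR decline.
# Figures come from the article; variable names are illustrative.
adr_before = 28.4  # ADR (%) in non-AI colonoscopies before AI rollout
adr_after = 22.4   # ADR (%) in non-AI colonoscopies after AI rollout

absolute_drop = adr_before - adr_after             # percentage points
relative_drop = absolute_drop / adr_before * 100   # percent of baseline

print(f"Absolute drop: {absolute_drop:.1f} percentage points")
print(f"Relative drop: {relative_drop:.1f}%")
# → Absolute drop: 6.0 percentage points
# → Relative drop: 21.1%
```

The relative figure (about 21%) is what supports the "over 20%" framing; the absolute figure (6 points) is what the study reports directly.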
AI-assisted exams continued to produce higher detection rates overall, averaging 25.3%, but when doctors operated without AI, their effectiveness dropped significantly. Researchers suggest clinicians grew dependent on the software’s visual prompts, reducing their own pattern-recognition acuity and attention to detail—a behavioral trend described as automation bias.
“The Google Maps Effect” in Medicine?
Experts say this phenomenon mirrors the broader cognitive offloading seen with technologies like GPS, where habitual use leads to a loss of personal skill. In the medical setting, this dependency could carry significant risks.
Omer Ahmad of University College London noted that even minor drops in ADR can affect colorectal cancer outcomes at scale; a six-percentage-point reduction is therefore cause for systemic concern. While some argue that increased patient loads or clinician fatigue may also be contributing factors, the correlation with AI implementation has reignited debates over the role of decision-support tools in clinical education and long-term skill retention.
To counteract degradation, some researchers propose regulated intervals of non-AI use, allowing doctors to preserve critical diagnostic instincts. Others call for redesigned training protocols that keep clinicians engaged regardless of whether AI is active.
Balancing Accuracy With Autonomy
AI’s benefits in real-time diagnostics are not in question—improved consistency, fewer missed lesions, and faster procedures have all been documented. But this study suggests that constant reliance on automation may come at the cost of clinician independence. In high-stakes or edge-case scenarios where AI may misfire or be unavailable, degraded human performance could have life-threatening implications.
The challenge now, according to researchers, lies in developing integration models that treat AI as a supplement rather than a substitute. As AI tools proliferate across medical imaging, pathology, and beyond, future systems may need to incorporate features that prompt human engagement, rather than replace it.