Industry News
23 Mar 2026

Is AI Making Eyecare Professionals Worse at Their Jobs? Aviation Has the Answer

A new perspective paper argues that eyecare professionals need "flight rules" for working with AI before the damage becomes irreversible.

As artificial intelligence (AI) tools become fixtures in retinal screening clinics and OCT reading rooms across Australia, a sobering question is emerging: could AI be quietly eroding the very clinical skills it was designed to support?

A perspective paper published in npj Digital Medicine this January, co-authored by ophthalmologists and Lufthansa aviation safety experts, argues that medicine is on the cusp of making the same costly mistakes that plagued aviation when autopilot technology first took hold.

The automation paradox hits the clinic

The evidence is already accumulating. A recent multicentre study found that endoscopists who regularly performed AI-assisted colonoscopy became worse at detecting adenomas once the AI assistant was withdrawn, with an absolute reduction of 6.0 percentage points in detection rate, suggesting a potential dependency effect. If colonoscopy performance degrades after AI exposure, ophthalmologists have good reason to ask whether the same is happening in their own practice.

From autopilot to digital copilot

The authors are not arguing against AI. They want a reframing of how clinicians relate to it. Rather than treating AI as an autopilot that replaces human judgement, clinicians should view it as a "digital copilot" that supports that judgement, with the clinician remaining the "pilot-in-command", accountable for the final decision.

To get there, the paper outlines five practical steps: regular benchmarking of clinician performance without AI; restricting AI access in training until basic competency is established; embedding AI literacy in medical curricula; mandating scenario-based simulation where AI deliberately fails; and ensuring clinicians understand enough about their AI tools to know when to override them.

The regulatory gap

The paper's most pointed observation concerns accountability. Regulators currently focus on certifying AI as a medical device, but the authors argue the human-AI dyad itself also needs oversight: how clinician competence, accountability, and situational awareness are maintained once AI enters the clinical loop.

For Australian practitioners, with AI-assisted diabetic retinopathy grading and glaucoma detection already in clinical use, the runway for getting this right is shorter than many may realise.