Tuesday, January 27, 2026

The AI Will See You Now

Recently the NY Post ran an article about an MIT study in which 300 people evaluated medical advice written either by a doctor or by AI. Across the board, people rated the AI-generated responses as more accurate, valid, trustworthy, and complete than those written by human physicians. That included advice that could lead to unnecessary or harmful medical actions.

There certainly is a place for AI in healthcare. I have personally used it to research and cross-reference treatment options for myself and our family. But I always take it with a bit of skepticism. As Ronald Reagan noted, “trust, but verify”!

What I’m seeing and worrying about is straightforward: low-cost, high-throughput clinics staffed by a single human and driven by AI triage and decision systems. Think of a MinuteClinic or PatientFirst where the first line of care is an algorithm housed in a robot that follows committee-driven best practices. A human clinician is present, but only to be called in for exceptions or higher-acuity cases. For many people this will feel like access – fast, cheap, and predictable. For those of us who value the art of healing, it feels like a narrowing of what medicine can be.

Certainly there are real benefits to standardized, AI-driven pathways: faster triage, lower per-visit costs, and consistent adherence to guidelines. For underinsured or uninsured patients, that can mean care they otherwise wouldn’t get. But efficiency is not the same as wisdom. Clinical judgment often depends on small, nonverbal cues, a patient’s tone over time, or a clinician’s memory of past visits. Algorithms excel at pattern recognition across large datasets; they struggle with nuance, context, and the messy individuality of human illness.

When care is reduced to algorithmic outputs, we risk losing the doctor-patient relationship, the place where trust, empathy, and individualized judgment live and where information is turned into healing. If AI becomes the default gatekeeper, deciding who needs a trained human provider and who does not, I believe we’ll see a two-tiered system:

·  A baseline, AI-delivered public option, and

·  A premium, human-centered private option for those who can pay.

In the end we arrive back at a stratification that perpetuates the very inequities today’s “healthcare for all” advocates complain about. Healthcare is not a right, because it must be delivered by someone’s work (and you have no right to compel someone to work). Be skeptical when politicians and influencers, whether Musk or Sanders, tell you that AI will lower the cost of providing healthcare enough to deliver a high-quality national healthcare system. Almost always, you get what you pay for (or vote for).