Welcome to Commure Uncharted, a Q&A series where we sit down with healthcare industry thought leaders to discuss what the future of AI in healthcare could look like.
In this installment, we are joined by Commure’s Chief Medical Officer, Jamie Colbert, MD, MBA, who still sees patients and works with clinicians and clients on how AI fits into care. Dr. Colbert discusses privacy and the sanctity of the exam room, explains when AI tools can help versus when a person needs to be in the loop, and shares what actually builds trust in busy workflows. He also offers a realistic take on accountability and bias and sets expectations for the next five years.
Q: Looking five years ahead, where do you see AI in healthcare? Will doctors still be face-to-face with patients?
Dr. Colbert:
It’s important to remember that AI is still in its infancy. ChatGPT has been available for less than three years, so it is hard to predict exactly where things are going. Today, most organizations are not comfortable putting patients in front of AI without a human in the loop for anything high-stakes. By high-stakes, I mean clinical decisions about how to treat a patient or how to handle a symptom or concern related to their health.
However, I do expect the field to evolve. It may not be very far down the road before we feel confident that AI is drawing on the right body of evidence, providing high-quality recommendations, and handling cognitive tasks that today belong to people with 10 or 15 years of specialized training.
That being said, I do not expect AI to completely replace physicians. AI will take over many aspects of healthcare, but there will still be a need for someone to sit with you, hold your hand, and lay a stethoscope on you. There is a science of medicine, and there is an art of medicine. The humanistic component is essential. It is health care because there is care in it. In the near term, say five years, I do not see a point where we feel cared for entirely by AI.
Q: What are the most common concerns you’re hearing from patients about AI in healthcare?
Dr. Colbert:
There are a few broad categories of concern. First, there is the issue of privacy and what happens to patient information after it is shared with AI. If a large language model owned by a big tech company like Google or OpenAI is involved, what happens to the data that is recorded to generate a note or to power a chatbot? Who has access to it? Could it be used to develop new products? Could it be traced back to the patient? Patients worry that the exam room is no longer a sacred space and that their words could show up elsewhere.
Another category is trust in the technology itself and whether AI can provide accurate answers. Is it trustworthy? Is it drawing on appropriate resources? Was it trained on up-to-date datasets, clinical evidence, and studies?
A third category is concern about losing the human connection. Patients value and trust their providers. They worry hospitals and health systems will try to replace that connection with technology, so they no longer have a friendly face when they are not feeling well. There is no substitute for someone holding your hand when you are in pain or uncomfortable. Even if the AI has the correct answers, being alone in a sterile room without a person to offer comfort and reassurance would feel like a loss.
However, I find that the overwhelming majority of patients are excited about the potential of AI to help improve their healthcare experience. While they may have concerns, they are on the whole hopeful that AI will be a net benefit to them.
Q: Do patients trust AI more when it works behind the scenes versus when it interacts with them directly?
Dr. Colbert:
It depends on how the AI is used. When AI works behind the scenes to help physicians stay current and summarize treatment options, the patient is still interacting with their doctor, and the role of the AI is more in the background. In my experience, this tends to be acceptable to nearly all patients.
Direct interaction varies by task. For something algorithmic like scheduling, many patients are fine with an agent because it is a discrete task that gets them an appointment. When the interaction moves into clinical territory, like what to do next for pain or shortness of breath, the threshold for trust is higher, and some patients will want a person who can answer questions and handle more complex situations.
Q: What’s driving clinician hesitancy around utilizing AI tools in daily workflows?
Dr. Colbert:
A couple of reasons come to mind. First, physicians have been misled in the past by technology that promised efficiency and ended up adding work. Many tools translated to more clicks and more administrative tasks, so there is skepticism about whether AI will truly make their jobs better and more efficient or whether it is just another shiny object that generates buzz.
Trust is another issue. For a complex case that might take two or three hours of literature review, AI could summarize the research so a provider spends 20 minutes instead. That can be valuable, but clinicians need to feel confident that the model used the right studies, is current, and did not miss important evidence. The same questions apply to ambient AI. If it listens to a 20-minute conversation and condenses it into 250 to 300 words, it has to be accurate. It also has to make judgment calls about what stays in the note and what gets left out, and providers need to trust how those decisions are made.
There is also the transparency challenge. AI can sometimes feel like a black box, and a physician may not know exactly which sources the AI used to generate its response. Citation practices are getting better, but there are still open questions.
Q: How much of the hesitation around AI comes from technical limitations versus a general fear of change and disruption?
Dr. Colbert:
They are both equally important. It depends on who you speak with and what they worry about, but each is a real barrier to broader adoption. We have not solved the workforce questions yet: where AI fits in the workflow, what changes we make to daily routines, and how those changes make physicians happier and more productive. Providers also look at the outputs that define their work, which vary by role and setting. They want tools that help increase productivity and revenue where relevant, while still letting them enjoy what they do and helping patients feel more cared for.
On the technical side, the models are still evolving. The way to address that is to keep watching them, testing them, and continuing to really poke at them. We need to understand the limitations and the biases, and make sure we do not have blinders on that cause us to miss issues that come from relying too heavily on AI. That ongoing scrutiny surfaces problems and blind spots, and over time it gives us a path to address them as the tools improve.
Q: If AI makes a mistake in a clinical setting, how should health systems think about accountability?
Dr. Colbert:
When a physician has a treatment relationship with a patient, the doctor is ultimately responsible for the care. Physicians who use AI should continue to own the information shared with patients, the visit documentation, and any AI output that directly affects care. That is why there still needs to be a human in the loop for anything that interfaces with patients. The physician is still the one who signs their name and is the one who can be sued if something goes wrong.
Q: How should health systems address potential bias when using AI?
Dr. Colbert:
One approach is to consult other sources and use AI as one tool rather than fully relying on it when there are concerns about bias. Bias can show up in many ways. If a model is more oriented to patients of a certain race or demographic, and the patient in front of you does not fit that profile, you can try to tweak the AI by introducing the patient’s unique characteristics.
If the patient has a disability, for example, make sure the system accounts for the special considerations that come with that. My hope is that as these models improve, we will better account for potential biases by acknowledging differences between patients and steering the output toward recommendations that match what makes that individual patient special, instead of generic defaults.
Q: What is one myth or fear about healthcare AI you wish we could put to rest for good?
Dr. Colbert:
The myth that AI is replacing doctors. Right now, that is not the reality, and neither patients nor doctors should worry about it. I am not seeing any widespread attempt to fully replace clinicians with AI. Patients should think of AI as a set of tools that help deliver better care so they have a better experience and achieve better outcomes.