An irony about AI is that 60 percent of Americans say they would feel uncomfortable if their healthcare provider relied on it for medical care, when in reality AI is already transforming healthcare and is in use throughout the patient experience. Case in point: ChatGPT excited us all about AI, but provider organizations were using AI-driven chatbots and machine learning in surgery prep well before it arrived. We have yet to reach the point where AI is an explicit part of diagnosis or treatment recommendations, but many academic medical centers are testing prototypes.

So, where can AI improve the patient experience?

Take scheduling. AI can deliver enormous benefits here, allowing a patient to provide a bot with essential information and then get back a priority-appropriate date, time, and office location for a visit. This automation frees staff from the bulk of the scheduling process, which has become a nightmare due to the pandemic-driven provider and staff shortages.

Traditionally, scheduling a colonoscopy often means calling a dedicated scheduling number, waiting on hold, being triaged to one of several facilities, and receiving a time slot two, three, or more months out. If someone cancels in the interim, staff must spend the cycles to confirm the slot is still open, then work through the cancellation list until they find a taker. With AI, all of this can happen automatically. Physicians can also use AI to identify their availability from their clinical schedules and HR records; the bot can then record their open slots in the scheduling software.
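The cancellation-backfill process described above amounts to a priority-ordered waitlist. A minimal sketch follows; the class and field names are hypothetical, and a real system would sit behind the bot and the scheduling software rather than in a single script.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class WaitlistEntry:
    priority: int                           # lower number = more urgent
    patient_id: str = field(compare=False)  # not used for ordering

class CancellationWaitlist:
    """Toy model: when a slot opens, offer it to waiting patients in priority order."""

    def __init__(self):
        self._heap = []

    def add(self, patient_id, priority):
        heapq.heappush(self._heap, WaitlistEntry(priority, patient_id))

    def fill_open_slot(self, accepts):
        # 'accepts' stands in for contacting the patient (e.g., via the bot);
        # keep offering the slot down the list until someone takes it.
        # Declined patients are simply dropped in this toy model.
        while self._heap:
            entry = heapq.heappop(self._heap)
            if accepts(entry.patient_id):
                return entry.patient_id
        return None
```

In practice, a patient who declines would stay on the list for the next opening, and the priority would come from clinical triage rules rather than a hand-assigned number.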

Integrating medical documentation with the electronic health record (EHR) is another use case where AI could improve clinical care. This integration offers a primary care provider several benefits, including:

  • More efficient documentation of the clinical care components the EHR requires, improving both billing and care delivery
  • Connecting procedures to the proper billing codes to increase claim submission accuracy
  • Allowing primary care providers to focus on empathetic patient interaction to improve data collection, diagnoses, treatments, and patient compliance
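The second benefit above — connecting procedures to the proper billing codes — can be pictured as a lookup that flags anything it cannot match for human review. The mapping below is purely illustrative, not a real CPT table, and a production system would use licensed code sets and coder sign-off.

```python
# Illustrative only: a toy lookup linking documented procedures to billing codes.
# The codes shown are examples; verify against a real, licensed CPT table.
PROCEDURE_TO_CODE = {
    "screening colonoscopy": "45378",
    "office visit, established patient": "99213",
}

def suggest_codes(documented_procedures):
    """Return (matched suggestions, items needing human review)."""
    suggestions, needs_review = [], []
    for proc in documented_procedures:
        code = PROCEDURE_TO_CODE.get(proc.lower().strip())
        if code:
            suggestions.append((proc, code))
        else:
            needs_review.append(proc)  # never auto-submit unmatched items
    return suggestions, needs_review
```

The design point is the `needs_review` path: claim-submission accuracy improves only if unmatched or ambiguous items are routed to a human coder rather than guessed.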

AI-driven clinical care: jumping the gun?
Clinical decision-making driven by AI remains a source of concern. AI has no knowledge or intent and does not know right from wrong. It does not think or reason; it runs purely on statistical relationships between words and phrases. A human being, by contrast, always has intent, can always reason, and can usually sense when something is off, whether through intuition or decades of experience.

AI faithfully reproduces the biases in the data that trained it, and the history of clinical trials offers ample examples of biased study cohorts delivering results that do not generalize to a more diverse population. Recent research shows that humans, at least, can reduce their bias, albeit slowly.

As for patient care decision-making, thankfully, I see few doctors, nurses, fellows, or residents mindlessly following a clinical algorithm unless they know its basis and limitations.  

In considering AI as part of the patient experience, we can quickly gather requirements we must apply to any AI application used by patients or clinicians:

  • How will you show that the application is valuable and trustworthy for users?
  • How do you keep it up to date when dealing with fast-changing medical specialties such as oncology?
  • How will it integrate into existing workflows so that its information is relevant, actionable, and timely, without delivering unintended outcomes or making care delivery less efficient?
  • Can the intelligence be meted out to the user in a digestible way that delivers value?

Successful AI, in short, must augment the clinician’s decision, not supplant it. 

Change management + security: the one-two punch
A surefire way to ensure a project’s failure is to ignore change management. If you want to implement AI – or any technology – you need to figure out the stakeholders and recruit representatives from those stakeholder groups. You must work with the representatives and other experts to develop implementation plans for the new technology. These representatives become information conduits to the senior people in their stakeholder group, keeping them informed while constantly obtaining feedback. If you do this correctly, you build a team of project champions that enhance the probability of successful implementation. Everybody is informed and feels comfortable with the plan. That is the way you get the change management to happen. 

Security is my other big AI concern. The idea that a single person in their bedroom with a computer and a fast internet connection can use AI to generate unbelievable amounts of misinformation and make it look authentic is a problem. I call it misinformation on steroids. My other concern is the theft of personal information for profit, whether through identity theft or fraud against Medicaid, Medicare, other government agencies, and insurance companies. Data leaders in healthcare must take all these issues seriously.

Adopting a systems perspective
With all these benefits and their attendant risks, what needs to happen from a systems perspective to make AI effective in healthcare?

  1. Better development. Software developers and vendors must keep evolving AI technology to be more accurate, reliable, and based on the highest-quality data available for its intended use. They must continue to improve their models, keep testing them, and be honest about their shortfalls. They must also educate their internal staff, particularly their salespeople, about their models’ strengths and weaknesses, and share those findings honestly with their clients.
     
  2. Better implementation. Provider and payer organizations must understand the basis of AI use and then implement it in a way that has the proper safeguards and does not generate unintended outcomes. 
     
  3. Better patient understanding. The third responsibility falls to the public, who should learn what AI can and cannot do for their well-being rather than treat it as a be-all and end-all power over their lives. Knowledge is indeed power when it comes to AI.

Transparency is the best medicine
Given patient and information-security sensitivities, patients should be told when AI is involved in their care, even if its presence is not obvious to them. It is self-evident that you are likely working with AI when you use a voice response system or a bot, but it still makes sense to point this out to your stakeholders. This transparency is similar to a food manufacturer stating that its product contains GMOs. Many people have no problem with AI as long as they know it is being used. Transparency creates clarity and trust.

The use of AI in patient care will continue, and AI will continue transforming healthcare moving forward. It is our responsibility to deploy it correctly. I do not think AI will run amok and destroy humanity, but it will one day have that power. AI will give us undesirable results if we are not careful about establishing guidelines within its models and thoughtfully planning how to deploy them. Therefore, we must build intent and guardrails into healthcare-targeted AI to accelerate medical research and help deliver better clinical outcomes and patient experiences.