
ChatGPT in Healthcare: Where It Is Now and a Roadmap for Where It Is Going - The HSB Blog 2/2/23


Our Take:

AI chatbots such as ChatGPT, and tools being developed like it, hold significant promise for revolutionizing the way care is delivered and the way patients and care providers connect with each other. Because they are easy to use and broadly accessible, patients from all backgrounds could receive effective support from them, particularly in medical administration and diagnosis, mental health treatment, patient monitoring, and a variety of other clinical contexts. However, ChatGPT, the most advanced publicly available AI chatbot yet, is still in its beta stage, and it is important to keep in mind that these chatbots operate on a statistical basis and lack real knowledge. As a result, they frequently give inaccurate information and fabricate answers by inference from the data they were trained on, raising concerns about whether they can be trusted to deliver correct information. This is especially true for patients with chronic conditions, who may be putting their health in danger by following chatbot advice that could be seriously flawed.


Key Takeaways:

  • Chatbots have the potential to reduce annual US healthcare spending by 5-10%, translating to $200-360 billion in savings (NBER)

  • In a study evaluating the performance of virtual assistants in helping patients maintain physical activity and diet and track medication, 79% of participants reported that virtual assistants had the potential to change their lifestyle (International Journal of Environmental Research and Public Health)

  • Artificial intelligence solutions in healthcare are easier to access than ever before, and care providers are quickly adopting AI chatbots to address workforce shortages and gaps in access to care for tasks they see as easy to automate.

  • As noted by STATNews, “ChatGPT was trained on a dataset from late 2021, [so] its knowledge is currently stuck in 2021…it would be crucial for the system to be updated in near real-time for it to remain highly accurate for health care”


The Problem:

With new developments in advanced medicine and technology, including artificial intelligence and machine learning tools, care providers are working hard to adapt these systems to healthcare, particularly where they have the potential to address an ongoing workforce shortage. Moreover, as populations age, the gap between the incidence of disease and the capacity to treat it widens. For example, according to the journal Preventing Chronic Disease, in 2018 an estimated 51.8% of US adults had at least one of the ten most commonly diagnosed chronic conditions, and 27.2% of adults had multiple chronic conditions. As a result, hospitals and physicians (providers) are seeing greater levels of care utilization and a growing need to connect these patients with care resources that address their conditions and/or prevent them from becoming more severe. In addition, given the inefficiencies and disparities in the delivery of care in the U.S. healthcare system (e.g., a lack of providers in certain rural areas, long wait times for specialists), technology may be well positioned to address these deficiencies and improve outcomes. Over time, as these tools become more sophisticated, they can be used for initial triage, escalating to clinicians for a higher level of care and reserving human intelligence and experience for where it is most needed.


The Backdrop:

The advent of AI chatbots holds great promise in changing the way we deliver and manage care, especially for practices that lack the resources to handle large numbers of patients and the amount of data they generate. According to Salesforce, a chatbot (coined from the term “chat robot”) is a computer program that simulates human conversation, either by voice or text communication, and is designed to solve a problem. Early versions of chatbots were used to engage customers alongside classic customer service channels like phone, email, and social media. Current chatbots such as ChatGPT leverage machine learning to continually refine and improve their performance using data provided to and analyzed by an algorithm. As noted in WIRED magazine, the technology at the core of ChatGPT is not new: “it is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web.” What makes ChatGPT stand out is that “it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5” (which provides an intuitive interface for users to have a conversation with AI). “This tweak has unlocked a new capacity to respond to all kinds of questions, giving the powerful AI model a compelling new interface just about anyone can use.” After creating an account with OpenAI (the developers behind ChatGPT), all users have to do is type their query into a text box to begin using ChatGPT’s services. Although ChatGPT is still in beta, its capabilities are impressive, and it has surpassed any previously publicly available AI chatbot to date.
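
For readers curious what programmatic access to this class of model looks like, below is a minimal sketch in Python using OpenAI's own client library and a GPT-3.5-family completion model available at the time of writing. The model name, prompt, and parameters are illustrative assumptions, not a recommendation; the ChatGPT web interface itself requires no code at all.

    # Minimal sketch: querying a GPT-3.5-family model via OpenAI's Python client.
    # Assumes the "openai" package is installed and an API key from an OpenAI account.
    import openai

    openai.api_key = "sk-..."  # hypothetical placeholder; use your own key

    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3.5-family model (illustrative choice)
        prompt="In plain language, summarize the warning signs of dehydration in older adults.",
        max_tokens=200,
        temperature=0.2,  # lower temperature favors more conservative, consistent output
    )

    # The generated text is returned in the first "choice"
    print(response["choices"][0]["text"].strip())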


ChatGPT also makes learning easier, as it can quickly summarize nearly any topic the user wishes, saving hours of research and digging through links to understand it. It can help people compose written materials of all kinds, including essays, stories, speeches, resumes, and more. It is also good at helping people come up with ideas; since AI is particularly good at dealing with volume, it can produce a litany of possible solutions for humans to evaluate. Perhaps the largest change it will bring lies in computer programming. ChatGPT and other AI chatbots have been found to be particularly good at writing and fixing computer code (a brief illustration follows below), and there is evidence that AI assistance in coding could cut total programming time in half, according to research conducted by GitHub.
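
As a purely illustrative example of the “fixing code” use case, consider the kind of buggy function a developer might paste into ChatGPT with the request “find and fix the bug.” Both the bug and the suggested correction below are hypothetical stand-ins for a typical exchange, not output from the model itself.

    # Before: an off-by-one error silently drops the last reading,
    # so the average is computed over an incomplete sum.
    def average_readings_buggy(readings):
        total = 0
        for i in range(len(readings) - 1):  # bug: never adds readings[-1]
            total += readings[i]
        return total / len(readings)

    # After: the kind of idiomatic fix a chatbot typically suggests.
    def average_readings_fixed(readings):
        return sum(readings) / len(readings)

    print(average_readings_fixed([98.6, 99.1, 98.4]))  # 98.7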


For certain healthcare administrative tasks, chatbots seem to have a bright future and can connect patients to their care providers in ways that weren’t possible before. Access to healthcare services is one of the most apparent, particularly for those living in rural and remote areas far from the nearest hospital. According to the Journal of Public Health, evidence clearly shows that Americans living in rural areas have higher levels of chronic disease, worse health outcomes, and poorer access to digital health solutions compared with those in urban and suburban areas. Not only do individuals living in rural areas live much farther from their nearest hospital, but the facilities themselves often lack the medical personnel and outpatient services common at more urban hospitals, which contributes to inconsistencies in care and outcomes. Similarly, given the increased administrative burden that accompanies the digitization of healthcare and healthcare records, doctors are increasingly occupied by a deluge of tasks better suited to automation. For example, tasks like appointment scheduling, medical records management, and responding to routine and frequently asked patient questions aren’t always the most effective use of medical professionals’ time and could be handled in a consumer-friendly and efficient manner by chatbots like ChatGPT (a toy routing sketch follows below).
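
To make the route-and-escalate idea concrete, here is a deliberately simple sketch of how a clinic might hand routine requests to a chatbot and escalate everything else to staff. The intents, keyword rules, and replies are all invented for illustration; a real deployment would swap the toy keyword matcher for a language model, while keeping clinicians in the loop for anything clinical or ambiguous.

    # Hypothetical sketch: route routine administrative requests to canned
    # chatbot replies; escalate everything else to a human. Illustrative only.
    ROUTINE_INTENTS = {
        "schedule_appointment": "I can help you book a visit. What day works best?",
        "office_hours": "We are open Monday through Friday, 8am to 5pm.",
        "refill_request": "I can forward your refill request to the pharmacy team.",
    }

    def classify_intent(message: str) -> str:
        """Toy keyword matcher standing in for a real intent classifier."""
        text = message.lower()
        if "appointment" in text or "schedule" in text:
            return "schedule_appointment"
        if "hours" in text or "open" in text:
            return "office_hours"
        if "refill" in text:
            return "refill_request"
        return "escalate"

    def respond(message: str) -> str:
        intent = classify_intent(message)
        # Anything outside the routine set goes to a human.
        return ROUTINE_INTENTS.get(intent, "Let me connect you with our staff.")

    print(respond("Can I schedule an appointment for next week?"))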

Given how easily users can interact with ChatGPT, there is also the potential to eliminate some of the traditional barriers to the delivery of healthcare, particularly the one-to-many issue created by clinician shortages. However, this will not happen in the near term and will require some refinement. First, as noted by STATNews, “ChatGPT was trained on a dataset from late 2021, [so] its knowledge is currently stuck in 2021…even if the company is able to regularly update the vast swaths of information the tool considers across the internet, it would be crucial for the system to be updated in near real-time for it to remain highly accurate for health care uses.” In addition, the article quoted Elliot Bolton of Stanford’s Center for Research on Foundation Models, who noted that ChatGPT is “susceptible to inventing facts and inventing things, and so text might look [plausible], but may have factual errors.”


Bearing that in mind, should ChatGPT follow the path of other chatbots in medicine, it does have potential in a number of clinical settings, particularly in the field of mental health. Here it is important to note that neither ChatGPT nor other chatbots possess the skills of a licensed and trained mental health professional or the ability to make a nuanced diagnosis, so they should not be used for diagnosis, drug therapy, or treatment of patients in severe distress. That said, the study of chatbots in healthcare has been most extensive around mental health, with most systems designed to “empower or improve mental health, perform mental illness screening, perform behavior change techniques and in programs to reduce/treat smoking and/or alcohol dependence.” [Review of AI in QAS]. For example, a study from the Journal of Medical Internet Research reported that chatbots have seen promising results in mental health treatment, particularly for depression and anxiety. Among other things, “participants reported that chatbots are useful for 1) practicing conversations in a private place; 2) learning; 3) making users feel better; 4) preparing users for interactions with health care providers; 5) implementing the learned skills in daily life; 6) facilitating a sense of accountability from daily check-ins; and 7) keeping the learned skills more prominently in users’ minds.”


Similarly, in a literature review published in the Canadian Journal of Psychiatry assessing the impact of chatbots in a variety of psychiatric studies, numerous applications were found, including the efficacy of chatbots in helping patients recall details from traumatic experiences, decreasing self-reported anxiety or depression with the use of cognitive behavioral therapy, decreasing alcohol consumption, and helping people who may be reluctant to share their experiences with others to talk through their trauma in a healthy way.


However, as pointed out by Mashable, ChatGPT wasn’t designed to provide therapeutic care, and “while the chatbot is knowledgeable about mental health and may respond with empathy, it can’t diagnose users with a specific mental condition, nor can it reliably and accurately provide treatment details.” In addition to the general cautions about ChatGPT noted previously (it was only trained on data through 2021 and it may invent facts and things), Mashable notes three additional cautions when using ChatGPT for help with mental illness: 1) it was not designed to function as a therapist and can’t diagnose, noting that “therapists may frequently acknowledge when they don’t know an answer to a client’s questions, in contrast to a seemingly all-knowing chatbot,” in order to get patients to reflect on their circumstances and discover their own insights; 2) ChatGPT may be knowledgeable about mental health, but it is not always comprehensive or right, as its responses can contain incorrect information and it is unclear what clinical information or treatment protocols it was trained on; and 3) there are existing alternatives to using ChatGPT for mental health help, including chatbots specifically designed for mental health, like Woebot and Wysa, which offer AI-guided therapy for a fee. While it is important to keep these cautions and challenges in mind, they also provide a roadmap of the areas where ChatGPT is likely to be most effective once these issues are addressed.


Chatbots are also well suited to monitoring patients and tracking symptoms and behaviors, and many are used as virtual assistants to monitor patients’ well-being and ensure they are adhering to their treatment schedules. A study published in the International Journal of Environmental Research and Public Health evaluated the performance of a virtual health assistant in helping patients maintain physical activity and a healthy diet and track their medication. Results revealed that 79% of participants believed that virtual health assistants have the potential to help change their lifestyles. However, common complaints were that the chatbot didn’t have as much personality as a real human, that it performed poorly when participants initiated spontaneous communication outside of pre-programmed “check-in” times, and that it lacked the ability to provide more personalized feedback.


Implications:

AI-based chatbots like ChatGPT have the potential to address many of the challenges facing the medical community and to help alleviate the effects of the workforce shortage. As many have noted, a report by the National Bureau of Economic Research estimated that chatbots have the potential to reduce annual US healthcare spending by 5-10%, translating to an estimated $200-360 billion in savings. In addition, thanks to their 24/7 availability, chatbots can respond to patients’ questions and concerns at any hour, addressing pressing medical issues and reaching people in a non-intrusive way.


Moreover, as AI technology continues to develop, an increasing number of healthcare providers are beginning to leverage these solutions to address persistent industry problems such as high costs, medical personnel shortages, and inequity in care delivery. Chatbots are well poised to fill these roles and increase efficiency in the process, given that they can perform many routine tasks at a level similar to humans. Generally, assessments of physician opinions on the use of chatbots in healthcare indicate a willingness to continue their use: a study published in the Journal of Medical Internet Research reported that of 100 physicians surveyed regarding the effectiveness of chatbots, 78% believed them to be effective for scheduling appointments, 76% believed them to be helpful in locating nearby health clinics, and 71% believed they could aid in providing medication information. Given current workforce shortages, chatbots can act as virtual assistants to medical professionals, greatly expanding a physician’s capabilities and reducing the need for auxiliary staff to attend to such matters. Although AI chatbot platforms and algorithmic solutions show great promise in optimizing routine work, current technology is not yet sufficient to allow independent operation, as there are certain nuances that are best addressed by humans. Also, as one research review found, “acceptance of new technology among clinicians is limited, which possibly can be explained by misconceptions about the accuracy of the technology.”


Along with the opportunities for ChatGPT in healthcare, there are a number of challenges to implementing the technology. According to a study from the journal Medicine, Health Care and Philosophy, since chatbots lack the lived experience, empathy, and understanding of unique situations that real-world medical professionals have, they should not be trusted to provide detailed patient assessments, analyze emergency health situations, or triage patients, because they may inadvertently give false or flawed advice without knowledge of the personal factors affecting patients’ health conditions. Some clinicians worry that they may one day be replaced by AI-powered machines or chatbots, which lack the personal touch, and often the specific facts and data, that make in-person consultations significantly more effective. While over time AI may be able to mimic human responses, chatbots will still need to be developed so they can effectively react to unexpected and unusual user inputs, provide detailed and factual responses, and deliver a wide range of variability in their responses before they can have a future in clinical practice. This will ultimately require further developer input and more experience interacting with patients in order to adequately personalize chatbot conversations.


Additionally, despite safeguards put in place by developers, such as pre-programmed controls to decline requests it cannot handle and the ability to block “unsafe” and “sensitive” content, an article published in WIRED magazine noted that ChatGPT will sometimes fabricate facts, restate incorrect information, and exhibit hateful statements and biases that previous users may have expressed to it, leading to unfair treatment of certain groups. The article noted that the safeguards put in place by ChatGPT’s developers can easily be bypassed by rewording a question, and emphasized the need for strong and comprehensive protections to prevent abuse of these systems. In addition, there is also a need for data security to ensure patient privacy, since all of this data will be fed to the private companies developing these tools. As the aforementioned Mashable article noted about using ChatGPT for mental health advice, while “therapists are prohibited by law from sharing client information, people who use ChatGPT…do not have the same privacy protections.”

