That email from your doctor could actually be ChatGPT — and there’s a good chance you’d never know

NEW YORK — Do you think you’re smart enough to tell the difference between medical advice from an actual human doctor and suggestions from artificial intelligence? New research shows it may be far harder than you think.

The study from New York University’s Tandon School of Engineering and Grossman School of Medicine concludes that people may struggle to differentiate between medical advice given by chatbots and by human healthcare providers. The findings suggest that chatbots like ChatGPT could be valuable tools for helping healthcare providers communicate with patients.

ChatGPT, part of the Generative Pre-trained Transformer (GPT) series, is trained to predict the most probable next word in a conversation using vast amounts of internet data. It’s optimized to respond to user queries, learning from human feedback. However, it’s not without limitations. ChatGPT, like other large language models (LLMs), can sometimes generate biased or inaccurate information and lacks the ability to perform true reasoning.
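For readers curious what “predicting the most probable next word” looks like mechanically, here is a minimal sketch using a toy word-frequency model. It is purely illustrative: ChatGPT uses a large neural network over subword tokens, and the corpus, names, and outputs below are invented for the example.

```python
from collections import Counter, defaultdict

# Toy stand-in for the "vast amounts of internet data" a real model
# trains on; every sentence here is invented for illustration.
corpus = ("the patient should schedule a follow up visit "
          "the patient should take the medication daily "
          "the patient should call the clinic").split()

# Count which word follows each word: a simple bigram model.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the toy corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("patient"))  # -> should
print(predict_next("the"))      # -> patient
```

A real LLM replaces the frequency table with billions of learned parameters and is then refined with human feedback, but the underlying objective, picking a likely continuation, is the same.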

The research involved 392 participants aged 18 and older who were presented with ten patient questions. Half of the responses to these questions were written by humans, while the other half came from ChatGPT. Participants then had to identify the origin of each response and rate how much they trusted the chatbot’s answers on a scale from “completely untrustworthy” to “completely trustworthy.”

The outcome? Participants could only correctly identify chatbot-generated responses 65.5 percent of the time, a figure nearly identical to the 65.1 percent accuracy for identifying human-generated responses. This consistency was observed across various demographic categories.
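To make the arithmetic behind a figure like 65.5 percent concrete, here is a small sketch of how identification accuracy is computed: the share of responses from each source that participants labeled correctly. The data below is invented for illustration, not the study’s dataset.

```python
# Invented example data: (true source of response, participant's guess).
guesses = [
    ("chatbot", "chatbot"),
    ("chatbot", "human"),
    ("chatbot", "chatbot"),
    ("human", "human"),
    ("human", "chatbot"),
]

def identification_accuracy(source: str) -> float:
    """Fraction of responses written by `source` that were guessed correctly."""
    relevant = [(truth, guess) for truth, guess in guesses if truth == source]
    return sum(truth == guess for truth, guess in relevant) / len(relevant)

print(f"chatbot responses identified: {identification_accuracy('chatbot'):.1%}")
print(f"human responses identified: {identification_accuracy('human'):.1%}")
```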

(Image: Woman typing on a laptop with a stethoscope nearby. Is that your doctor sending you advice, or just a tech who turned to GPT for medical questions? Courtesy: National Cancer Institute via Unsplash)

In AI We Trust… Sometimes

The level of trust in chatbot responses varied, however, depending on the nature of the query. Participants placed the most trust in chatbots handling logistical matters, such as scheduling appointments or addressing insurance queries, with an average trust score of 3.94. Preventative care questions, like those about vaccines or cancer screenings, came next at 3.52. Trust diminished for diagnostic and treatment advice, which registered scores of 2.90 and 2.89, respectively.
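Since the article reports numeric averages for a scale it describes only in words, here is a minimal sketch of how verbal ratings might be converted into scores like 3.94. The 5-point Likert mapping is an assumption (the study’s exact scale isn’t stated here), and the middle labels and ratings are invented.

```python
from statistics import mean

# Assumption: a 5-point Likert mapping from verbal labels to numbers.
# The article names only the endpoints, so the middle labels and the
# ratings below are invented for illustration.
SCALE = {
    "completely untrustworthy": 1,
    "somewhat untrustworthy": 2,
    "neutral": 3,
    "somewhat trustworthy": 4,
    "completely trustworthy": 5,
}

ratings_by_topic = {
    "logistics": ["completely trustworthy", "somewhat trustworthy"],
    "diagnosis": ["somewhat untrustworthy", "neutral"],
}

for topic, labels in ratings_by_topic.items():
    score = mean(SCALE[label] for label in labels)
    print(f"{topic}: average trust score {score:.2f}")
```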

The study’s results underscore the potential of chatbots to aid patient-provider communication, especially in areas like administrative tasks and chronic disease management. The authors urged caution, however, about chatbots assuming more clinical roles.

“Providers should remain cautious and exercise critical judgment when curating chatbot-generated advice due to the limitations and potential biases of AI models,” researchers said in a statement.

Implications For Healthcare

The integration of AI into healthcare is no longer a distant possibility but a burgeoning reality. This study offers a glimpse into how artificial intelligence platforms can work in tandem with medical clinics, and it points to the improvements needed before these technologies become effective tools for both doctors and patients. These include:

  1. Reducing Provider Burnout: One of the most immediate benefits of integrating AI like ChatGPT in healthcare is the potential reduction in provider burnout. By automating responses to routine patient queries, healthcare providers can focus on more complex and urgent cases.
  2. Enhancing Patient-Provider Communication: AI chatbots could facilitate quicker responses to patient queries, potentially improving patient satisfaction and engagement. This is particularly relevant in a digital age where patients expect prompt and accessible communication with their healthcare providers.
  3. Navigating the Complexity: The study also highlighted a crucial nuance: trust varied with the complexity of the health issue. This suggests that while AI can be a valuable tool in healthcare, it’s not a one-size-fits-all solution. For more complex medical inquiries, the human touch remains irreplaceable.
  4. Ethical and Legal Considerations: The deployment of AI in healthcare brings forth ethical and legal questions. Issues like data privacy, the accuracy of AI-generated advice, and liability in case of erroneous advice need thorough consideration and clear guidelines.
  5. Future of AI in Healthcare: The study opens doors to future research and development. It suggests that AI chatbots could evolve to handle more complex medical questions with higher accuracy and reliability. Additionally, ongoing advancements in AI technology could lead to more personalized and nuanced patient interactions.

The study was published in the journal JMIR Medical Education.


About the Author

Matt Higgins

Matt Higgins worked in national and local news for 15 years. He started out as an overnight production assistant at Fox News Radio in 2007 and ended in 2021 as the Digital Managing Editor at CBS Philadelphia. Following his news career, he spent one year in the automotive industry as a Digital Platforms Content Specialist contractor with Subaru of America and is currently a freelance writer and editor for StudyFinds. Matt believes in facts, science and Philadelphia sports teams crushing his soul.
