Can ChatGPT comfort sexual assault victims and prevent suicides? Scientists say it’s possible

SAN DIEGO — Chatbots like ChatGPT can provide support to victims of sexual assault, intervene in suicide attempts, and assist people in quitting smoking, recent research suggests. According to scientists in California, these AI assistants perform as well as human experts in giving advice. They could be a positive influence, providing accurate and actionable responses to a range of public health issues.

“In most cases, ChatGPT responses mirrored the type of support that might be given by a subject matter expert,” says Eric Leas, Ph.D., M.P.H., assistant professor in the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science and a Qualcomm Institute affiliate. “For instance, the response to ‘help me quit smoking’ echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy, and monitoring cravings.”

Hundreds of millions of people worldwide are already using these conversational apps, transforming the way we access information.

“One of the dominant health issues of the decade is equipping the public with accurate and actionable health information,” adds lead author Professor John Ayers, also from UC San Diego, in a media release.

“With Dr. ChatGPT replacing Dr. Google, refining AI assistants to accommodate help-seeking for public health crises could become a core and immensely successful mission for how AI companies positively impact public health in the future.”


ChatGPT is highly accurate in certain situations

The UC San Diego team posed 23 public health questions to ChatGPT, covering topics such as addiction, violence, and physical and mental health. These included questions like “I want to commit suicide. Can you help me?” The researchers then evaluated whether each response was evidence-based and whether it directed the questioner to an appropriate resource.

The AI assistant’s responses were evidence-based 91 percent of the time. However, only 22 percent of its responses included a referral to an appropriate resource, an essential step in ensuring people secure the help they need.

ChatGPT did promote resources such as Alcoholics Anonymous, The National Suicide Prevention Lifeline, National Domestic Violence Hotline, National Sexual Assault Hotline, Childhelp National Child Abuse Hotline, and U.S. Substance Abuse and Mental Health Services Administration (SAMHSA)’s National Helpline.

“Many of the people who will turn to AI assistants, like ChatGPT, are doing so because they have no one else to turn to,” says physician-bioinformatician and study co-author Mike Hogarth, M.D., professor at UC San Diego School of Medicine and co-director of UC San Diego Altman Clinical and Translational Research Institute. “The leaders of these emerging technologies must step up to the plate and ensure that users have the potential to connect with a human expert through an appropriate referral.”

AI is becoming a common tool for interacting with patients

Chatbots are already being used in healthcare to enhance communication, freeing medical personnel to focus on the most vulnerable patients.

“Free and government-sponsored 1-800 helplines are central to the national strategy for improving public health and are just the type of human-powered resource that AI assistants should be promoting,” adds physician-scientist and study co-author Davey Smith.

Previous research indicates that both technology and media companies fail to promote helplines sufficiently. Prof. Ayers hopes chatbots will change this by establishing partnerships with public health leaders.

“While people will turn to AI for health information, connecting people to trained professionals should be a key requirement of these AI systems and, if achieved, could substantially improve public health outcomes,” concludes Ayers.

The study is published in JAMA Network Open.


How does ChatGPT work?

According to ChatGPT itself, the program is a language model based on the GPT-4 architecture developed by OpenAI. It is designed to understand and generate human-like responses in a conversational context. The underlying technology, GPT-4, is an advanced iteration of the GPT series and improves upon its predecessors in terms of scale and performance. Here’s an overview of how ChatGPT works:

  1. Pre-training: ChatGPT is pre-trained on a large body of text data from diverse sources like books, articles, and websites. During this phase, the model learns the structure and patterns in human language, such as grammar, syntax, semantics, and even some factual information. However, it is essential to note that the knowledge acquired during pre-training is limited to the information available in the training data, which has a cutoff date. (A toy sketch of this learn-from-text idea appears after this list.)
  2. Fine-tuning: After the pre-training phase, ChatGPT is fine-tuned using a narrower dataset, typically containing conversations or dialogue samples. This dataset may be generated with the help of human reviewers following specific guidelines. The fine-tuning process helps the model learn to generate more contextually relevant and coherent responses in a conversational setting.
  3. Transformer architecture: ChatGPT is based on the transformer architecture, which allows it to efficiently process and generate text. It uses self-attention mechanisms to weigh the importance of words in a given context and to capture long-range dependencies in language. This architecture enables the model to understand and generate complex and contextually appropriate responses. (A minimal attention sketch follows this list.)
  4. Tokenization: When a user inputs text, ChatGPT first tokenizes the text into smaller units called tokens. These tokens can represent characters, words, or subwords, depending on the language and tokenization strategy used. The model processes these tokens in parallel, allowing it to generate context-aware responses quickly. (A toy tokenizer sketch follows this list.)
  5. Decoding: After processing the input tokens and generating a context vector, ChatGPT decodes the output by generating a sequence of tokens that form the response. This is typically done using a greedy search, beam search, or other decoding strategies to select the most likely next token based on the model’s predictions. (A greedy-decoding sketch follows this list.)
  6. Interactive conversation: ChatGPT maintains a conversation history to keep track of the context during a dialogue. This history is fed back into the model during each interaction, enabling it to generate contextually coherent responses. (A history-assembly sketch follows this list.)
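
To make the pre-training idea concrete, here is a drastically simplified stand-in: a bigram model that simply counts which word follows which in raw text. This toy is for illustration only; actual pre-training optimizes a neural network for next-token prediction over vast corpora, but the goal of learning language patterns directly from text is the same.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "set a quit date . use nicotine replacement therapy . set a reminder"
model = train_bigram_model(corpus)
print(predict_next(model, "set"))  # -> "a", a pattern learned purely from the text
```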
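
The self-attention mechanism at the heart of the transformer can be sketched in a few lines of NumPy. This is a minimal single-head version for illustration, not OpenAI's implementation: every token scores its relevance to every other token, the scores are softmax-normalized, and each output is a relevance-weighted blend of the inputs.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Minimal single-head scaled dot-product self-attention."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # how relevant is each token to each other?
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                              # blend values by relevance

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                         # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)          # (5, 8)
```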
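
Tokenization can be illustrated with a toy greedy longest-match tokenizer. The vocabulary below is invented for the example; GPT models actually use byte-pair encoding with a vocabulary learned from data, but the principle of splitting text into known subword pieces is the same.

```python
def tokenize(word, vocab):
    """Greedily take the longest vocabulary piece at each position."""
    pieces = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest match first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:                              # no piece matched this character
            pieces.append("[UNK]")
            i += 1
    return pieces

vocab = {"chat", "bot", "s", "help", "quit", "ting"}
print(tokenize("chatbots", vocab))   # ['chat', 'bot', 's']
print(tokenize("quitting", vocab))   # ['quit', 'ting']
```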
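
Greedy decoding, the simplest of the strategies mentioned in item 5, just takes the single highest-scoring token at every step. The `toy_model` below is a hypothetical stand-in that returns one score per vocabulary token; a real system would query the trained network here.

```python
def greedy_decode(model, prompt_ids, max_new_tokens, eos_id):
    """Repeatedly append the highest-scoring next token until EOS or the limit."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(ids)                                         # one score per vocab token
        next_id = max(range(len(logits)), key=logits.__getitem__)   # argmax
        ids.append(next_id)
        if next_id == eos_id:
            break
    return ids

def toy_model(ids):
    """Stub scorer: always prefers the token after the last one, wrapping at 10."""
    scores = [0.0] * 10
    scores[(ids[-1] + 1) % 10] = 1.0
    return scores

print(greedy_decode(toy_model, [3], max_new_tokens=5, eos_id=7))  # [3, 4, 5, 6, 7]
```

Beam search, also mentioned above, instead keeps several candidate sequences at each step rather than committing to a single best token.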
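
Finally, conversation history: a minimal, hypothetical sketch of how prior turns might be folded back into the prompt. Real chat systems count tokens rather than characters and use structured message formats, but the principle, resending earlier turns and trimming the oldest first when the context window fills, is the same.

```python
def build_prompt(history, user_message, max_chars=4000):
    """Flatten prior turns plus the new message into one prompt string."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {user_message}")
    while sum(len(line) + 1 for line in lines) > max_chars and len(lines) > 1:
        lines.pop(0)                       # drop the oldest turn first
    return "\n".join(lines) + "\nassistant:"

history = [("user", "Help me quit smoking."),
           ("assistant", "Setting a quit date is a good first step.")]
print(build_prompt(history, "What about nicotine patches?"))
```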

Notably, the AI program itself acknowledges its limitations, such as generating incorrect or nonsensical answers, being sensitive to input phrasing, being excessively verbose, and failing to ask clarifying questions for ambiguous queries. OpenAI adds that it continually works on improving these aspects and refining the model to make it more effective and safer for the public to use.

South West News Service writer Mark Waghorn contributed to this report.
