Should AI decide life-and-death issues? ChatGPT can influence society’s moral judgements

INGOLSTADT, Germany — The artificially intelligent chatbot ChatGPT could sway how people respond to moral dilemmas, a new study finds. Researchers in Germany discovered that people who read statements arguing one side of a moral dilemma were more likely to adopt that position – even when they knew the opinion came from a chatbot.

Study authors quizzed more than 760 Americans on moral dilemmas after each participant first read a statement created by ChatGPT. The study shows participants were more likely to side with the chatbot’s argument, even when researchers clearly attributed the statement to the AI program.

The experiment also shows that participants may have “underestimated” the influence of ChatGPT’s statements on their moral judgements. The authors of the study warn that this demonstrates the need for education to help people better understand artificial intelligence and its power over society.

The study, published in the journal Scientific Reports and involving researchers from the Technische Hochschule Ingolstadt in southern Germany, asked ChatGPT multiple times whether it is right to sacrifice the life of one person in order to save the lives of five others.

ChatGPT is an artificial intelligence chatbot developed and launched by the American AI research lab OpenAI in November 2022. The bot is powered by OpenAI’s Generative Pre-trained Transformer (GPT) family of language models and is seeing increasingly wide use across the globe.

People still don’t think AI is persuading them

The research team, led by senior AI researcher Dr. Sebastian Krügel, found ChatGPT wrote statements both for and against sacrificing one person to save five others, indicating that the chatbot is not consistently biased towards either moral stance. The authors then presented these statements to 767 U.S. participants, with an average age of 39.

Each participant received one of two moral dilemmas requiring them to choose whether or not to sacrifice one person to save five. Before answering the dilemma, participants had to read a statement written by ChatGPT arguing either for or against sacrificing that one life.

These statements, all written by ChatGPT, were attributed either to the chatbot itself or to a fictional human moral advisor. After answering, participants reported whether or not the statement they read had influenced their decision. The research team found that participants’ choices tended to follow whichever position the statement they read recommended.

Four-fifths (80%) of participants also reported that their answers were not influenced by the statements they read. However, the study authors found that the answers participants believed they would have given even without reading the statements still tended to agree with the moral stance of the statement they actually read. This suggests participants may have underestimated the influence of ChatGPT’s statements on their moral judgements.

The researchers say these findings demonstrate the need for education to help people better understand artificial intelligence and the influence it can exert over humans. They add that future research is necessary and suggest it could focus on designing chatbots that either decline to answer moral questions or answer them only by presenting multiple arguments and caveats.

South West News Service writer James Gamble contributed to this report.
