HealthDay News — ChatGPT provides accurate but sometimes incomplete answers to questions around vaccine hesitancy and safety, according to a study published online Sept. 3 in Human Vaccines & Immunotherapeutics.
Antonio Salas, from Universidade de Santiago de Compostela in Spain, and colleagues examined ChatGPT's capacity to generate opinions on vaccine hesitancy by interrogating the chatbot about the 50 most prevalent counterfeit messages, false and true contraindications, and myths circulating on the internet regarding vaccine safety.
The researchers found that the majority of questions received accurate answers, with most responses graded as "excellent" or "good" and an average score of nine out of 10. Overall expert-graded accuracy was 85.5 percent, with the remaining 14.5 percent of responses deemed "accurate but with gaps."
“Overall, ChatGPT can detect counterfeit questions related to vaccines and vaccination. In its current form, the language used by this artificial intelligence is not overly technical, making it easily understandable to the public but without sacrificing scientific rigor,” the authors write. “We acknowledge that the present-day version of ChatGPT cannot replace an expert or scientific evidence per se. However, the results suggest it could be a reliable source of information to the public.”
Several authors disclosed ties to the pharmaceutical industry.