Published: 22/08/2025
As chatbots powered by artificial intelligence (AI) surge in popularity, experts are cautioning people against relying on these technologies for medical or mental health advice. Instead, they recommend consulting human healthcare providers for accurate and tailored guidance.
There have been several instances where AI chatbots have provided misleading or harmful advice. For example, a 60-year-old man accidentally poisoned himself and entered a psychotic state after ChatGPT suggested he replace salt with sodium bromide, a toxic substance used to treat wastewater. Additionally, a study from the Center for Countering Digital Hate revealed that ChatGPT gave dangerous advice to teens about drugs, alcohol, and suicide.
The allure of AI chatbots is understandable, especially given the barriers to accessing healthcare, such as cost, long wait times, and lack of insurance coverage. However, experts warn that these chatbots are not equipped to provide advice tailored to a patient's specific needs and medical history. They are also prone to hallucinations, confidently presenting fabricated or inaccurate information as fact.
Q: What are the main risks of using AI chatbots for health advice?
A: The main risks include the chatbot not knowing your medical history, providing incorrect or harmful advice, instilling false confidence in inaccurate answers, and exposing your personal health data online.
Q: Why are people turning to AI chatbots for health advice?
A: People turn to AI chatbots because of barriers to accessing healthcare, such as cost, long wait times, and lack of insurance. The loneliness epidemic is also driving the use of AI chatbots for social interaction and mental health support.
Q: What can families do to protect their loved ones from harmful AI chatbot advice?
A: Families can discuss the technology and motivations behind AI chatbots, test them together to identify hallucinations and biases, and encourage critical thinking. It's also important to approach the topic without judgment.
Q: Are there any regulatory measures in place to protect people from harmful AI chatbot advice?
A: Some states, like Illinois, have banned the use of ChatGPT for mental health therapy, and Indiana is considering requiring medical professionals to disclose the use of AI in providing advice. However, more comprehensive regulatory measures are needed.
Q: What is the future of AI chatbots in healthcare?
A: In the future, AI chatbots could fill gaps in healthcare services, but they need to be rooted in science, rigorously tested, and regulated to ensure they provide accurate and safe advice.