Perhaps it’s not worth its salt when it comes to health advice.
A surprising medical case report published last month revealed that a 60-year-old man with no history of psychiatric or health conditions was hospitalized with paranoid psychosis and bromide poisoning after following ChatGPT’s advice.
The unidentified man was keen on cutting sodium chloride (table salt) from his diet. He ended up substituting sodium bromide, a toxic compound, for three months after consulting the AI chatbot. Bromide can stand in for chloride in cleaning and sanitation products, but it is not safe for human consumption.
“[It was] exactly the kind of error a licensed health care provider’s oversight would have prevented,” Andy Kurtzig, CEO of the AI-powered search engine Pearl.com, told The Post. “[That] case shows just how dangerous AI health advice can be.”
In a recent Pearl.com survey, 37% of respondents reported that their trust in doctors has declined over the past year.
Suspicion of doctors and hospitals isn’t new, but it has intensified in recent years due to conflicting pandemic guidance, concerns over financial motives, poor quality of care and discrimination.
Skeptics are turning to AI, with 23% trusting AI’s medical advice over a doctor’s.
That worries Kurtzig. The AI CEO believes AI can be useful, but it doesn’t and can’t substitute for the judgment, ethical accountability or lived experience of medical professionals.
“Keeping humans in the loop isn’t optional — it’s the safeguard that protects lives,” he said.
Indeed, 22% of Pearl.com survey takers admitted to following health guidance that was later proven wrong.
There are several ways in which AI can go awry.
A Mount Sinai study from August found that popular AI chatbots are highly vulnerable to repeating and even elaborating on false medical information, a phenomenon known as “hallucination.”
“Our internal studies reveal that 70% of AI companies include a disclaimer to consult a doctor because they understand how common medical hallucinations are,” Kurtzig said.
“At the same time, 29% of users rarely double-check the advice given by AI,” he continued. “That gap kills trust, and it could cost lives.”
Kurtzig noted that AI could misinterpret symptoms or miss signs of a serious condition, resulting in unnecessary alarm or a false sense of reassurance. Either way, proper care might be delayed.
“AI also carries bias,” Kurtzig said.
“Studies show it describes men’s symptoms in more severe terms while downplaying women’s, exactly the kind of disparity that has kept women waiting years for diagnoses of endometriosis or PCOS,” he added. “Instead of closing the gap, AI risks hard-wiring it in.”
And finally, Kurtzig said AI can be “downright dangerous” when it comes to mental health.
Experts warn that using AI for mental health support poses significant risks, especially for vulnerable people.
AI has been shown in some situations to provide harmful responses and reinforce unhealthy thoughts. That’s why it’s important to use AI thoughtfully.
Kurtzig suggests having it help frame questions about symptoms, research and popular wellness trends for your next appointment — and leaving diagnosis and treatment options to the doctor.
He also highlighted his own service, Pearl.com, which has human experts confirm AI-generated medical responses.
“With 30% of Americans reporting they can’t reach emergency medical services within a 15-minute drive from where they live,” Kurtzig said, “this is a great way to make professional medical expertise more accessible without the risk.”
When The Post asked Pearl.com if sodium bromide could replace sodium chloride in someone’s diet, the response was: “I absolutely wouldn’t recommend replacing sodium chloride (table salt) with sodium bromide in your diet. This would be dangerous for several important reasons…”