INBV News
AI CEO reveals why it can be dangerous for health advice

by INBV News
September 24, 2025
in Technology

Perhaps AI isn’t worth its salt when it comes to health advice.

A surprising medical case report published last month revealed that a 60-year-old man with no history of psychiatric or health conditions was hospitalized with paranoid psychosis and bromide poisoning after following ChatGPT’s advice.

The unidentified man wanted to cut sodium chloride (table salt) from his diet. After consulting the AI chatbot, he spent three months substituting sodium bromide, a toxic compound. Bromide can stand in for chloride in cleaning and sanitation applications, but not in human consumption.

Andy Kurtzig, CEO of the AI-powered search engine Pearl.com, explains the many ways AI can go wrong when giving a user medical advice. Courtesy of Pearl.com

“[It was] exactly the kind of error a licensed healthcare provider’s oversight would have prevented,” Andy Kurtzig, CEO of the AI-powered search engine Pearl.com, told The Post. “[That] case shows just how dangerous AI health advice can be.”

In a recent Pearl.com survey, 37% of respondents reported that their trust in doctors has declined over the past year.

Suspicion of doctors and hospitals isn’t new, but it has intensified in recent years due to conflicting pandemic guidance, concerns over financial motives, poor quality of care and discrimination.

Skeptics are turning to AI, with 23% saying they trust AI’s medical advice over a doctor’s.

That worries Kurtzig. The AI CEO believes AI can be useful, but it doesn’t and can’t substitute for the judgment, ethical accountability or lived experience of medical professionals.

Mistrust of the healthcare community has increased significantly since the start of the COVID-19 pandemic. Valeria Venezia – stock.adobe.com

“Keeping humans in the loop isn’t optional; it’s the safeguard that protects lives,” he said.

Indeed, 22% of the Pearl.com survey takers admitted they had followed health guidance that was later proven wrong.

There are several ways AI can go awry.

A Mount Sinai study from August found that popular AI chatbots are highly vulnerable to repeating, and even elaborating on, false medical information, a phenomenon known as “hallucination.”

“Our internal studies reveal that 70% of AI firms include a disclaimer to consult a doctor because they understand how common medical hallucinations are,” Kurtzig said.

“At the same time, 29% of users rarely double-check the advice given by AI,” he continued. “That gap kills trust, and it could cost lives.”

Kurtzig noted that AI can misinterpret symptoms or miss signs of a serious condition, leading to unnecessary alarm or a false sense of reassurance. Either way, proper care can be delayed.

In a recent Pearl.com survey, 23% of respondents said they believed AI’s medical advice over a doctor’s. Richman Photo – stock.adobe.com

“AI also carries bias,” Kurtzig said.

“Studies show it describes men’s symptoms in more severe terms while downplaying women’s, exactly the kind of disparity that has kept women waiting years for diagnoses of endometriosis or PCOS,” he added. “Instead of closing the gap, AI risks hard-wiring it in.”

And finally, Kurtzig said AI can be “downright dangerous” when it comes to mental health.

Experts warn that using AI for mental health support poses significant risks, especially for vulnerable people.

AI has been shown in some situations to produce harmful responses and reinforce unhealthy thoughts. That’s why it’s important to use AI thoughtfully.

Pearl.com (shown here) has human experts verify AI-generated medical responses.

Kurtzig suggests using AI to help frame questions about symptoms, research and popular wellness trends ahead of your next appointment, while leaving diagnosis and treatment decisions to the doctor.

He also highlighted his own service, Pearl.com, which has human experts verify AI-generated medical responses.

“With 30% of Americans reporting they can’t reach emergency medical services within a 15-minute drive from where they live,” Kurtzig said, “this is a great way to make professional medical expertise more accessible without the risk.”

When The Post asked Pearl.com if sodium bromide could replace sodium chloride in someone’s diet, the response was: “I absolutely wouldn’t recommend replacing sodium chloride (table salt) with sodium bromide in your diet. This would be dangerous for several important reasons…”

© 2022. All Right Reserved By Inbvnews.com
