Friday, September 26, 2025
INBV News

AI CEO reveals why it may well be dangerous for health advice

by INBV News
September 24, 2025
in Technology

Perhaps it’s not worth its salt when it comes to health advice.

A surprising medical case report published last month revealed that a 60-year-old man with no history of psychiatric or health conditions was hospitalized with paranoid psychosis and bromide poisoning after following ChatGPT’s advice.

The unidentified man wanted to cut sodium chloride (table salt) from his diet. He ended up substituting sodium bromide, a toxic compound, for three months after consulting the AI chatbot. Bromide can replace chloride in cleaning and sanitation applications — not in the human diet.

Andy Kurtzig, CEO of the AI-powered search engine Pearl.com, explains the ways AI can go wrong when giving medical advice to a user. Courtesy of Pearl.com

“[It was] exactly the kind of error a licensed healthcare provider’s oversight would have prevented,” Andy Kurtzig, CEO of the AI-powered search engine Pearl.com, told The Post. “[That] case shows just how dangerous AI health advice can be.”

In a recent Pearl.com survey, 37% of respondents reported that their trust in doctors has declined over the past year.

Suspicion of doctors and hospitals isn’t new — but it has intensified in recent years due to conflicting pandemic guidance, concerns over financial motives, poor quality of care and discrimination.

Skeptics are turning to AI, with 23% believing AI’s medical advice over a doctor’s.

That worries Kurtzig. The AI CEO believes AI can be useful — but it doesn’t and can’t substitute for the judgment, ethical accountability or lived experience of medical professionals.

Mistrust in the healthcare community has increased significantly since the start of the COVID-19 pandemic. Valeria Venezia – stock.adobe.com

“Keeping humans in the loop isn’t optional — it’s the safeguard that protects lives,” he said.

Indeed, 22% of the Pearl.com survey takers admitted they followed health guidance that was later proven wrong.

There are several ways in which AI can go awry.

A Mount Sinai study from August found that popular AI chatbots are highly vulnerable to repeating and even elaborating on false medical information, a phenomenon known as “hallucination.”

“Our internal studies reveal that 70% of AI companies include a disclaimer to consult a doctor because they understand how common medical hallucinations are,” Kurtzig said.

“At the same time, 29% of users rarely double-check the advice given by AI,” he continued. “That gap kills trust, and it can cost lives.”

Kurtzig noted that AI could misinterpret symptoms or miss signs of a serious condition, leading to unnecessary alarm or a false sense of reassurance. Either way, proper care can be delayed.

In a recent Pearl.com survey, 23% of respondents reported believing AI’s medical advice over a doctor’s. Richman Photo – stock.adobe.com

“AI also carries bias,” Kurtzig said.

“Studies show it describes men’s symptoms in more severe terms while downplaying women’s — exactly the kind of disparity that has kept women waiting years for diagnoses of endometriosis or PCOS,” he added. “Instead of fixing the gap, AI risks hard-wiring it in.”

And finally, Kurtzig said AI can be “downright dangerous” when it comes to mental health.

Experts warn that using AI for mental health support poses significant risks, especially for vulnerable people.

AI has been shown in some situations to produce harmful responses and reinforce unhealthy thoughts. That’s why it’s important to use AI thoughtfully.

Pearl.com (shown here) has human experts verify AI-generated medical responses.

Kurtzig suggests using AI to help frame questions about symptoms, research and popular wellness trends for your next appointment — and leaving the diagnosis and treatment options to the doctor.

He also highlighted his own service, Pearl.com, which has human experts verify AI-generated medical responses.

“With 30% of Americans reporting they can’t reach emergency medical services within a 15-minute drive of where they live,” Kurtzig said, “this is a great way to make professional medical expertise more accessible without the risk.”

When The Post asked Pearl.com if sodium bromide could replace sodium chloride in someone’s diet, the response was: “I absolutely would not recommend replacing sodium chloride (table salt) with sodium bromide in your diet. This would be dangerous for several important reasons…”

© 2022. All Rights Reserved By Inbvnews.com