They’re going AI-ncognito.
Don’t like the concept of IT people peeping in your private ChatGPT messages?
Never fear: Parent company OpenAI has unveiled a new stealth mode that protects users’ conversations from employees’ prying eyes, as well as from potential leaks.
“We’ll be moving more and more in this direction of prioritizing user privacy,” Mira Murati, the Silicon Valley firm’s chief technology officer, told Reuters of the privacy-promoting measure.
Much like an internet browser’s “incognito mode,” the new feature, released Tuesday, allows AI diehards to toggle off “Chat History & Training” in settings.
That means, in theory, that ChatGPT will no longer store a compendium of the user’s previous chats, nor will those chats be reviewed by “AI trainers to improve our systems,” per the policy stipulated on the site.
Murati said that the measure was part of a monthslong privacy campaign aimed at putting users “in the driver’s seat” regarding data collection.
“It’s completely eyes off and the models are superaligned: They do the things that you want them to do,” the tech expert insisted.
This measure comes amid an uptick in privacy concerns surrounding ChatGPT.
Last month, Italian regulators banned the Microsoft-backed bot following a reported data breach, which raised alarm bells over user privacy and children’s safety.
The agency said that OpenAI had “no legal basis” for harvesting user data that was being gathered “to train the algorithms that power the platform.”
Around the same time, OpenAI reported that a glitch resulted in ChatGPT accidentally sharing random users’ conversation histories, PC Magazine reported.
GPTers got wise to the snafu after noticing that their history function was displaying unfamiliar conversations from apparent strangers.
“We feel awful about this,” OpenAI CEO Sam Altman tweeted about the leak, which thankfully didn’t disclose users’ identities.