ChatGPT may be known to plagiarize an essay or two, but its rogue counterparts are doing far worse.
Duplicate chatbots with criminal capabilities are surfacing on the dark web and, much like ChatGPT, can be accessed for a modest monthly subscription or one-time fee.
These large language models, as they're technically known, essentially function as a tool kit for savvy online scammers.
Several dark web chatbots — DarkBERT, WormGPT and FraudGPT, the last of which sells for $200 a month or $1,700 annually — have recently caught the eye of cybersecurity firm SlashNext. They were flagged for their potential to create phishing scams and phony texts that are remarkably believable.
The company found evidence that DarkBERT illicitly sold ".edu" email addresses at $3 apiece to con artists posing as students. The addresses are used to wrongfully access student deals and discounts on marketplaces like Amazon.
Another grift, facilitated by FraudGPT, involves soliciting someone's banking info by posing as a trusted entity, such as the bank itself.
These types of swindles are nothing new, but they're more accessible than ever thanks to artificial intelligence, warns Lisa Palmer, an AI strategist for consulting firm AI Leaders.
"This is about crime that can be personalized at a massive scale. [Scammers] can create campaigns that are highly personalized for hundreds of targeted victims versus having to create them one by one," she told The Post, adding that fraudulent deepfake video and audio are now easy to create.
Furthermore, these attacks don’t just pose a threat to the elderly and less-than-tech-savvy.
"Since [these kinds of models] are trained across large amounts of publicly available data, they could be used to search for patterns in data that's shared about a government, one that they're looking to infiltrate or attack," Palmer said. "It could be gathering information about specific businesses that could allow for things like ransom or reputation attacks."
AI could also facilitate a major crime that cybersecurity teams already struggle to defend against: identity theft.
"Think about things like identity theft and being able to create identity theft campaigns," Palmer said. "They're highly personalized at a massive scale. What you're talking about here is taking crimes to an elevated level."
Serving justice to those responsible for the outlaw LLMs won't be easy, either.
"For those that are sophisticated organizations, it's exceptionally hard to catch them," Palmer said.
"On the other end of that, we also have new criminals who are being emboldened by these language models, because they make it easier for people without high-tech skills to enter illegal enterprises."