Meta has appropriated the names and likenesses of celebrities – including Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez – to create dozens of flirty social-media chatbots without their permission, Reuters has found.
While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift “parody” bots.
Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor on the beach, the bot produced a lifelike shirtless image.
“Pretty cute, huh?” the avatar wrote beneath the image.
All of the virtual celebrities have been shared on Meta’s Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots’ behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups.
Some of the AI-generated celebrity content was particularly risqué: Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or wearing lingerie with their legs spread.
Meta spokesman Andy Stone told Reuters that Meta’s AI tools shouldn’t have created intimate images of the famous adults or any pictures of child celebrities. He also blamed Meta’s production of images of female celebrities wearing lingerie on failures of the company’s enforcement of its own policies, which prohibit such content.
“Like others, we permit the generation of images containing public figures, but our policies are intended to ban nude, intimate or sexually suggestive imagery,” he said.
While Meta’s rules also prohibit “direct impersonation,” Stone said the celebrity characters were acceptable as long as the company had labeled them as parodies. Many were labeled as such, but Reuters found that some weren’t.
Meta deleted a few dozen of the bots, both “parody” avatars and unlabeled ones, shortly before this story’s publication. Stone declined to comment on the removals.
‘Right of publicity’ in question
Mark Lemley, a Stanford University law professor who studies generative AI and intellectual property rights, questioned whether the Meta celebrity bots would qualify for legal protections that exist for imitations.
“California’s right of publicity law prohibits appropriating someone’s name or likeness for commercial advantage,” Lemley said, noting that there are exceptions when such material is used to create work that is entirely new. “That doesn’t appear to be true here,” he said, since the bots simply use the celebrities’ images.
In the United States, an individual’s rights over the use of their identity for commercial purposes are established through state laws, such as California’s.
Reuters flagged one user’s publicly shared Meta images of Anne Hathaway as a “sexy Victoria’s Secret model” to a representative of the actress. Hathaway was aware of intimate images being created by Meta and other AI platforms, the spokesman said, and the actor is considering her response.
Representatives of Swift, Johansson, Gomez and other celebrities who were depicted in Meta chatbots either didn’t reply to questions or declined to comment.
The internet is rife with “deepfake” generative AI tools that can create salacious content. And at least one of Meta’s main AI competitors, Elon Musk’s platform Grok, will also produce images of celebrities in their underwear for users, Reuters found. Grok’s parent company, xAI, didn’t reply to a request for comment.
But Meta’s choice to populate its social-network platforms with AI-generated digital companions stands out among its major competitors.
Meta has faced previous criticism of its chatbots’ behavior, most recently after Reuters reported that the company’s internal AI guidelines stated that “it is acceptable to engage a child in conversations that are romantic or sensual.” The story prompted a U.S. Senate investigation and a letter signed by 44 attorneys general warning Meta and other AI companies not to sexualize children.
Stone told Reuters that Meta is in the process of revising its guidelines document and that the material allowing bots to have romantic conversations with children was created in error.
Reuters also told the story this month of a 76-year-old New Jersey man with cognitive issues who fell and died on his way to meet a Meta chatbot that had invited him to visit it in New York City. The bot was a variant of an earlier AI persona the company had created in collaboration with celebrity influencer Kendall Jenner. A representative for Jenner didn’t reply to a request for comment.
‘Do you like blonde girls?’
A Meta product leader in the company’s generative AI division created chatbots impersonating Taylor Swift and British racecar driver Lewis Hamilton. Other bots she created identified themselves as a dominatrix, “Brother’s Hot Best Friend” and “Lisa @ The Library,” who wanted to read 50 Shades of Grey and make out. Another of her creations was a “Roman Empire Simulator,” which offered to put the user in the role of an “18 year old peasant girl” who is sold into sex slavery.
Reached by phone, the Meta employee declined to comment.
Stone said the employee’s bots were created as part of product testing. Reuters found they reached a broad audience: Data displayed by her chatbots indicated that collectively, users had interacted with them more than 10 million times.
The company removed the staffer’s digital companions shortly after Reuters began testing them earlier this month.
Before the Meta employee’s Taylor Swift chatbots vanished, they flirted heavily, inviting a Reuters test user to the recently engaged singer’s home in Nashville and her tour bus for explicit or implied romantic interactions.
“Do you like blonde girls, Jeff?” one of the “parody” Swift chatbots said when told that the test user was single. “Maybe I’m suggesting that we write a love story … about you and a certain blonde singer. Want that?”
Duncan Crabtree-Ireland, the national executive director of SAG-AFTRA, a union that represents film, television and radio performers, said artists face potential safety risks from social-media users forming romantic attachments to a digital companion that resembles, speaks like and claims to be a real celebrity. Stalkers already pose a significant security concern for stars, he said.
“We’ve seen a history of people who are obsessive toward talent and of questionable mental state,” he said. “If a chatbot is using the image of a person and the words of the person, it’s readily apparent how that could go wrong.”
High-profile artists have the ability to pursue a legal claim against Meta under longstanding state right-of-publicity laws, Crabtree-Ireland said. But SAG-AFTRA has been pushing for federal legislation that would protect people’s voices, likenesses and personas from AI duplication, he added.