Mark Zuckerberg, chief executive officer of Meta Platforms Inc., left, arrives at federal court in San Jose, California, US, on Tuesday, Dec. 20, 2022.
David Paul Morris | Bloomberg | Getty Images
Toward the end of 2022, engineers on Meta’s team combating misinformation were ready to debut a key fact-checking tool that had taken half a year to build. The company needed all the reputational help it could get after a string of crises had badly damaged the credibility of Facebook and Instagram and given regulators additional ammunition to bear down on the platforms.
The new product would let third-party fact-checkers like The Associated Press and Reuters, as well as credible experts, add comments at the top of questionable articles on Facebook as a way to verify their trustworthiness.
But CEO Mark Zuckerberg’s commitment to make 2023 the “year of efficiency” spelled the end of the ambitious effort, according to three people familiar with the matter who asked not to be named because of confidentiality agreements.
Over multiple rounds of layoffs, Meta announced plans to eliminate roughly 21,000 jobs, a mass downsizing that had an outsized effect on the company’s trust and safety work. The fact-checking tool, which had initial buy-in from executives and was still in a testing phase early this year, was completely dissolved, the sources said.
A Meta spokesperson didn’t respond to questions about job cuts in specific areas and said in an emailed statement that “we remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community.”
Across the tech industry, as companies tighten their belts and impose hefty layoffs to address macroeconomic pressures and slowing revenue growth, wide swaths of people tasked with protecting the internet’s most-populous playgrounds are being shown the exits. The cuts come at a time of increased cyberbullying, which has been linked to higher rates of adolescent self-harm, and as the spread of misinformation and violent content collides with the exploding use of artificial intelligence.
In their most recent earnings calls, tech executives highlighted their commitment to “do more with less,” boosting productivity with fewer resources. Meta, Alphabet, Amazon and Microsoft have all cut thousands of jobs after staffing up rapidly before and during the Covid pandemic. Microsoft CEO Satya Nadella recently said his company would suspend salary increases for full-time employees.
The slashing of teams tasked with trust and safety and AI ethics is a sign of how far companies are willing to go to meet Wall Street demands for efficiency, even with the 2024 U.S. election season, and the online chaos expected to ensue, just months away from kickoff. AI ethics and trust and safety are different departments within tech companies but are aligned on goals related to limiting the real-life harm that can stem from use of their companies’ products and services.
“Abuse actors are usually ahead of the game; it’s cat and mouse,” said Arjun Narayan, who previously served as a trust and safety lead at Google and TikTok parent ByteDance, and is now head of trust and safety at news aggregator app Smart News. “You’re always playing catch-up.”
For now, tech companies appear to view both trust and safety and AI ethics as cost centers.
Twitter effectively disbanded its ethical AI team in November and laid off all but one of its members, along with 15% of its trust and safety department, according to reports. In February, Google cut about one-third of a unit that aims to protect society from misinformation, radicalization, toxicity and censorship. Meta reportedly ended the contracts of about 200 content moderators in early January. It also laid off at least 16 members of Instagram’s well-being group and more than 100 positions related to trust, integrity and responsibility, according to documents filed with the U.S. Department of Labor.
Andy Jassy, chief executive officer of Amazon.com Inc., during the GeekWire Summit in Seattle, Washington, U.S., on Tuesday, Oct. 5, 2021.
David Ryder | Bloomberg | Getty Images
In March, Amazon downsized its responsible AI team and Microsoft laid off its entire ethics and society team, the second of two layoff rounds that reportedly took the team from 30 members to zero. Amazon didn’t respond to a request for comment, and Microsoft pointed to a blog post regarding its job cuts.
At Amazon’s game streaming unit Twitch, staffers learned of their fate in March from an ill-timed internal post from Amazon CEO Andy Jassy.
Jassy’s announcement that 9,000 jobs would be cut companywide included 400 employees at Twitch. Of those, about 50 were part of the team responsible for monitoring abusive, illegal or harmful behavior, according to people familiar with the matter who spoke on the condition of anonymity because the details were private.
The trust and safety team, or T&S as it’s known internally, was losing about 15% of its staff just as content moderation was seemingly more important than ever.
In an email to employees, Twitch CEO Dan Clancy didn’t call out the T&S department specifically, but he confirmed the broader cuts among his staffers, who had just learned about the layoffs from Jassy’s post on a message board.
“I’m disappointed to share the news this way before we’re able to communicate directly with those who will be impacted,” Clancy wrote in the email, which was viewed by CNBC.
‘Hard to win back consumer trust’
A current member of Twitch’s T&S team said the remaining employees in the unit are feeling “whiplash” and worry about a potential second round of layoffs. The person said the cuts caused a big hit to institutional knowledge, adding that there was a significant reduction in Twitch’s law enforcement response team, which deals with physical threats, violence, terrorist groups and self-harm.
A Twitch spokesperson didn’t provide a comment for this story, instead directing CNBC to a blog post from March announcing the layoffs. The post didn’t include any mention of trust and safety or content moderation.
Narayan of Smart News said that with a lack of investment in safety at the major platforms, companies lose their ability to scale in a way that keeps pace with malicious activity. As more problematic content spreads, there’s an “erosion of trust,” he said.
“In the long run, it’s really hard to win back consumer trust,” Narayan added.
While layoffs at Meta and Amazon followed demands from investors and a dramatic slump in ad revenue and share prices, Twitter’s cuts resulted from a change in ownership.
Almost immediately after Elon Musk closed his $44 billion purchase of Twitter in October, he began eliminating thousands of jobs. That included all but one member of the company’s 17-person AI ethics team, according to Rumman Chowdhury, who served as director of Twitter’s machine learning ethics, transparency and accountability team. The last remaining person ended up quitting.
The team members learned of their status when their laptops were turned off remotely, Chowdhury said. Hours later, they received email notifications.
“I had just recently gotten head count to build out my AI red team, so these would be the people who would adversarially hack our models from an ethical perspective and try to do that work,” Chowdhury told CNBC. She added, “It really just felt like the rug was pulled as my team was getting into our stride.”
Part of that stride involved working on “algorithmic amplification monitoring,” Chowdhury said, or tracking elections and political parties to see if “content was being amplified in a way that it shouldn’t.”
Chowdhury referenced an initiative in July 2021, when Twitter’s AI ethics team led what was billed as the industry’s first-ever algorithmic bias bounty competition. The company invited outsiders to audit the platform for bias, and made the results public.
Chowdhury said she worries that now Musk “is actively seeking to undo all the work we have done.”
“There is no internal accountability,” she said. “We served two of the product teams to make sure that what’s happening behind the scenes was serving the people on the platform equitably.”
Twitter didn’t provide a comment for this story.
Advertisers are pulling back in places where they see increased reputational risk.
According to Sensor Tower, six of the top 10 categories of U.S. advertisers on Twitter spent much less in the first quarter of this year compared with a year earlier, with that group collectively slashing its spending by 53%. The site has recently come under fire for allowing the spread of violent images and videos.
The rapid rise in popularity of chatbots is only complicating matters. The kinds of AI models created by OpenAI, the company behind ChatGPT, and others make it easier to populate fake accounts with content. Researchers from the Allen Institute for AI, Princeton University and Georgia Tech ran tests in ChatGPT’s application programming interface (API), and found up to a sixfold increase in toxicity, depending on which type of functional identity, such as a customer service agent or virtual assistant, a company assigned to the chatbot.
Regulators are paying close attention to AI’s growing influence and the simultaneous downsizing of groups dedicated to AI ethics and trust and safety. Michael Atleson, an attorney at the Federal Trade Commission’s division of advertising practices, called out the paradox in a blog post earlier this month.
“Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering,” Atleson wrote. “If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look.”
Meta as a bellwether
For years, as the tech industry enjoyed an extended bull market and the top internet platforms were flush with cash, Meta was viewed by many experts as a leader in prioritizing ethics and safety.
The company spent years hiring trust and safety staff, including many with academic backgrounds in the social sciences, to help avoid a repeat of the 2016 presidential election cycle, when disinformation campaigns, often operated by foreign actors, ran rampant on Facebook. The embarrassment culminated in the 2018 Cambridge Analytica scandal, which exposed how a third party was illicitly using personal data from Facebook.
But following a brutal 2022 for Meta’s ad business and its stock price, Zuckerberg went into cutting mode, winning plaudits along the way from investors who had complained of the company’s bloat.
Beyond the fact-checking project, the layoffs hit researchers, engineers, user design experts and others who worked on issues pertaining to societal concerns. The company’s dedicated team focused on combating misinformation suffered numerous losses, four former Meta employees said.
Prior to Meta’s first round of layoffs in November, the company had already taken steps to consolidate members of its integrity team into a single unit. In September, Meta merged its central integrity team, which handles social matters, with its business integrity group tasked with addressing ads and business-related issues like spam and fake accounts, ex-employees said.
In the ensuing months, as broader cuts swept across the company, former trust and safety employees described working under the fear of looming layoffs and for managers who sometimes didn’t see how their work affected Meta’s bottom line.
For example, things like improving spam filters that required fewer resources could get clearance over long-term safety projects that would entail policy changes, such as initiatives involving misinformation. Employees felt incentivized to take on more manageable tasks because they could show their results in their six-month performance reviews, ex-staffers said.
Ravi Iyer, a former Meta project manager who left the company before the layoffs, said that the cuts across content moderation are less bothersome than the fact that many of the people he knows who lost their jobs were performing critical roles on design and policy changes.
“I don’t think we should reflexively think that having fewer trust and safety workers means platforms will necessarily be worse,” said Iyer, who’s now the managing director of the Psychology of Technology Institute at University of Southern California’s Neely Center. “However, many of the people I’ve seen laid off are among the most thoughtful in rethinking the fundamental designs of these platforms, and if platforms are not going to invest in reconsidering design choices that have been proven to be harmful, then yes, we should all be worried.”
A Meta spokesperson previously downplayed the significance of the job cuts in the misinformation unit, tweeting that the “team has been integrated into the broader content integrity team, which is substantially larger and focused on integrity work across the company.”
Still, sources familiar with the matter said that following the layoffs, the company has fewer people working on misinformation issues.
For those who’ve gained expertise in AI ethics, trust and safety and related content moderation, the employment picture looks grim.
Newly unemployed workers in those fields from across the social media landscape told CNBC that there aren’t many job openings in their area of specialization as companies continue to trim costs. One former Meta employee said that after interviewing for trust and safety roles at Microsoft and Google, those positions were suddenly axed.
An ex-Meta staffer said the company’s retreat from trust and safety is likely to filter down to smaller peers and startups that appear to be “following Meta in terms of their layoff strategy.”
Chowdhury, Twitter’s former AI ethics lead, said these kinds of jobs are a natural place for cuts because “they’re not seen as driving profit in product.”
“My perspective is that it’s completely the wrong framing,” she said. “But it’s hard to demonstrate value when your value is that you’re not being sued or someone is not being harmed. We don’t have a shiny widget or a fancy model at the end of what we do; what we have is a community that’s safe and protected. That is a long-term financial benefit, but quarter over quarter, it’s really hard to measure what that means.”
At Twitch, the T&S team included people who knew where to look to spot dangerous activity, according to a former employee in the group. That’s particularly important in gaming, which is “its own unique beast,” the person said.
Now, there are fewer people checking in on the “dark, scary places” where offenders hide and abusive activity gets groomed, the ex-employee added.
More importantly, no one knows how bad it can get.