A popular generative AI tool that creates images from text prompts is rife with gender and racial stereotypes when rendering people in "high-paying" and "low-paying" jobs, according to a recent study.
Stable Diffusion, a free-to-use AI model, was asked to render 5,100 images from written prompts related to job titles in 14 fields, plus three categories related to crime, according to a test of the tool by Bloomberg.
The outlet then analyzed the results against the Fitzpatrick Skin Scale — a six-point scale dermatologists use to assess the amount of pigment in someone's skin.
Images generated for each "high-paying" job — like architect, doctor, lawyer, CEO and politician — were dominated by lighter skin tones, numbered one to three on the skin scale, Bloomberg found.

Meanwhile, darker skin tones made up the vast majority of "low-paying" jobs, like janitors, dishwashers, fast-food workers and social workers.
The stereotyping was even worse when Bloomberg analyzed the job-related images by gender.
The AI tool generated nearly three times as many images of men as women, with just four of the 14 jobs — cashier, teacher, social worker and housekeeper — dominated by women.
Of the 300 images created for each of the 14 jobs, all but two images for the keyword "engineer" were perceived to be men, Bloomberg reported, while zero images of women were generated for the keyword "janitor."
The prompts for crimes asked the AI tool to render images of drug dealers, terrorists and inmates. The overwhelming majority of results for both drug dealers and inmates were darker-skinned.
The results for terrorists rendered men with dark facial hair, often wearing head coverings — clearly leaning on stereotypes of Muslim men, Bloomberg found.
"All AI models have inherent biases that are representative of the datasets they are trained on," a spokesperson for London-based startup StabilityAI, which distributes Stable Diffusion, told The Post in an email statement.
They added that because Stable Diffusion is open source — meaning the software can be improved with new algorithms and data sets — platforms like it will eventually "improve bias evaluation techniques and develop solutions beyond basic prompt modification."
"We intend to train open-source models on datasets specific to different countries and cultures, which will serve to mitigate biases caused by overrepresentation in general datasets," the spokesperson said.
Stable Diffusion is part of the fast-growing generative AI imaging industry that also includes paid services like DeepAI, Midjourney and Dall-E from OpenAI — the firm behind ChatGPT.
Stable Diffusion is already being used by startups like Deep Agency, an AI-powered virtual photo studio and modeling agency out of the Netherlands that allows brands to generate images of humans for mainstream advertising.
Deep Agency is currently in beta, according to its website, which shows off its capabilities with two posing models who look real. However, "these models don't exist," the site notes.
Graphic design platform Canva, which boasts 125 million active users, has also introduced a Stable Diffusion integration that has allowed all kinds of firms and marketers to incorporate unique, AI-generated images into their design work and advertising.
The head of Canva's AI products, Danny Wu, told Bloomberg the company's users have already created 114 million images using Canva's Stable Diffusion integration, which he said will soon be "de-biased."
"The issue of ensuring that AI technology is fair and representative, especially as it becomes more widely adopted, is a really important one that we're actively working on," he told the outlet.
The need to remove bias from generative AI tools has become more pressing as police forces look to tap the tech to create photo-realistic composite images of suspects.
"Showing someone a machine-generated image can reinforce in their mind that that's the person even if it might not be — even if it's a completely faked image," Nicole Napolitano, director of research strategy at the Center for Policing Equity, told Bloomberg.
Napolitano said that well-funded police departments are already adopting AI-backed tech despite its lack of regulation.
She also cited the thousands of wrongful arrests that have resulted from lapses in technology like facial recognition systems and biased AI models.