OpenAI’s wildly popular ChatGPT artificial-intelligence service has shown a clear bias toward the Democratic Party and other liberal viewpoints, according to a recent study conducted by UK-based researchers.
Academics from the University of East Anglia tested ChatGPT by asking the chatbot to answer a series of political questions as if it were a Republican, a Democrat, or without a specified leaning. The responses were then compared and mapped according to where they fall on the political spectrum.
“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” the researchers said, referring to the left-leaning Brazilian President Luiz Inácio Lula da Silva.
ChatGPT has already drawn sharp scrutiny for demonstrating political biases, such as its refusal to write a story about Hunter Biden in the style of The New York Post while accepting a prompt to do so as if it were left-leaning CNN.
In March, the Manhattan Institute, a conservative think tank, published a damning report which found that ChatGPT is “more permissive of hateful comments made about conservatives than the very same comments made about liberals.”

To strengthen their conclusions, the UK researchers asked ChatGPT the same questions 100 times. The responses were then put through “1,000 repetitions for each answer and impersonation” to account for the chatbot’s randomness and its propensity to “hallucinate,” or spit out false information.
“These results translate into real concerns that ChatGPT, and [large language models] in general, can extend or even amplify the existing challenges involving political processes posed by the internet and social media,” the researchers added.
The Post has reached out to OpenAI for comment.


The existence of bias is only one area of concern in the development of ChatGPT and other advanced AI tools. Detractors, including OpenAI’s own CEO Sam Altman, have warned that AI could cause chaos, or even the destruction of humanity, without proper guardrails in place.
OpenAI tried to deflect potential concerns about political bias in a lengthy February blog post, which detailed how the firm “pre-trains” and then “fine-tunes” the chatbot’s behavior with the help of human reviewers.
“Our guidelines are explicit that reviewers should not favor any political group,” the blog post said. “Biases that nevertheless may emerge from the process described above are bugs, not features.”