Bias in artificial intelligence systems, meaning that large language models, facial recognition, and AI image generators can only remix and regurgitate the information in the data those technologies are trained on, is a well-established problem that researchers and academics have been warning about since the technology's inception.
In a blog post about the release of Llama 4, Meta's open-weights AI model, the company clearly states that bias is a problem it's trying to address. But unlike the mountains of research establishing that AI systems are more likely to discriminate against minorities based on race, gender, and nationality, Meta is specifically concerned with Llama 4 having a left-leaning political bias.
“It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics,” Meta said in its blog. “This is due to the types of training data available on the internet.”
“Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue,” Meta continues. “As part of this work, we’re continuing to make Llama more responsive so that it answers questions, can respond to a variety of different viewpoints without passing judgment, and doesn’t favor some views over others.”
Meta then lists a few “improvements” in Llama 4, including that the model will refuse to engage with political and social topics less often overall and “is dramatically more balanced with which prompts it refuses to respond to.” Meta also favorably compares Llama 4’s lack of a “strong political lean” to Grok, xAI’s LLM, which Elon Musk continually promotes as a non-woke, “based” alternative to comparable products from OpenAI, Google, and Anthropic.
As Meta notes, there is no doubt that bias in AI systems is a well-established issue. What’s notable and confusing here is that Meta chooses to frame and address the issue exclusively as a left-leaning bias.
“I think, from the jump, this is a pretty naked response that every company (except for xAI, which already said it would not be ‘politically correct’) has taken in response to the Trump administration,” Alex Hanna, director of research at the Distributed AI Research Institute (DAIR) and co-author of the upcoming book The AI Con, told me in an email.
When reached for comment, Meta directed me back to its Llama 4 release blog and to two studies showing that LLMs often fall in the left-libertarian quadrant of a political compass, a four-quadrant map with left-right and libertarian-authoritarian axes.
Other experts I talked to also questioned why Meta thought it was so important to push its model further to the right and how it chooses when to surface “both sides” of an argument.
“It is dangerous to approach scientific and empirical questions such as climate change, health or the environment with a political lens as left/right leaning,” Abeba Birhane, a senior advisor on AI accountability at the Mozilla Foundation, told me in an email. “The ‘both sides’ approach here is false-equivalence, like that of treating an anti vax conspiracy theorist on a par with a scientist or medical doctor. One is illegitimate and dangerous, the other driven by verifiable empirical evidence.”
“I would challenge [Meta] to actually write out 1) what exactly is in their training data, how they selected what is in it—or if in fact it is just a big pile of whatever they could grab; 2) what kinds of issues they deem require ‘unbiased’ (read: ‘both-sides’) treatment, and how they determine that; and 3) who they believe is being harmed and how, when their synthetic text extruding machine fails to run the both-sides play on a given question; 4) what their justification is for promoting and enabling information ecosystem polluting devices in the first place—that is, the problem with ‘biased’ answers coming out of chatbots is easy to avoid: don’t set up chatbots as information access systems,” Emily Bender, a professor and director of the Computational Linguistics Laboratory at University of Washington, and co-author of The AI Con, told me in an email.
As Bender notes, if Meta blames this left-leaning bias on training data, the more important question is what is actually in that training data, which Meta is unwilling to share.
“Without some kind of access to the data, it is impossible to verify Meta’s claims that data from [the internet] is ‘left leaning,’” Birhane said. “Even if this were true, I would be cautious in assuming that data scraped from the [internet] reflects and/or corresponds to reality. It rather reflects the views of those with access to the [internet]… those digitally connected, which is heavily dominated by Western societies with views that often adhere to the status quo.”
As Hanna suggests, we could talk all day about the very real problems with bias in AI and about the data that may or may not be informing Meta’s tweaking of Llama, but if we zoom out for a moment, the reasoning behind Meta’s decisions is pretty transparent.
Mark Zuckerberg is pushing his company and its AI model to the right, first because he’s appealing to the current administration, and second because he sees himself in competition with an increasingly extreme and right-wing Musk. The way AI systems’ biases actually impact people’s lives in practice is by enabling and empowering technologies and policies that are more popular with authoritarians and conservatives: most computer vision tech ultimately serves as some form of surveillance, sentencing algorithms discriminate against Black people, and a primary driver of AI-generated images, video, and audio is nonconsensual media of women. The blog could explain what Meta is doing to mitigate any of those harms, but it doesn’t, because at the moment that doesn’t align with Meta’s and Zuckerberg’s politics.
From 404 Media