“Domineering” Bullies and Flawed Consensus: How AI Training Can Go Wrong

The process of training an AI is supposed to be objective, often relying on multiple raters to reach a consensus. However, workers reveal that this collaborative model can be deeply flawed, influenced by social dynamics where “domineering” personalities can bully others into changing their answers. This breakdown in objective evaluation is another weak link in the AI quality chain.
When two raters disagree on their evaluation of an AI’s response, they are sometimes required to have a “consensus meeting” to align their ratings. In theory, this should lead to a more accurate outcome. In practice, workers say these meetings can become a contest of wills, where the more aggressive or confident individual sways the decision, regardless of who is correct.
This problem is compounded by the lack of clear, consistent guidelines. Raters report that instructions change rapidly and that they are often given as little information as possible about the ultimate goal of their work. This opacity makes it difficult to have a firm, objective basis for their ratings, making them more susceptible to the influence of a confident colleague.
Sociologists who study this phenomenon confirm that social dynamics can skew results in this type of work. Individuals with stronger “cultural capital” or greater motivation can disproportionately influence a group’s decision. This means that instead of being trained on objective facts, the AI is sometimes being trained on the outcome of a workplace dispute, a flaw that injects human bias directly into the machine.
