It makes mistakes.
On February 5, Twitter flagged a post from controversial YouTuber Tim Pool that said the 2020 U.S. presidential election was rigged. The platform noted that the claim was disputed and turned off engagement “due to a risk of violence.”
But on Birdwatch, the social media platform's experiment in crowdsourced fact-checking, users overwhelmingly said the tweet was not misleading, according to a Feb. 14 analysis of Twitter data. And most Birdwatch users indicated in the tool that they found the notes supporting the debunked claim helpful and informative…
On Feb. 17, Twitter altered its algorithm, and notes on the Pool tweet are no longer rated as helpful, although they still appear below the post.
Before the change, fewer than a third of the "helpful" notes contained a source link that wasn't just another tweet, Poynter notes (after the change, that number rose to 75%). "It's a timely illustration of one of the problems facing the Birdwatch model: Can an algorithm fed by a seemingly random group of people ever accurately 'rate' the truth?"
PolitiFact's editor-in-chief suggested better training, incentives, and the use of professional fact-checkers. Even so, they told Poynter, "I'm pretty dubious of tech companies who believe their users will moderate content for free for them. Most users don't see it as their job to help the platforms run their own businesses."
Read more of this story at Slashdot.