Piecing together why so many people are willing to share misinformation online is a major focus among behavioral scientists. It's easy to think partisanship is driving it all: people will simply share things that make their side look good or their opponents look bad. But the reality is a bit more complicated. Studies have indicated that many people don't seem to carefully evaluate links for accuracy, and that partisanship may be secondary to the rush of getting a lot of likes on social media. Given that, it's not clear what induces users to stop sharing things that a small bit of checking would show to be untrue.
So, a team of researchers tried the obvious: We'll give you money if you stop and evaluate a story's accuracy. The work shows that small payments, and even minimal rewards, boost the accuracy of people's evaluation of stories. Nearly all of that effect comes from people recognizing stories that don't favor their political stance as factually accurate. While the cash improved conservatives' accuracy more than liberals', conservatives started out so far behind in judging accuracy that a substantial gap remains.
The basic outline of the new experiments is pretty simple: get a bunch of people, ask them about their political leanings, and then show them a bunch of headlines as they would appear on a social media site such as Facebook. The headlines were rated based on their accuracy (i.e., whether they were true or misinformation) and whether they would be more favorable to liberals or conservatives.
A YouTuber, Marshall Daniels, who has posted far-right-leaning videos under the name "Young Pharaoh" since 2015, tried to argue that YouTube violated his First Amendment rights by removing two videos discussing George Floyd and COVID-19. Years later, Daniels now owes YouTube nearly $40,000 in attorney fees for filing a frivolous lawsuit against YouTube owner Alphabet, Inc.
A United States magistrate judge in California, Virginia K. DeMarchi, ordered Daniels to pay YouTube $38,576 for asserting a First Amendment claim that "clearly lacked merit and was frivolous from the outset." YouTube said this figure is a conservative estimate that likely understates the fees it paid defending against the meritless claim.
In his defense, Daniels never argued that the fees Alphabet was seeking were excessive or could be burdensome. In making this rare decision in favor of the defendant Alphabet, DeMarchi had to consider Daniels' financial circumstances. In his court filings, Daniels described himself as "a fledgling individual consumer," but also told the court that he made more than $180,000 in the year before he filed his complaint. DeMarchi ruled that the fees would not be a burden to Daniels.
The European Commission, which is tasked with tackling disinformation online, this week expressed disappointment that Twitter has failed to provide required data that all other major platforms submitted. Now Twitter has been hit with a "yellow card," Reuters reported, and could be subjected to fines if the platform doesn't fully comply with European Union commitments by this June.
"We must have more transparency and cannot rely on the online platforms alone for the quality of information," the commission's vice president of values and transparency, Věra Jourová, said in a press release. "They need to be independently verifiable. I am disappointed to see that Twitter['s] report lags behind others, and I expect a more serious commitment to their obligations."
Earlier this month, the EU's commissioner for the internal market, Thierry Breton, met with Twitter CEO Elon Musk to ensure that Musk understood what was expected of Twitter under the EU's new Digital Services Act (DSA). After their meeting, Musk tweeted that the EU's "goals of transparency, accountability & accuracy of information are aligned" with Twitter's goals. But he also indicated that Twitter would be relying on Community Notes, which lets users add context to potentially misleading tweets, to satisfy DSA requirements on stopping the spread of misinformation and disinformation. That process seems to be the commission's main issue with Twitter's unsatisfactory report.
It's fair to say that, once the pandemic started, sharing misinformation on social media took on an added, potentially fatal edge. Inaccurate information about the risks posed by the virus, the efficacy of masks, and the safety of vaccines put people at risk of preventable death. Yet despite the dangers of misinformation, it continues to run rampant on many social media sites, with moderation and policy often struggling to keep up.
If we're going to take any measures to address this, something it's not clear that social media services are interested in doing, then we have to understand why sharing misinformation is so appealing to people. An earlier study had indicated that people care about making sure that what they share is accurate, but they fail to check in many cases. A new study elaborates on that by getting into why this disconnect develops: For many users, clicking "share" becomes a habit, something they pursue without any real thought.
People find plenty of reasons to post misinformation that have nothing to do with whether they mistakenly believe the information is accurate. The misinformation could make their opponents, political or otherwise, look bad. Alternately, it could signal to their allies that they're on the same side or part of the same cultural group. But the initial experiments described here suggest that this sort of biased sharing doesn't explain a significant amount of misinformation sharing.
"Unearth all the rats that have seized power and shoot them," read an ad approved by Facebook just days after a mob violently stormed government buildings in Brazil's capital.
That violence was fueled by false election interference claims, mirroring attacks in the United States on January 6, 2021. Previously, Facebook-owner Meta said it was dedicated to blocking content designed to incite more post-election violence in Brazil. Yet today, the human rights organization Global Witness published results of a test that shows Meta is seemingly still accepting ads that do exactly that.
Global Witness submitted 16 ads to Facebook, with some calling on people to storm government buildings, others describing the election as stolen, and some even calling for the deaths of children whose parents voted for Brazil's new president, Luiz Inácio Lula da Silva. Facebook approved all but two ads, which Global Witness digital threats campaigner Rosie Sharpe said proved that Facebook is not doing enough to enforce its ad policies restricting such violent content.