MediaWell

How Google's Ad Business Funds Disinformation | ProPublica

By: Craig Silverman · Ruth Talbot · Jeff Kao · and Anna Klühspies

Google is funneling revenue to some of the web's most prolific purveyors of false information in Europe, Latin America and Africa, a ProPublica investigation has found.

The company has publicly committed to fighting disinformation around the world, but a ProPublica analysis, the first ever conducted at this scale, documented how Google's sprawling automated digital ad operation placed ads from major brands on global websites that spread false claims on such topics as vaccines, COVID-19, climate change and elections.

In one instance, Google continued to place ads on a publication in Bosnia and Herzegovina for months after the U.S. government officially imposed sanctions on the site. Google stopped doing business with the site, which the U.S. Treasury Department described as the "personal media station" of a prominent Bosnian Serb separatist politician, only after being contacted by ProPublica.

Google ads are a major source of revenue for sites that spread election disinformation in Brazil, notably false claims about the integrity of the voting system that have been advanced by the incumbent president, Jair Bolsonaro. Voters in Brazil are going to the polls on Sunday with the outcome in doubt after Bolsonaro's unexpectedly strong showing in the first round of voting.

[…]

Source: How Google's Ad Business Funds Disinformation | ProPublica

Value Pluralism and Human Rights in Content Moderation | Lawfare

On Oct. 4, the Council of the European Union (EU) approved the new Digital Services Act (DSA). The EU's new law regulates social media company liability for user posts containing content prohibited under the laws of EU member states, such as hate speech, incitement to terrorism, and child sexual abuse. The DSA has been lauded as a "gold standard" for regulating social media platforms and protecting users. Despite this praise, significant concerns about over-censorship and enforcement overreach remain.

Less attention has been paid, however, to the ways in which the DSA opens the door to renewed conflicts over global standards on speech. Approval of the new EU legislation came within a week of a U.S. federal appeals court decision upholding a Texas law that prevents social media giants from removing user posts based on viewpoint, teeing up a circuit split almost certain to be reviewed by the U.S. Supreme Court. The DSA's passage also came less than a day after the Supreme Court agreed to hear a case about Section 230 of the Communications Decency Act, the federal law that broadly shields social media companies from liability for content on their platforms. Regardless of how these cases turn out, they are an important reminder of just how different the United States is from Europe (and the rest of the world) when it comes to freedom of expression. The EU's and United States' approaches represent sharply divergent views of the benefits and risks of speech regulation: one highly skeptical of government intervention, the other acutely aware of the concrete harms of inciting and discriminatory speech.

[…]

Source: Value Pluralism and Human Rights in Content Moderation | Lawfare

Misinformation Amplification Analysis and Tracking Dashboard | Integrity Institute

Tracking how social media platforms respond to misinformation is crucial to understanding the risks and dangers that platforms can pose to democratic elections. At the Integrity Institute, we are tracking how misinformation performs on platforms to measure the extent to which platforms are amplifying misinformation, and the extent to which they are creating an incentive structure that rewards lies and misinformation online.
How platforms respond to misinformation can change. Amplification of misinformation can rise around critical events if misinformation narratives take hold. It can also fall, if platforms implement design changes around the event that reduce the spread of misinformation. We will be tracking how misinformation is amplified on the platforms, updated weekly, heading into and after the US midterm elections, to see if the large platforms are taking responsible actions in response to misinformation.
[…]

Source: Misinformation Amplification Analysis and Tracking Dashboard | Integrity Institute

Brazilian Official Granted Power to Order Removal of Misinformation Online | The New York Times

Brazilian authorities, grappling with a torrent of online misinformation ahead of the country's presidential election, granted the nation's elections chief unilateral power to order tech companies to remove many online posts and videos, one of the most aggressive actions taken by any country to combat false information.
Under rules passed on Thursday, the elections chief can order the immediate removal of content that he believes has violated previous takedown orders. Social networks must comply with those demands within two hours or face the potential suspension of their services in Brazil.
The move culminates an increasingly assertive strategy by election officials in Brazil to crack down on divisive, misleading and false attacks that have flooded the country's presidential race in recent days, including claims that candidates are Satanists, cannibals and pedophiles.
But by allowing a single person to decide what can be said online in the run-up to the high-stakes election, which will be held on Oct. 30, Brazil has become a test case in a swelling global debate over how far to go in fighting false and misleading reports.
The decision drew outcry from supporters of right-wing President Jair Bolsonaro, as well as concern from many internet-law and civil-rights experts, who said it represented a potentially dangerous, authoritarian expansion of power that could be abused to censor legitimate viewpoints and swing the presidential contest.
[…]

Source: Brazilian Official Granted Power to Order Removal of Misinformation Online | The New York Times

Moderating Social Media Discourse for a Healthy Democracy | University of Texas

By: Josephine Lukito · Kathryn Kazanas · and Bin Chen

The prevalence of hate speech and misinformation on the internet, heightened by the COVID-19 pandemic, directly harms minority groups that are the target of vitriol, as well as our society at large (Müller & Schwarz, 2020). In addition, the intersection between the two only exacerbates their harmful effects, leading to an increase in intolerance and polarization (Kim & Kesari, 2021). Current platform moderation techniques, as well as Section 230 of the Communications Decency Act, have been insufficient in addressing this problem, resulting in a lack of transparency from internet service providers, of clear boundaries on user–platform relations, and of sufficient tools to handle a rapidly expanding internet.

[…]

Source: Moderating Social Media Discourse for a Healthy Democracy | University of Texas

Born in the fire: What we can learn from how digital publishers in the Global South approach platforms | Reuters Institute for the Study of Journalism

Based on interviews with a strategic sample of 11 publishers in eight low- and middle-income countries, in this report we analyse how various digital publishers across a range of Global South countries approach digital platforms: both big platform companies such as Google and Meta; rapidly growing ones, including TikTok; and smaller ones such as Twitter and Telegram.

We highlight key shared aspects of their approaches that can serve as inspiration for journalists and news media elsewhere, in terms of how they see platforms (what we call 'platform realism'), how they approach them in their day-to-day work (what we call 'platform bricolage'), and key aspects of their overall approach (what we call 'platform pragmatism').

[…]

Source: Born in the fire: What we can learn from how digital publishers in the Global South approach platforms | Reuters Institute for the Study of Journalism

From Trump to Kanye and Musk: why are the super-rich buying social media sites? | The Guardian

Ever wonder what the richest man in the world buys? Elon Musk, rated No 1 by Forbes on its 2022 list of billionaires, may soon own a social media network imbued with so much political capital it could fracture nations.

It's the latest expression of an uncomfortable truth: tech CEOs have become the most crucial political gatekeepers in modern media history. Not by running for office – a cliche for today's moneyed elite – but by using social media ownership as a proxy for political influence.

It's a trend years in the making. From the political largess of former Facebook executives like Sheryl Sandberg and Joel Kaplan to the metapolitics of Peter Thiel, tech titans have long adopted an inside/outside playbook for conducting politics by other means.

But recent developments, including Donald Trump's investment in Twitter clone Truth Social and Kanye West's supposed agreement to buy the ailing social network Parler, illustrate how crucial these new technologies have become in politics. More than just communication tools, platforms have become the stage on which politics is played.

Trump, for one, appears to have understood the trend. Despite all his storm and bluster, the former president never cracked the top 1,000 of the Forbes richest list, but he and Musk do appear to have one thing in common: they're both investing in social media companies they intend to use to transform politics and society.

[…]

Source: From Trump to Kanye and Musk: why are the super-rich buying social media sites? | The Guardian

Parallel Public Spheres: Influence Operations In The 2022 Philippine Elections | Media Manipulation Casebook

This report examines the characteristics and consequences of influence operations in the 2022 presidential election in the Philippines. The report makes three main claims.

[…]

Source: Parallel Public Spheres: Influence Operations In The 2022 Philippine Elections | Media Manipulation Casebook

Perspectives for Influence Operations Investigators | Carnegie Endowment for International Peace

By: Victoria Smith · Jon Bateman · and Dean Jackson

As public urgency swells around online influence operations, professionals from sectors including academia, private industry, and the nonprofit space have rushed to fill gaps in capacity. They bring with them assumptions and approaches from diverse fields like cybersecurity, counterextremism, and offline investigations. As a result, the space is bustling, but it lacks consistent, widely articulated standards and best practices to guide and evaluate its work.

In a 2020 community survey by the Partnership for Countering Influence Operations (PCIO), a third of respondents noted the lack of shared standards as an important concern. PCIO's Influence Operations Researchers' Guild exists in part to address this issue. Investigative standards serve a dual purpose. First, they provide commonality: standards are widely followed practices for minimizing mistakes and improving results. Second, they represent expectations that, if not met, open the investigative process up to critique. For these reasons, a field with shared standards should be able to produce more reliable investigations and more readily identify flawed ones.

Because of the high level of public and policymaker interest in this topic, wrong or overblown conclusions carry significant risk of false alarms, botched policy, and wasted resources. If organic content is wrongfully labeled part of an operation, authentic individuals may be censored erroneously, with political consequences. In the realm of international affairs, incorrect attributions of online operations to foreign states could lead policymakers to pursue sanctions or other retaliatory actions under mistaken pretenses. In election contexts, incorrect accusations could shift public debate in advance of voting or damage trust in the results. The stakes are real and can be high. (The false identification of a suspect in the 2013 Boston Marathon bombing is an early example of the harm that can come from open-source investigations done poorly.)

[…]

Source: Perspectives for Influence Operations Investigators | Carnegie Endowment for International Peace

What makes an election rumor go viral? Look at these 10 factors | Nieman Journalism Lab

By: Kate Starbird · Mike Caulfield · Renée DiResta · Emma Spiro · Madeline Jalbert · and Michael Grass

Reporters and fact-checkers are familiar with the challenges posed by rumors: They tend to be persistent. They are often entertaining. And they sometimes turn out to be true. Today, information often flies faster than facts can be known, as audiences on social networks share the claims, links, and memes that intrigue or outrage them. This is not limited to political content, of course, but it can be particularly impactful in certain domains, such as elections, where confidence in the process, and the outcome, is critical to democracy. Decades of research provide insight into how and why rumors spread, and this knowledge can help us anticipate what kinds of rumors might emerge and which rumors have the potential for virality.

[…]

Source: What makes an election rumor go viral? Look at these 10 factors | Nieman Journalism Lab

โŒ