Mark Zuckerberg-led Meta may block news content from appearing on Facebook in Australia if the Australian government forces the tech giant to pay licensing fees.
As per Mia Garlick, Meta’s regional policy director, the company was awaiting Canberra’s decision on whether to enforce a 2021 law that grants the government authority to set the fees American tech giants must pay media organisations for sharing links.
However, this piece is not about analysing the potential fallout of the Australian government’s latest action against the social media giant. Yes, we are going to discuss something related to news and Facebook, but it will be about the rampant spread of misinformation on the platform as the world enters another crucial election year.
Not everything is black and white
It’s election season in the United Kingdom. As per a report by the Australian Broadcasting Corporation (ABC), there have been signs of foreign interference aimed at influencing voters. The ABC monitored five coordinated Facebook pages that have been spreading alleged Kremlin agendas, with some posting in support of Nigel Farage’s populist Reform UK party, a key challenger to the Conservatives.
“The five pages identified by ABC Investigations as being part of a coordinated network appear to have little in common. One page presents itself as a pro-refugee left-wing group, while others reference white supremacist conspiracy theories and use AI-generated images of asylum seekers to stoke anti-immigration fears. The ABC has been able to link these seemingly disparate pages by examining the location data attached to the pages’ administrators, tracking paid ads, and by analysing the pages’ similar or shared content,” the report stated further.
The ABC shared its findings with disinformation experts, who said the network’s activity had the hallmarks of a Russian influence operation. Among them was AI Forensics, a European non-profit research organisation that published research in April 2024 about a covert influence operation called “Doppelganger” and found that Facebook ads with pro-Russian messages were targeting European Union (EU) voters. These ads, which reached more than 38 million users, were linked to EU-sanctioned Russian businessmen.
The network identified by ABC Investigations consists of five Facebook pages with a combined 190,000 followers. The pages have repeatedly shared the same images, text posts and talking points, and often posted around the same time. The five pages featured criticism of several British parties, including the Conservatives and Labour. Some of them have supported Reform UK leader Nigel Farage, with two calling him “the people’s champion”.
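To make the linking method concrete: one of the coordination signals the ABC describes, near-identical text posted by different pages at nearly the same time, can be approximated with a few lines of analysis code. The following is a minimal, hypothetical Python sketch; the page names, posts, thresholds and scoring function are all invented for illustration and are not the ABC’s actual methodology.

```python
import string
from itertools import combinations

def tokens(text: str) -> set:
    """Lower-case words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two post texts."""
    wa, wb = tokens(a), tokens(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def coordination_score(posts_a, posts_b, text_thresh=0.8, window_s=600):
    """Count pairs of posts with near-identical text published within
    `window_s` seconds of each other."""
    return sum(
        1
        for ts_a, text_a in posts_a
        for ts_b, text_b in posts_b
        if abs(ts_a - ts_b) <= window_s and jaccard(text_a, text_b) >= text_thresh
    )

# Invented data: each page maps to (unix_timestamp, post_text) pairs.
pages = {
    "page_A": [(1718000000, "Stop unfair asylum policies now")],
    "page_B": [(1718000120, "Stop unfair asylum policies now!")],
    "page_C": [(1718090000, "Completely unrelated gardening tips")],
}

for (name_a, posts_a), (name_b, posts_b) in combinations(pages.items(), 2):
    score = coordination_score(posts_a, posts_b)
    if score:
        print(f"{name_a} <-> {name_b}: {score} coordinated post pair(s)")
```

Real investigations layer further signals on top of text and timing, such as the administrators’ location data and overlapping paid-ad buyers, as the ABC report describes.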
Let’s shift to the United States, another election-bound nation, and home to perhaps the most crucial election of all.
“Someone in China created thousands of fake social media accounts designed to appear to be from Americans and used them to spread polarising political content in an apparent effort to divide the US ahead of next year’s elections,” Meta reported back in 2023.
The network of nearly 4,800 fake accounts was attempting to build an audience when it was identified and eliminated by the tech company. The accounts sported fake photos, names and locations to appear like everyday American Facebook users weighing in on political issues. Misinformation and disinformation, it seems, will be plentiful.
“Instead of spreading fake content as other networks have done, the accounts were used to reshare posts from X, the platform formerly known as Twitter, that were created by politicians, news outlets and others. The interconnected accounts pulled content from both liberal and conservative sources, an indication that its goal was not to support one side or the other but to exaggerate partisan divisions and further inflame polarisation,” reported the Associated Press.
CrowdTangle, a digital tool considered vital in tracking viral falsehoods, will be decommissioned by Facebook owner Meta, a move researchers fear will disrupt efforts to battle the flow of misinformation on social media. CrowdTangle has been offering researchers and journalists crucial real-time visibility into conspiracy theories and hate speech on Meta-owned platforms like Facebook and Instagram.
Misinformation galore on Facebook
In the recently concluded EU elections, Facebook became a potent weapon for pro-Kremlin elements to push their narratives through ads purchased via fake accounts, as per a Politico report.
First exposed in 2022 and later sanctioned by the EU, the Russian campaign “Doppelganger” continued to influence Europeans online in the lead-up to the elections, despite being flagged by French and German authorities. As per AI Forensics, the misinformation operation is growing, reaching five to 10 times more people than previously thought.
The campaign took place as European authorities repeatedly tried to stamp out foreign and malign influence operations looking to sway public opinion. The overall election integrity concerns were further fuelled by the rise of AI amplifying disinformation and fake content online.
Over 65% of ads connected to political and social issues circulated unlabelled on Facebook across 16 European Union countries, and Meta reportedly took down less than 5% of these ads.
The dissemination of undeclared political ads violates Facebook’s policy and could even violate the EU’s new content-moderation law, the Digital Services Act (DSA), which went into effect in August 2023 for very large online platforms.
Ben Walters, a spokesperson for Meta, said, “The Russian operations targeting Ukraine are aggressive and persistent, which is why we continuously detect and remove the accounts and Pages associated with these campaigns,” while adding that threat detection teams had seen a “consistent decline” in the overall following of the coordinated campaigns from Russia in the past few years.
“If we do not oppose Ukraine’s accession to the European Union, we will ruin our farmers,” said a paid post in French on January 28, as protests swept across EU countries from France to Poland to Germany.
The ad in French, portraying the import of Ukrainian chickens and eggs as “unfair competition,” is one of the nearly 4,000 sponsored messages pushed by thousands of fake Facebook pages operated by Doppelganger between August 2023 and March 31, 2024. Such messages were seen by at least 38 million users.
Every day in that period, around 138,600 Facebook users in France and 37,500 in Germany saw at least one of these ads undermining Ukraine and the EU, timed to events like announcements of new aid packages to Kyiv, the Israel-Gaza conflict and farmers’ protests. Researchers were still detecting new ads reaching hundreds of thousands of users as of April 15.
Fewer than 20% of the paid and boosted pro-Russian propaganda ads were taken down by Meta, and only after being shown to users at least 2.6 million times. The AI Forensics researchers also identified more than 8,000 ads for crypto scams, seemingly from a coordinated network, that reached over 128 million accounts, mainly in France, Italy and Spain, in January and February 2024.
“Doppelganger” was first uncovered in 2022 after spreading propaganda from websites spoofing Western media outlets like The Guardian, Ansa and Spiegel on social media networks like Facebook and X (formerly Twitter) through ads and fake profiles. Meta at the time estimated that the network had paid more than $100,000 for the propaganda ads.
The network, which allegedly focused on influencing Germany and France, was later also found to impersonate government ministries and other media outlets like Le Monde. Meta identified two Russian companies behind the operation, Structura and Social Design Agency, in 2022. The EU sanctioned both entities in July 2023.
“AI Forensics used data from Facebook’s public ad library to build and train its own algorithm on 230 million Meta ads to detect political ads across 16 EU countries. Meta in August 2023 was forced, along with other major platforms, to publish detailed information about all its ads in the EU in a public library under the DSA. Meta had started a repository of self-declared political ads in the EU in 2019,” Politico stated further.
“The algorithm aimed to mimic Facebook’s moderation systems to identify political ads, looking for messages based on indicators such as the names of leaders like French President Emmanuel Macron and German Chancellor Olaf Scholz, similar text, or words connected to Ukraine, targeted at people in 10 different languages,” it noted.
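Neither Politico nor AI Forensics publishes the model itself, but the keyword-indicator idea described above is easy to illustrate. Below is a deliberately simplified, hypothetical Python sketch; the indicator lists, sample ads and matching rule are invented, and the real system was a classifier trained on 230 million ads across 10 languages rather than a hand-written keyword filter.

```python
import re

# Hypothetical indicator terms per language; the real indicator set
# and languages are not public, so these lists are invented examples.
POLITICAL_INDICATORS = {
    "en": ["macron", "scholz", "ukraine", "european union"],
    "fr": ["macron", "ukraine", "union européenne"],
    "de": ["scholz", "ukraine", "europäische union"],
}

def looks_political(ad_text: str, lang: str = "en") -> bool:
    """Flag an ad as potentially political if it contains any indicator
    term for the given language (case-insensitive, whole-word match)."""
    text = ad_text.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", text)
               for term in POLITICAL_INDICATORS.get(lang, []))

# Invented sample ads, one echoing the reported Doppelganger message.
ads = [
    ("If we do not oppose Ukraine's accession to the European Union, "
     "we will ruin our farmers", "en"),
    ("Big summer discounts on garden furniture", "en"),
]

for text, lang in ads:
    print(looks_political(text, lang), "-", text[:60])
```

Whole-word matching avoids false hits inside longer words, but a production classifier would also need multilingual normalisation and a trained model to approach the scale the report describes.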
Meta spokesperson Walters rejected the findings, arguing the social media company’s ad library doesn’t include the 430,000 ads the company rejected in the EU between July and December 2023 before they were ever published. He also disagreed with the definition of a political ad used by the report’s algorithm.
Misinformation and US Elections 2020
In September 2021, The Washington Post reported on a study of user behaviour on Facebook centred on the 2020 presidential election, one that bears on the long-standing argument that the social media company’s algorithms fuel the spread of misinformation over more trustworthy sources.
The study, by researchers at New York University and the Université Grenoble Alpes in France, found that from August 2020 to January 2021, news publishers known for putting out misinformation got six times the number of likes, shares and interactions on the platform as trustworthy news sources did.
“The study helps add to the growing body of evidence that, despite a variety of mitigation efforts, misinformation has found a comfortable home — and an engaged audience — on Facebook,” said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University, who reviewed the study’s findings.
Contesting the finding, Facebook said the report measured the number of people who engage with content, which is not a measure of the number of people who view it.
“This report looks mostly at how people engage with content, which should not be confused with how many people actually see it on Facebook. When you look at the content that gets the most reach across Facebook, it is not at all like what this study suggests,” the company stated, adding that it had 80 fact-checking partners covering over 60 languages working to label and reduce the distribution of misinformation.
The study’s authors relied on categorisations from two organisations that study misinformation, NewsGuard and Media Bias/Fact Check. Both groups have categorised thousands of Facebook publishers by their political leanings, ranging from far left to far right, and by their propensity to share trustworthy or untrustworthy news.
The team then took 2,551 of these pages and compared interactions on posts from publishers known for misinformation, such as the left-leaning Occupy Democrats and the right-leaning Dan Bongino and Breitbart, with interactions on posts from factual publishers.
“The researchers also found that the statistically significant misinformation boost is politically neutral — misinformation-trafficking pages on both the far left and the far right generated much more engagement from Facebook users than factual pages of any political slant. But publishers on the right have a much higher propensity to share misleading information than publishers in other political categories, the study found. The latter finding echoes the conclusions of other researchers, as well as Facebook’s own internal findings ahead of the 2018 midterm elections,” Washington Post reported in 2021.
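The study’s headline figure boils down to simple arithmetic over labelled pages: average the per-post engagement for each label and take the ratio. Here is a minimal sketch, with invented page records deliberately chosen to mirror the reported six-fold gap; the real analysis covered 2,551 pages labelled via NewsGuard and Media Bias/Fact Check.

```python
from statistics import mean

# Invented page records; in the study, labels came from NewsGuard and
# Media Bias/Fact Check, and engagement from Facebook interaction data.
pages = [
    {"name": "page1", "label": "misinformation", "interactions_per_post": 4200},
    {"name": "page2", "label": "misinformation", "interactions_per_post": 3900},
    {"name": "page3", "label": "factual",        "interactions_per_post": 700},
    {"name": "page4", "label": "factual",        "interactions_per_post": 650},
]

def avg_engagement(label: str) -> float:
    """Mean interactions per post across pages carrying the given label."""
    return mean(p["interactions_per_post"] for p in pages if p["label"] == label)

misinfo, factual = avg_engagement("misinformation"), avg_engagement("factual")
print(f"misinformation pages average {misinfo / factual:.1f}x "
      "the engagement of factual pages")
```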
Sorry state of affairs
Facebook has been increasingly restricting access for outside groups that attempt to mine the social media company’s data. After 2020, the White House repeatedly asked Facebook for information about the extent of COVID misinformation on the platform, only to receive no answers.
One of the researchers Facebook clamped down on was NYU’s Laura Edelson. The company cut off Edelson and her colleagues’ accounts in 2021, arguing that her data collection, which relied on users voluntarily downloading a software widget that let researchers track the ads they see, potentially put Facebook in violation of a 2019 United States Federal Trade Commission privacy settlement.
The commission shot back by stating that the settlement makes exceptions for researchers and that Facebook should not use it as an excuse to deny the public the ability to understand people’s behaviour on social networks.
Facebook also published a transparency report showing the most popular content on the platform each quarter. However, the report was highly curated, and Facebook shelved an earlier version out of concern about bad press, as per sources cited by The Washington Post.
Edelson’s study showed that Facebook’s algorithms were not rewarding partisanship or bias, or favouring sites on one side of the political spectrum. Instead, the platform was amplifying misinformation because “it does well with users”, and the sites that happen to carry more misinformation are on the right. Among publishers categorised as far right, those that share misinformation get a majority, 68%, of all engagement from users.