Supporting Research for Our Top Six Tips

Algorithms boost content that stirs strong emotions, especially anger, because this maximizes engagement and profits

Tip 1 - Social media makes you angry by design – because anger sells

Numerous studies have found that social media algorithms are optimized to boost user engagement, which in turn is associated with strong emotions, including anger:

- Facebook researchers discovered that the company’s ranking algorithm was prioritizing content with reaction emojis, especially the “angry” reaction emoji. These posts tended to keep users more engaged, and “keeping users engaged was the key to Facebook’s business.” Data scientists at the company confirmed that posts with the angry reaction emoji were “disproportionately likely to include misinformation, toxicity and fake news”

- A published study by computer scientists at Cornell and UC Berkeley found that the algorithm of X (formerly Twitter) amplified tweets expressing stronger emotions, especially anger. Moreover, political tweets surfaced by the algorithm led to ‘othering’ behavior, including reinforcing negative beliefs about groups in society with opposing views

- A study by researchers at University College London and the University of Kent detected a four-fold increase in the level of misogynistic content served to a fresh TikTok account over five days of monitoring. The algorithm served increasingly extreme videos, often focused on anger and blame directed at women.

- The Integrity Institute observed that:

  1. Internet platforms rank content primarily by predicted engagement

  2. Ranking by engagement increases user retention

  3. However, engagement is negatively related to quality – the most engaged-with content is “objectively low quality” (clickbait, spam, misleading headlines and misinformation)


Tip 2 - Content quality has plummeted as platforms cut back on safety

Mass layoffs and the removal of important safety rules have reduced the quality of content in your feed

Academic researchers found that on the eve of the 2020 election, 23% of political image posts on Facebook contained misinformation. Moreover, there was a 23% increase in views of content that “repeatedly violated” Facebook’s rules relative to the start of 2020, and a more than 10% rise in content categorized as “ideologically extreme”.

These levels of false and low-quality content were the result of a long-term policy stance that prioritized engagement and profits over integrity:

- Dialling back of “Sparing Sharing”,[MR1] an initiative to reduce the reach of “super sharers”. For instance, one super sharer sent out 400,000 QAnon invitations in six months

- Rejection of an initiative to limit the number of “Groups” invites an individual super user could send

- Decommissioning of “Informed Sharing”,[MR2] an initiative that demoted posts where users did not click through to read the accompanying article

- Exemption of politicians’ posts and campaign ads from fact-checking[MR3]

- Vetoing of other integrity proposals, including work that had previously been approved[MR4]

[MR1] Horwitz, J. (2023). Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets (pp. 207–212). Doubleday.

[MR2] Horwitz, J. (2023). Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets (p. 151). Doubleday.

[MR3] Horwitz, J. (2023). Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets (pp. 160–162). Doubleday.

[MR4] Horwitz, J. (2023). Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets (p. 151). Doubleday.

This time around, we expect integrity capabilities to decline even further, as Big Tech heavily cut back on Trust & Safety protections in 2023:

- Between November 2022 and November 2023, Meta (formerly Facebook), X (formerly Twitter), and YouTube eliminated 17 critical policies across their platforms that curbed hate speech, harassment and misinformation on their networks. These included reversing policies on misinformation about the 2020 US election being stolen (X and YouTube), loosening requirements on political ads (X and Meta), and weakening privacy protections on user data used to train AI (X and Meta)

- Layoffs of c.40,000 employees across Meta (formerly Facebook), X (formerly Twitter) and YouTube, including significant cuts to trust and safety, ethical engineering, responsible innovation and content moderation teams.

- An investigation by researchers at NYU found that 90% of the ads they tested on TikTok containing false and misleading election information evaded detection, despite the platform’s policy prohibiting political ads.


Tip 3 - AI will make it harder to tell fact from fiction

AI produces convincing fakes of both audio and video, further reducing trust in facts

AI can produce hyper-realistic images and convincing ‘deepfakes’ – audio or video that is digitally altered to make real people appear to say or do things they never actually said or did:

- The New York Times published an online test inviting readers to look at 10 images and try to identify which were real and which were generated by AI, demonstrating first-hand the difficulty of differentiating between real and AI-generated images.

- In January, a robocall impersonating President Biden went out to New Hampshire voters, falsely asserting that voting in the primary would prevent them from participating in the November general election

Voters should also be wary of the potential for deepfakes to undermine trust in real information:

- The ‘Liar’s Dividend’ describes how, by exploiting the perception that ‘AI is everywhere’, bad actors can dismiss real content as deepfakes or AI-generated, eroding trust in genuine information

Read more on the potential impacts of AI on the election here!


Tip 4 - Take control to reduce low-quality info in your social media feed

You must adjust your settings on Facebook and Instagram from the default to minimize low-quality content

Historically, Facebook’s algorithm automatically moved posts lower in the feed if they were flagged as false or misleading by one of the platform’s third-party fact-checking partners.

The “content reduced by fact-checking” dial, which allowed users to adjust the level of debunked posts they see in their feed, was introduced in 2023. Facebook announced that this gave people “more power to control the algorithm that ranks posts in their feed”.

However, this puts the onus for information quality on users, shifting responsibility away from the platform. As noted by David Rand, a professor at MIT, “allowing people to simply opt out seems to really knee-cap the program”.


Tip 5 - Rely on your local elections office for election information

New laws and misleading info will try to rob you of your vote – make sure you’re registered to vote at Vote.org 

In the last decade, at least 29 states have passed 94 restrictive voting laws, including stricter voter ID requirements and restrictions on mail-in voting. These laws are compounded by disinformation campaigns designed to mislead and disenfranchise voters.

To combat this, and to restore trust in the electoral process, voters should rely on election officials. They are the most authoritative sources on elections at all levels – including registration and voting procedures, how elections are run, and who wins the election – and they are non-partisan.

Resources:

- National Association of Secretaries of State: Step-by-step voting procedures provided by election officials for each state and territory

- CISA: Dispels myths around election procedures

- Ballot Ready: Get yourself ready to vote!

- Bipartisan Policy Center: How every state protects your vote


Tip 6 - Use Vote.org to make sure you’re registered to vote

Vote.org will give you the most accurate information and protect you from efforts to mislead you on social media

Bad actors are using technology and the legal system to engage in voter suppression tactics:

- Recent legislative changes in battleground states make it harder to vote. At least 14 states have introduced restrictive laws targeting voter registration, mail-in voting and voter identification.

- Eagle AI is AI-powered software deployed in battleground states by activists to challenge individuals’ voter registration status. The criteria it generates to support these challenges are often unreliable or irrelevant, raising concerns about its potential to disenfranchise legitimate voters.

Resources for securing your vote:

Vote.org

Ballotready.org