Tip 3 - AI makes it harder to tell fact from fiction
Advances in generative AI make it easier to create and distribute altered and fake audio and video content.
Bad actors can use AI to make convincing ‘deepfakes’: images, audio or video that are digitally created or altered to make real people say or do things they did not actually say or do.
In January 2024, an AI-generated robocall impersonating President Joe Biden was sent to New Hampshire voters. This artificial message attempted to trick them out of their vote in the primary.
FACT CHECK - Reduce the spread of false information by cross-checking audio and videos against multiple sources.
AI software now lets you mimic the voices of public figures like Presidents Biden, Trump, and Obama for as little as $5. While this is often for fun - for instance, a parody format with the three Presidents riffing with Gen Z humor while playing “Call of Duty” - there’s a risk that people confuse these scripted personalities with the real deal. Know the difference!
Social media companies will use labels to notify you of AI-generated content.
Keep in mind:
Not all manipulated or false content will use AI - lots will use manually edited images, or even misrepresent real information
Likewise, some AI-generated content will escape moderation and labeling!
So these labels are a helpful guide, but not foolproof!
Be critical of all content and fact-check stories across multiple sources!
5 things to be aware of:
Watch out for lies embedded in facts: False information may be hidden between two truths. Stay alert!
Beware of false info sent through private chats: Be cautious if a video or voice message in your private messages (e.g., on WhatsApp, iMessages, or Signal) seems unusual, even if it features familiar faces or voices. Deepfakes are harder to spot in private, where fewer eyes can scrutinize them.
Fakes take on all forms: Be alert across all messages — text, videos, photos, and especially audio. Fake audio messages can be particularly deceptive because they offer less context to scrutinize.
Know who you trust: Fake videos of well-known figures like politicians are easier to detect. Be extra cautious with unfamiliar faces!
Crisis = Caution: In times of crisis, misinformation spreads fast. Don't let fear cloud your judgment.
Stay informed, question, and verify before you trust or share!
Supporting Research
AI can produce hyper-realistic images and convincing ‘deepfakes’, audio or video that is digitally altered to make real people say or do things they did not actually say or do:
- The New York Times published an online test inviting readers to look at 10 images and try to identify which were real and which were generated by AI, demonstrating first-hand the difficulty of differentiating between real and AI-generated images.
- In January 2024, a robocall impersonating President Biden went out to New Hampshire voters, falsely asserting that a vote in the primary would prevent them from participating in the November general election.
- A Rolling Stone article reports on how AI-generated audio clips of public figures, most notably President Biden, are used in comedic parodies.
Voters should also be wary of the potential for deepfakes to undermine trust in real information:
- The Liar’s Dividend describes how, by exploiting the perception that ‘AI is everywhere’, real content can be dismissed as deepfake or AI-generated, eroding trust in genuine information.
Read more on the potential impacts of AI on the election here!