Tip 3 - AI will make it harder to tell fact from fiction
AI can produce convincing fake audio and video, further eroding trust in the facts
Bad actors can use AI to create hyper-realistic images and convincing ‘deepfakes’: images, audio or video that are digitally created or altered to make real people appear to say or do things they never actually said or did.
In January 2024, an AI-generated robocall impersonating Joe Biden was sent to New Hampshire voters. The fake message tried to trick them into skipping the primary:
FACT CHECK - Voting in a primary does not prevent you from voting in a general election!
Social media companies will use labels to notify you of AI-generated content.
Here’s what they will look like on Facebook, Instagram, and TikTok.
Keep in mind:
Not all manipulated or false content will use AI - much of it will rely on manually edited images, or simply misrepresent real information
Likewise, some AI-generated content will escape moderation and labelling!
So these labels are a helpful guide, but not foolproof!
Be critical of all content and fact-check stories across multiple sources!
5 things to be aware of:
Watch for the Truth Sandwich: False information may be hidden between two truths. Stay alert!
Beware of private chats: Be cautious if a video or voice message in your private messages seems unusual, even if it features familiar faces or voices. Deepfakes are harder to spot in private, where fewer eyes can scrutinize them.
Fakes take on all forms: Be alert across all messages — text, videos, photos, and especially audio. Fake audio can be particularly deceptive because it offers fewer clues that something is off.
Know Who You Trust: Fake videos of well-known figures like politicians are easier to detect. Be extra cautious with unfamiliar faces!
Crisis = Caution: In times of crisis, misinformation spreads fast. Don't let fear cloud your judgment.
Stay informed, question, and verify before you trust or share!