A barrage of videos has taken over social media feeds following Hamas’ recent terrorist attack in Israel.
Unfortunately, though, a plethora of clips aren’t necessarily telling the stories they seem to be at first glance. Some of the videos are entirely fabricated, others slightly modified to speak to a specific narrative, and a number of others misattributed, showcasing scenes that might have happened in real life but not in connection with this specific set of atrocities.
The trend isn’t a new one: Throughout the past few years, especially during Donald Trump’s run for office in 2016 and as artificial intelligence (AI) tools have become more advanced, social media users have been confronted with images that have swayed public opinion and political allegiances despite their inaccuracy.
Given the extent of the issue and its worldwide impact, it’s important we learn how to properly assess the integrity of the videos we are flooded with while also discussing why they are being shared in the first place.
How a fake video is created and distributed.
Various types of material currently being shared on the internet are not factually accurate, including cheap fakes, deepfakes and real but misattributed clips.
“A cheap fake is just a doctored video that was originally legit,” explains Julie Smith, an instructor at the School of Communications at Webster University and the author of “Master the Media: How Teaching Media Literacy Can Save Our Plugged-In World.”
The people who create them usually repurpose existing media — photos, audio, video — editing it in novel ways.
One such example is the famous clip of a supposedly drunk Nancy Pelosi, widely shared in 2020. “It had its audio slowed down, so she sounded drunk,” Smith explains. “So the video was real, but the audio had been tweaked.”
Such clips can be created using simple editing techniques. “For example, you can use someone’s interview and cut parts of it to remove or change the context,” said Klara Tuličić, a social media and video marketing strategist.
A deepfake, on the other hand, is an entirely fabricated production built around a synthetic rendering of a real person’s voice and/or image.
Examples of deepfakes include the viral Morgan Freeman video, in which the actor urges the public to question reality, and just about anything posted on the @deeptomcruise TikTok page, which is entirely dedicated to fake clips of Tom Cruise.
According to Tuličić, AI tools are commonly used to fabricate this sort of content. “It can be done literally in a few clicks, in less than 10 minutes and for under 10 dollars,” she said. Just Google “make your own deepfake” for a list of websites that will allow you to do that swiftly.
There is a third category of videos that are just as dangerous to political and social discourse as deepfakes and cheap fakes: misattributed clips, like the heartbreaking post circulated online in recent days that showed children in cages, supposedly Jewish kids held captive in Gaza. The footage was actually from Syria.
“This happens because it’s just so easy,” Smith said. “All you have to do is find a video online, and then recycle it and reframe it to fit whatever narrative you’re pushing.”
After all, when confronted with such emotional videos, the average viewer may not immediately question the veracity or accuracy of the images. “It’s hard to view them critically,” Smith said.
“Such misattribution helps in shaping narrative and achieving certain goals of individuals or groups,” Tuličić said. “Showing a tragic fate, especially in children, is something not many people can be indifferent to. Information is the most powerful tool in this day and age, and if a picture is worth a thousand words, then videos are worth at least tenfold.”
Why are these fake videos being created?
There are a variety of reasons why these fake videos are created, and many of them are psychological.
Humans’ propensity for the grotesque and shocking is the simplest explanation for why some people and outlets create these emotional videos.
After all, capturing internet users’ attention can be financially beneficial, considering the virtually endless number of web pages and accounts competing for eyeballs online. Increased traffic may also bring more influence, leading to an endless loop of posting falsities to gain even more traction.
“Some […] might be doing it to increase traffic on their sites or pages [while] some others might simply be doing it as sport,” Smith noted.
Interestingly enough, although things may eventually shift, social media platforms are not currently held accountable for the spread of misinformation that they facilitate. The platforms are, therefore, less likely to take down videos that don’t reflect the truth, because they’re not legally required to do so.
“The [websites] are not liable for anything posted on them by a third party because of Section 230,” said Smith, referring to the section of Title 47 that basically grants online computer services immunity for content generated by their users. “So there’s no risk in posting or hosting misinformation online.”
The very essence of social media is part of the problem.
The creation of fake content is a problem, but the viral and organic distribution of misinformation is perhaps harder to unpack and solve.
More specifically, the fact that we share doctored clips across a network of people, mostly friends and family whom we trust and who trust us, makes it very difficult to question the material in front of us.
At their core, Instagram, TikTok, Facebook, Twitter and other similar platforms are communities of individuals we love and whose opinions matter to us. That is all to say: Whatever they post, we’re likely to believe.
“We get so much information from social media platforms where we tend to connect with people who think, feel, believe and vote the way that we do,” Smith said. “If we like what the message says, we’re more likely to believe it and less likely to check it for authenticity. Echo chambers are very comfortable.”
How to spot fake videos on social media.
So, how can we tell whether the clips we encounter while endlessly scrolling through our social media feeds are real?
According to Smith, the number one rule is to question just about anything you see.
“If a video gives you a super strong emotional reaction, that may have been its exact intention,” the expert said. Ideally, you would try to find the source behind the production — not the person who shared it, but whoever actually put the whole thing together. You’d then want to look through that profile’s other posts to gauge its credibility.
Tuličić suggests looking for “visual mistakes” when analyzing possible cheap fakes.
“They can have inconsistent backgrounds behind the person that’s talking, inconsistencies in their clothing, or the audio may be different from what’s indicated by the subject’s lips,” she explained.
When it comes to deepfakes specifically, though, things are a bit tougher.
“The technology is getting so good that they are harder to identify than they used to be,” Smith noted.
Tuličić advises looking for a lack of body movement.
“Not including hand gestures, breathing and other quirks that the human body naturally produces can be a sign of a deepfake video,” she explained. “Sometimes they can have somewhat unusual facial expressions, like unnatural blinking.”
Misattributed videos, on the other hand, can actually be run through certain online programs that will spew out the origins of the upload.
“InVID” from the RAND Corporation and Amnesty International offer such digital verification tools to, according to Smith, “authenticate human rights abuses.”
If you cannot access these programs, trying to distinguish a “suspicious” video from a genuine one calls upon, believe it or not, your emotional reaction to it. “Misattributed video often hopes to evoke strong emotions,” Tuličić said. “Especially if those emotions are negative, like sadness, anger or fear. They usually play with trends and/or sensitive topics that are currently causing discomfort in the local community or society in general.”
Given the world’s interconnectedness and the heightened political atmosphere that has defined interpersonal relationships for the past few years, humans’ ability to create and spread visuals depicting altered realities is extremely dangerous.
Unfortunately, the rapid development of technology we benefit from makes the trend much harder to eradicate. Perhaps the problem can be curbed by our pledge to question everything we see and, maybe, think twice before sharing it with our hundreds of followers.