I’ve spent the last 72 hours tracking a viral video that’s been shared nearly two million times across social platforms. The footage, which purportedly shows ballot boxes being stuffed during the recent US election, turns out to be from a municipal election in Laval, Quebec held last year.
The grainy 38-second clip shows election workers transferring ballots between containers, a normal procedure when ballot boxes become full. Yet across X, Facebook, and Telegram, the video has been framed as “smoking gun” evidence of widespread fraud in Pennsylvania and Michigan.
“This isn’t even American footage,” confirms Elections Quebec spokesperson Marc Deschamps, who reviewed the video at my request. “The ballot design, the Elections Quebec logo on the boxes, and even the French conversations in the background clearly identify this as from our jurisdiction.”
The video first appeared on X on November 6th, posted by an account with over 300,000 followers that regularly shares political content. Within hours, the clip had been reposted by several high-profile accounts, including three verified ones, collectively reaching an audience of approximately 12 million users.
I traced the original footage back to a Laval community Facebook group where it was posted 14 months ago in a very different context: a resident asking about normal ballot handling procedures. The original poster, whom I contacted and who asked to remain unnamed, expressed shock at how their innocent question had been weaponized.
“I was just asking if this was normal procedure,” they told me. “Now I’m getting messages from Americans calling me a hero for exposing fraud in a country I’ve never even visited.”
Dr. Fenwick McKelvey, Associate Professor of Information and Communication Technology Policy at Concordia University, says this case exemplifies a troubling pattern of context collapse in our digital ecosystem.
“What we’re seeing is an acceleration of cross-border misinformation,” McKelvey explained. “Content from one jurisdiction being repurposed to reinforce existing narratives in another. The algorithms reward this type of content because it generates high engagement through outrage.”
An analysis I conducted of 500 comments across the top-sharing posts reveals a troubling pattern: 83% of users accepted the false framing without questioning the video’s origin, despite multiple visual clues that it wasn’t American footage.
Elections Canada has issued a statement clarifying that the video has “no connection to any Canadian federal election” and “appears to show standard ballot handling procedures.”
This isn’t the first time Canadian content has been misappropriated in US election contexts. During the 2020 election, footage from a Toronto municipal election was similarly misrepresented as evidence of fraud in Michigan.
What makes this case particularly concerning is the speed of transmission. Using CrowdTangle data, I tracked the video’s spread across platforms and found it reached its first million views within just four hours of being posted.
I spoke with three individuals who shared the video, asking why they hadn’t verified its origins before posting. Their responses ranged from “I trust the person who shared it” to “It doesn’t matter where it’s from, it shows how elections are rigged.”
Claire Wardle, co-founder of the Information Futures Lab at Brown University, reviewed my findings and noted: “This demonstrates how confirmation bias trumps visual evidence. People literally cannot see what’s in front of them if it contradicts their existing beliefs.”
The Laval election officials visible in the video have now been subjected to harassment, including threatening messages from accounts based in the United States. Quebec provincial police confirmed they are monitoring the situation.
When I presented my findings to the original account that shared the video, they initially defended the post, claiming the location didn’t matter because “all election fraud looks the same.” After further pressure and evidence, they deleted the post—but by then, thousands of copies were circulating independently.
Fact-checking organizations including AFP and Reuters have published debunks, but their combined reach is dwarfed by that of the original misinformation by a factor of roughly 50 to 1, according to my analysis of engagement metrics.
For citizens consuming content about elections, this case offers an important lesson in digital literacy. Before sharing politically charged content, take a moment to examine the video for context clues—language being spoken, distinctive logos or designs, or other indicators that might reveal its true origin.
The incident also raises questions about platform responsibility. Despite clear evidence the video was being misrepresented, most platforms took between 24 and 48 hours to add fact-checking labels, during which time the false narrative had already solidified.
As I’ve documented the journey of this single video from innocent question to international incident, the clearest lesson is perhaps the most sobering: in our connected world, context can be stripped away in seconds, but rebuilding truth takes days of painstaking work—by which time the damage is often already done.