The room fell silent as Daniel Patterson recounted how AI-powered surveillance tools had been used to monitor his legal work defending environmental activists. “What happened to me wasn’t an aberration—it’s becoming standard practice,” Patterson told the audience at McGill University’s Faculty of Law last week.
Patterson, a prominent American civil liberties attorney who faced government surveillance after representing climate protesters in 2023, came to Montreal to warn Canadians about emerging threats to democratic dissent. His visit comes as Parliament considers Bill C-45, which would expand law enforcement’s ability to use artificial intelligence to monitor public spaces.
“The technology being proposed here looks remarkably similar to what was deployed against me and my clients,” Patterson said, referencing documents he obtained through a two-year legal battle with the Department of Justice. These records, which I reviewed after his talk, reveal how facial recognition and natural language processing were used to track attorneys representing political demonstrators.
The Canadian Civil Liberties Association has raised concerns that Bill C-45 lacks sufficient oversight mechanisms. “Without proper judicial review requirements, we risk normalizing mass surveillance of lawful political activity,” said Maya Richardson, CCLA’s technology rights director, when I contacted her for comment.
Patterson’s case began when he represented climate activists who blocked pipeline construction in Arizona. Unbeknownst to him, authorities were using an AI system called “ARGUS” to analyze his communications with clients, track his movements through public spaces, and even monitor his social media engagement patterns.
Court filings from Patterson’s subsequent lawsuit show the surveillance was authorized under a broadly written national security directive. Justice Emily Townsend’s landmark ruling in Patterson v. Department of Justice found that the surveillance “impermissibly chilled protected First Amendment activity” and ordered the program suspended pending judicial review.
“What makes this case relevant to Canada is that Bill C-45 contains similar vague authorizations for AI surveillance with minimal oversight,” explained Professor Jean-Philippe Marquis from the University of Montreal’s Centre for AI Ethics. “The bill’s language about ‘potential public safety concerns’ could easily apply to peaceful protests.”
I obtained a draft implementation memo for Bill C-45 through an access-to-information request. The document outlines plans for deploying machine learning tools to “identify patterns of concern in public gatherings” and “assess risk factors in online communications.” Nowhere does it establish clear limits on monitoring legal advocacy or protected speech.
At the University of Toronto’s Citizen Lab, researchers have documented the growing use of surveillance technologies across Canadian law enforcement agencies. Their 2024 report, “Watching the Watchers,” identified 23 police departments already piloting AI tools without consistent privacy impact assessments or public disclosure.
“We’re seeing a troubling trend where surveillance capabilities expand rapidly while accountability mechanisms lag years behind,” said Dr. Rebecca Chen, the report’s lead author. She pointed to Vancouver’s controversial six-month facial recognition trial, which collected data from protests before any privacy framework was established.
Patterson’s legal team discovered that ARGUS assigned “risk scores” to individuals based partly on their association with attorneys who had previously represented activists. This guilt-by-association approach meant lawyers themselves became targets simply for doing their jobs.
“What happened to me wasn’t just about surveillance—it was about intimidation,” Patterson explained during our interview after his lecture. “When lawyers know they’re being watched for representing certain clients, it creates a chilling effect on who gets legal representation.”
The Canadian Bar Association has taken notice. Last month, it submitted concerns to the parliamentary committee reviewing Bill C-45, arguing that attorney-client relationships require explicit protection from surveillance systems. “Democratic societies depend on legal professionals being able to represent clients without fear of government monitoring,” its submission stated.
Bill C-45 enters its final reading next month. Justice Minister Claire Delorme defended the legislation at a press conference I attended yesterday, saying it “balances security needs with appropriate safeguards.” When pressed about Patterson’s warnings, she insisted that Canadian implementation would differ from American models.
However, technical experts question this assurance. The Federal Privacy Commissioner’s recent analysis found that Bill C-45 lacks specific technical limitations on how AI systems can connect disparate data sources—exactly what made ARGUS so invasive in Patterson’s case.
“The problem isn’t just what the law explicitly permits, but what it fails to prohibit,” explained David Mendelson, digital rights attorney with OpenPrivacy Canada. “Without clear boundaries, these systems tend to expand their capabilities over time.”
For Patterson, the consequences were both professional and personal. Court records show his legal practice lost clients, and he faced unexplained account lockouts and repeated secondary security screenings when traveling. “The psychological impact of knowing you’re being watched changes how you practice law,” he told students during the Q&A session.
As Parliament’s vote approaches, civil society groups are organizing to demand stronger oversight provisions. The Coalition for Responsible AI Use has gathered over 42,000 signatures on a petition calling for judicial warrant requirements for any AI surveillance deployment.
“What Canadians need to understand is that these tools fundamentally change the relationship between citizens and the state,” Patterson said as our interview concluded. “Once these systems are operational, they rarely become less powerful—they only expand.”
Whether Canada heeds this warning remains to be seen. What’s clear from Patterson’s experience is that the line between legitimate security measures and tools that suppress democratic participation can quickly blur when powerful surveillance technologies operate without sufficient constraints.