The healthcare authority running a dozen major hospitals in British Columbia’s Lower Mainland has quietly launched 40 artificial intelligence initiatives – but executives are refusing to provide basic details about these programs that could affect millions of patients.
When Fraser Health held its regular board meeting last week, I expected the typical updates on wait times and budget allocations. Instead, buried in a presentation slide was a startling revelation: the health authority has dozens of AI programs already operational or in development. The moment passed without questions from board members or elaboration from executives.
This lack of transparency around healthcare AI implementation isn’t just a bureaucratic oversight – it potentially affects the 1.9 million British Columbians whose medical care falls under Fraser Health’s jurisdiction.
“The public deserves to know what these systems are doing, who built them, and how they’re being evaluated,” says Dr. Heidi Tworek, a technology governance expert at the University of British Columbia. “Healthcare AI has enormous potential benefits but also serious risks if deployed without proper oversight.”
When pressed for details following the meeting, Fraser Health spokesperson Dixon Tam provided only vague assurances that the authority is “leveraging technology to improve care delivery.” Repeated requests for basic information about which AI systems are already making decisions that affect patient care were met with references to “proprietary technology” and “early development stages.”
This information vacuum is particularly concerning given recent controversies surrounding healthcare AI implementations elsewhere. Last year, a widely used algorithm in the United States was found to systematically underestimate the care needs of Black patients. Meanwhile, a UK hospital’s diagnostic AI system reportedly misidentified certain medical conditions at twice the rate of human radiologists.
Fraser Health’s silence stands in contrast to approaches taken by other Canadian health networks. Toronto’s University Health Network maintains a public registry of its AI tools, including descriptions of their functions, data sources, and validation processes.
“Transparency isn’t just about appeasing curiosity – it’s essential for accountability,” explains Emily Fulton, a digital health advocate who has experienced both the benefits and limitations of medical AI firsthand. “Patients need to know if an algorithm is influencing their diagnosis or treatment plan.”
The stakes are particularly high for marginalized communities. Research published in the Canadian Medical Association Journal last October demonstrated that healthcare AI systems can unintentionally perpetuate biases present in their training data. Without rigorous evaluation and diverse input, these systems risk exacerbating existing health inequities.
Fraser Health serves some of B.C.’s most diverse communities, including substantial South Asian, Chinese, Filipino, and Indigenous populations. The authority hasn’t disclosed whether its AI systems have been tested for performance variations across different demographic groups.
On the financial side, the implementation of 40 AI programs raises questions about resource allocation in a healthcare system already facing staffing shortages and budget constraints. Public records indicate Fraser Health’s annual technology budget increased by 18% last year, but specific AI-related expenditures remain undisclosed.
“We’re talking about public money funding systems that could fundamentally change how healthcare decisions are made,” notes Dr. Michael Wolfson, former assistant chief statistician at Statistics Canada. “The public has every right to know what they’re getting for their tax dollars.”
Several Fraser Health employees, speaking on condition of anonymity due to fear of professional repercussions, expressed concerns about the rapid implementation of AI tools without adequate staff training or consultation.
“We’re being told to trust the output of systems we don’t understand,” said one clinician who works at Royal Columbian Hospital. “When I asked basic questions about how the predictive model works, I was told that information wasn’t available to frontline staff.”
The federal government’s proposed Artificial Intelligence and Data Act would require transparency and risk mitigation measures for high-impact AI systems, including those in healthcare. However, the legislation remains under review, creating a regulatory gap that health authorities are navigating differently.
B.C.’s Information and Privacy Commissioner Michael McEvoy has previously warned about the risks of deploying AI systems without proper governance frameworks. His office confirmed they have not been consulted about Fraser Health’s AI initiatives.
The silence from Fraser Health executives isn’t just a local concern – it reflects broader questions about how public institutions should approach AI implementation. As algorithms increasingly influence decisions from diagnosis to resource allocation, the line between technological innovation and patient autonomy blurs.
What remains clear is that patients deserve more than vague reassurances about transformative technologies affecting their care. As Fraser Health continues its AI expansion, the gap between technological capability and public accountability grows increasingly difficult to ignore.
Until executives provide substantive information about these 40 AI initiatives, patients are left with an uncomfortable reality: the algorithms influencing their healthcare remain as opaque as the process that put them there.