Canada’s Laws Are Not Ready for Artificial Intelligence — and the Cost of Delay Is Rising

Artificial intelligence (AI) is transforming how data is collected, analyzed, and repurposed at a scale and speed that Canada’s current legal frameworks were never designed to handle. While AI promises enormous economic and productivity gains (Barry, 2024; The Canadian Press, 2025), it also introduces profound risks to privacy, data sovereignty, and human authorship (Alben, 2020; Peghin, 2025). Canada lacks a coherent legislative framework governing the development and use of artificial intelligence, leaving individuals and organizations to navigate a fragmented and uncertain regulatory landscape.

Previous federal governments have attempted to modernize Canada’s privacy legislation in response to the rapid rise of artificial intelligence, but without success. Meanwhile, other jurisdictions, such as the European Union, have developed and begun implementing new regulations governing AI. Canada’s federal and provincial governments must modernize the regulatory landscape to protect Canadians and ensure the safe development of AI technologies.

How Do Generative AI Models Work?

One of the most common questions in discussions about generative artificial intelligence is deceptively simple: how do these models work? The answer matters because many of the risks associated with AI stem directly from how these systems are trained and how they generate outputs. Generative AI models do not “understand” information or retrieve stored documents like a database. Instead, they are statistical systems trained on massive datasets to recognize and reproduce patterns at scale. Models are exposed to a wide variety of examples and learn to predict what comes next in a sequence, such as the next word in a sentence (Alben, 2020; IBM, n.d.). Over time, they adjust internal parameters to improve these predictions, internalizing statistical relationships between concepts (Alben, 2020; IBM, n.d.). While models do not directly memorize training data, they can nonetheless reproduce fragments of it, creating real privacy and copyright risks even in the absence of intentional misuse.

Once trained, AI models produce outputs probabilistically (IBM, n.d.). Given a prompt, a model calculates which words or elements are most likely to follow based on patterns learned during training, resulting in outputs that may appear coherent, creative, or original. This probabilistic process explains why AI-generated content can resemble existing works, contain factual errors, and vary significantly between responses to the same prompt.
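For readers who want a concrete picture, the two steps described above — learning statistical patterns from examples, then sampling the next word in proportion to those patterns — can be illustrated with a toy bigram model. This is a deliberate simplification: real generative models use neural networks with billions of learned parameters, not a lookup table of word counts, but the generation step follows the same probabilistic idea.

```python
import random
from collections import defaultdict

# "Training": count which word follows which in a tiny example corpus.
# Real models adjust billions of parameters instead of tallying counts.
corpus = ("the cat sat down the dog ran away "
          "the cat ran home the dog sat quietly").split()
next_word_counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt: str, length: int = 3) -> str:
    """Extend the prompt word by word, sampling each next word in
    proportion to how often it followed the current word in training."""
    words = prompt.split()
    for _ in range(length):
        counts = next_word_counts.get(words[-1])
        if not counts:               # no learned continuation: stop
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# The same prompt can yield different continuations on different runs,
# which is why identical questions can produce varying answers.
print(generate("the"))
print(generate("the"))
```

Even this miniature version exhibits the behaviours discussed above: it reproduces fragments of its "training data" verbatim, and it can give different answers to the same prompt.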

Why Canada’s Existing Frameworks Fall Short

Modern AI systems, including large language models, biometric identification tools, and predictive analytics platforms, exploit vast quantities of data that are continuously reused, recombined, and repurposed across contexts. Data collected for one purpose may be used to train models for entirely different applications, often without the knowledge or meaningful consent of the individuals involved (Alben, 2020; Moczuk & Płoszajczak, 2020). De-identification techniques, once assumed to protect privacy, can frequently be reversed through re-identification methods when datasets are combined at scale.

Canada’s privacy and copyright frameworks were not designed for the AI era. The Personal Information Protection and Electronic Documents Act (PIPEDA) was built to govern relatively linear data flows, where information is collected and used for specific purposes within defined contexts and disclosed under clear conditions (PIPEDA, 2000). Artificial intelligence fundamentally disrupts this model by enabling continuous data reuse across applications, including training systems on datasets that often contain copyrighted works collected without the knowledge or consent of their creators. While Canada’s Copyright Act provides protection for “original works,” it does not explicitly define authorship as human or clearly address AI-generated outputs (Copyright Act, 1985; Peghin, 2025). Compounding these design limitations is a fragmented governance landscape: federal instruments such as PIPEDA, the Privacy Act, the Copyright Act, and the Directive on Automated Decision-Making offer incomplete and uneven coverage, while provincial privacy regimes introduce further inconsistency (PIPEDA, 2000; Privacy Act, 1985; Copyright Act, 1985; Government of Canada, n.d.). This patchwork creates uncertainty for Canadians about how their data is used; for innovators, it generates regulatory ambiguity that can discourage investment and slow responsible deployment.

A Policy Gap at a Critical Economic Moment

The absence of AI-specific legislation is increasingly out of step with Canada’s broader economic ambitions. In a 2025 mandate letter, Prime Minister Mark Carney instructed all Ministers to prioritize economic expansion by “identifying and expediting nation-building projects that will connect and transform our country” (Office of the Prime Minister, 2025). Artificial intelligence is widely viewed as one such transformative force. A PwC study cited by the Canadian Press (2025) estimates that AI adoption could lift Canada’s GDP to $3.65 trillion by 2035. However, realizing these gains depends on public trust; without clear rules governing data protection, accountability, and transparency, Canadians may grow increasingly resistant to AI adoption.

Some jurisdictions have taken a proactive approach to the rapid rise of AI and have developed regulations for the technology. The European Union proposed its Artificial Intelligence Act in 2021, and the legislation entered into force in 2024, creating regulatory structures and obligations for risk management, data quality and robustness, cybersecurity, and human oversight (Artificial Intelligence Act Portal, 2024).

Considering the rapid development of AI technology, Canada must now act to regulate it. As mentioned earlier, previous governments attempted to expand existing frameworks through the Digital Charter Implementation Acts, Bills C-11 and C-27, which would have updated PIPEDA and enacted the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act. Together, these measures would have modernized Canada’s privacy framework and established a dedicated tribunal for enforcement and a regulatory regime for AI systems.

Reintroducing these bills would also provide an opportunity to clarify copyright protections in the context of generative AI, reinforcing the principle of human authorship while establishing accountability for AI-driven infringement. While not as comprehensive as some international frameworks, this approach would significantly strengthen Canada’s regulatory baseline within a realistic timeframe.

Conclusion: The Case for Pragmatic Urgency

Canada does not need to choose between innovation and regulation, but it does need to choose decisiveness over delay. Regulating AI would help restore public trust, protect individual rights, and provide clarity to innovators. Artificial intelligence will shape Canada’s economy and society for decades to come. The question is not whether to regulate it, but whether regulation will arrive in time to matter.


Authors: Rio Valencerina, Aiyana Burkowski-Kleefeld, Rebekah Brandenburg, and Yaseen Benhaddad are current Master of Public Policy students at the School of Public Policy.


References

Alben, A. (2020). When artificial intelligence and big data collide—How data aggregation and predictive machines threaten our privacy and autonomy. AI Ethics Journal, 1(1), 1–3. https://doi.org/10.47289/AIEJ20201106 

Artificial Intelligence Act Portal. (2024, August 1). Implementation timeline. https://artificialintelligenceact.eu/implementation-timeline/ 

Barry, C. (2024, June 4). New report highlights how generative AI can transform Canada’s future with a potential to add $187B to the Canadian economy by 2030. Microsoft News Centre Canada. https://news.microsoft.com/en-ca/2024/06/04/new-report-highlights-how-generative-ai-can-transform-canadas-future-with-a-potential-to-add-187b-to-the-canadian-economy-by-2030/

Copyright Act, R.S.C. 1985, c. C-42. https://laws-lois.justice.gc.ca/eng/acts/C-42/Index.html 

Government of Canada. (n.d.). AI strategy for the federal public service 2025-2027: Overview. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-overview.html 

IBM. (n.d.). What is an AI model? IBM Think. Retrieved January 6, 2026, from https://www.ibm.com/think/topics/ai-model

Moczuk, E., & Płoszajczak, B. (2020). Artificial intelligence – benefits and threats for society. Humanities and Social Sciences, 27(2), 133–139. https://doi.org/10.7862/rz.2020.hss.22 

Peghin, E. (2025, November 9). Copyright in the age of AI: Legal implications and emerging issues – Summary. Ontario Bar Association. https://oba.org/copyright-in-the-age-of-ai-legal-implications-and-emerging-issues-summary/

Personal Information Protection and Electronic Documents Act, S.C. 2000 c. 5. https://laws-lois.justice.gc.ca/eng/acts/p-8.6/ 

Office of the Prime Minister. (2025, May 21). Mandate letter. https://www.pm.gc.ca/en/mandate-letters/2025/05/21/mandate-letter 

Privacy Act, R.S.C. 1985, c. P-21. https://canlii.ca/t/56jh1

The Canadian Press. (2025, September 24). AI adoption could boost Canada’s GDP to $3.65 trillion by 2035, PwC study estimates. BNN Bloomberg. https://www.bnnbloomberg.ca/business/artificial-intelligence/2025/09/24/ai-adoption-could-boost-canadas-gdp-to-365-trillion-by-2035-pwc-study-estimates/