
Canada Faces Rising AI Crime Wave as Criminals Exploit 'Jailbroken' Language Models
AI breaks free from constraints
Crime evolves anew
Canadian law enforcement officials are warning of an emerging cybersecurity threat as criminals increasingly exploit artificial intelligence systems, particularly through the 'jailbreaking' of large language models (LLMs) for illegal purposes [1].
Chris Lynam, director general of the RCMP's National Cyber Crime Coordination Centre, reports that criminals are now offering services to remove AI safeguards, effectively creating 'tech support for cybercriminals' through underground forums and messaging platforms like Telegram [2].
The scope of AI-enabled crime has expanded significantly, encompassing:
- Deepfake pornography and voice impersonation
- Romance scams leading to financial fraud
- Investment fraud using deepfake video conferences
- Custom-built criminal LLMs

The financial implications are severe: Deloitte forecasts that AI-related fraud losses could exceed US$40 billion in the United States by 2027 [1]. In one notable case, a Hong Kong company lost US$25 million to fraudsters who used deepfake video to impersonate its executives.
The regulatory response has faced setbacks. Canada's Artificial Intelligence and Data Act, intended to regulate AI systems, died when Parliament was prorogued on January 6, 2025 [3]. AI researcher Alex Robey of Carnegie Mellon University describes the current situation as 'the Wild West,' noting the lack of comprehensive governing principles.
In response, Canadian authorities are focusing on public education. The BC Securities Commission launched a $1.8 million awareness campaign in January 2025, featuring innovative content to highlight AI-related fraud risks [2]. The RCMP's National Cyber Crime Coordination Centre, established in 2020, continues to monitor and respond to evolving cyber threats.