Why forcing AI firms to report online threats may not be simple
A cybersecurity law expert says Canada could introduce laws requiring artificial intelligence companies to notify police of online threats, but the process would not be simple, since reporting every suspicion is "just not workable."
Emily Laidlaw, a Canada Research Chair in cybersecurity law at the University of Calgary, said every AI company sets its own policy on when to inform police about what happens on its platform. She said Canada has considered introducing such laws in the past but did not follow through.
The issue is under scrutiny again in the wake of the mass killings in Tumbler Ridge, B.C., by a shooter whom OpenAI had banned from its ChatGPT platform at least seven months earlier.
OpenAI did not inform police about the problematic behaviour of Jesse Van Rootselaar until after the Feb. 10 killings. The firm has been called to Ottawa to meet with federal Artificial Intelligence Minister Evan Solomon on Tuesday to explain its safety procedures and decisions.
