Why the Criminal Code and Human Rights Act Provisions Should Be Removed from the Online Harms Act
There is a real risk these provisions will overwhelm Bill C-63 and become a primary area of focus despite not being central to the law’s core objective of mitigating harms on Internet platforms.
Having spent virtually the entire day yesterday talking with media and colleagues about Bill C-63, one thing has become increasingly clear: the Criminal Code and Human Rights Act provisions found in the Online Harms Act should be removed. In my initial post on the bill, I identified the provisions as one of three red flags, warning that they “feature penalties that go as high as life in prison and open the door to a tidal wave of hate speech related complaints.” There is no obvious need or rationale for penalties of life in prison for offences motivated by hatred, nor for weaponizing human rights complaints by reviving Human Rights Act provisions on communication of hate speech. As more Canadians review the bill, there is a real risk that these provisions will overwhelm the Online Harms Act and become a primary area of focus despite not being central to the law’s core objective of mitigating harms on Internet platforms.
Indeed, these concerns are already attracting media coverage and were raised yesterday in columns and commentary from Andrew Coyne and Professor Richard Moon, who I think rightly describes the core provisions of the Online Harms Act as “sensible and workable” but notes that these other provisions are troubling. Bill C-63 is effectively four bills in one: (1) the Online Harms Act, which forms the bulk of the bill and is focused on the duties of Internet platforms as they respond to seven identified harms, (2) the expansion of mandatory child pornography reporting requirements to include those platforms, (3) the Criminal Code provisions, which open the door to life in prison for committing offences that are motivated by hatred of certain groups, and (4) the changes to the Canadian Human Rights Act, which restore Section 13, making communication of hate speech through the Internet a discriminatory practice. The difference between the first two and the latter two is obvious: the first two are focused on the obligations of Internet platforms in addressing online harms, while the latter two have nothing directly to do with Internet platforms at all.
The Criminal Code and Human Rights Act changes originate in Bill C-36, which was introduced in 2021 on the very last sitting day of the Parliamentary session. The bill died on the order paper with an election call several weeks later and did not form a core part of either the online harms consultation or the 2022 expert panel on online harms. These provisions simply don’t fit within a legislative initiative that is premised on promoting online safety by ensuring that social media services are transparent and accountable with respect to online harms. Further, both raise legitimate concerns regarding criminal penalties and misuse of the human rights complaint system.
My initial post made the case that “this feels like the first Internet regulation bill from this government that is driven primarily by policy rather than implementing the demands of lobby groups or seeking to settle scores with big tech.” Upon reflection, I think that remains true for the provisions focused on the Internet platforms, which are the product of several years of expert panels and public consultations. There is still a need to address Bill C-63 concerns involving enforcement and the powers of the proposed Digital Safety Commission, but Internet regulation to counter identifiable harms remains justified, since leaving the issue solely to the police is unlikely to mitigate the risks of amplification of those harms on social media services. However, the Criminal Code and Human Rights Act reforms are less connected to those efforts and much more difficult to support. Given that they distract from the emphasis on Internet platform regulation and may undermine the balance struck in the provisions focused on the seven harms and platform duties to act reasonably, they should be removed from the bill.
Post originally appeared at https://www.michaelgeist.ca/2024/02/why-the-criminal-code-and-human-rights-act-provisions-should-be-removed-from-the-online-harms-act/