If you thought C-11 was noisy, get ready for the Online Harms Bill.

June 29, 2022

Two weeks ago, on June 15th, Heritage Minister Pablo Rodriguez announced that the final summary of his Expert Panel on Online Safety will be published in the coming weeks. Presumably, that Report sets the table for a Bill or a further public consultation.

A Canadian Online Harms Bill would replicate measures taken by Germany and the United Kingdom in response to harmful posts on social media. Any version of such a Bill in Canada, no matter how well crafted, will raise genuine censorship issues, in contrast to the risible “freedom of expression” messaging we endured from the critics of the Online Streaming Bill C-11.

The heated debate that follows the tabling of such a Bill will conjure up images of Wile E. Coyote detonating the proverbial canister of TNT.

The Minister was understandably eager to “expert-fy” such a combustible issue. And so this spring his panel of 12 experts met nine times (the minutes are online), grappling with the best way to tame the dangerous and uncivil excesses of social media and even less savoury online “services.”

For the most part, the panel leaned towards the UK model of a legally binding “duty of care” imposed by government on social media platforms to improve their overall content moderation results while avoiding piece-by-piece content moderation as much as possible.

In their own words, the panel stated that “the objective is to reduce the amount of harmful content online and the risk it poses to Canadian users. Based on the responses to the 2021 public consultation, a harmful content regulatory model should not be organized around whether a regulated platform appropriately moderates a given piece of content. Instead, and similar to the proposal brought forward by the United Kingdom in its recent Online Safety Bill, a regulatory model focused on the systems, tools, and approaches that platforms have in place to address harmful content would be preferred.” (Emphasis added)

In practice, that means social media platforms would be expected to exploit the proprietary knowledge of their own content moderation data and AI tools, and then develop codes of conduct (“Digital Safety Plans”) dealing with harmful posts that would trigger countermeasures: deleting posts, warning authors of offside content, labelling “awful but lawful” content as dangerous or untruthful, and so on.

The government would also expect the platforms to share their data and AI decisions with a government-appointed Digital Enforcement Commissioner, so that the public would be able to verify the platforms’ efforts to reduce harmful content.

In this “systems” approach, the piece-by-piece moderation of posted content would play a peripheral role in the platform’s duty of care, deferring to the demonstrated improvement in overall results. Censorship (deleting posts or expelling unrepentant authors) would likely be restricted to posts advocating terrorism, criminal hate expression, threats of physical harm, sexual exploitation of minors, and possibly revenge porn. 

This systems approach is industry self-regulation by the social media platforms, with the government looking over their shoulders and breathing down their necks. The government would be looking for long-term results while letting platforms off the hook for the occasional screw-up on a bad post.

The political payoff of such an approach is that it reduces censorship, especially over-censorship by liability-averse platforms, and relieves government and the platforms of the burden of an elaborate administrative law regime of legal red lines, justiciable complaints, and appeals in a digital environment of millions of daily posts (which means that any Bill must not take away the right of Canadians to file tort actions in civil court).

The least difficult task will be separating the regulation of clearly illegal posts from the awful-but-lawful.

The sleeper issue is how far the list of regulated posts might extend into awful-but-lawful territory, something the general public may well be expecting.

Said the panel’s minutes of April 21, 2022: “Some [panel] experts explained that additional types of harmful content would need to be included if the framework were to delineate specific objects of regulation. A range of harmful content was said to be important to scope in, including: fraud; cyberbullying; mass sharing of traumatic incidents; defamatory content; propaganda, false advertising, and misleading political communications; content or algorithms that contribute to unrealistic body image, or which creates a pressure to conform; and content or algorithms that contributes to isolation or diminished memory concentration and ability to focus.” [Emphasis added]

The most vexing of all is political or civic misinformation, whether spontaneous or coordinated by malign actors. In the end, the minutes reveal a panel of experts dreading the destructiveness of misinformation yet backing away from the precipitous ledge of censorship that beckons.

If the sophomoric flap over the Online Streaming Bill C-11 demonstrates anything, it is that reasoned policy solutions don’t get much of a hearing once the shouting starts. 

Regardless of how well thought out the details of an Online Harms Bill might be, any impact on freedom of expression will trigger political narratives of censorship versus safety and freedom versus harm. 

As the panel’s minutes repeatedly note, the constitutional and moral values at stake include both freedom of expression and equality. The latter is described clinically as everyone’s equal right to freely express themselves online, even provocatively, without being subject to racist or misogynist abuse. More fundamentally, you can take “equality” as code for the constitutional claim of the victims of hateful content to the protection of society.

And that is what might get lost if free speech ultras, an august but disproportionately white and male assembly, don’t empathize with fellow Canadians who do the actual experiencing of hate online.

When it comes to acting against Online Harms, no one should be morally bullied into acquiescing to the government’s approach, but everyone must keep an open mind and an open heart.

Published by

Howard Law

I am a retired staff member of Unifor, the union representing 300,000 Canadians in twenty different sectors of the economy, including 10,000 journalists and media workers. As the former Director of the Media Sector and an unapologetic cultural nationalist, I have an abiding passion for public policy in Canadian media.
