Taking Action Against Antisemitic Hate: When Content Moderation, Self-Regulation, and Legislation Fail
I posted about antisemitic graffiti at a Toronto bus stop. What came next has left me thinking about the enormity of the challenge of dealing with online hate.
The explosive growth of antisemitism in Canada since October 7th is well documented, with shootings at schools, the need for a regular police presence at synagogues and community centres, arrests on terrorism offences, and protests targeting Jewish-owned businesses and communities. In that context, some antisemitic graffiti at a bus stop in Toronto over the weekend might have been just one more incident to add to a list that now runs into the hundreds. Yet I found the image of “No Service For Jew Bastards” particularly chilling, evoking memories of the Holocaust and of similar hateful messages that have frequently targeted minority communities over the years. I proceeded to post a tweet and a LinkedIn post with the photo and a caption:
What came next has left me thinking about the enormity of the challenge of dealing with online hate.
The LinkedIn post was removed within an hour – presumably by automated content moderation – as LinkedIn said the photo violated the site’s hateful speech guidelines. Viewed on its own, that is surely the right call, but when combined with the caption and context, I don’t think it is. I filed an appeal and about 30 minutes later, the post was back up. Hours after that, LinkedIn removed it again, reversing the appeal decision and concluding that it did violate its guidelines.
While I think LinkedIn is wrong in this case, I’m glad it uses content moderation with an appellate process to address content that may violate its guidelines. Transparent content moderation practices with appropriate due process are an essential part of addressing online harms. But the experience highlights the imperfections even of systems that combine automation, human review, and put-back appeal processes – imperfections that can lead to limitations on freedom of expression.
The experience on Twitter (or X) was even worse. The tweet generated over 150,000 views in 24 hours with thousands of retweets, likes, and comments. While there were many supportive comments, the hate it sparked was breathtaking. I posted a tweet with some of the comments, but days later they persist, a steady stream of vitriol that would understandably cause many to think twice before wading into an issue such as antisemitic hate. There were numerous reply tweets that unquestionably violate Twitter’s hateful conduct guidelines, but enforcement on the platform rarely occurs. Many have left the platform as a result, fed up with a self-regulation model that ignores its own guidelines.
Since Bill C-63 on online harms is very much top of mind, it is also worth noting that I don’t think the flawed bill, if passed, would significantly change the situation either. There would be a duty on the platforms to act responsibly, but the standard for fomenting hatred is rightly a high one. Addressing the issue would therefore come primarily from regulatory pressure on platforms to abide by their own policies rather than from new legislative requirements. The experience with the LinkedIn takedowns highlights the limitations of practices that can have a real impact on freedom of expression.
So if all of these fail, what can be done to counter antisemitic hate? I will start with the painful acknowledgement that it isn’t going away. Indeed, if anything, it has discouragingly become a hallmark of both the extreme right and left. Some will suggest that an end to the conflict in Israel/Gaza will temper the hate, but so many of the comments are focused on Jews and the Jewish community, reviving vile antisemitic tropes that suggest something bigger and more insidious is at play. It is difficult – this post will no doubt spark more of the same – but critical to call it out again and again and again with sustained pressure on both platforms and political and community leaders to take appropriate action.
The failure of our political leaders is particularly jarring as words of support without action have become meaningless. In certain respects, the 2022 Laith Marouf incident, in which a Liberal MP suggested I was racist for calling out then-Canadian Heritage Minister Pablo Rodriguez’s silence on government funding for an antisemite, was a warning sign. Many MPs said nothing and Rodriguez appears to have lied to committee about what he knew, yet there were no consequences. Not speaking out or taking action in the face of antisemitism is not new and we should have seen this coming.
If there is to be a change, it will have to come from more than just the Jewish community, legislative initiatives, or political leaders that suddenly find some courage. It will require all Canadians committed to equality and combatting hate to make their voices heard and to demand better. Politicians are fond of saying that “this isn’t us” when responding to hate incidents. Yet the reality is that it is. Words alone won’t change that, but real, universal action might.
Post originally appeared at https://www.michaelgeist.ca/2024/03/taking-action-against-antisemitic-hate-when-content-moderation-self-regulation-and-legislation-fail/
So sad Michael. Keep up with the fight. We’re with you.
Dr. Geist, you were caught in a censorship dragnet with this one social media corp – censorship that big tech already has in place to silence discussion of "controversial" topics that need to be discussed. So you cannot even communicate the issues of antisemitism you are seeing in front of you. Do you not think Bill C-63, the Online Harms Act, has the potential to exponentially amplify this across all social media platforms? It carries financial penalties that could be employed against people for speaking out against antisemitism, because big tech algorithms are set up to delete discussion, and someone offended by your opinions, or their "group", may find your valid points about what you are seeing in front of your eyes "hateful". The same goes for government corruption, with independent voices exposing truths that the government and its political supporters do not like. This sort of dragnet can now be widened by various means to silence those questioning the government and revealing actual corruption, shutting down discussion if certain political party supporters use tribunals and mob complaints to target independent voices. The potential for real abuse of Bill C-63 to silence whomever the government, its supporters, or their stakeholder "groups" do not like is huge. Have you considered this based on your recent experience?