Following a gunman’s livestreamed massacre of Muslim worshippers at two Christchurch mosques in March 2019, New Zealand Prime Minister Jacinda Ardern shot to international prominence when, along with French President Emmanuel Macron, she fronted the “Christchurch Call”: an international “commitment by governments and tech companies to eliminate terrorist and violent extremist content online.”
[Photo: New Zealand’s Prime Minister Jacinda Ardern speaks during a news conference with French President Emmanuel Macron (not pictured) at the Christchurch Call Meeting at the Élysée Palace in Paris, France, May 15, 2019. Via Reuters.]
To date, 55 nations (including the US), three international bodies, and 10 tech companies (including Amazon, Google, Meta, Microsoft, Twitter, and YouTube) have pledged to work collaboratively toward the Call’s aims. In addition, the New Zealand government, via Ardern, has opted to “exercise a degree of global leadership on wider digital and tech issues beyond the specific scope of the Call.” That leadership is to be carried out by a seven-strong Christchurch Call Unit located within the Department of Prime Minister and Cabinet and supported by the Ministry of Foreign Affairs and Trade. Current priorities, following the second anniversary Summit and the release of a Community Consultation report, include “convening a working group on algorithmic outcomes and developing the foundations of a self-sustaining community.”
The scale and complexity of the task taken on by Ardern and the New Zealand government were emphasized by the recent shooting at a Buffalo, New York supermarket. Despite the Christchurch Call community’s best efforts, yet another violent extremist attack, reportedly inspired by the Christchurch massacre, was livestreamed, and the associated manifesto was widely distributed. (My AEI colleague Daniel Lyons recently summarized the challenges this episode posed to fledgling content moderation regimes in the US.)
It is apposite, then, to consider what Ardern is doing within New Zealand to address terrorist and violent extremist content in both the online and offline domains.
The first measure, banning ownership of the type of guns used by the mosque terrorist and instituting a buy-back scheme to remove those guns from the streets, has proven less than successful. Somewhat predictably, the guns handed in came from owners who were the least likely to use them illegally in the first place; owners with other intentions (e.g., organized crime gangs) simply ignored the edict. 2021 was a record year for gun-related violence in New Zealand, and the country’s police union is now calling for officers to be armed, as occurs in the United States, a position that was unheard of in New Zealand before the Christchurch shootings.
The second measure, tightening New Zealand’s hate speech laws, has proved similarly unsuccessful. In June 2021, Justice Minister Kris Faafoi released a discussion document outlining proposed new laws against hatred and discrimination, targeting “the types of communication that seek to spread and entrench feelings of intolerance, prejudice, and hatred against groups in our society.” The proposals would have strengthened human rights law to include prohibitions on hatred against religious groups and LGBTQ+ communities, alongside the existing protected grounds of color, race, and ethnic origin.
The most controversial proposal sought to establish a new criminal offense of “intentionally inciting, stirring up or normalizing hatred against any specific group of people, by being threatening, abusive or violent, including by inciting violence.” In essence, speech that Christchurch Call commercial partners are encouraged to take down (and much more besides) would, if uttered in New Zealand, attract jail sentences of up to three years or fines of up to $50,000.
While their intentions may have been laudable, both Ardern and Faafoi fell victim to exactly the dilemmas online platform operators themselves face daily: it is difficult to specify precisely what constitutes hate speech, and therefore difficult to draft suitable laws, define workable processes for identifying harmful instances, and enforce those laws effectively. When questioned on national television, Faafoi was unable to distinguish speech that could be expressed freely from speech that could land the speaker in prison. Asked whether younger people expressing disdain toward older generations for inflating housing prices could face legal consequences, Faafoi responded, “If it’s an opinion on a particular group . . . it depends on what you say. If your intent is to incite hatred against them, then potentially.” (This is topical in New Zealand because a young member of parliament once used the now-potentially-hateful phrase “OK, boomer” in response to heckling during a speech.)
Similarly, Ardern was found wanting in her grasp of the details when, in another interview, she struggled to identify what was actually in the proposal. Act Party (opposition) leader David Seymour responded that the “hate speech laws would create a divided and hateful society where cancel culture would spiral out of control.” Amid such confusion and opposition, and somewhat embarrassingly for Ardern and her government, the proposal was sent back to the drawing board last week, with Faafoi conceding the issue was “fraught.”
If this saga illustrates New Zealand’s leadership, it raises the question of whether there is any hope of the Christchurch Call succeeding in this admittedly “fraught” arena.