Instagram failed to act on the vast majority of abusive messages sent to three women in public-facing positions, according to a report released Wednesday.
The Center for Countering Digital Hate (CCDH) report analyzed 8,717 direct messages, or DMs, sent to five participants with a combined following of 4.8 million on the platform, examining how often Instagram responded to abusive messages or took action against the accounts behind them.
Researchers were given access to the direct messages sent to Amber Heard, an actress and a United Nations human rights champion; Rachel Riley, a British daytime television presenter; Jamie Klingler, a co-founder of the activist group Reclaim These Streets; Bryony Gordon, a journalist and mental health campaigner; and Sharan Dhaliwal, the founder of South Asian culture magazine Burnt Roti.
Researchers found that 1 in 15 of the 8,717 direct messages sent to the participants broke Instagram’s rules on abuse and harassment.
Additionally, Instagram failed to act on 90 percent of abusive direct messages sent to Heard, Riley and Dhaliwal that were reported using the platform’s own tools, according to the report. Researchers used only messages sent to those three participants for the analysis of Instagram’s failure to act, since Klingler and Gordon routinely delete abusive messages.
Researchers logged abuse sent by 253 accounts to the three women and reported it using the Instagram app or website. An audit of those accounts found that 227 remained active at least a month after the reports were filed, according to the report.
An audit carried out 48 hours after the reports were filed found an even lower rate of action, with 99.6 percent of the accounts remaining active, according to the report.
“This puts some numbers around something that we all know, which is that the power dynamics of social media — these have been turned into digital dark alleys that people feel uncomfortable going into, I don’t look at my own DMs for the same reason, but they’re just horrible places to be,” said CCDH founder and CEO Imran Ahmed.
“But if you’re transacting business, if you’re trying to build a personal brand, it means that you create a barrier to entry for women into those spaces. And that is inherently problematic because it means it’s institutional misogyny, it creates a barrier to entry for women that wouldn’t be there for men in that they have to suffer the abuse of a small coterie of really nasty individuals.”
Ahmed said the research shows that Instagram is failing to follow through on its pledge to take action against abusive messages. The company said in February 2021 it would temporarily bar a user sending DMs that broke the platform’s rules from sending more messages and would disable the account of a user who “continues to send violating messages.”
“This is a single study in a space with a tiny amount of actual quantitative research. We realize that this is a data point in a conversation that needs to be had,” Ahmed said.
He also noted the caveat that the women who gave researchers access to their direct messages are public figures rather than typical Instagram users.
“We are in a sort of a cavern, and we’re shining a torch light, but that torch light was well directed and has found something which should be disturbing and worrying to Facebook. Their reaction should be, ‘This is a problem, if only 1 in 10 times that someone sends us a death threat we take action. What’s gone wrong?’ Their reaction should not be — and it would be insanely negligent of them, grossly negligent — if their reaction is, ‘Well, nothing we could do about it,’” Ahmed said.
A spokesperson for Meta, the parent company of Facebook and Instagram, did not respond to a request for comment.
The report found that higher percentages of audio, image and video direct messages included abusive content and that the direct messages were being used to send image-based sexual abuse.
Researchers found 66 examples of users sending the women pornography without consent, 50 examples of men sharing images or videos of their genitals and nine examples of pornography edited to feature someone else’s face, known as “deepfake porn,” including in some cases the receiver of the message.
The CCDH is urging Meta to create a more effective complaint system for users on Instagram, including a way to report abusive voice notes, which researchers said was not available during the course of the research.
The report also calls for Instagram to give users access to all of their data, including direct messages from blocked accounts.
The report further calls on lawmakers to take action to help mitigate the abuse on the platform and create a regulatory framework for better enforcement.
“Fundamentally, Big Tech cannot be trusted to self-regulate or follow a voluntary code without teeth. The commitments that they’ve already made to users, the public and lawmakers have been broken. The harm they are perpetuating is real and widespread,” the report states.