Modern Harassment

Online harassment is nothing new; only in the last few years has it become more than the occasional evening news story. A contributing factor to its lasting presence is the complacency of internet users. We advise each other to ignore it, block it, and walk away. "That's just how the internet is" goes a common refrain across the web.
Those familiar with the tactics of harassers know blocking and ignoring rarely solve the problem. You can set up a new account within seconds on most sites, successfully evading any block system. Ignoring, while successful against the common troll, only encourages a dedicated abuser to try harder for your attention.
When I discuss harassment with my mother, she laughs. "Internet bullying isn't serious," she tells me. "Sure, it sucks, but just turn off your computer! Go outside!"
Walking away isn't a viable option for everyone. The rise of internet commerce brought unprecedented marketplaces, which countless people now rely on as a sole source of income. Content creators are often required to interact with the public if they have any hope of succeeding. This goes double for people whose livelihood lies in selling products, whether physical or digital.
A lucky few, relative to the greater internet population, have the benefit of platform-specific filters. One such filter is the Verified Account checkmark we discussed in a previous article. However, these checkmarks are handed out sparingly, with little clarification of Twitter's criteria for selecting recipients.
Accounts on some services may also be set to private, and many targets of harassment end up resorting to this repeatedly. Again, it doesn't always solve the problem; Twitter users can still tag a private account, for example, even though they cannot see or reply to its tweets. Preventing these alerts requires changing your notification settings to 'People you follow', further reducing your ability to engage with your audience.
Imbalance of Power
Herein lies the issue with how platforms handle online harassment:
If you’re the target, every solution involves reducing your online presence. Eventually, like Anne Wheaton, your only choice is to leave the platform altogether.
If you're the harasser, just keep doing what you're doing! Blocking and muting do little to impede your campaign; in fact, they strengthen it. A block reads as a sign of weakness, and it provides an easy screencap to be passed around, encouraging others to join in until the target finally deletes their account.
This is not a sustainable system.
A system in which the victim must put in more effort than the perpetrator is fundamentally flawed. When the only method of preventing harassment on a platform is to never use it, we need to ask what went wrong. What happened to the idea of strangers sharing their experiences and knowledge with people around the globe? Why do we force the victims of harassment to wall themselves off from so many enjoyable avenues while the harassers roam free?
The last two years brought attempts to shift the power back to victims. While victims still go into seclusion at times, they also alert others to the individuals driving them there. Users report harassers in the hope that the platform's support team will take action. One result of this pressure was Women, Action, & the Media's collaboration with Twitter. But over a year later, WAM! believes Twitter has done little to address the problems they found.
Reddit attempted to respond to pressure as well. When Steve Huffman returned to the CEO position after the resignation of Ellen Pao, he introduced the concept of reclassifying subreddits. The new classification, a 'quarantined' subreddit, was meant for controversial subreddits that weren't breaking Reddit's content rules. People took issue with this idea; some felt it was akin to censorship, while others were upset that subreddits founded on racism would be allowed to stay. While the most notorious of those subreddits was later removed from the platform entirely, many more remained.
While a significant portion of harassment starts and ends on a single platform, another breed crosses platforms and carries the same power imbalance in reporting. The e-sports reporter Richard Lewis is infamous for this tactic: if he felt an article of his wasn't well received by Reddit's League of Legends community, he'd tweet a link to the subreddit and his followers would wreak havoc. Repeated abuse of this power resulted in his content and account being banned from the subreddit. (Side note: Richard was later banned from Dreamhack, a digital festival with an e-sports competition, after strangling a player backstage. He now works for Breitbart.)
But Reddit isn’t immune from being used in the exact same way.
On November 14th, multiple right-wing outlets and personalities attempted to shame the supporters of a protest at the University of Missouri. The attacks in Paris the previous day were used to downplay any grievances from the Mizzou protestors. Notable figures and general supporters responded by pointing out that there is no limit on the amount of suffering one can care about.
One particular subreddit, known as Kotaku In Action, linked to five different people of color expressing their feelings on coverage of Mizzou and Paris. The resulting harassment caused one woman to delete her account and a couple of others to go private.
The same subreddit linked to another person’s tweet two days later for criticizing Richard Lewis’ involvement with the Game Awards. Again, the user was forced to set their tweets to private after a flood of harassment.
Reporting cross-platform harassment highlights another fundamental flaw in current support systems: they demand evidence beyond a user's capability to produce.
I sent an email to Reddit's administrators on November 17th after noticing the incitements to harassment from the 14th and 16th still hadn't been removed. An hour and a half later, I was told that the individuals would need to contact Reddit directly if they were being harassed.
After I pointed out the difficulty of contacting these users and asking them to contact Reddit, the next response told me the users would need to provide proof that their harassment was coming from the links on Reddit. Without an explicit call to action or referral data showing the harassment originated on Reddit, nothing could be done.
Right to Free Speech
Who are these policies really protecting? The message sent when platforms refuse to act is: "The right to comment and have an account outweighs the right to reasonable safety and well-being." There is no reason a user on Twitter should be expected to have access to referral metrics proving that their harassment is coming from a Reddit link, and vice versa.
But this problem, and others discussed here, can be traced back to a polarizing debate regarding speech on the internet.
A common view expressed by groups such as GamerGate is that the idea of free speech overrides any legal definitions and protections laid out by governing bodies. Under this view, no entity can enforce rules that result in an individual losing access to something without being accused of censorship. Somehow, this also includes 'self-censorship': the act of an individual or group editing their own work for any reason. A game developer responding to criticism by removing offending material is on par with a government silencing an entire race through force. A platform suspending an account for threatening violence is akin to NPR refusing to allow any guests from a specific political party.
This belief, and the groups who propagate it, wouldn't be of much note on their own. They become an issue when people with large platforms and positions of power seem to be in agreement. Steve Huffman has made it clear that he will support this view until it is no longer economically viable to do so. Jack Dorsey, CEO of Twitter, is following suit; after all, his company's new campaign against harassment in December amounted to rearranging paragraphs in its Terms of Service.
What's surprising is that neither of these men seems to take cues from Mark Zuckerberg. He's found a way to reduce harassment and increase security without the hemorrhaging of money that frightens Steve and Jack. Facebook had a net income of $2.94 billion in 2014 and four times as many monthly users as Twitter in 2015. Just today, it was announced that Facebook will be fighting hate speech in countries experiencing an influx of refugees. Clearly, one can maintain a profit without a significant share of it coming from harassment.
The difference in their support systems is stark, too. I've reported death threats and disclosures of personal information to Reddit and seen an average response time of 12 hours, while only a handful of my reports on Twitter have received a response at all. Facebook's system has been rather reliable, with every report receiving a response in under two hours, regardless of whether it was acted upon.
That's not to say Facebook's system is perfect. Friends and acquaintances have shared tales of transphobic hate speech and other acts of bigotry being allowed to stay up after being reported. That is absolutely not okay, and it shows there is still a ways to go before users can feel safe.
At the least, though, it shows that some companies are making attempts to improve. The question that remains is:
Will Twitter and Reddit follow?
See how the imbalance of power is reflected offline as well in the second part of this series.