Image credit: Dan Olson
Design that doesn’t enforce its own rules
Imagine going shopping at a huge mall. A sign near the door informs you that theft will result in a lifetime ban from the premises. Reasonable, understandable policy for a mall to have.
You travel through the mall and take a look through a few shops. In each shop, you notice people grabbing merchandise and walking through the door without paying. Multiple people — dozens, even. You tell a security guard; they tell you to call the mall’s owners. Their number is right next to the sign you read at the door. They don’t pick up, so you leave a voicemail.
A text message hits your phone two days later. The owners say they dealt with the situation. Feeling like you did a good deed, you head back to the mall to treat yourself to a gift. Despite the notice on the sign near the door, everyone you reported to the owners for theft has returned. They’re grabbing merchandise from the same shops and walking through the same doors without paying.
Knowing the owners don’t seem to care, would you decide it’s ok to join in and grab some merchandise?
What you’ve just witnessed is the result of Reddit running a mall.
The Wild West of the Web is alive and well on Reddit. Users are left to roam free so long as they agree to follow a very small list of site-wide rules. One would seemingly be reprimanded or have their account suspended for stepping out of these very thin lines. Yet enforcement is notoriously inconsistent, with users having entire sections of the site — known as subreddits — dedicated to stalking their every move across Reddit and other sites while the administrators refuse to take action.
Stalking can take place on a macro level, too; certain subreddits are created with the purpose of tracking and documenting other subreddits. /r/AgainstMensRights was designed to keep track of the frequently misogynistic and violent rhetoric found on /r/MensRights, a subreddit founded to advocate for men’s issues. On the flip side, /r/SRSsucks was created by users upset with /r/ShitRedditSays, a subreddit that highlights racism, sexism, bigotry, and/or pedophilia in Reddit comments that have received at least 20 upvotes (similar to Facebook’s ‘like’ button).
Upvotes, to some, are the key metric to measure anything on Reddit. /r/ShitRedditSays marks how many upvotes were given to terrible comments as a way of indicating how much the Reddit community supports regressive ideas. The comment with the most upvotes is taken as the most reliable in many threads, particularly with advice subreddits like /r/ExplainLikeImFive.
Submissions will also be considered reliable based on upvotes. News-oriented subreddits like /r/News, /r/WorldNews, and /r/Politics face frequent controversy as unreliable or otherwise false articles receive thousands of upvotes while the top comment inside the thread points out why the article is unreliable or false. Moderators of these subreddits sometimes allow these submissions to remain on their Front Page even after multiple people have disproven the article’s claims.
Users sometimes combat these issues with downvotes. Originally designed to reduce spam and low-effort comments that didn’t contribute to the discussion, downvotes quickly turned into a ‘dislike’ button for content and comments. They’ve also been utilized as a weapon to make the website harder to use; receiving enough overall downvotes in a subreddit prevents a user from commenting more than once every nine minutes. Certainly a useful tool to prevent spam accounts from flooding a comment thread, and certainly a useful tool to keep someone from replying to multiple comments in a timely manner.
Where Reddit’s design disagrees with Reddit’s message
Voting on comments and submissions carries a lot of weight in how they’re received — and using these votes to target users and communities can cause a lot of disruption.
Reddit officially refers to the targeting of votes as ‘vote manipulation’. A user won’t run afoul of the rules for simply posting a link on Twitter to Reddit content, or a link in one subreddit to content from another; the internet is nought but a series of links, so why punish someone for using it? What draws negative attention from Reddit’s admins is posting that link and asking or explicitly encouraging users to vote.
So how do you get around this obstacle? You make the request an implication.
Think back to our discussion on stalking across Reddit. Several communities have been created for this purpose. Not all of them imply that you should engage in vote manipulation, but a significant number serve no other purpose. /r/AngryBlackLadies tracks a single woman of color across the site, with most comments they link seeing an influx of downvotes after they’ve arrived. /r/AMRsucks — a subreddit that is against a subreddit that is against a subreddit — tracks regular users of /r/AgainstMensRights and the other subreddits they visit, with the same influx of downvotes occurring.
We can assume that both subreddits have resulted in multiple users having their accounts suspended. Administrators, after all, can see referral information for every vote and page visit on the entire site. If you came from Twitter and voted on something, the admins can see what tweet sent you there. If you came from another subreddit and voted on something, the admins can see what submission or comment linked you there. Even if you don’t vote, administrators can see the origin point for every bit of traffic.
The problem is that any action they take is reactive. Until a user contacts the admins to report suspicious voting activity, vote manipulation will continue to affect a user or community unchecked. Contacting the admins is simple enough, but many reports go unanswered for up to 24 hours.
Reddit could be proactive. Automated systems have been in place for years to remove submissions from domains known for vote manipulation, so Reddit already has the capability to detect and act on specific URLs. Creating a system to disable voting when someone arrives at a subreddit through a link from another should be well within Reddit’s power. Such a system could even be applied when referral traffic indicates someone came from outside the site.
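The check such a system would need is simple. The sketch below is purely illustrative — nothing here reflects Reddit’s actual internals, and the host name and helper functions are invented — but it shows how a server could compare the referrer’s subreddit against the target subreddit and refuse to count votes that arrive through a cross-subreddit or off-site link:

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical stand-in for the site's own domain.
INTERNAL_HOST = "reddit.example"

def subreddit_of(path: str) -> Optional[str]:
    """Extract the subreddit name from a path like /r/Pics/comments/abc."""
    parts = path.strip("/").split("/")
    if len(parts) >= 2 and parts[0] == "r":
        return parts[1].lower()
    return None

def voting_allowed(referer: Optional[str], target_path: str) -> bool:
    """Count a vote only when the visitor arrived from within the same
    subreddit. External referrers (Twitter, other sites) and links from
    other subreddits disable voting; direct visits with no referrer are
    allowed, which is itself a policy choice."""
    if not referer:
        return True
    ref = urlparse(referer)
    if ref.netloc and ref.netloc != INTERNAL_HOST:
        return False  # arrived from outside the site entirely
    return subreddit_of(ref.path) == subreddit_of(target_path)
```

Since the admins already see the origin point for every vote, the same referral data driving their after-the-fact suspensions could drive a gate like this one before the vote is ever recorded.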
An additional step could be turning permalinks for comments into specific URLs that link people to a version of the page where voting is disabled. Submissions currently become archived after six months to prevent new comments and all voting. A recent tool given to moderators was the ability to lock threads, which prevents new comments but doesn’t affect voting. Turning permalinks into a URL which prevents voting would only require using code that currently exists within the site.
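Mechanically, the permalink change amounts to tagging the URL and having the renderer honor the tag, much as archived threads already render without functional vote arrows. A minimal sketch, assuming an invented query parameter (Reddit’s real URL scheme uses no such flag):

```python
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode

def no_vote_permalink(url: str) -> str:
    """Rewrite a comment permalink to carry a flag the page renderer
    can check in order to hide vote arrows. The 'voting=disabled'
    parameter name is hypothetical."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query["voting"] = ["disabled"]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

def should_render_votes(query_string: str) -> bool:
    """The renderer keeps voting enabled unless the flag is present."""
    return parse_qs(query_string).get("voting") != ["disabled"]
```

Every permalink generated by the ‘permalink’ button would pass through something like `no_vote_permalink`, so anyone following a shared link lands on a read-only view while normal in-subreddit navigation is unaffected.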
Users have requested these changes in one form or another for years. Confusingly, the administrators care enough to suspend some of the accounts engaging in vote manipulation, but have done nothing noticeable to prevent the methods those accounts use. Reddit’s design even seems to encourage vote manipulation thanks to a handy tab found on a significant number of submissions.
When a link is submitted to Reddit, all other subreddits where that link was submitted will appear under the ‘other discussions’ tab. The idea is that a user can check how other subreddits have discussed the link in question and potentially learn something new. But the stalking aspect perfectly plays into one of the frequent uses of ‘other discussions’.
Take two diametrically opposed subreddits who both care about the same issues: /r/TacoPizzaRules and /r/TacoPizzaSucks. TPS has a larger userbase than TPR due to the lack of awareness surrounding the deliciousness of taco pizza. When a TPS user sees that a TPR user submitted an article about a new restaurant featuring taco pizza, the TPS user submits the same article to /r/TacoPizzaSucks. Suddenly, the submission in /r/TacoPizzaRules is being flooded with downvotes and angry comments. Nobody linked directly to that submission because nobody had to; the TPS user knew /r/TacoPizzaSucks would find their way through the ‘other discussions’ tab and cause all the trouble they could.
The possibility for civil — maybe even beneficial — discussion does exist with the ‘other discussions’ tab. Perhaps a comment in one subreddit contains a great explanation of a controversial topic that a user would never see in their own subreddits, or a fantastic recipe for taco pizza. Without having access to Reddit’s internal data, we can’t know for sure how often the tab is used in a positive or neutral manner.
We can, however, know that the abuse exists and Reddit isn’t keen on showing evidence they’re trying to reduce it. A recent example involves the Gregory Alan Elliott verdict and /r/GamerGhazi:
When the verdict was announced, a link to TheStar.com’s coverage was posted to the subreddit. Multiple other subreddits with an interest in the case and opposition to /r/GamerGhazi posted the same link to their own subreddits. Immediately, comments from the subreddit’s regular users were being swarmed with dozens of downvotes while comments from users who had never been in the subreddit before were seeing dozens of upvotes. Comments in /r/GamerGhazi rarely see more than 20 votes in total.
The moderators of /r/GamerGhazi moved to lock that thread, remove it, and resubmit the article in a text post. But the damage had already been done, and everyone who was disrupting the previous thread jumped into the new thread. Some comments received over 40 downvotes and the thread was ultimately locked when the moderators decided nothing of value was going to come out of it.
Reporting the threads to the administrators resulted in a response hours later, stating they were looking into the situation. None of the accounts which commented in either thread have been suspended, even when their history made it clear they came from another subreddit and had no history in /r/GamerGhazi.
Waiting for solutions
Such is the story of moderating on Reddit. Vote manipulation and disruption can occur daily in your subreddit with little communication from the administrators that those responsible have been reprimanded or suspended. Requests for better tools have only recently been answered but remain ineffective when moderators have access to none of the information Reddit collects. A suspended user can continue making new accounts to evade any ban placed against them. And reporting users for posting personal information or engaging in vote manipulation means you’re lucky to get a response in less than 12 hours.
Waiting wouldn’t be quite as trying if the avenues of abuse most frequently used weren’t built directly into the site. Every extra click required to engage in abuse will deter a significant portion of abusers, which benefits administrators and moderators alike. Until Reddit makes it harder to disrupt another subreddit than it is to post a submission, faith will continue to be lost in the teams powering the site.