Three simple ways Twitter can help reduce abuse


The ever-flowing spigot of proud, neo-Nazi bigots was turned down slightly last month after one of the several lighthouses who strive to add Grand Dragon to their résumés was decommissioned from further signaling hate brigades filled with thousands of paddleboats who imagine themselves to be frigates. Other lighthouses still remain, fearful that their shining beacons promising a reboot of the American Civil War, one where the slave states win with cries of Lyin’ Lincoln and Based Lee, will be the next ones extinguished thanks to Political Correctness. Preventing such a tragedy has involved increased signaling of paddleboat brigades to inform people that islands are censorship of water and impede the right to free sailing.

In the case of the posturing paddleboats, the absence of cannons does little to reduce the stress felt by the fisher in a shack, watching the boats descend upon them waving flags of epithets and gore, cheerfully expressing an intention to murder and sexually assault. All the fisher has been provided by the local government are some blackout curtains, a broken deadbolt, and a flare gun that takes 12 hours to alert the city guard, provided they even view the invaders as a threat and respond appropriately.

It doesn’t matter how wealthy or well-known the particular fisher is; even if millions know the fisher’s name, the best they can do is send messages of support and pictures of cute animals. Maybe they’ll even try to get the city guard’s attention and tell them what’s happening, but the guard is still the only one who decides whether to do their job.

And this guard is notorious for caring very little about the fishers that the city relies on for a functioning economy.

The guards of the city of Twitter don’t necessarily make the rules; they’re just the enforcers. Even Jack Dorsey, the pseudo-mayor of Twitter, likely has little influence beyond signing off on proposals or changes. He’s stepped in at least a couple of times, but usually only after a celebrity or public figure has endured hours of harassment and threats. Those further down the Hierarchy of Follower Count have yet to be graced with his presence during their times of torment.

What this means is we don’t know specifically who to hold responsible for the consistent failures of Twitter’s support staff. For this reason, we often aim our complaints directly at Jack in the hope that awareness at the top will trickle down to the appropriate parts of the organization and result in much-needed changes; so far, this hasn’t been very successful. Hopefully, though, continued pressure to improve the support system will push someone, somewhere in the company to take charge and shake things up.

So today, I’m going to give that person three simple fixes they can implement to make Twitter more tolerable until the day it’s inevitably turned into a wasteland of bots tweeting Markov-chain replies at other bots.

1. Reduce the need for a private account

Not all users like to talk publicly about serious personal matters. Others just don’t like making themselves very public at all. Twitter may have originally presented itself as a large open space for everyone to shout about things, but modern usage shows this framing doesn’t hold true for a substantial number of users. Many people have found a close-knit community within various Twitter circles and don’t particularly want to engage with the entire world every time they type.

The current solution: Have a private account. 

Maybe this is your main account, maybe it’s a secondary account you only allow certain people to follow, maybe you changed your account to private for a specific period of time. Whichever it is, you’ve used this feature because there are situations where you don’t want to talk with people you don’t know. Sometimes people use these accounts because they belong to a frequently marginalised group, such as transgender people or people of colour, for whom tweeting publicly about the subjects they care about can result in the paddleboats swarming.

Why this fails: People want to talk to other people.

There might be subjects or discussions that users on private accounts want to lend their voice to, but their words are only going to be seen by people they’ve allowed to follow them. Even if your private account replies to a tweet from another account, that account cannot see it unless they follow you.

Making sure that person can see your reply means switching your account back to public. And when you do that, every tweet you’ve ever tweeted immediately becomes public. Say you switched your account to private so you could discuss something sensitive you only wanted your trusted followers to know: if each of those tweets isn’t deleted before you turn back to public, they’re now visible to the world and cached by search engine crawlers like Google’s.

Deleting those tweets isn’t a simple matter, either. Twitter offers no built-in way of mass-deleting tweets, so you could be looking at deleting hundreds of tweets one by one. Not only is this a massive investment of time, but you won’t know you’ve missed one until someone else points it out.
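This gap is why people end up handing full account access to third-party deletion tools. As a rough sketch of what such a tool does under the hood, this is the kind of script you’d otherwise have to write yourself with the tweepy library; the credential strings are placeholders, and the standard timeline API only reaches back roughly 3,200 tweets, so even this can miss older ones:

```python
# A minimal sketch of a do-it-yourself mass-delete using the tweepy
# library. The credential strings are placeholders you'd fill in with
# keys from a registered Twitter app.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Page back through your own timeline and delete every tweet found.
# The timeline endpoint only returns roughly your 3,200 most recent
# tweets, so anything older than that stays untouched.
for status in tweepy.Cursor(api.user_timeline).items():
    api.destroy_status(status.id)
    print("Deleted tweet", status.id)
```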

Easy fix: Let people adjust privacy on individual tweets and account features.

The code is already there, seeing as how turning your account private involves simply flipping a switch in your settings. When that switch is flipped, every single tweet from your account becomes private, and people can no longer view who follows you or who you follow.

Why not add a simple toggle on the tweet screen so someone can choose whether a tweet will be publicly available or seen only by their followers? Why not put a toggle in the settings that hides your followers and following lists?

The former would give people the ability to communicate with others who share their interests but aren’t already following them, while the latter reduces the ability of paddleboats to find new people to swarm. I’m certainly not the first person to bring this up; people have been asking for it for years. What we’ve received instead has been the changing of Favorites to Likes, tagging people in photos, and the Moments feature that few people seem to find useful in any way.
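Conceptually, the change is small: visibility moves from the account to the individual tweet. Below is a minimal sketch of what that check might look like; every name in it is hypothetical, since Twitter’s internals aren’t public.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical model: instead of one account-wide switch, each tweet
# carries its own visibility flag.
class Visibility(Enum):
    PUBLIC = "public"        # anyone, logged in or not
    FOLLOWERS = "followers"  # only approved followers, like a private account

@dataclass
class Tweet:
    author: str
    text: str
    visibility: Visibility = Visibility.PUBLIC

def can_view(tweet: Tweet, viewer: str, followers: set[str]) -> bool:
    """Decide whether `viewer` may see `tweet`, given the set of
    accounts approved to follow the author."""
    if tweet.visibility is Visibility.PUBLIC:
        return True
    # A followers-only tweet behaves like today's private accounts,
    # scoped to a single tweet instead of an entire history.
    return viewer == tweet.author or viewer in followers
```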

One feature Twitter implemented some time ago but has left utterly underutilized is Lists. Lists let you add other users to custom collections of your own creation, which is very useful for narrowing down the specific content you want to view. For example, you can make a list that contains only the accounts of news outlets, so you can see news updates without the surrounding noise of the other people you follow.

Why not incorporate that feature into the rest of your account? Letting you pick one of your lists as the audience for a particular tweet seems fairly straightforward.
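Extending the hypothetical sketch above, a list-scoped tweet is just one more visibility level, with the list’s membership as the audience:

```python
from enum import Enum

# Same hypothetical model as before, with one extra level so a list
# doubles as a per-tweet audience.
class Visibility(Enum):
    PUBLIC = "public"
    FOLLOWERS = "followers"
    LIST = "list"  # only members of a list chosen at compose time

def can_view(tweet, viewer: str, followers: set[str], list_members: set[str]) -> bool:
    if tweet.visibility is Visibility.PUBLIC or viewer == tweet.author:
        return True
    if tweet.visibility is Visibility.FOLLOWERS:
        return viewer in followers
    # Visibility.LIST: reuse the Lists feature as the tweet's audience.
    return viewer in list_members
```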

2. Make the reporting system more comprehensive

You see one of your friends tweeting about being harassed, including screenshots of the horrible tweets they’re receiving. After telling them you’ll help report this person, you go to the harasser’s user page and look at the tweets they’re sending. Dozens upon dozens of tweets, all directed at your friend and ranging from vile to outright abusive.

The current solution: Report the tweet under a specific category.

Twitter support has multiple categories when you try to file a report. They also change based on context: reporting an account gives slightly different categories than reporting an individual tweet, and reporting that a tweet is directed at you provides a space for more information than reporting a tweet directed at someone else.

Recently, Twitter updated the system so that a single report can include multiple tweets, depending on the category and context you select.

Why this fails: It’s an absolutely confusing mess.

The person harassing your friend told them to kill themselves with a very graphic description of how. When you go to file the report, the system gives you three prompts. The most obvious choice is the one marked ‘abusive or harmful’. After selecting this option, you’re given five categories of abuse and harm:

- Disrespectful or offensive
- Includes private information
- Includes targeted harassment
- Threatening violence or physical harm
- This person might be contemplating suicide or self-harm

Which one would you choose? The closest option is threatening violence or physical harm, so you pick it and are asked whether the tweet is directed at you or someone else. Both options let you select up to 5 tweets to add to the report (useful for serial abusers), but choosing the former gives you a text box for additional information while the latter just thanks you for filing a report.

Further, you’ll only receive an email confirming your report if you indicated that the tweet was directed at you. Filing reports when someone else is experiencing harassment leaves you completely in the dark about the outcome. You won’t know whether the harasser was dealt with unless their account gets outright suspended, and you’ll only notice that if you happen to visit their account during the suspension.

As for your friend, let’s say they filed the same report on the same tweets and indicated that they were the person being abused. They’ll receive an email with a case number, and then an email with Twitter’s determination anywhere from 24 hours to several days later.

And, for this specific report, they’ll be told that the tweets didn’t violate the rules, so the harasser faces no consequences.

Easy fix: Specific categories for the ways abuse manifests, and follow-ups for all reports.

A great example of a comprehensive system for reporting content can be found on Facebook, where reporting a post or comment provides dozens of options to narrow down why you’re reporting it. The benefit of this system is that rule violations are laid out clearly, so you know whether or not the report will actually be considered by the support staff. Now, this isn’t to say that Facebook has a great track record; my own experience tells me that people can be blatantly hostile towards other races and still be considered within the rules.

But, at the very least, Facebook’s system means I know what a rule violation looks like and whether or not I’m filing an erroneous report. There’s also a follow-up within a few hours regardless of whether the content I reported was directed at me or others.

Under Twitter’s current system, I have no idea if someone calling for the murder of a specific race, gender, or sexuality is against the rules. I can file a hundred reports on tweets like that, choosing the ‘violence or physical harm’ category, but I will never be informed whether any action was taken. I have no idea if telling someone repeatedly to commit suicide is against the rules, because the ‘violence or physical harm’ category doesn’t quite fit and reports could be rejected for missing the criteria. I have no idea what specifically counts as ‘targeted harassment’, because no example information is provided the way it is in Facebook’s system.
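To make the fix concrete, here’s a sketch of what specific categories with plain-language examples, plus a guaranteed follow-up for every report, could look like. Every name and description below is invented for illustration; none of it is Twitter’s actual rulebook.

```python
import uuid

# A hypothetical report taxonomy in the spirit of Facebook's: each
# category carries a plain-language example, so the reporter knows
# whether the rule covers what they saw. All wording here is invented.
REPORT_CATEGORIES = {
    "violent_threat": "Threatens to kill or physically harm someone.",
    "incitement_to_self_harm": "Tells someone to kill or injure themselves.",
    "hateful_conduct": "Calls for violence against a race, gender, "
                       "religion, or sexuality.",
    "targeted_harassment": "Repeatedly tweets at someone who has asked "
                           "them to stop, or directs others to do so.",
    "private_information": "Posts someone's address, phone number, or "
                           "other private details.",
}

def file_report(tweet_ids: list[str], category: str, details: str = "") -> str:
    """File a report and always return a case number, whether or not
    the reporter is the person being targeted."""
    if category not in REPORT_CATEGORIES:
        raise ValueError("unknown category: " + category)
    case_number = "case-" + uuid.uuid4().hex[:8]
    # ...queue the report for review, email the reporter an
    # acknowledgement now, and a determination once one is made...
    return case_number
```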

This wouldn't necessarily change the poor handling of reports by the support staff (which could fill an article on its own), but at least would allow us better documentation of when the staff fails to enforce its own rules.

3. Give us worthwhile filters

Curating your feed is essential to enjoying the constant stream of information Twitter provides. Curating your notifications is a bit more difficult.

The current solution: Blocking and muting accounts.

When you come across accounts producing content you don’t like, it only takes a couple of clicks to block or mute them. Muting prevents you from getting notifications if they tweet at you, while blocking prevents them from seeing your account and tweets whenever they’re logged in.

Why this fails: It’s never just one person.

Blocking and muting only work on the accounts you block and mute. Obviously.

A person you block can get around that block by just opening a private window or tab and going to your profile. From there, it takes only a second to copy a link to one of your tweets or take a screenshot and post it from their own account. Because you have them blocked, you won’t be notified when they link your tweet or tag your username in a tweet.

What you will be notified of is their followers, a handful to a hundred of whom might show up in an instant. These will typically be accounts that were made recently, have fewer than 100 followers, and serve no purpose beyond attacking people at the signal of larger accounts. Blocking these accounts becomes a hassle as more and more flood into your mentions. The very act of blocking only stems the flow of harassment from that particular incident, doing little, if anything, to prevent the next wave that comes from another account with a sizable following.

Easy fix: The filters already available for verified accounts.

Twitter recently announced that more people will be eligible for a verified account, which comes with a couple of very helpful perks that have no reason to be exclusive.

The first perk is the ability to turn off mentions from any non-verified account or from accounts that you don’t follow. Both would be incredibly useful for cutting down the swarms that occur whenever you catch the eye of an abuser. And while the former might not be as helpful right now, since verified accounts are so sparse, changing the verification system so that anyone who submits specific information can be verified would make it far more relevant.

The second perk seems to have been introduced recently. It’s called the quality filter, and it immediately filters out notifications that contain threats, offensive or abusive language, or come from accounts deemed suspicious. Obviously, this feature wouldn’t eradicate all abuse, but it would cut it down to a manageable size that traditional blocking and muting can take care of.
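Going by nothing more than the patterns described above (brand-new accounts, tiny follower counts, abusive language), a first pass at both perks might look like the sketch below. The field names and thresholds are guesses; the real quality filter’s rules aren’t public.

```python
from datetime import datetime, timedelta, timezone

# Invented thresholds; Twitter has never published the real ones.
MIN_ACCOUNT_AGE = timedelta(days=30)
MIN_FOLLOWERS = 100
ABUSIVE_TERMS = {"example_slur", "example_threat"}  # stands in for a maintained lexicon

def show_mention(mention, viewer) -> bool:
    """Return True if the mention should reach the viewer's notifications."""
    sender = mention.sender
    # Perk 1: optional hard filters on who may appear at all.
    if viewer.verified_mentions_only and not sender.verified:
        return False
    if viewer.followed_mentions_only and sender.id not in viewer.following:
        return False
    # Perk 2, the quality filter: drop likely throwaway harassment
    # accounts and anything containing known abusive language.
    if datetime.now(timezone.utc) - sender.created_at < MIN_ACCOUNT_AGE:
        return False
    if sender.follower_count < MIN_FOLLOWERS:
        return False
    if any(term in mention.text.lower() for term in ABUSIVE_TERMS):
        return False
    return True
```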

Why these critical features are withheld from everyday users seems completely arbitrary. Getting verified, while less exclusive, still has unknown qualifications that the vast majority of people experiencing abuse will never meet. The effort it would take to make these features universal is trivial, so why hold back? Surely the team that implements special emojis for hashtags could spare a bit of time to make the service immediately more friendly for millions of people.