People are really mad at YouTube right now.
See, over the last few days, several creators on the service have received notice that their videos do not follow the proper guidelines to be monetized. When a video is monetized, that means ads will appear whenever the video is played. Creators who monetize their videos receive a portion of the money advertisers paid to Google for the advertising space.
There has been confusion regarding why and when this happened. According to information provided to The Verge, there has not been any recent change to monetization policies. What has changed is the method of notification when a video is determined to not comply with YouTube policies.
Things have actually been made more user-friendly for creators; before, you’d have to dig through a specific part of the analytics page to learn a video was no longer supported with ads. YouTube’s notification change now provides email notifications and a visible icon to alert creators about monetization being removed. The specific policies a video must follow for a creator to make money were implemented roughly one year before the notification changes.
Theoretically—and according to The Verge’s information from a YouTube spokesperson—the proactive notifications now implemented by YouTube should increase the number of videos that can be monetized. Creators will now be notified immediately when a video has its ads pulled, so they can appeal the decision right away. Before these changes were implemented, a video could’ve been running without ads for months—unbeknownst to the video creator.
Are the policies actually fair?
YouTube’s support page lists several reasons a video may have advertisements removed:
Content that is considered "not advertiser-friendly" includes, but is not limited to:
- Sexually suggestive content, including partial nudity and sexual humor
- Violence, including display of serious injury and events related to violent extremism
- Inappropriate language, including harassment, profanity and vulgar language
- Promotion of drugs and regulated substances, including selling, use and abuse of such items
- Controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters and tragedies, even if graphic imagery is not shown
The page goes on to provide tips to users, as well as explaining that context is key. Simply having the above content in a video doesn’t mean the video cannot be monetized, but such content must be handled carefully, with justifiable usage.
In theory, these policies seem reasonable; advertisers may not want their brand to be associated with certain kinds of content, so users must take advertisers’ rights into account if they want to receive money from said advertisers. This has been a key understanding and standard operating procedure in the world of ad-supported content since the beginning of radio (and likely before that).
YouTube obviously differs from traditional media operations, where advertisers would make deals to have their ads played on specific channels, during specific shows, in specific areas at specific times. Some of these demographic choices are still available, of course; an advertiser can request that ads be shown on videos relating to certain topics (e.g. “only show this ad on gaming content”), to viewers determined to fall within a certain age range, or to connections originating from a certain region. Advertisers can also make deals with specific content creators or their networks to display specific ads, such as contracting with College Humor to show trailers for a specific movie.
The main difference between YouTube and traditional media is the sheer level of scale, which makes accountability and vetting of content nigh impossible. In 2014, YouTube stated that roughly 300 hours of video were uploaded to the site every minute—over 157,000,000 hours of video in a single year. Checking every hour of 2014’s uploads for compliant descriptions, tags, and content would require reviewing 432,000 hours of video every day. Automated systems like Content ID (which scans uploads against a database of registered copyrighted material) can reduce some of the load, as can the ability of any viewer to flag a video for review should they believe it breaks a policy.
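To make the scale concrete, here’s a quick back-of-the-envelope sketch. The only input is YouTube’s own 2014 figure of roughly 300 hours uploaded per minute; everything else is arithmetic:

```python
# Back-of-the-envelope check of YouTube's 2014 upload volume.
# Sole input: YouTube's stated figure of ~300 hours uploaded per minute.
HOURS_UPLOADED_PER_MINUTE = 300

MINUTES_PER_DAY = 60 * 24   # 1,440
DAYS_PER_YEAR = 365

hours_per_day = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_DAY
hours_per_year = hours_per_day * DAYS_PER_YEAR

print(f"Uploaded per day:  {hours_per_day:,} hours")    # 432,000 hours
print(f"Uploaded per year: {hours_per_year:,} hours")   # 157,680,000 hours
```

In other words, keeping pace with a single day of uploads would take a reviewer watching at normal speed 432,000 working hours—roughly 50 years of round-the-clock viewing, every day.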
But that scale remains the most prominent obstacle to advertisers receiving their best return on investment. When a company pays YouTube to place an ad before videos related to specific topics, it can’t be reasonably assured the ad will only show up with the correct videos. An ad meant to run alongside videos about video games may appear next to a video of torture that the uploader labeled as being about video games; the same ad may show up on a video whose only connection to video games is that the person in it really, really hates Anita Sarkeesian and spends 20 minutes calling her every vulgar obscenity they can imagine.
Things get murky in how the policies are applied.
Some YouTube creators have publicly stated they’ve had monetization disabled on videos that don’t clearly break the stated policies. One creator, Boogie2988, found several of his videos were demonetized because they discussed suicide—even though the videos are intended to emotionally support people dealing with depression and suicidal ideation.
Here's a video where I talk about Suicide Prevention and its flagged. Saving lives isn't 'ad friendly' any more. pic.twitter.com/79qNnNxrgt
— boogie2988 (@Boogie2988) September 1, 2016
Another creator, Melanie Murphy, had videos discussing skin care for acne demonetized with notification that the videos contained graphic content and/or excessive strong language.
— Melanie Murphy (@melaniietweets) August 31, 2016
Strangely, Dan from Nerd³ found that none of his videos containing the word ‘suicide’ in their tags, descriptions, or titles had been demonetized.
@Boogie2988 Actually I looked into it. I have 9 videos with suicide in the title, tags or description. None have been un-monetised.
— Daniel Hardcastle (@DanNerdCubed) September 1, 2016
Boogie’s and Murphy’s cases can certainly go through an appeals process for manual review, hopefully resulting in ads being restored to the videos. However, companies that rely on user content to generate a profit are notoriously terrible at any kind of follow-up, and Google and YouTube are no exception: creators are susceptible to having their entire account suspended by swarms of false reports, or to false copyright claims that strip monetization from even original content. Given the glacial pace at which YouTube has moved to deal with flaws in its service, not much should be expected.
Nor should the algorithms themselves inspire much confidence. Content ID, for example, has been a major hassle for content creators since its inception in 2007. Videos containing original content can be mistakenly flagged by the system as belonging to someone else, and videos whose use of copyrighted content falls within fair use standards can have all their ad revenue stripped or redirected to copyright holders. There have even been cases of videos being flagged despite the creator having the copyright holder’s full permission to use the material. When a system that’s been around for nearly ten years still has the same problems it did when first introduced, shaky faith is both understandable and empirically supported.
So, in theory, the policies are fair; in practice, they’re very hit-and-miss. Advertisers absolutely have the right to choose in what context their ads are viewed, but creators also have the right to not take a massive hit in revenue due to a misfiring algorithm or misapplied policies. If a creator makes a video strictly following the policies outlined by YouTube, they shouldn’t have a 50% chance of erroneously being punished for a crime they didn’t commit—especially if recourse could take months without guarantee of resolution.
But some people are hypocritically outraged about the policies even existing.
We’ve established the rights of advertisers and the rights of creators. They’re a difficult but ultimately achievable balancing act we’ve seen more and more debate over as the internet becomes a central place of commerce. YouTube’s policies are, again, nothing new in the world of online advertising. And some of the people currently outraged are fully aware of this fact.
In 2014, a gurgle of gamers came together under a banner founded on the debunked allegations of an ex-boyfriend. If you haven’t heard about it by now, just know it’s a long and tortuous rabbit hole whose only major effect has been disproving old stereotypes of gamers by realizing new ones which are far worse.
One of the Gate’s first attempts at providing cover for their campaigns of hate against several women was to portray themselves as a consumer revolt. The revolting consumers engaged in several floods of email spam—which they deemed “operations”—aimed at various entities whom they hoped would listen to their ramblings and give them a feeling of validity. Demands have shifted between floods, but the recipients have largely remained the same: large companies that are household names.
Sometime around September of 2014, the only notable one of these campaigns was organized and christened Operation Disrespectful Nod. Its goal was to convince advertisers that the self-titled “gamers” of the Gate were in fact valuable consumers of the companies’ products. If the companies wouldn’t pull their advertisements from sites such as Kotaku or Polygon, they might find themselves taking a massive hit to their profits.
This tactic saw some success: the Gate duped Intel into pulling ads from Gamasutra. This was the only success the Gate saw, however; Intel quickly realized whose demands it was answering, releasing a statement reversing the earlier decision. Intel later consulted with one of the Gate’s central targets, Anita Sarkeesian, and ultimately became the example other companies would follow when dealing with the Gate. The reversal set such a precedent that the Gate made it a rule in later campaigns not to mention their movement in their flood of emails—hoping companies would be clueless about who was contacting them.
What relates this to YouTube and their policies?
Well, one facet of ODN was to contact Google and cite parts of their AdSense policies which the Gate believed certain media outlets were violating; the intent was to convince Google that certain content wasn’t following advertiser-friendly policies, cutting creators of that content off from a main revenue stream should Google pull their access to AdSense.
Other attempts under or related to ODN—such as Operation Azure Orbs—involved contacting companies like Blizzard or OutBrain directly. Those campaigns were meant to convince the companies that their ads were featured on sites breaking companies’ rules for content that could be shown alongside the ads.
Of course, some of the content the Gate highlighted in its spam was wholly inoffensive, condemned through faux outrage in the belief that a convincing façade mimicking a legitimate consumer boycott could be maintained. OAB, for example, targeted Vox Media sites such as Polygon by spamming Blizzard with allegations that Vox sites slandered gamers or supported the imprisonment of innocent males. Of the somewhat legitimate complaints leveled by the Gate, most concerned content published by Gawker that is no different from some of the content the Gate is now defending as advertiser-friendly.
The hypocrisy lies in the Gate spending months campaigning on the basis that advertisers have a right to choose what content should be seen alongside their ads, but suddenly finding this a preposterous idea when content they enjoy may be affected. Outrage has now shifted to complaints that advertiser-friendly policies like YouTube’s amount to censorship. Some of the people allegedly censored have in fact published content similar to that which the Gate claimed was grounds for Gawker to be censored; Bro Team Pill, for example, has published leaked information—the very action the Gate alleged of Gawker in an email campaign to AdSense over a year ago. Yet, now that BTP has claimed they’ve had videos demonetized, the Gate believes they’ve uncovered a scandal relating to their long-standing targets from the first days of 2014.
There is legitimate reason to be concerned with YouTube’s policies.
As I outlined above, YouTube’s historical flaws mean there will continue to be false positives that cost creators profit during the window when their videos get the majority of their views. Not only is this an issue that needs to be handled promptly and comprehensively by YouTube, but it opens the door to a conversation about the freedom of crowd-funding versus the restrictions of relying on a share of advertiser funding.
What we cannot allow to happen is for pure, legitimate concerns to be co-opted by parties intending to weaponize outrage for further harassment of individuals.
The Gate and those sympathetic to their cause have no interest in standing for anyone’s rights unless such an action will provide a momentary boost in PR. As they’ve done time and time again, they’ll abandon whoever they prop up and even turn on them the moment they can change sides to get a slight edge.
Call out this co-opting when it happens, because not doing so allows real conversations to be derailed and diluted until creators hurt by flawed YouTube processes are shouted down for not blaming a vast feminist conspiracy.