In case you're coming to this late, here's the original story: https://www.theguardian.com/technology/2017/mar/25/google-youtube-advertising-extremist-content-att-verizon
But now that we’ve had a couple of weeks for the dust to settle, a LOT of shame needs to be thrown the way of the agencies and media buyers who share plenty of guilt in this whole “scandal”. These are the people who promise their clients that their ad network has all these features to prevent spam and avoid mature/hateful/controversial content. And then those same people run billions of impressions on sites full of user-generated content with few barriers to what kind of content gets generated.
Virtually anyone can create an account on YouTube (or any number of free platforms for blogging, video sharing, social networking, etc.) and post whatever idiotic crap they want. It’s impossible for a site to both strictly limit this behavior and remain hugely popular. If you get too strict, people will go elsewhere. And not just the idiots.
If the platforms are profiting off advertising, they have some responsibility (or at the very least a moral one) to filter and categorize that content to some degree. But the folks buying the ads have just as much responsibility to use common sense before racking up tons of commissions for showing their ads on these sites.
Consumers are irrational. Maybe they generally get that an AT&T ad next to a NY Times article about bombing Syria doesn’t mean AT&T supports war. But those same consumers will see an AT&T ad next to porn or a white supremacist screaming at his webcam on YouTube and freak out, as if AT&T were endorsing this content.
Brands know this and are pissed, pulling their ads because they’d been assured their ads wouldn’t be shown next to something the general population would consider horrible. They were assured this by the ad agencies managing the buys, because those agencies are either buying via an exchange with a blanket “no mature content” policy or going into an ad platform and checking a box to exclude certain types of content. But then they let the ads run wild, with no real controls and no active participation on their part to consider the potential negative impact of what they’re doing.
Yes, it’s really hard to pick the specific pages of specific websites where you want your ads to run. Except for occasional runs on the most popular sites, advertisers aren’t going to do that. But you know what’s not hard? Being honest with your damn clients about how things work. And checking on your ads once in a while to see if there’s anything to be concerned about. This isn’t a case of someone seeing one ad next to one horrible video and everyone overreacting. This is one instance of a systemic failure in how we buy and sell ads on the web. A problem this big could have been prevented, but it’s too easy to throw the blame elsewhere and let the money keep rolling in.