A billboard states that a McDonald's is "straight ahead," but the arrow actually points to another billboard stating "Diabetes? A heart attack could be right around the corner."
Such ad placement "fails" are fodder for many Internet slideshows, but lately, poor placements on YouTube have been no laughing matter for many top-tier execs. At this writing, more than 250 advertisers have pulled out of YouTube after their ads ran before videos that supported the Islamic State and promoted racist groups.
Google claims it can fix this situation with the right tools, but that's unrealistic. Sometimes, objectionable content isn't apparent until you watch a video for a minute or two. Google has no mechanism to screen for this.
Overall, brand safety is still too complex a problem for machines to solve. Unless AI takes a quantum leap in the near future, brand safety will require human intervention, which happens to be a very effective solution.
An algorithm won't solve brand safety
The collapse of brand safety on video has been one of the major narratives in advertising this year. In January, Disney cut its ties with the YouTube star known as PewDiePie after he started trafficking in anti-Semitism.
In February, The Times of London reported that Mercedes-Benz and other brands were running ads on YouTube videos promoting the Islamic State and a pro-Nazi group. More than a month later, The Wall Street Journal found that ads for Coca-Cola, Microsoft, Amazon and Procter & Gamble products were running against anti-Semitic and racist YouTube videos.
It's easy to see how this happened. Some 400 hours of content are uploaded to YouTube every minute. There's no way humans could assess that volume unless Google employed tens of thousands of reviewers; the rough arithmetic below shows why. Instead, Google relies on algorithms and input from users, who it hopes will flag objectionable content.
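To put that figure in perspective, here is a quick back-of-the-envelope sketch. Only the 400-hours-per-minute figure comes from the reporting above; the eight-hour reviewer shift is an illustrative assumption.

```python
# Rough estimate of the human review workload implied by YouTube's upload rate.
# Only the 400 hours-per-minute figure comes from the article; the eight-hour
# reviewer shift (watching in real time) is an illustrative assumption.

UPLOAD_HOURS_PER_MINUTE = 400
MINUTES_PER_DAY = 60 * 24
REVIEW_HOURS_PER_REVIEWER_PER_DAY = 8  # assumed full-time shift

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
reviewers_needed = hours_uploaded_per_day / REVIEW_HOURS_PER_REVIEWER_PER_DAY

print(f"Hours uploaded per day: {hours_uploaded_per_day:,}")       # 576,000
print(f"Reviewers needed to keep pace: {reviewers_needed:,.0f}")   # 72,000
```

Even under those generous assumptions, keeping pace would take a workforce the size of a small city, which is why Google leans on automation instead.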
Following the outcry, in early April Google announced it was working with third-party, MRC-accredited firms to address the ad safety issues. Google also said it was employing advanced machine learning to tackle the problem.
Google didn't say how the latter would work. Machine learning is currently good at text translation, facial recognition and voice recognition, but it strains credulity that any AI solution could analyze YouTube's huge trove of content or pick up on subtle actions and references. So far, YouTube's algorithm changes have mostly shown up as reduced ad revenue for many YouTube creators, and it's unclear whether brand safety has improved as a result.
Humans can solve the problem
Though bots are capable of writing some articles, humans still write the vast majority of them. Human editors oversee and create content for The New York Times and Vogue, exercising professional judgment along the way. There's no comparable quality control on YouTube or Facebook, where content is user-generated. So when it comes to brand safety, top-tier publishers have an edge.
Machine learning is great, and it's getting better all the time, but brand safety is a complex issue that requires cultural literacy and knowledge of branding. The textbook example is that an airline ad should never appear next to a story about a plane crash. You can flag keywords to avoid that kind of placement, but most situations require judgment calls that go beyond merely matching keywords.
For instance, the administration's recent health care bill never came up for a vote, yet that weekend an advocacy group ran ads declaring that the repeal had already passed. A human would have known to pull such an ad, but this one got past the machines; the sketch below shows why a keyword filter alone wouldn't catch it.
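To make the distinction concrete, here is a minimal, hypothetical sketch of keyword-based flagging. The categories, phrase lists and function are illustrative assumptions, not any ad platform's actual rules.

```python
# Minimal, hypothetical sketch of keyword-based brand safety flagging.
# The categories and phrase lists are illustrative assumptions, not any
# ad platform's actual rules.

BLOCKLIST = {
    "airline": {"plane crash", "emergency landing", "hijacking"},
    "food": {"food poisoning", "recall", "e. coli"},
}

def is_safe_placement(advertiser_category: str, page_text: str) -> bool:
    """Return False if the content mentions any blocked phrase for this category."""
    text = page_text.lower()
    blocked_phrases = BLOCKLIST.get(advertiser_category, set())
    return not any(phrase in text for phrase in blocked_phrases)

# The textbook case is easy to catch: an airline ad next to a crash story.
print(is_safe_placement("airline", "Investigators probe plane crash off the coast"))  # False

# But the misleading "repeal has passed" ad contains no risky keyword at all,
# so a filter like this waves it through. Spotting it takes human judgment.
print(is_safe_placement("airline", "Group claims the health care repeal has already passed"))  # True
```

Keyword matching handles the obvious cases; the hard ones hinge on whether a claim is true or in good taste, which no phrase list can encode.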
Marketers worried about brand safety — and isn't that all of them? — should be aware that there's no technical fix to this problem yet. We have met the solution and the solution is us.