Corporate brands, UK government pull ads from Google's YouTube over extremist hate group videos

Posted:
in General Discussion edited March 2017
A series of major brands have pulled ads from Google's YouTube following reports that their ads are being shown next to terrorist, hate group and other offensive or controversial videos.


Source: Times of London: "Taxpayers are funding extremism"


A series of recent reports have drawn attention to problems at Google related to false answers, fake news and offensive or illegal content. Advertisers are increasingly concerned about their brands being associated with hate groups, terrorists and religious extremists.

Earlier this month, AppleInsider noted that ads for IBM and other major brands appeared on a fake news clip that gained visibility after Google Home referenced fringe content on YouTube in an answer to whether "Obama was planning a coup."

A new report by Bloomberg noted that the U.K. government and the Guardian newspaper have pulled their advertising from YouTube.

"It is completely unacceptable that Google allows advertising for brands like the Guardian to appear next to extremist and hate-filled videos," wrote Guardian News & Media in a statement.

French marketing giant Havas has pulled its clients' ads from both Google and YouTube in the U.K. after "failing to get assurances from Google that the ads wouldn't appear next to offensive material." Havas clients include European mobile carrier O2, the British Royal Mail, the BBC, Domino's Pizza and Hyundai Kia.

The pulled ads were limited to the U.K., apparently in response to an embarrassing report by the Times of London headlined "Taxpayers are funding extremism."

The report detailed that Google was placing ads from the U.K. government--including the Home Office, the Royal Navy, the Royal Air Force, Transport for London and the BBC--and major corporate brands next to "hate videos and rape apologists," featuring Michael Savage; David Duke, an "antisemitic conspiracy theorist, Holocaust denier and former Imperial Wizard of the Ku Klux Klan"; and Christian extremist pastor Steven Anderson, who praised the murders of the 49 people killed in the Pulse nightclub shooting in Orlando, Florida.

After finding that ad spots for its charity supporting disadvantaged youth were being published by Google on extremist YouTube videos, cosmetics brand L'Oreal stated, "We are horrified that our campaign -- which is the total antithesis of this extreme, negative content -- could be seen in connection with them."

Bloomberg characterized the pulled ads as a "growing backlash" against Google's automated selling of online ads and the advertising giant's platform that does little to prevent the mingling of advertisers' brands with hateful, extremist content that is illegal in some countries. Google's problems are shared by Facebook, which has also been targeted for spreading viral fake news, violent content and extremist messaging on its platform.

Both Google and Facebook have captured a majority of the global display advertising market by automating the placement of ads next to content without any real human curation. Both have announced an intent to give advertisers better control over what content they are supporting and associating their brands with.

Apple's advertising in iTunes and the App Store is largely protected by the company's efforts to curate its own content, but it too has drawn criticism, both for allowing moderately offensive content and--in the other direction--for erecting a walled garden that stifles unrestricted speech.
patchythepirate

Comments

  • Reply 1 of 17
cali Posts: 3,494 member
    I'm starting to get the feeling this scumbag company is gonna go downhill soon. 
  • Reply 2 of 17
    I can't believe this is still an issue. Even by google standards this is ridiculous. google has known about this for years:

    http://www.chicagotribune.com/news/nationworld/chi-youtube-terrorist-propaganda-20150128-story.html

    google's response, back in 2015, was to whine and exaggerate how hard it is. Pretty pathetic. They were basically saying it's not a priority and they don't want to dedicate resources to it (i.e. they want to keep raking in the ad dollars). 

    This is classic google. They were even arrogant enough to imply that they're a useful public utility, and that other people and states should come in and help filter the terrorist propaganda for them. 
edited March 2017
  • Reply 3 of 17
cali Posts: 3,494 member
    I can't believe this is still an issue. Even by google standards this is ridiculous. google has known about this for years:

    http://www.chicagotribune.com/news/nationworld/chi-youtube-terrorist-propaganda-20150128-story.html

    google's response, back in 2015, was to whine and exaggerate how hard it is. Pretty pathetic. They were basically saying it's not a priority and they don't want to dedicate resources to it (i.e. they want to keep raking in the ad dollars). 

    This is classic google. They were even arrogant enough to imply that they're a useful public utility, and that other people and states should come in and help filter the terrorist propaganda for them. 
    Imagine if Apple....

    there was already enough sh*t about Apple products being "too secure" so terrorists used them. 
  • Reply 4 of 17
tundraboy Posts: 1,884 member
    But, but, but AI was going to eliminate this problem!  As well as find the cure for death, banish recessions forever, and create perfect soulmates for pasty-faced basement dwellers!  Wait, wait... I know what's going on. This is all Fake News!
edited March 2017
  • Reply 5 of 17
maestro64 Posts: 5,043 member
    I can't believe this is still an issue. Even by google standards this is ridiculous. google has known about this for years:

    http://www.chicagotribune.com/news/nationworld/chi-youtube-terrorist-propaganda-20150128-story.html

    google's response, back in 2015, was to whine and exaggerate how hard it is. Pretty pathetic. They were basically saying it's not a priority and they don't want to dedicate resources to it (i.e. they want to keep raking in the ad dollars). 

    This is classic google. They were even arrogant enough to imply that they're a useful public utility, and that other people and states should come in and help filter the terrorist propaganda for them. 
In no way am I defending Google; however, when you automate anything like ad placements and someone signs up for Google ads on their YouTube channel, how does Google figure out you are posting stuff advertisers are going to hate? Also, people have a right to free speech; does that mean that when an advertiser does not like your free speech, they get to decide you're not allowed to make money off ads? Advertisers sold their ads into a distribution market, and as such they legally cannot tell Google where those ads get placed. There are clear U.S. laws about products sold into distribution and how much say the company has once they do that.

This is a complicated issue; even Google with all its AI cannot solve it. It will require lots of human involvement, which is going to cut into Google's profits. Advertisers will also need to be actively involved, and they will have to say where and when ads get placed like they used to years ago.
  • Reply 6 of 17
dysamoria Posts: 3,430 member
    maestro64 said:
    I can't believe this is still an issue. Even by google standards this is ridiculous. google has known about this for years:

    http://www.chicagotribune.com/news/nationworld/chi-youtube-terrorist-propaganda-20150128-story.html

    google's response, back in 2015, was to whine and exaggerate how hard it is. Pretty pathetic. They were basically saying it's not a priority and they don't want to dedicate resources to it (i.e. they want to keep raking in the ad dollars). 

    This is classic google. They were even arrogant enough to imply that they're a useful public utility, and that other people and states should come in and help filter the terrorist propaganda for them. 
In no way am I defending Google; however, when you automate anything like ad placements and someone signs up for Google ads on their YouTube channel, how does Google figure out you are posting stuff advertisers are going to hate? Also, people have a right to free speech; does that mean that when an advertiser does not like your free speech, they get to decide you're not allowed to make money off ads? Advertisers sold their ads into a distribution market, and as such they legally cannot tell Google where those ads get placed. There are clear U.S. laws about products sold into distribution and how much say the company has once they do that.

This is a complicated issue; even Google with all its AI cannot solve it. It will require lots of human involvement, which is going to cut into Google's profits. Advertisers will also need to be actively involved, and they will have to say where and when ads get placed like they used to years ago.
    Free speech doesn't entitle you to a platform upon which to speak, nor does it entitle you to earn an income from your speech. Freedom of speech only means you cannot be jailed by your government for expressing your opinions out loud. 
  • Reply 7 of 17
zoetmb Posts: 2,654 member
    maestro64 said:
    I can't believe this is still an issue. Even by google standards this is ridiculous. google has known about this for years:

    http://www.chicagotribune.com/news/nationworld/chi-youtube-terrorist-propaganda-20150128-story.html

    google's response, back in 2015, was to whine and exaggerate how hard it is. Pretty pathetic. They were basically saying it's not a priority and they don't want to dedicate resources to it (i.e. they want to keep raking in the ad dollars). 

    This is classic google. They were even arrogant enough to imply that they're a useful public utility, and that other people and states should come in and help filter the terrorist propaganda for them. 
In no way am I defending Google; however, when you automate anything like ad placements and someone signs up for Google ads on their YouTube channel, how does Google figure out you are posting stuff advertisers are going to hate? Also, people have a right to free speech; does that mean that when an advertiser does not like your free speech, they get to decide you're not allowed to make money off ads? Advertisers sold their ads into a distribution market, and as such they legally cannot tell Google where those ads get placed. There are clear U.S. laws about products sold into distribution and how much say the company has once they do that.

This is a complicated issue; even Google with all its AI cannot solve it. It will require lots of human involvement, which is going to cut into Google's profits. Advertisers will also need to be actively involved, and they will have to say where and when ads get placed like they used to years ago.
    Of course an advertiser should be able to decide where their ads are placed.   Advertisers can decide today whether or not to place an ad in a given magazine, radio program or TV show depending on the audience.    Having said that, most advertising is based on targeting specific demographics and has been for years.   So what service is Google providing to advertisers if they're not really providing targeted advertising?   There are plenty of companies offering all kinds of optimized advertising services today.  Is Google not one of them?  I certainly was under the impression that they claimed to be industry leaders of this tech.  

    Let's say I'm Disney advertising the new "Beauty and the Beast" movie.   I want it to go to parents of young children and to children.   Maybe I also want it to be targeted to grandparents.  I certainly don't want it targeted to 18-25 year old single men or people who follow terrorists.   What could be worse for a company than having their logo splashed across a terrorist video?  

    That screen shot actually cracked me up - it looks like a Saturday Night Live routine.  

    An advertiser deciding not to place their ad on any given video does not limit your free speech nor does it restrict you from making money.  It just prevents you from making money from that one advertiser.    And if you aren't creating something desirable that an advertiser wants to be associated with, that's your problem, not theirs.    If you create some stupid video of people behaving badly, I'm not advertising on it no matter what demographics you reach.   
  • Reply 8 of 17
singularity Posts: 1,328 member
By the way, isn't it "The Times", not the "Times of London"?
  • Reply 9 of 17
fallenjt Posts: 4,053 member
    Fuck you, Google!
  • Reply 10 of 17
Marvin Posts: 15,310 moderator
    maestro64 said:
    I can't believe this is still an issue. Even by google standards this is ridiculous. google has known about this for years:

    http://www.chicagotribune.com/news/nationworld/chi-youtube-terrorist-propaganda-20150128-story.html

    google's response, back in 2015, was to whine and exaggerate how hard it is. Pretty pathetic. They were basically saying it's not a priority and they don't want to dedicate resources to it (i.e. they want to keep raking in the ad dollars). 

    This is classic google. They were even arrogant enough to imply that they're a useful public utility, and that other people and states should come in and help filter the terrorist propaganda for them. 
In no way am I defending Google; however, when you automate anything like ad placements and someone signs up for Google ads on their YouTube channel, how does Google figure out you are posting stuff advertisers are going to hate? Also, people have a right to free speech; does that mean that when an advertiser does not like your free speech, they get to decide you're not allowed to make money off ads? Advertisers sold their ads into a distribution market, and as such they legally cannot tell Google where those ads get placed. There are clear U.S. laws about products sold into distribution and how much say the company has once they do that.

This is a complicated issue; even Google with all its AI cannot solve it. It will require lots of human involvement, which is going to cut into Google's profits. Advertisers will also need to be actively involved, and they will have to say where and when ads get placed like they used to years ago.
Surely they can just maintain a whitelist of approved publishers. According to the following site, YouTube is heavily weighted towards the top channels/videos like most online services:

    http://blogs.barrons.com/techtraderdaily/2016/05/11/youtubes-2-billion-videos-197m-hours-make-it-an-immense-force-says-bernstein/

    "Content and usage seem to be relatively concentrated across several dimensions. For example, 1% of YouTube videos correspond to 93% of views since inception; 94% of viewing time (since inception) is also concentrated in about 1% of the videos in the library"

    The whitelist can be publishers who haven't received follow-through complaints about content and they'd have to be approved to get on the whitelist. In the event that a content publisher posts something bad, they get removed from the whitelist. Advertisers would be free to choose only whitelisted publishers (can also have different approval tiers) or anyone. This would also encourage publishers to post less offensive material as dropping approval levels could impact their income.

Creating the whitelist would take some human input, but if they only need to approve a few million high-profile publishers out of tens or hundreds of millions, this can be handled by <1000 staff, and the approved advertising tiers can cost more for advertisers. Automated tools can help: keyword filtering, transcribing the audio track so extremist videos can be flagged by their audio, even image-detection algorithms run on a frame every few seconds of video.

    It sounds like they do this sort of thing already:

    https://blog.google/topics/google-europe/improving-our-brand-safety-controls/

    "At the same time, we recognize the need to have strict policies that define where Google ads should appear. The intention of these policies is to prohibit ads from appearing on pages or videos with hate speech, gory or offensive content. In the vast majority of cases, our policies work as intended. We invest millions of dollars every year and employ thousands of people to stop bad advertising practices. Just last year, we removed nearly 2 billion bad ads from our systems, removed over 100,000 publishers from our AdSense program, and prevented ads from serving on over 300 million YouTube videos.

    However, with millions of sites in our network and 400 hours of video uploaded to YouTube every minute, we recognize that we don't always get it right. In a very small percentage of cases, ads appear against content that violates our monetization policies. We promptly remove the ads in those instances, but we know we can and must do more."

    Google tends to go the route of cleaning up after a mess is made rather than preventing it like with their Play Store. This is more efficient for publishers but it inevitably creates a mess at some point. Having a pre-approval process is better for ensuring reputable publishing for advertisers sensitive to this.
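The tiered-whitelist scheme described above can be sketched in a few lines. This is purely a hypothetical illustration, not Google's actual system; the class names, tier numbers and three-complaint demotion threshold are all invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Publisher:
    """A content publisher with an approval tier; 0 means unapproved."""
    name: str
    tier: int = 0
    complaints: int = 0


class Whitelist:
    """Tracks approved publishers and demotes them on repeated complaints."""

    COMPLAINT_LIMIT = 3  # arbitrary threshold for this sketch

    def __init__(self):
        self.publishers = {}

    def approve(self, name, tier):
        # Human reviewers would assign the tier after vetting the channel.
        self.publishers[name] = Publisher(name, tier)

    def report_complaint(self, name):
        pub = self.publishers.get(name)
        if pub is None:
            return
        pub.complaints += 1
        if pub.complaints >= self.COMPLAINT_LIMIT:
            pub.tier = 0  # drop from the whitelist entirely

    def eligible(self, name, min_tier):
        # An advertiser picks the minimum tier it will accept.
        pub = self.publishers.get(name)
        return pub is not None and pub.tier >= min_tier


wl = Whitelist()
wl.approve("trusted_channel", tier=2)
wl.approve("new_channel", tier=1)

print(wl.eligible("trusted_channel", min_tier=2))  # True
print(wl.eligible("new_channel", min_tier=2))      # False
```

The point of the tiers is exactly the incentive Marvin describes: a publisher who posts objectionable material loses its tier and, with it, access to the higher-paying advertisers who only buy whitelisted inventory.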
  • Reply 11 of 17
1983 Posts: 1,225 member
    Fuck Google and Facebook, they could do something about this if they really wanted to, they just can't be bothered, money talks. They're accomplices in the increasing amount of misogyny, anti-Semitism, religious extremism and terrorism in the world by letting this kind of thing continue to happen unabated. Those morally bankrupt turds!
edited March 2017
  • Reply 12 of 17
brakken Posts: 687 member
In other news, the National Islamists League has just petitioned L'Oreal to be its primary sponsor in an upcoming Middle East campaign to 'bring beauty back to politics'.

One NIL representative was quoted as saying hand and face creams would be an instant favorite with supporters, and would instantly create a new market for those products.

    Nivea could not be contacted, but inside sources have hinted at a possible collaboration with the Northern Front. 
    edited March 2017
  • Reply 13 of 17
crosslad Posts: 527 member
    On a more serious note, why am I seeing adverts for Samsung phones and Chromebooks on AppleInsider?
  • Reply 14 of 17
brakken Posts: 687 member
    crosslad said:
    On a more serious note, why am I seeing adverts for Samsung phones and Chromebooks on AppleInsider?
    It is delicious irony ;)
  • Reply 15 of 17
badmonk Posts: 1,285 member
I noticed last week that the density of advertisements on YouTube has dropped considerably (on the non-extremist, non-fake 🐼 videos I watch). I suspect this is the issue. It surprises me that advertisers have dumped so much money into Alphabet advertising; I suspect it is not as effective as people think.
  • Reply 16 of 17
josephcapone1 Posts: 1 unconfirmed, member
    ad Block???
  • Reply 17 of 17
    crosslad said:
    On a more serious note, why am I seeing adverts for Samsung phones and Chromebooks on AppleInsider?
That's targeted advertising working at its best. There's no point advertising a Chromebook to someone who already has one; they want you (an Apple product user/fan) to see the competing product you do not have, and at the very least to have the brand and product in your mind, and at most to go and buy that product. To many (the majority?), advertising does not really work other than to annoy.