AT&T, Verizon join firms pulling millions in ads from Google, YouTube over hate, terrorist...

Posted in General Discussion
The U.K. advertising backlash against Google is spreading to the United States. Mobile carriers AT&T and Verizon, car rental company Enterprise and pharmaceutical giant GlaxoSmithKline are among the major ad buyers acting to distance their brands from the offensive and extremist content that saturates YouTube.


Source: Times of London: "Taxpayers are funding extremism"


Following an eruption of brand association concerns in the U.K. that prompted the Guardian newspaper, European mobile carrier O2, the British Royal Mail, the Royal Navy, the Royal Air Force, Transport for London, the BBC, Domino's Pizza, Hyundai Kia, McDonald's, L'Oreal, Toyota and Volkswagen to pull ads from Google and/or YouTube, major global brands have now moved to pull their ads in America as well.

Reports by the Wall Street Journal, USA Today and other media outlets noted that "major U.S. advertisers are pulling hundreds of millions of dollars in business from Google and YouTube," despite efforts by the ad giant to get control over how advertisers' brands are associated with the offensive and extremist content published on YouTube.

AT&T is pulling all of its advertising from Google apart from paid search placements, a move that affects not only YouTube but also the millions of other websites that participate in Google's ad network.

"We are deeply concerned that our ads may have appeared alongside YouTube content promoting terrorism and hate," the carrier said in a statement published by USA Today. "Until Google can ensure this won't happen again, we are removing our ads from Google's non-search platforms.""We are deeply concerned that our ads may have appeared alongside YouTube content promoting terrorism and hate" - AT&T

A spokesperson for Verizon said it was also pulling ads, noting that "Verizon is one of the largest advertisers in the world, and one of the most respected brands. We take careful measure to ensure our brand is not impacted negatively. Once we were notified that our ads were appearing on non-sanctioned websites, we took immediate action to suspend this type of ad placement and launched an investigation."

Enterprise offered a statement saying, "although it is effective in dealing with the highly fragmented nature of the digital ad world, programmatic buying is still evolving as a business practice - and it appears that technology has gotten ahead of the advertising industry's checks-and-balances," adding "there is no doubt there are serious flaws that need to be addressed. As a result, we have temporarily halted all YouTube advertising, while executives at Google, YouTube and our own media agencies focus on alleviating these risks and concerns going forward."

Google declined to comment on the pulled ads, but offered a statement: "we've begun an extensive review of our advertising policies and have made a public commitment to put in place changes that give brands more control over where their ads appear," adding, "we're also raising the bar for our ads policies to further safeguard our advertisers' brands."

The original investigation by The Times detailed why brands are concerned, noting that Google's algorithms placed ads for the Mercedes E-Class "next to an ISIL video praising jihad that has been viewed more than 115,000 times." The report also detailed Google placing ads on videos by al-Shabaab, an East African jihadist group affiliated with al-Qaeda.

Google now taking a "hard look"

Yesterday, Google's chief business officer Philipp Schindler published a blog entry acknowledging, "we know advertisers don't want their ads next to content that doesn't align with their values. So starting today, we're taking a tougher stance on hateful, offensive and derogatory content.

"This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories."

Schindler also noted that "the YouTube team is taking a hard look at our existing community guidelines to determine what content is allowed on the platform--not just what content can be monetized."

After an initial foray into in-app advertising with iAd, Apple has distanced itself from advertising overall, instead selling premium content subscriptions of its own with Apple Music and through partners including Netflix and HBO. That approach has kept Apple clear of association with extremist content and fake news, while also letting it promote its platform as safe, private and secure, with no need to collect user behavior and personal data for surveillance advertising.

Comments

  • Reply 1 of 39
    Don't be evil (Google definition): "Google's algorithms placed ads for Mercedes E-Class next to an ISIL video praising jihad that has been viewed more than 115,000 times. Google was also detailed placing ads on videos by al-Shabaab, an East African jihadist group affiliated with al-Qaeda."
  • Reply 2 of 39
    Glad to see this movement surge in online advertising. It's time that people recognize that clicks aren't everything.
  • Reply 3 of 39
    entropys Posts: 4,299 member
    Superficially sounds great, but I am concerned about who gets to decide what is offensive hate material and what isn't. Will it be on the basis of Twitter campaigns? FFS? Somehow or other that would turn out to be bad.
    On the other hand, imagine if a Trumpbot got to decide what was hate content. Daring Fireball* would probably fit.

    Overall a bad idea in the name of good.

    *Speaking of Daring Fireball, Gruber made the obvious point the other day that this is Google responding to its customers, the advertisers.
    Until they started pulling their ads, Google never bothered.
    edited March 2017
  • Reply 4 of 39
    tdknox Posts: 84 member
    I’m expecting their stock to go up along with articles explaining how this is actually bad news for Apple.
  • Reply 5 of 39
    maestro64 Posts: 5,043 member

    Not sure how Google fixes this. The issue is that Google targets ads to the individual, and it can't stop individuals from visiting sites these companies find offensive. I see ads on AppleInsider based on things I looked at on Amazon; if I go to some other website I see the same ads, since they are being targeted at me. Google would have to determine two things: first, whether the individual likes going to websites these advertisers don't like, and second, whether the website has content the advertisers don't like. The only way to stop this is to make sure Google knows which people are doing objectionable things, as defined by the advertisers, and mark them as not to be advertised to; otherwise, ad content may show up in the wrong places. I hope Google does this, because then I could go to some of these sites to get marked and never see another Google ad. That would be a great day. Google can't just mark the websites, since they change all the time; it has to mark the individual as well.

    Think about this: the only reason an AT&T ad showed up on a hate website is that an individual AT&T wants Google to target, and whose business AT&T wants, also went to a hate website. AT&T can't have it both ways: ads targeted at customers it wants money from, some of whom are also into hate content.

    This is going down a slippery slope. You could have some advertiser saying it doesn't like LGBT people, since that's against its morals, so it doesn't want its ads showing up on sites that LGBT people visit. You have to ask who gets to decide which websites, and which content on those websites, are offensive to a particular group. It would be like a company that owns a billboard saying it doesn't want LGBT people seeing its ads, or better yet, someone taking a magazine with its ads into a place that caters to the LGBT community.

    Google is going to have issues here, and civil liberties groups will be suing it, since ads targeted to the individual are now going to be censored, filtered out because someone at Google decided the content on the website is offensive.

    edited March 2017
  • Reply 6 of 39
    maestro64 said:

    Not sure how Google fixes this, ...

    It starts with Google (and Facebook and Twitter) acknowledging that their platforms have become sewers of hatred, bullying, and intimidation.
    They can no longer throw up their hands and claim "Oh, it's just science and algorithms. We can't do anything about it."
    These are ostensibly smart people, and it's time they stop polluting the world in the name of ad hits.

  • Reply 7 of 39
    maestro64 Posts: 5,043 member
    stickista said:
    maestro64 said:

    Not sure how Google fixes this, ...

    It starts with Google (and Facebook and Twitter) acknowledging that their platforms have become sewers of hatred, bullying, and intimidation.
    They can no longer throw up their hands and claim "Oh, it's just science and algorithms. We can't do anything about it."
    These are ostensibly smart people, and it's time they stop polluting the world in the name of ad hits.


    You are correct, it is going to take real people doing the work, not algorithms. It is not their platforms that promote hate, it is people that do it, and the internet as a whole. As I said, Google will have to figure out which individuals not to target with ads, since the ads follow the person, not the website.
  • Reply 8 of 39
    entropys said:
    Superficially sounds great, but I am concerned about who gets to decide what is offensive hate material and what isn't.
    It sounds like the advertisers that pulled their ads have decided.
  • Reply 9 of 39
    tundraboy Posts: 1,911 member
    What was that again about AI going to make the world a better place to live in?
  • Reply 10 of 39
    maestro64 said:
    stickista said:
    maestro64 said:

    Not sure how Google fixes this, ...

    It starts with Google (and Facebook and Twitter) acknowledging that their platforms have become sewers of hatred, bullying, and intimidation.
    They can no longer throw up their hands and claim "Oh, it's just science and algorithms. We can't do anything about it."
    These are ostensibly smart people, and it's time they stop polluting the world in the name of ad hits.


    You are correct, it is going to take real people doing the work, not algorithms. It is not their platforms that promote hate, it is people that do it, and the internet as a whole. As I said, Google will have to figure out which individuals not to target with ads, since the ads follow the person, not the website.
    The platform does not promote hate, but it amplifies its ability to metastasize. 
    Yes, human 'curation' (for lack of a better word... or maybe 'common sense') is going to be needed.
    But again, it starts with Zuck et al acknowledging that they're now the biggest amplifier of the disease, and they need to start taking responsibility.
  • Reply 11 of 39
    jdgaz Posts: 407 member
    I don't search using google. Duck Duck Go works just fine.
  • Reply 12 of 39
    maestro64 said:

    Not sure how Google fixes this. The issue is that Google targets ads to the individual, and it can't stop individuals from visiting sites these companies find offensive.

    Think about this: the only reason an AT&T ad showed up on a hate website is that an individual AT&T wants Google to target, and whose business AT&T wants, also went to a hate website. AT&T can't have it both ways: ads targeted at customers it wants money from, some of whom are also into hate content.

    That's quite obviously not true. While Google does track users and shows them the same ads from other sites it knows they visited, it also simply fills all potential ad space programmatically from the inventory of ads it has sold. So it's simply not true that all the ads you see are based on Google's surveillance of what you visit. That's obvious to anyone who has ever seen an ad for a product they haven't ever used or would never have any interest in. Just because an idea occurs to you as "possible" doesn't make it fact or even true.
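    The reply above describes two mechanisms working together: interest-based targeting that follows the user, and programmatic filling of whatever ad slots remain. Below is a minimal, hypothetical sketch of how a placement-level brand-safety exclusion could sit alongside user targeting; it is purely illustrative, and every name and data structure in it is invented rather than taken from Google's actual ad systems.

```python
# Hypothetical illustration only: a toy ad-selection step showing that an
# advertiser's content exclusions can be enforced per placement (the page or
# video), independently of any interest profile attached to the viewer.
from dataclasses import dataclass, field

@dataclass
class Ad:
    advertiser: str
    interest_tags: set                                      # audiences the advertiser targets
    excluded_categories: set = field(default_factory=set)   # content the brand refuses to run against

def select_ad(inventory, user_interests, placement_categories):
    # 1) Brand safety: drop ads whose advertiser excludes this placement's
    #    content categories, no matter who the viewer is.
    eligible = [ad for ad in inventory
                if not (ad.excluded_categories & placement_categories)]
    # 2) Prefer interest-matched ads; otherwise fill from the remaining
    #    inventory, which is why unrelated ads still appear on pages you visit.
    targeted = [ad for ad in eligible if ad.interest_tags & user_interests]
    pool = targeted or eligible
    return pool[0] if pool else None

inventory = [
    Ad("CarrierCo", {"smartphones"}, excluded_categories={"extremist", "hate"}),
    Ad("GenericCo", {"gardening"}),
]
# The carrier's ad is withheld from an extremist placement even though the
# viewer matches its audience; a generic ad fills the slot instead.
print(select_ad(inventory, {"smartphones"}, {"extremist"}).advertiser)  # GenericCo
print(select_ad(inventory, {"smartphones"}, {"news"}).advertiser)       # CarrierCo
```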
  • Reply 13 of 39
    What took so long to address something that should have been addressed years ago...?
  • Reply 14 of 39
    "… the offensive and extremist content that saturates YouTube." That would be liberals, the leftists and democrats - but Google supports that ideology 100%. So that content stays on Youtube. Google seems to support the vast amount of Muslim terrorism also - that content stays on Youtube too. Google has been in bed with the federal government for many years. Mostly democrats - but a lot republicans also. The Google corporate motto of "do no evil" has always been a sham. It's all they do. As long as it harms America.
  • Reply 15 of 39
    stickista said:

    It starts with Google (and Facebook and Twitter) acknowledging that their platforms have become sewers of hatred, bullying, and intimidation.
    They can no longer throw up their hands and claim "Oh, it's just science and algorithms. We can't do anything about it."
    These are ostensibly smart people, and it's time they stop polluting the world in the name of ad hits.

    The problem is ... who decides what qualifies as 'hate' or 'bullying', etc. Obviously there is some hateful material out there, but it won't stop there. When someone feels offended by what someone else believes, they are quick to declare that the person is hateful. That way of thinking will most certainly become a part of the issue in question.

    Discrimination and censorship are masquerading as 'equality' and 'tolerance' and the masses are buying it. This is an unfortunate time for all of us, because some see it and can't seem to do anything about it, while others who don't see it are otherwise intelligent but have blinded themselves, and so the actual problem grows; that is, both sides are fatally intolerant and those who know how to genuinely love others are rare.
    edited March 2017
  • Reply 16 of 39
    Regarding science and algorithms, one must remember that all of it comes from human input. The data and methodology are human-derived. So, if Google says its programming has errors, what it's saying is that its programmers and analysts aren't doing their jobs correctly. What Google really needs is an independent organization overseeing the science and algorithms it employs, one not controlled by Google corporate. To me, some heads should roll.
  • Reply 17 of 39
    john.b Posts: 2,742 member
    The scandal is that the people in the Googleplex didn't give one rat's arse about terror and hate content on YT until their big advertisers started to leave en masse.
    edited March 2017
  • Reply 18 of 39
    I'm sure Google will be very fair and even-handed as to what gets penalized. No problems here.
  • Reply 19 of 39
    anome Posts: 1,545 member
    entropys said:
    Superficially sounds great, but I am concerned about who gets to decide what is offensive hate material and what isn't. Will it be on the basis of Twitter campaigns? FFS? Somehow or other that would turn out to be bad.
    On the other hand, imagine if a Trumpbot got to decide what was hate content. Daring Fireball* would probably fit.

    Overall a bad idea in the name of good.

    *Speaking of Daring Fireball, Gruber made the obvious point the other day that this is Google responding to its customers, the advertisers.
    Until they started pulling their ads, Google never bothered.
    georgie01 said:
    stickista said:

    It starts with Google (and Facebook and Twitter) acknowledging that their platforms have become sewers of hatred, bullying, and intimidation.
    They can no longer throw up their hands and claim "Oh, it's just science and algorithms. We can't do anything about it."
    These are ostensibly smart people, and it's time they stop polluting the world in the name of ad hits.

    The problem is ... who decides what qualifies as 'hate' or 'bullying', etc. Obviously there is some hateful material out there, but it won't stop there. When someone feels offended by what someone else believes, they are quick to declare that the person is hateful. That way of thinking will most certainly become a part of the issue in question.

    Discrimination and censorship are masquerading as 'equality' and 'tolerance' and the masses are buying it. This is an unfortunate time for all of us, because some see it and can't seem to do anything about it, while others who don't see it are otherwise intelligent but have blinded themselves, and so the actual problem grows; that is, both sides are fatally intolerant and those who know how to genuinely love others are rare.

    The obvious answer to these questions is: the advertiser. I.e., give Verizon, AT&T or whoever the choice of where their ads show. It's the same with ads on TV. If advertisers decide they don't want to be associated with a program, they pull their advertising from it.

    How they decide which programs/web pages they don't want to be associated with is up to them. It may be that the default position is to advertise with everyone, until they start to get negative feedback, and then pull their ads accordingly. Which is, after all, the approach that Google has taken here: leave everything alone until the paying customers complain.

  • Reply 20 of 39
    I for one am against censorship, generally. As mentioned previously, one person's acceptable content is another's unacceptable trash. As despicable as some content is on YouTube, at the least, if a person views such content, per age limitations, then many times the viewing of that content will further isolate and detract from the worth of that particular cause or extremist tendency as acceptable in the viewer's world view. I think it is important to mention that most people who view such things are repulsed by it, with no titillation experienced. There are valid reasons for viewing the content, such as research, accuracy, tracking the current advancement of the particular repulsive movement, and a general interest in what makes the other guy tick and how to counter such distasteful thought and action. It also gives law enforcement a crack at the truly disturbing content that needs to go.

    As far as advertisers go, that is their problem and Google's. Free speech is just another facet of free viewing as a right. That is something, I think, that Google understands, beyond its mad dash for advertising profits.
    edited March 2017