Superficially this sounds great, but I'm concerned about who gets to decide what is offensive hate material and what isn't. Will it be on the basis of Twitter campaigns? FFS. Somehow or other that would turn out badly. On the other hand, imagine if a Trumpbot got to decide what was hate content. Daring Fireball* would probably qualify.
Overall a bad idea in the name of good.
*Speaking of Daring Fireball, Gruber made the obvious point the other day that this is Google responding to its customers, the advertisers. Until they started pulling their ads, Google never bothered.
It's simple. If you violate the TOS, then you violate it. And they have the right to decide because they are content providers. You don't get to decide.
"… the offensive and extremist content that saturates YouTube."
That would be the liberals, the leftists and the Democrats, but Google supports that ideology 100%, so that content stays on YouTube. Google seems to support the vast amount of Muslim terrorism as well; that content stays on YouTube too.
Google has been in bed with the federal government for many years. Mostly Democrats, but a lot of Republicans also. The Google corporate motto of "don't be evil" has always been a sham. It's all they do, as long as it harms America.
Didn't Google join Apple in attacking Trump's plan to temporarily put a halt to the terrorists from the seven countries? The tech companies will do anything to support the liberals letting illegal aliens and terrorists into the country in return for more H-1B visas.
1. The customer is the advertisers; we are the product.
2. So after months and months of lip service regarding fake news and awful stuff on YouTube and linked in Google, nothing was done.
3. However, when those customers start moving their money away, NOW Google or Alphabet or whatever their name is takes action.
An object lesson for all of us.
97% of Google's revenue comes from ads. Ads are all they really care about. Free software, self-driving cars, etc. are all just public relations stunts.
Google investors and apologists have argued that AI will solve a multitude of problems.
AI is an oxymoron. Intelligence is not artificial. Machine learning is NOT intelligence. Computers don't understand humor and they don't understand sarcasm. I don't care if AlphaGo bested the human champion at the game. To appropriately filter content, humans still need to curate the data. Machine algorithms do NOT substitute for human judgment. The videos in question are not in any way borderline; they clearly cross the line with respect to hate and intolerance.
The real problem is that automated ad placement results in company ads being shown on jihadist or anti-Semitic videos. This isn't an issue of censorship; no one is asking Google to remove the videos. They are, however, appropriately demanding that Google build some safeguards into the process, so that there is minimal risk of an ad being placed on extremist content, and, the real kicker, so that ad revenues aren't being paid out to groups that espouse hatred and violence.
Google has paid out large sums to such groups, and the source of that revenue is advertising, which means the likes of AT&T, Verizon, Barclays, etc. have contributed.
https://www.searchenginejournal.com/google-adwords-sponsoring-orkut-terror-jihad-groups/4094/#readfull
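Purely as a minimal sketch of the kind of safeguard being demanded (not Google's actual system; the names Video, Advertiser and EXTREMIST_LABELS are invented for illustration): check a video's content labels against an advertiser's exclusions before placing the ad, and never share revenue with a flagged channel.

```python
# Hypothetical sketch only: names and labels are made up for illustration.
from dataclasses import dataclass, field

EXTREMIST_LABELS = {"extremist", "hate_speech", "terror_recruitment"}

@dataclass
class Video:
    channel_id: str
    content_labels: set            # labels from classifiers and/or human review

@dataclass
class Advertiser:
    name: str
    excluded_labels: set = field(default_factory=lambda: set(EXTREMIST_LABELS))

def can_place_ad(video: Video, advertiser: Advertiser) -> bool:
    """Refuse placement if the video carries any label the advertiser excludes."""
    return not (video.content_labels & advertiser.excluded_labels)

def eligible_for_payout(video: Video) -> bool:
    """Refuse revenue sharing entirely for channels whose content is flagged."""
    return not (video.content_labels & EXTREMIST_LABELS)

video = Video(channel_id="UC123", content_labels={"news", "extremist"})
att = Advertiser(name="AT&T")
print(can_place_ad(video, att))      # False: no ad served on this video
print(eligible_for_payout(video))    # False: no ad revenue paid to the channel
```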
Isn't this just laziness on the advertisers' part, by not actually doing the work of specifying what content they want their marketing to appear alongside, and instead treating it too much as blanket advertising, sponsoring any video?
I thought the whole point of this form of advertising is that you can target better than with traditional advertising, but a lot of these companies don't seem to get that; they just pour money into the system and then complain when their advertising is linked to content they don't like.
I watch a lot of YouTube, and it's a certain type of content, yet I see adverts for stuff that is so irrelevant to me it's silly. I listen to a lot of metal and rock and I get a lot of adverts for pop music. Surely that's just someone doing the marketing badly, assuming anyone watching music videos on YouTube needs to see their advertisement...
If this is not the case, then this is simply the problem YouTube needs to solve: better tools for targeting advertising, so advertisements don't get linked to the wrong content. That would also increase the value of their ad business, so it seems silly they wouldn't do this.
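For what it's worth, here is a rough sketch of what "doing the work" could look like, assuming a hypothetical setup where the advertiser declares the content categories it wants; the category names and function are invented for illustration, not any real ad API.

```python
# Hypothetical sketch: the advertiser declares wanted categories, and the ad only
# matches channels whose content overlaps that list. Names are illustrative only.
def placement_matches(channel_categories: set, wanted_categories: set) -> bool:
    """Place the ad only where the channel's content overlaps what was requested."""
    return bool(channel_categories & wanted_categories)

campaign_wants = {"metal", "rock"}                                 # advertiser's choice
print(placement_matches({"metal", "live_music"}, campaign_wants))  # True
print(placement_matches({"pop", "celebrity"}, campaign_wants))     # False: no pop ads here
```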
I'm not sure if Google solves this by blocking the ads on the offensive videos or by blocking the videos themselves. And what happens to an American-made video such as a documentary on ISIS, or a video arguing that Google shouldn't be blocking ISIS/ISIL videos? If the algorithms are based on keywords, they might block such videos. That would be about as bad as blocking access to videos about "breast cancer" because they contain the word "breast". Personally, I wouldn't avoid a car company just because its ads appeared on an offensive web page. On the other hand, a car company has every right to refuse to use Google for ads for any reason it cares to have. Free speech; free market; free choice. Google is stuck between Iraq and a hard place here.
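To make the keyword worry concrete, here is a toy illustration (not Google's actual system; the keyword list and titles are invented): a naive keyword filter flags a documentary or a health video just as readily as real extremist content, because it cannot see context.

```python
# Toy example of keyword-filter false positives; keywords and titles are invented.
BLOCKED_KEYWORDS = {"isis", "jihad", "breast"}

def naive_block(title: str) -> bool:
    """Block anything whose title contains a blocked keyword, regardless of context."""
    lowered = title.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

titles = [
    "Inside ISIS: a frontline documentary",   # journalism
    "Breast cancer screening explained",      # medicine
    "ISIS recruitment footage",               # the only one anyone wants blocked
]

for title in titles:
    print(naive_block(title), "-", title)
# All three print True: keyword matching alone can't tell a documentary or a
# health video from propaganda, which is exactly the "breast cancer" problem.
```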
Every time I scan through YouTube comments, I'm left feeling very sorry to be associated with the human race. Absolutely disgusting and childish behaviour on there.
I'm not sure how Google fixes this. The issue is that Google targets ads to the individual, and they cannot stop individuals from going to sites these companies find offensive. I see ads on AppleInsider based on things I looked at on Amazon; if I go to some other website I see the same ads, since they are being targeted at me. Google will have to determine two things: first, whether the individual likes going to websites these advertisers do not like, and second, whether the website has content the advertisers do not like. The only way to stop this is to make sure Google knows which people are doing objectionable things, as defined by the advertisers, and marks them as not-to-advertise; otherwise ad content may show up in the wrong places. I hope Google does this, because then I could go to some of these sites to get marked and I would not see any more Google ads; that would be a great day. Google cannot just mark the websites, since they change all the time; they have to mark the individual as well.
Think about this: the only reason an AT&T ad showed up on a hate website is that an individual AT&T wants Google to target, and wants business from, also went to a hate website. AT&T cannot have it both ways: ads being targeted at a customer they want money from who is also into hate content.
This is going down a slippery slope. You could have some advertiser saying they do not like LGBT people, since it is against their morals, so they do not want their ads showing up on sites which LGBT people visit. You have to ask who gets to decide which websites, and which content on those websites, are offensive to a particular group. It would be like telling a company that owns a billboard that they do not want LGBT people seeing their ads, or better yet, taking a magazine with their ads into a place which caters to the LGBT community.
Google is going to have issues here, and civil liberties groups will be suing them, since ads which are targeted at the individual are now going to be filtered out because someone at Google decided the content on the website is offensive.
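As a sketch of the two determinations described above (made-up names throughout; this is not how Google actually works), the logic amounts to profiling both the page and the person, and the second check is exactly the part that means marking individuals rather than just websites.

```python
# Hypothetical sketch: both the page and the user's history get checked against
# the advertiser's exclusions. Labels and function names are invented.
def should_serve_targeted_ad(advertiser_exclusions: set,
                             page_labels: set,
                             user_history_labels: set) -> bool:
    # Check 1: does the page itself carry content the advertiser objects to?
    if page_labels & advertiser_exclusions:
        return False
    # Check 2: has this user been seen on sites the advertiser objects to?
    # This is the step that requires marking people, not just websites.
    if user_history_labels & advertiser_exclusions:
        return False
    return True

exclusions = {"hate", "extremist"}
print(should_serve_targeted_ad(exclusions, {"tech_news"}, {"sports"}))  # True
print(should_serve_targeted_ad(exclusions, {"tech_news"}, {"hate"}))    # False
```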
Google is already putting LGBT content in the restricted category, even when it makes little sense.
The thing is, Google should not be allowing this kind of content in the first place. Comparing jihadi videos (and the like) to LGBT content is offensive to me, by the way; that's one hell of a false equivalence.
The thing is, those groups use Google to make money and Google is making money off them too. I'm pretty sure LGBT people are not advocating murders... that's another big-ass difference, hey.
Well, the advertiser DECIDES WHO THE FRACK SEES ITS MESSAGES AND WHERE THEY ARE DISPLAYED. That's the whole goddamn point of Google tracking the hell out of users; if there is no way for advertisers to restrict where their advertising goes, that's one shitty advertising channel!
Advertisers pull advertising from shows putting out offensive material all the time, there is nothing new here.
I, for one, am against censorship, generally. As mentioned previously, one person's acceptable content is another's unacceptable trash. As despicable as some content on YouTube is, if a person views such content, subject to age limitations, then many times viewing it will further isolate that particular cause or extremist tendency and detract from its worth in the viewer's world view. I think it is important to mention that most people who view such things are repulsed by it; no titillation is experienced. There are valid reasons for viewing the content, such as research, accuracy, keeping up with the current advancement of the particular repulsive movement, and a general interest in what makes the other guy tick and how to counter such distasteful thought and action. It also gives law enforcement a crack at the truly disturbing content that needs to go.
As far as advertisers go, that is their problem and Google's. Free speech is just another facet of free viewing as a right. That is something, I think, that Google understands, beyond their mad dash for advertising profits.
Google understands NOTHING; they seemingly have contempt for the advertisers that pay their bills.
The channel used to reach users impacts how the brand is perceived, and as such it is part of what Google should be selling to advertisers. They seem to not give a shit about that, and that needs to change.
"This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories."
Surely they should be removing the content? That sounds like they're OK with people posting racist, anti-Semitic, homophobic, etc. videos, as long as Google removes the adverts on them. Wow, Google, wow.
I'm not sure how Google fixes this. The issue is that Google targets ads to the individual, and they cannot stop individuals from going to sites these companies find offensive.
Think about this: the only reason an AT&T ad showed up on a hate website is that an individual AT&T wants Google to target, and wants business from, also went to a hate website. AT&T cannot have it both ways: ads being targeted at a customer they want money from who is also into hate content.
That's quite obviously not true. While Google does track users and shows them the same ads from other sites it knows they visited, it also simply fills all potential ad space programmatically from the inventory of ads it has sold.
So it's simply not true that all the ads you see are based on Google's surveillance of what you visit. That's obvious to anyone who has ever seen an ad for a product they haven't ever used or would never have any interest in.
Just because an idea occurs to you as "possible" doesn't make it fact or even true.
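A simplified sketch of this point, with invented campaign names and matching rule (not Google's real system): an ad slot gets filled from whatever inventory is available, so a viewer with no matching targeted campaign still sees something.

```python
# Hypothetical sketch of targeted placement with untargeted fill; all names invented.
import random

TARGETED_CAMPAIGNS = {               # advertiser -> interest segment it targets
    "MetalFest Tickets": "metal",
    "Pop Star Album": "pop",
}
RUN_OF_NETWORK = ["Generic Car Ad", "Generic Soda Ad"]   # untargeted fill inventory

def fill_ad_slot(user_interests: set) -> str:
    # Prefer a campaign that targets one of this user's known interests...
    matches = [ad for ad, segment in TARGETED_CAMPAIGNS.items()
               if segment in user_interests]
    if matches:
        return random.choice(matches)
    # ...otherwise fall back to untargeted fill, which is why you can see ads
    # for products you have never shown any interest in.
    return random.choice(RUN_OF_NETWORK)

print(fill_ad_slot({"metal"}))       # a targeted ad
print(fill_ad_slot({"knitting"}))    # generic fill, not based on tracking
```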
I did not say Google does not have non-targeted ads that they drop into open ad spots on a website, so yes, I see those as well. I have tested this: my wife and I both go to the same website, and neither of us has a Google account, so Google only tracks us based on location or IP address and the cookies they drop on your computer. When we both go to the same website at the same time, we get totally different ads; most are specifically targeted at her and some are targeted at me, and I can tell them apart since Google is really good at knowing what I looked at or bought recently. However, the issue is not those non-targeted ads, since those companies are not paying to get their ads in front of specific people. In this case companies will also have to say which customers they do not want their ads in front of, as well as which websites they deem unacceptable.
Just because one group thinks something is hateful does not mean everyone thinks it is. I am making a general statement here, not a specific one about terrorism.
The new ad (lol) campaign for AdBlock (and uBlock, and...).