Google's Pichai denies any political bias in search results during lengthy Congressional t...

124 Comments

  • Reply 61 of 86
    Mike Wuerthele Posts: 6,861, administrator
    dewme said:
    I don't believe that there is purposeful or intentional bias in the AI/predictive algorithms in Google's search engine, but there is definitely plenty of unintentional bias expressed in the search results because of the way the source data and metadata is statistically/algorithmically processed. It's simply an artifact or side-effect of the processing in the same way that DSP processing of complex audio data can produce unwanted artifacts that need to be filtered out in post-processing. In fact, evidence of this is apparent in Apple's word prediction algorithm in Messages. Try these on your iPhone:

    If you type "The doctor" (or nurse if you prefer) followed by a space Apple will produce emojis for both a male doctor and a female doctor. A few years ago in older versions of iOS (or android) the same exercise would very likely have provided the word "he" as a predicted next word for the doctor case and "she" for the nurse case, strictly based on probability of association and prevailing demographics that were baked into the back end data used to formulate the prediction. I strongly suspect that Apple is actually performing some sort of post-processing to remove the unintended bias that emerged from the raw prediction results.

    A more interesting bit of post-processing intervention evidence, in Messages again, is to start typing the phrase "James is a businessman" but only type up to the letter "i" in businessman to trigger the next word prediction. When I type "James is a busi" on my iPhone Xs Max with the latest iOS beta the next word prediction choices I get are "business" and "businesswoman." From what I can tell, using any traditionally male name (including Jesus) does not alter the next word results that include "businesswoman." One could infer that the post-processing is intervening in a way to suppress the inherent bias in the raw results, and doing so in a rather heavy-handed way.

    This raises the question: what should the presenters of data that emerges from algorithms with an impression of bias do? Is it better to simply present what the algorithms spit out or to intervene to reduce the appearance of bias? Put another way, should the data scientists forcefully break the statistical association between the words "Trump" and "idiot" by scrubbing the source data, or should they filter the results so the "Trump" and "idiot" association does not show up in the results? There is no clear answer, because either way you try to "fix it" you're going to be accused of manipulation. The algorithms don't care; they are just chugging away on data and spitting out results.

    The whole notion that Google or anyone else can manipulate data, data processing, or the results of data processing in a way that would not be trivially easy to detect by a statistician or data scientist is absurd. There are times when people simply have to grow a spine about things they don't like, apply some logic and judgement, and be a lot more aware of their own personal biases that impact their view of the world. Sorry, but you can't blame this one on Google. 
    There is intentional bias and it does not matter whether you believe it or not. 
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/
    That's not what the report or the USA Today story says. 

    The report says that there is bias based on user behavior and use patterns (the aforementioned "idiot" returns), but it does not come from Google skewing the code or the database, and it is certainly not "intentional" on the company's part.

    Source: the report itself https://www.allsides.com/sites/default/files/Google-news-bias-report-by-AllSides.pdf specifically, pages 7, 16, and 21.
    edited December 2018 GeorgeBMac
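    The post-processing intervention dewme describes, taking raw next-word predictions and forcing a gendered completion's counterpart into the suggestion list, could be sketched roughly as below. This is a hypothetical illustration, not Apple's actual keyboard model; the word pairs and candidate scores are made up:

    ```python
    # Hypothetical sketch: filter raw next-word predictions so a gendered
    # completion is never offered without its counterpart. The candidate
    # scores are invented; a real keyboard model would derive them from context.

    GENDER_PAIRS = {"businessman": "businesswoman", "businesswoman": "businessman"}

    def postprocess(candidates, top_k=3):
        """candidates: list of (word, score) pairs from the raw language model."""
        ranked = sorted(candidates, key=lambda ws: ws[1], reverse=True)
        out = []
        for word, _ in ranked:
            if word not in out:
                out.append(word)
            pair = GENDER_PAIRS.get(word)
            if pair and pair not in out:
                out.append(pair)  # force the counterpart in, even if low-scored
        return out[:top_k]

    raw = [("businessman", 0.62), ("business", 0.30), ("busy", 0.05),
           ("businesswoman", 0.02)]
    print(postprocess(raw))  # ['businessman', 'businesswoman', 'business']
    ```

    A filter like this would explain the observed behavior: "businesswoman" appears among the suggestions even for traditionally male names, because the counterpart is injected regardless of its raw probability.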
  • Reply 62 of 86
    dewme Posts: 5,362, member
    dewme said:
    I don't believe that there is purposeful or intentional bias in the AI/predictive algorithms in Google's search engine, but there is definitely plenty of unintentional bias expressed in the search results because of the way the source data and metadata is statistically/algorithmically processed. It's simply an artifact or side-effect of the processing in the same way that DSP processing of complex audio data can produce unwanted artifacts that need to be filtered out in post-processing. In fact, evidence of this is apparent in Apple's word prediction algorithm in Messages. Try these on your iPhone:

    If you type "The doctor" (or nurse if you prefer) followed by a space Apple will produce emojis for both a male doctor and a female doctor. A few years ago in older versions of iOS (or android) the same exercise would very likely have provided the word "he" as a predicted next word for the doctor case and "she" for the nurse case, strictly based on probability of association and prevailing demographics that were baked into the back end data used to formulate the prediction. I strongly suspect that Apple is actually performing some sort of post-processing to remove the unintended bias that emerged from the raw prediction results.

    A more interesting bit of post-processing intervention evidence, in Messages again, is to start typing the phrase "James is a businessman" but only type up to the letter "i" in businessman to trigger the next word prediction. When I type "James is a busi" on my iPhone Xs Max with the latest iOS beta the next word prediction choices I get are "business" and "businesswoman." From what I can tell, using any traditionally male name (including Jesus) does not alter the next word results that include "businesswoman." One could infer that the post-processing is intervening in a way to suppress the inherent bias in the raw results, and doing so in a rather heavy-handed way.

    This raises the question: what should the presenters of data that emerges from algorithms with an impression of bias do? Is it better to simply present what the algorithms spit out or to intervene to reduce the appearance of bias? Put another way, should the data scientists forcefully break the statistical association between the words "Trump" and "idiot" by scrubbing the source data, or should they filter the results so the "Trump" and "idiot" association does not show up in the results? There is no clear answer, because either way you try to "fix it" you're going to be accused of manipulation. The algorithms don't care; they are just chugging away on data and spitting out results.

    The whole notion that Google or anyone else can manipulate data, data processing, or the results of data processing in a way that would not be trivially easy to detect by a statistician or data scientist is absurd. There are times when people simply have to grow a spine about things they don't like, apply some logic and judgement, and be a lot more aware of their own personal biases that impact their view of the world. Sorry, but you can't blame this one on Google. 
    There is intentional bias and it does not matter whether you believe it or not. 
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/
    That's not what the report or the USA Today story says. 

    The report says that there is bias based on user behavior and use patterns (the aforementioned "idiot" returns), but it does not come from Google skewing the code or the database, and it is certainly not "intentional" on the company's part.

    Source: the report itself https://www.allsides.com/sites/default/files/Google-news-bias-report-by-AllSides.pdf specifically, pages 7, 16, and 21.
    Thank you, Mike. And thank you, Anton, for demonstrating how personally held biases impact the interpretation of data. 
  • Reply 63 of 86
    gatorguy Posts: 24,213, member
    Only one Trump image/photo in that one (in Catholic church garb), followed up of course by dozens of article links to President Trump if you're being honest. 
    https://duckduckgo.com/?q=idiot+image&t=h_&ia=news

    FYI: DuckDuckGo still uses Google. 

    Update: I think I am wrong on that one; 
    DDG sources Bing rather than Google.
  • Reply 64 of 86
    gatorguy Posts: 24,213, member
    dewme said:
    I don't believe that there is purposeful or intentional bias in the AI/predictive algorithms in Google's search engine, but there is definitely plenty of unintentional bias expressed in the search results because of the way the source data and metadata is statistically/algorithmically processed. It's simply an artifact or side-effect of the processing in the same way that DSP processing of complex audio data can produce unwanted artifacts that need to be filtered out in post-processing. In fact, evidence of this is apparent in Apple's word prediction algorithm in Messages. Try these on your iPhone:

    If you type "The doctor" (or nurse if you prefer) followed by a space Apple will produce emojis for both a male doctor and a female doctor. A few years ago in older versions of iOS (or android) the same exercise would very likely have provided the word "he" as a predicted next word for the doctor case and "she" for the nurse case, strictly based on probability of association and prevailing demographics that were baked into the back end data used to formulate the prediction. I strongly suspect that Apple is actually performing some sort of post-processing to remove the unintended bias that emerged from the raw prediction results.

    A more interesting bit of post-processing intervention evidence, in Messages again, is to start typing the phrase "James is a businessman" but only type up to the letter "i" in businessman to trigger the next word prediction. When I type "James is a busi" on my iPhone Xs Max with the latest iOS beta the next word prediction choices I get are "business" and "businesswoman." From what I can tell, using any traditionally male name (including Jesus) does not alter the next word results that include "businesswoman." One could infer that the post-processing is intervening in a way to suppress the inherent bias in the raw results, and doing so in a rather heavy-handed way.

    This raises the question: what should the presenters of data that emerges from algorithms with an impression of bias do? Is it better to simply present what the algorithms spit out or to intervene to reduce the appearance of bias? Put another way, should the data scientists forcefully break the statistical association between the words "Trump" and "idiot" by scrubbing the source data, or should they filter the results so the "Trump" and "idiot" association does not show up in the results? There is no clear answer, because either way you try to "fix it" you're going to be accused of manipulation. The algorithms don't care; they are just chugging away on data and spitting out results.

    The whole notion that Google or anyone else can manipulate data, data processing, or the results of data processing in a way that would not be trivially easy to detect by a statistician or data scientist is absurd. There are times when people simply have to grow a spine about things they don't like, apply some logic and judgement, and be a lot more aware of their own personal biases that impact their view of the world. Sorry, but you can't blame this one on Google. 
    There is intentional bias and it does not matter whether you believe it or not. 
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/
    That's not what the report or the USA Today story says. 

    The report says that there is bias based on user behavior and use patterns (the aforementioned "idiot" returns), but it does not come from Google skewing the code or the database, and it is certainly not "intentional" on the company's part.

    Source: the report itself https://www.allsides.com/sites/default/files/Google-news-bias-report-by-AllSides.pdf specifically, pages 7, 16, and 21.
    I was about to say the same. Even a cursory reading of the article without following the source links would have told the OP that.

    Quote: "..the study found no evidence Google had intentionally altered search results for the purpose of “suppressing voices of conservatives,” as President Donald Trump has previously stated."

  • Reply 65 of 86
    gatorguy Posts: 24,213, member
    gatorguy said:
    gatorguy said:
    gatorguy said:
    curt12 said:
    maestro64 said:
    The example they used in the hearing: search "idiot" and get a Trump picture. End users are all tagging Trump's photos with the word "idiot," so when Google crawls around the web, it finds a bunch of websites with Trump's picture tagged with "idiot." So now Trump's picture is associated with "idiot." 

    If I search "idiot" on Bing or Yahoo, Trump is also among the top image hits. So multiple major search engines agree on the relevance of that association.
    FWIW so does DuckDuckGo. In fact at DDG nearly every first page result includes Trump. Evidence that search is driven by user interaction more than anything else no matter what search engine is used. 

    Russian search engine Yandex? Yup lots of Trump stories. The uncensored Gibiru Search? Yup you got it, more Trump. Even the Czech Republic's Seznam will give the same results. It's not a Google thing. 

     
    Another nice strawman you got yourself here, eh, gator? How do you do that? You must be a walking strawman factory.
    It is not that Google eliminates the results with certain keywords. It is the emotional tone of the articles that you should be looking at, if you were in any way honest about your small “research” into the matter. Instead, you pretended not to understand what others have said here, or simply don't care what the actual problem is. Propaganda is done not via full censorship of the information. It is rather a different approach, one that is a bit more subtle (subtle for people like you). For example, Harvard, which is in no way a right-wing place, had this to say about the tone of the coverage of Trump https://www.chicagotribune.com/news/columnists/kass/ct-trump-media-coverage-harvard-kass-0521-20170519-column.html

    So, if you show people more of the left-leaning MSM in search results, that pretty much guarantees that the coverage will be 80-90% negative for that particular topic. That is the whole point of propaganda: when you do it in a more subtle way (for the average Joe), you still achieve the goal of brainwashing the masses. How do you think Hitler's propaganda or USSR propaganda was so successful? It was successful for the same reason - you don't censor everything fully, but you put a twist on every piece of info so as to create a coherent (yet deceitful) picture. So, the MSM not doing their job of being journalists (instead writing hit pieces), plus Google providing links more to those types of articles, guarantees the coverage will be VERY biased (look up that study) for the majority of the population. Of course, what you DON'T WANT TO HAVE (when you do propaganda) is a whole bunch of alternative sources that will disprove you. So, in an open society you start calling those sources Nazis and banning them from your platforms for the masses, so as to eliminate the threat.

    This is exactly what has been happening in the last year or so. Multiple platforms banning the same person within 24 hrs is not a coincidence. 
    Ah, so what you're saying is ALL the world's search engines are in cahoots, maybe using some backdoor channels where they secretly discuss what to promote and what to hide. Did I get that right? 
    1. You don't need all search engines to be “in cahoots”. You just need Google and the Facebook news aggregator, because about 70-80% of the population use both to get their news. You don't need more to achieve the goal of controlling the masses, because you already got them.
    2. If tech companies have coordinated the purge of alternative sources (as in, removing a person from various platforms of different tech firms within 24 hrs), why can't they coordinate their algorithms to do the same? What makes you think it is impossible? What justification do you have for even trying to fool yourself into thinking it is not possible? 

    Again, you seem not to think your responses through, do you...
    Of course nearly anything is possible. That doesn't prove the thing you imagine happened actually happened. At this point it has become obvious, to me at least, that the only evidence you have to offer is that you think it to be so. There's nothing else you have, is there? If not, then the discussion is at an end, since I suspect this will rapidly devolve into a series of he said/she said posts sprinkled with ad-homs. I'm not interested if that's what it's coming down to. 
    1. Google consistently favours left-leaning media
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/

    2. Left-leaning media coverage of Trump was 8:1 to 11:1 negative.
    https://www.chicagotribune.com/news/columnists/kass/ct-trump-media-coverage-harvard-kass-0521-20170519-column.html

    So, Google showing more left-leaning coverage, plus the knowledge that the left-leaning MSM lean HEAVILY toward negative news associated with Trump, clearly shows that you are wrong here, gator.
    Stubborn facts and reality are a bitch, eh gator?

    In order for you to cast doubt on what I said, you have to disprove 1 and 2. And you can't, because now you have to find actual facts... which would be impossible to do, because those reports already contain facts indicating that you are wrong.

    As for knowing whether Google has its algorithms modified to do that or not - the proof is in the pudding, i.e. the results of how those algorithms perform and what they present. It is a black-box problem, and you don't need to know what is inside to know the high-level laws it operates on, when you have results like that.
    Did you even read your own links thoroughly before offering them as proof of Google search bias? I think not. Read them again, and more carefully. 
  • Reply 66 of 86
    GeorgeBMac Posts: 11,421, member
    In today's post-truth world, anything you don't agree with is "biased" (or FakeNews):

    -- "Hitler = evil"    -- Today that is considered fact by most, but not all.  To them it is biased.
    -- "Trump = idiot" -- Today that is considered fact by most, but not all.  To them, it is biased.

    It is why Time's Persons of the Year are tellers of the truth.  They sometimes give their lives and their freedom to speak it.
    IreneW
  • Reply 67 of 86
    In response to an earlier comment, I did an image search on DuckDuckGo and here are the top results:


    Bing:


    DDG:

    gatorguy
  • Reply 68 of 86
    GeorgeBMac Posts: 11,421, member
    curt12 said:
    In response to an earlier comment, I did an image search on DuckDuckGo and here are the top results:


    Bing:


    DDG:

    What's Alfred E. Neuman doing in there?  
    THIS IS BIASED!   NO WAY!
  • Reply 69 of 86
    dewme said:
    I don't believe that there is purposeful or intentional bias in the AI/predictive algorithms in Google's search engine, but there is definitely plenty of unintentional bias expressed in the search results because of the way the source data and metadata is statistically/algorithmically processed. It's simply an artifact or side-effect of the processing in the same way that DSP processing of complex audio data can produce unwanted artifacts that need to be filtered out in post-processing. In fact, evidence of this is apparent in Apple's word prediction algorithm in Messages. Try these on your iPhone:

    If you type "The doctor" (or nurse if you prefer) followed by a space Apple will produce emojis for both a male doctor and a female doctor. A few years ago in older versions of iOS (or android) the same exercise would very likely have provided the word "he" as a predicted next word for the doctor case and "she" for the nurse case, strictly based on probability of association and prevailing demographics that were baked into the back end data used to formulate the prediction. I strongly suspect that Apple is actually performing some sort of post-processing to remove the unintended bias that emerged from the raw prediction results.

    A more interesting bit of post-processing intervention evidence, in Messages again, is to start typing the phrase "James is a businessman" but only type up to the letter "i" in businessman to trigger the next word prediction. When I type "James is a busi" on my iPhone Xs Max with the latest iOS beta the next word prediction choices I get are "business" and "businesswoman." From what I can tell, using any traditionally male name (including Jesus) does not alter the next word results that include "businesswoman." One could infer that the post-processing is intervening in a way to suppress the inherent bias in the raw results, and doing so in a rather heavy-handed way.

    This raises the question: what should the presenters of data that emerges from algorithms with an impression of bias do? Is it better to simply present what the algorithms spit out or to intervene to reduce the appearance of bias? Put another way, should the data scientists forcefully break the statistical association between the words "Trump" and "idiot" by scrubbing the source data, or should they filter the results so the "Trump" and "idiot" association does not show up in the results? There is no clear answer, because either way you try to "fix it" you're going to be accused of manipulation. The algorithms don't care; they are just chugging away on data and spitting out results.

    The whole notion that Google or anyone else can manipulate data, data processing, or the results of data processing in a way that would not be trivially easy to detect by a statistician or data scientist is absurd. There are times when people simply have to grow a spine about things they don't like, apply some logic and judgement, and be a lot more aware of their own personal biases that impact their view of the world. Sorry, but you can't blame this one on Google. 
    There is intentional bias and it does not matter whether you believe it or not. 
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/
    That's not what the report or the USA Today story says. 

    The report says that there is bias based on user behavior and use patterns (the aforementioned "idiot" returns), but it does not come from Google skewing the code or the database, and it is certainly not "intentional" on the company's part.

    Source: the report itself https://www.allsides.com/sites/default/files/Google-news-bias-report-by-AllSides.pdf specifically, pages 7, 16, and 21.
    That is your opinion, Mike.

    (From the report itself)
    Google News and Google News search results have a STRONG preference for media outlets that have a perspective that is from the left of the US political spectrum. 



    That is what the report ACTUALLY said, and this is the bottom line. This is what users get. So, putting your words against that report, you might be a nominee for the biggest understatement of the week award.

    Now, there is no way to tell if what Google does is intentional or not in and of itself, because as a software engineer you can't discuss that with outsiders. Otherwise you would get fired, and prosecuted for exposing the inner workings of a competitive product, which is illegal. Also, you would never get hired again, because companies do not like risks.

    But, given that Google has no problem firing a person for expressing a political opinion they did not like (one which consisted of facts and not just his opinions), and deplatforming users for the same stuff, then why would you expect them to be impartial when it comes to programming their algorithms... especially considering that they are openly left-leaning, as they often affirm?

    I mean, where in this picture is a noticeably big chance that their algorithms were not programmed to do exactly that, given what is known about the company? There is plausible deniability (which I granted) and there is common sense. 
    edited December 2018
  • Reply 70 of 86
    gatorguy said:
    gatorguy said:
    gatorguy said:
    gatorguy said:
    curt12 said:
    maestro64 said:
    The example they used in the hearing: search "idiot" and get a Trump picture. End users are all tagging Trump's photos with the word "idiot," so when Google crawls around the web, it finds a bunch of websites with Trump's picture tagged with "idiot." So now Trump's picture is associated with "idiot." 

    If I search "idiot" on Bing or Yahoo, Trump is also among the top image hits. So multiple major search engines agree on the relevance of that association.
    FWIW so does DuckDuckGo. In fact at DDG nearly every first page result includes Trump. Evidence that search is driven by user interaction more than anything else no matter what search engine is used. 

    Russian search engine Yandex? Yup lots of Trump stories. The uncensored Gibiru Search? Yup you got it, more Trump. Even the Czech Republic's Seznam will give the same results. It's not a Google thing. 

     
    Another nice strawman you got yourself here, eh, gator? How do you do that? You must be a walking strawman factory.
    It is not that Google eliminates the results with certain keywords. It is the emotional tone of the articles that you should be looking at, if you were in any way honest about your small “research” into the matter. Instead, you pretended not to understand what others have said here, or simply don't care what the actual problem is. Propaganda is done not via full censorship of the information. It is rather a different approach, one that is a bit more subtle (subtle for people like you). For example, Harvard, which is in no way a right-wing place, had this to say about the tone of the coverage of Trump https://www.chicagotribune.com/news/columnists/kass/ct-trump-media-coverage-harvard-kass-0521-20170519-column.html

    So, if you show people more of the left-leaning MSM in search results, that pretty much guarantees that the coverage will be 80-90% negative for that particular topic. That is the whole point of propaganda: when you do it in a more subtle way (for the average Joe), you still achieve the goal of brainwashing the masses. How do you think Hitler's propaganda or USSR propaganda was so successful? It was successful for the same reason - you don't censor everything fully, but you put a twist on every piece of info so as to create a coherent (yet deceitful) picture. So, the MSM not doing their job of being journalists (instead writing hit pieces), plus Google providing links more to those types of articles, guarantees the coverage will be VERY biased (look up that study) for the majority of the population. Of course, what you DON'T WANT TO HAVE (when you do propaganda) is a whole bunch of alternative sources that will disprove you. So, in an open society you start calling those sources Nazis and banning them from your platforms for the masses, so as to eliminate the threat.

    This is exactly what has been happening in the last year or so. Multiple platforms banning the same person within 24 hrs is not a coincidence. 
    Ah, so what you're saying is ALL the world's search engines are in cahoots, maybe using some backdoor channels where they secretly discuss what to promote and what to hide. Did I get that right? 
    1. You don't need all search engines to be “in cahoots”. You just need Google and the Facebook news aggregator, because about 70-80% of the population use both to get their news. You don't need more to achieve the goal of controlling the masses, because you already got them.
    2. If tech companies have coordinated the purge of alternative sources (as in, removing a person from various platforms of different tech firms within 24 hrs), why can't they coordinate their algorithms to do the same? What makes you think it is impossible? What justification do you have for even trying to fool yourself into thinking it is not possible? 

    Again, you seem not to think your responses through, do you...
    Of course nearly anything is possible. That doesn't prove the thing you imagine happened actually happened. At this point it has become obvious, to me at least, that the only evidence you have to offer is that you think it to be so. There's nothing else you have, is there? If not, then the discussion is at an end, since I suspect this will rapidly devolve into a series of he said/she said posts sprinkled with ad-homs. I'm not interested if that's what it's coming down to. 
    1. Google consistently favours left-leaning media
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/

    2. Left-leaning media coverage of Trump was 8:1 to 11:1 negative.
    https://www.chicagotribune.com/news/columnists/kass/ct-trump-media-coverage-harvard-kass-0521-20170519-column.html

    So, Google showing more left-leaning coverage, plus the knowledge that the left-leaning MSM lean HEAVILY toward negative news associated with Trump, clearly shows that you are wrong here, gator.
    Stubborn facts and reality are a bitch, eh gator?

    In order for you to cast doubt on what I said, you have to disprove 1 and 2. And you can't, because now you have to find actual facts... which would be impossible to do, because those reports already contain facts indicating that you are wrong.

    As for knowing whether Google has its algorithms modified to do that or not - the proof is in the pudding, i.e. the results of how those algorithms perform and what they present. It is a black-box problem, and you don't need to know what is inside to know the high-level laws it operates on, when you have results like that.
    Did you even read your own links thoroughly before offering them as proof of Google search bias? I think not. Read them again, and more carefully. 
    I did. The link does prove that Google has a strong left-leaning bias. That was the whole point of the link. Looks like you missed that... again. What a shame
    watto_cobra
  • Reply 71 of 86
    curt12 said:
    In response to an earlier comment, I did an image search on DuckDuckGo and here are the top results:


    Bing:


    DDG:

    What's Alfred E. Neuman doing in there?  
    THIS IS BIASED!   NO WAY!
    Ideally, you would need to run that from multiple IPs and count the first couple of hundred pictures.  That would be at least some kind of fair comparison.
    edited December 2018 GeorgeBMacwatto_cobra
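    The multi-run comparison suggested above could be scripted roughly as follows. This is a hypothetical sketch: `fetch_labels` is a placeholder, since each search engine's image API differs, and the canned data in the example stands in for real responses:

    ```python
    # Hypothetical sketch of the fair-comparison idea: repeat the same image
    # query several times (ideally from different IPs), tally how often each
    # subject appears in the first N results, and compare engines.
    # fetch_labels() is a stand-in for a real image-search API call.

    from collections import Counter

    def fetch_labels(engine, query, n=200):
        """Placeholder: return the subject labels of the top-n image hits."""
        raise NotImplementedError("wire up the engine's image-search API here")

    def tally(engine, query, runs=5, n=200, fetch=fetch_labels):
        counts = Counter()
        for _ in range(runs):  # each run would ideally use a fresh IP/session
            counts.update(fetch(engine, query, n))
        return counts

    # Example with canned data standing in for real API responses:
    fake = lambda engine, query, n: ["trump"] * 120 + ["other"] * 80
    print(tally("ddg", "idiot", runs=2, fetch=fake).most_common(1))
    # [('trump', 240)]
    ```

    Running the same tally against several engines and comparing the counts would give a rough, repeatable measure instead of a one-off screenshot.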
  • Reply 72 of 86

    In today's post-truth world, anything you don't agree with is "biased" (or FakeNews):

    -- "Hitler = evil"    -- Today that is considered fact by most, but not all.  To them it is biased.
    -- "Trump = idiot" -- Today that is considered fact by most, but not all.  To them, it is biased.

    It is why Time's Persons of the Year are tellers of the truth.  They sometimes give their lives and their freedom to speak it.
    Nice projection here. Fact is when it can be confirmed independently. Fake news can't be confirmed independently, since it is often custom-tailored stories that fit a particular narrative supporting a “half-truth”, either by altering parts of the story or by omitting the part of the story that otherwise would be crucial FOR TELLING THE TRUTH. Since that truth is not what the narrative needs, it gets omitted. 
    edited December 2018
  • Reply 73 of 86
    gatorguy Posts: 24,213, member
    gatorguy said:
    gatorguy said:
    gatorguy said:
    gatorguy said:
    curt12 said:
    maestro64 said:
    The example they used in the hearing: search "idiot" and get a Trump picture. End users are all tagging Trump's photos with the word "idiot," so when Google crawls around the web, it finds a bunch of websites with Trump's picture tagged with "idiot." So now Trump's picture is associated with "idiot." 

    If I search "idiot" on Bing or Yahoo, Trump is also among the top image hits. So multiple major search engines agree on the relevance of that association.
    FWIW so does DuckDuckGo. In fact at DDG nearly every first page result includes Trump. Evidence that search is driven by user interaction more than anything else no matter what search engine is used. 

    Russian search engine Yandex? Yup lots of Trump stories. The uncensored Gibiru Search? Yup you got it, more Trump. Even the Czech Republic's Seznam will give the same results. It's not a Google thing. 

     
    Another nice strawman you got yourself here, eh, gator? How do you do that? You must be a walking strawman factory.
    It is not that Google eliminates the results with certain keywords. It is the emotional tone of the articles you should be looking at, if you were in any way honest about your small “research” into the matter. Instead, you pretended not to understand what others have said here, or simply don't care what the actual problem is. Propaganda is done not via full censorship of information. It is a different, somewhat more subtle approach (subtle for people like you). For example, Harvard, which is in no way a right-wing place, had this to say about the tone of the coverage of Trump https://www.chicagotribune.com/news/columnists/kass/ct-trump-media-coverage-harvard-kass-0521-20170519-column.html

    So, if you show people more of the left-leaning MSM in search results, that pretty much guarantees the coverage will be 80-90% negative for that particular topic. That is the whole point of propaganda done the more subtle way (for the average Joe): you still achieve the goal of brainwashing the masses. How do you think Hitler's propaganda or USSR propaganda was so successful? For the same reason - you don't censor everything fully, but you put a twist on every piece of info so as to create a coherent (yet deceitful) picture. So, the MSM not doing their job of being journalists (instead writing hit pieces) + Google providing links mostly to those types of articles guarantees the coverage will be VERY biased (look up that study) for the majority of the population. Of course, what you DON'T WANT TO HAVE (when you do propaganda) is a whole bunch of alternative sources that will disprove you. So, in an open society, you start calling those sources Nazis and banning them from your platforms for the masses, so as to eliminate the threat.

    This is exactly what has been happening in the last year or so. Multiple platforms banning the same person within 24 hrs is not a coincidence. 
    Ah, so what you're saying is ALL the world's search engines are in cahoots, maybe using some backdoor channels where they secretly discuss what to promote and what to hide. Did I get that right? 
    1. You don't need all search engines to be “in cahoots”. You just need Google and Facebook's news aggregator, because about 70-80% of the population use both to get their news. You don't need more to achieve the goal of controlling the masses, because you have already got them.
    2. If tech companies have coordinated the purge of alternative sources (as in, removing a person from various platforms of different tech firms within 24 hrs), why can't they coordinate their algs to do the same? What makes you think it is impossible? What justification do you have for even trying to fool yourself into thinking it is not possible? 

    Again, you seem not to think your responses through, do you....
    Of course nearly anything is possible. That doesn't prove that thing you imagine happened actually happened. At this point it has become obvious to me at least that the only evidence that you have to offer is you think it to be so. There's nothing else you have is there? if not then the discussion is at an end since I suspect this will rapidly devolve into a series of he said/she said posts sprinkled with ad-homs. I'm not interested if that's what it's coming down to. 
    1. Google consistently favours left-leaning media
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/

    2. Left-leaning media coverage of Trump was 8:1 - 11:1 negative.
    https://www.chicagotribune.com/news/columnists/kass/ct-trump-media-coverage-harvard-kass-0521-20170519-column.html

    So, Google showing more left-leaning coverage + the knowledge that the left-leaning MSM lean HEAVILY toward negative news about Trump clearly shows that you are wrong here, gator.
    Stubborn facts and reality are a bitch, eh gator?

    In order for you to cast doubt on what I said, you have to disprove 1 and 2. And you can't, because now you would have to find actual facts... which would be impossible to do, because those reports already contain facts indicating that you are wrong.

    As for knowing whether Google has modified its algs to do that or not - the proof is in the pudding, i.e. the results those algs produce and what they present. It is a black-box problem: you don't need to know what is inside to know the high-level laws it operates on, when you have results like that.
    Did you even read your own links thoroughly before offering them as proof of Google search bias? I think not. Read them again, and more carefully. 
    I did. The link does prove that Google has a strong left-leaning bias. That was the whole point of the link. Looks like you missed that... again. What a shame
    Then you obviously saw the researchers you seem to explicitly trust state, and I am quoting:
    "the study found NO EVIDENCE Google had intentionally altered search results", exactly counter to your belief. You're not helping your argument by showing such an obvious bias yourself. I know you really, REALLY want to believe that evil Google is doing whatever they can to intentionally hurt your cause.

    I know that there's probably nothing anyone can say, or link or otherwise contribute that will alter your apparently strong conviction that it must be true even if only because you believe it is.  At the end of the day you simply have nothing of substance that proves it to be so despite your repeated claims of having proof, but as I expected your posts are straying more towards personal insults than facts, therefore serving no further purpose. 

    Unless you have something better than you've offered so far we are done. 

    EDIT: I see in a followup post that you've now decided you have zero proof of Google's search bias and admit to it simply being your opinion that it is so. That's certainly fair to have an opinion. 


    edited December 2018
  • Reply 74 of 86
    Mike Wuerthele Posts: 6,861 administrator
    dewme said:
    I don't believe that there is purposeful or intentional bias in the AI/predictive algorithms in Google's search engine, but there is definitely plenty of unintentional bias expressed in the search results because of the way the source data and metadata is statistically/algorithmically processed. It's simply an artifact or side-effect of the processing in the same way that DSP processing of complex audio data can produce unwanted artifacts that need to be filtered out in post-processing. In fact, evidence of this is apparent in Apple's word prediction algorithm in Messages. Try these on your iPhone:

    If you type "The doctor" (or nurse if you prefer) followed by a space Apple will produce emojis for both a male doctor and a female doctor. A few years ago in older versions of iOS (or android) the same exercise would very likely have provided the word "he" as a predicted next word for the doctor case and "she" for the nurse case, strictly based on probability of association and prevailing demographics that were baked into the back end data used to formulate the prediction. I strongly suspect that Apple is actually performing some sort of post-processing to remove the unintended bias that emerged from the raw prediction results.

    A more interesting bit of post-processing intervention evidence, in Messages again, is to start typing the phrase "James is a businessman" but only type up to the letter "i" in businessman to trigger the next word prediction. When I type "James is a busi" on my iPhone Xs Max with the latest iOS beta the next word prediction choices I get are "business" and "businesswoman." From what I can tell, using any traditionally male name (including Jesus) does not alter the next word results that include "businesswoman." One could infer that the post-processing is intervening in a way to suppress the inherent bias in the raw results, and doing so in a rather heavy-handed way.

    This begs the question: what should the presenters of data that emerges from algorithms with an impression of bias do? Is it better to simply present what the algorithms spit out or to intervene to reduce the appearance of bias? Put another way, should the data scientists forcefully break the statistical association between the words "Trump" and "idiot" by scrubbing the source data or should they filter the results so the "Trump" and "idiot" association does not show up in the results? There is no clear answer because either way try to "fix it" you're going to be accused of manipulation. The algorithms don't care, they are just chugging away on data and spitting out results.

    The whole notion that Google or anyone else can manipulate data, data processing, or the results of data processing in a way that would not be trivially easy to detect by a statistician or data scientist is absurd. There are times when people simply have to grow a spine about things they don't like, apply some logic and judgement, and be a lot more aware of their own personal biases that impact their view of the world. Sorry, but you can't blame this one on Google. 
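    The post-processing intervention described in the Messages example above could, in spirit, be sketched like this. Everything here is hypothetical: the pair table, function name, and slot count are illustrative only, not Apple's actual code.

    ```python
    # Hypothetical sketch of post-processing that injects the gendered
    # counterpart of any gendered suggestion before results are shown.
    GENDER_PAIRS = {
        "businessman": "businesswoman",
        "businesswoman": "businessman",
        "he": "she",
        "she": "he",
    }

    def debias_candidates(ranked, max_slots=3):
        """ranked: candidate next words, best first; returns display list."""
        out = []
        for word in ranked:
            if word not in out:
                out.append(word)
            pair = GENDER_PAIRS.get(word.lower())
            if pair and pair not in out:
                out.append(pair)  # force the counterpart into view
        return out[:max_slots]

    print(debias_candidates(["business", "businessman", "busier"]))
    # → ['business', 'businessman', 'businesswoman']
    ```

    A filter like this would explain why "businesswoman" shows up after "James is a busi" regardless of what the raw model ranked, matching the heavy-handed behavior described above.
    
    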
    There is intentional bias and it does not matter whether you believe it or not. 
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/
    That's not what the report, nor the USA Today story says. 

    The report says that there is bias based on user behavior and use patterns (the aforementioned "idiot" returns), but it is not on Google's part from skewing coding or the database and certainly not "intentional" on the company's part.

    Source: the report itself https://www.allsides.com/sites/default/files/Google-news-bias-report-by-AllSides.pdf specifically, page 7, 16, and 21.
    That is your opinion, Mike.

    (From the report itself)
    Google News and Google News search results have a STRONG preference for media outlets that have a perspective that is from the left of the US political spectrum. 



    That is what the report ACTUALLY said, and this is the bottom line. This is what users get. So, putting your words against that report, you might be a nominee for the biggest understatement of the week award.

    Now, there is no way to tell whether what Google does is intentional or not in and of itself, because as a software engineer, you can't discuss that with outsiders. Otherwise you would get fired, and prosecuted for exposing the inner workings of a competitive product, which is illegal. Also, you would never get hired again, because companies do not like risks.

    But, given that Google has no problem firing a person for expressing a political opinion they did not like (which consisted of facts and not just his opinions), and deplatforming users for the same stuff, then why would you expect them to be impartial when it comes to programming their algorithms... especially considering that they are openly left-leaning, as they often affirm?

    I mean, where in this picture is there a noticeably big chance that their algs were not programmed to do exactly that, given what is known about the company? There is plausible deniability (which I granted) and there is common sense. 
    Yeah, not my opinion. Gatorguy has the quote from the report right, and completely. You're interpreting what you want to believe. The statement is the solid conclusion of the report writers.

    You're welcome to believe what you want, and infer what you want from what you're reading, but you're absolutely and demonstrably wrong about your conclusion.

    What the users get is based on what the users do - which is what the report says. And the key to your quoted sentence -- the "search results" are the key words in that sentence, not your bolded STRONG.
    edited December 2018 GeorgeBMac
  • Reply 75 of 86
    gatorguy said:
    Then you obviously saw the researchers you seem to explicitly trust state, and I am quoting:
    "the study found NO EVIDENCE Google had intentionally altered search results", exactly counter to your belief. You're not helping your argument by showing such an obvious bias yourself. As I expected your posts are straying more towards personal insults than facts, therefore serving no further purpose. 


    Do you understand that when the report says “they did not find Google deliberately modifying results,” it is not the same as “Google does not deliberately modify the results”?

    Researchers did not and never will have access to the algorithms and the code of Google's engine, and therefore they can't actually demonstrate that. They (according to their methodology, whatever they meant by it) did not find deliberate modifications. 

    Now, how do you separate a deliberate-modification-of-results case from an “it's just the way our uber-complex algorithms work” case? Well, if you know at least a little bit about such systems, the answer is: you can't separate them UNLESS YOU HAVE THE CODE!! I can code you a system that will deliberately modify results the way I need, and you have no way of proving it unless you look at the code... which I won't give you access to.

    Translation: researchers have no way of proving whether Google did it deliberately or not. But they did show that Google comes up with a strong left bias. 

    It is my opinion, based on Google's track record of being strongly biased to the left, that they modified the algs. And they do that with impunity because they know it would be almost impossible to prove. That is the only part that is opinion. The rest are facts that can be verified. 
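    The black-box point can be illustrated with a toy example. This is entirely hypothetical code, not Google's: the point is only that a deliberate nudge baked into a ranker is indistinguishable, from the outside, from ordinary relevance tuning.

    ```python
    # Toy ranker, purely illustrative. Callers only ever see scores;
    # the optional per-source multiplier looks like any other tuning
    # weight, which is the black-box argument in a nutshell.
    def score(doc, query, source_weights=None):
        """doc: term->count mapping plus a 'source' key; query: list of terms."""
        base = sum(doc.get(term, 0) for term in query)
        if source_weights:
            base *= source_weights.get(doc.get("source"), 1.0)
        return base

    neutral = score({"idiot": 1, "source": "OutletA"}, ["idiot"])
    nudged = score({"idiot": 1, "source": "OutletA"}, ["idiot"],
                   source_weights={"OutletA": 2.0})
    print(neutral, nudged)  # → 1 2.0
    ```

    An outside observer who can only compare rankings cannot tell whether `source_weights` exists at all, which is why audits of results alone can say "no evidence found" without settling intent either way.
    
    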
    edited December 2018
  • Reply 76 of 86
    Mike Wuerthele said:
    Yeah, not my opinion. 
    It is. And just like me, you are free to believe anything you want and have your own interpretation of “me being demonstrably wrong”. That does not mean that I am.

    I have explained why in the post above. I am not sure why it is so hard to understand that there is a difference between a lack of evidence that something is true and evidence that something is not true. Their report not finding deliberate mods is not evidence that Google did not modify the results. It simply FOUND NO EVIDENCE of modifications.

    But I do appreciate your honest effort to minimize the impact of those facts that I stated. You are doing a stellar job, no doubt, Mike.

    Again, Google demonstrably has a very poor track record of being impartial, be it in handling internal affairs or users. No matter what you say, you can't change facts. Deal with it.
    edited December 2018
  • Reply 77 of 86
    Mike Wuerthele Posts: 6,861 administrator
    It is. And just like me, you are free to believe anything you want and have your own interpretations of “me being demonstrably wrong”. That does not mean that I am.

     I have explained that in the post above, why. I am not sure why it is so hard to understand that there is a difference between a lack of evidence of something being true, and and evidence to something being not true. Their report could not find deliberate mods is not evidence that Google did not modify the results. It simpy FOUND NO EVIDENCE of modifications.

But I do appreciate your honest effort to minimize the impact of those facts that I stated. You are doing a stellar job, no doubt, Mike.
    What facts are those? That the users are impacting what gets returned by search algorithms? I never disputed that. I don't give a shit what Google returns and the fact I put forth is the conclusion from the report that you seem to be willfully ignoring.

The rest of what you're saying aren't facts, so I'm not minimizing anything. You want to believe that Google intentionally manipulates search results? Go crazy. The data doesn't support your conclusion, and Occam's razor applies: the simplest and most logical explanation is that users are getting data out based on what users are putting in.
    edited December 2018
  • Reply 78 of 86
    dewme said:
    I don't believe that there is purposeful or intentional bias in the AI/predictive algorithms in Google's search engine, but there is definitely plenty of unintentional bias expressed in the search results because of the way the source data and metadata is statistically/algorithmically processed. It's simply an artifact or side-effect of the processing in the same way that DSP processing of complex audio data can produce unwanted artifacts that need to be filtered out in post-processing. In fact, evidence of this is apparent in Apple's word prediction algorithm in Messages. Try these on your iPhone:

    If you type "The doctor" (or nurse if you prefer) followed by a space Apple will produce emojis for both a male doctor and a female doctor. A few years ago in older versions of iOS (or android) the same exercise would very likely have provided the word "he" as a predicted next word for the doctor case and "she" for the nurse case, strictly based on probability of association and prevailing demographics that were baked into the back end data used to formulate the prediction. I strongly suspect that Apple is actually performing some sort of post-processing to remove the unintended bias that emerged from the raw prediction results.

    A more interesting bit of post-processing intervention evidence, in Messages again, is to start typing the phrase "James is a businessman" but only type up to the letter "i" in businessman to trigger the next word prediction. When I type "James is a busi" on my iPhone Xs Max with the latest iOS beta the next word prediction choices I get are "business" and "businesswoman." From what I can tell, using any traditionally male name (including Jesus) does not alter the next word results that include "businesswoman." One could infer that the post-processing is intervening in a way to suppress the inherent bias in the raw results, and doing so in a rather heavy-handed way.

    This begs the question: what should the presenters of data that emerges from algorithms with an impression of bias do? Is it better to simply present what the algorithms spit out or to intervene to reduce the appearance of bias? Put another way, should the data scientists forcefully break the statistical association between the words "Trump" and "idiot" by scrubbing the source data or should they filter the results so the "Trump" and "idiot" association does not show up in the results? There is no clear answer because either way try to "fix it" you're going to be accused of manipulation. The algorithms don't care, they are just chugging away on data and spitting out results.

    The whole notion that Google or anyone else can manipulate data, data processing, or the results of data processing in a way that would not be trivially easy to detect by a statistician or data scientist is absurd. There are times when people simply have to grow a spine about things they don't like, apply some logic and judgement, and be a lot more aware of their own personal biases that impact their view of the world. Sorry, but you can't blame this one on Google. 
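The post-processing intervention described above can be sketched roughly as follows. The pairing table, function name, and candidate lists are all hypothetical illustrations, not Apple's actual QuickType pipeline:

```python
# Hypothetical sketch of a post-processing de-biasing step: take the raw
# next-word candidates ranked by model probability and, whenever a
# gendered role word appears, surface its counterpart as well so neither
# form is suppressed by skew in the training data.

GENDERED_PAIRS = {
    "businessman": "businesswoman",
    "businesswoman": "businessman",
    "actor": "actress",
    "actress": "actor",
}

def debias_predictions(ranked_candidates, top_k=3):
    """Return up to top_k suggestions, force-pairing the gendered
    counterpart of any candidate that has one."""
    suggestions = []
    for word in ranked_candidates:
        if word not in suggestions:
            suggestions.append(word)
        counterpart = GENDERED_PAIRS.get(word)
        if counterpart and counterpart not in suggestions:
            suggestions.append(counterpart)
        if len(suggestions) >= top_k:
            break
    return suggestions[:top_k]

# Raw model output favors one form; the filter pairs it anyway.
print(debias_predictions(["business", "businessman", "busy"]))
# prints ['business', 'businessman', 'businesswoman']
```

Heavy-handed, as described: the counterpart is injected regardless of its raw probability, which is consistent with the "businesswoman" suggestion showing up for any name.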
    There is intentional bias and it does not matter whether you believe it or not. 
    https://www.usatoday.com/story/news/politics/2018/10/16/google-news-results-left-leaning/1651278002/
That is not what the report or the USA Today story says. 

The report says that there is bias based on user behavior and use patterns (the aforementioned "idiot" returns), but it does not come from Google skewing the code or the database, and it is certainly not "intentional" on the company's part.

    Source: the report itself https://www.allsides.com/sites/default/files/Google-news-bias-report-by-AllSides.pdf specifically, page 7, 16, and 21.
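A minimal sketch of how a researcher could quantify that kind of lean is a goodness-of-fit check against a lean-neutral expectation. The counts below are invented for illustration, not taken from the AllSides report:

```python
# Illustrative goodness-of-fit check: invented counts of news results by
# outlet lean, tested against a uniform "no lean" expectation (a
# deliberate simplification).
observed = {"left": 62, "center": 25, "right": 13}

def chi_square_uniform(counts):
    """Chi-square statistic of observed counts against a uniform
    (lean-neutral) expectation."""
    total = sum(counts.values())
    expected = total / len(counts)
    return sum((o - expected) ** 2 / expected for o in counts.values())

stat = chi_square_uniform(observed)
# With 2 degrees of freedom the 5% critical value is about 5.99, so a
# statistic this large says the distribution is very unlikely to be
# lean-neutral by chance alone.
print(round(stat, 2))
# prints 39.14
```

Note what a test like this cannot do: it measures the skew in the output, but it cannot tell you whether the skew comes from the ranking code, the source pool, or user behavior, which is exactly the crux of the disagreement here.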
    What facts are those? That the users are impacting what gets returned by search algorithms? I never disputed that.

    The rest of what you're saying aren't facts.
Yeah? So it is not a fact that Google fired an engineer for expressing his political opinion? What else is not a fact in your world, Mike?

Users are not the only module in that chain that impacts the results. There is the algorithm itself, which the researchers had no access to. That is a fact. And there is a track record of Google being politically biased, which is a fact that takes about one minute (unironically, on Google) to confirm from their own statements and what they stand for and support.

Now, if you want to think that is not enough to say that Google continues that track record in the way they code the logic of their search engine, that is on you. All the facts indicate that it is extremely likely, given their previous and current political activism.
    edited December 2018
  • Reply 79 of 86
Mike Wuerthele Posts: 6,861 administrator
Yeah? So it is not a fact that Google fired an engineer for expressing his political opinion? What else is not a fact in your world, Mike?

Users are not the only module in that chain that impacts the results. There is the algorithm itself, which the researchers had no access to. That is a fact. And there is a track record of Google being politically biased, which is a fact that takes about one minute (unironically, on Google) to confirm from their own statements and what they stand for and support.
Like I said, man, you do you.
    edited December 2018
  • Reply 80 of 86
Like I said, man, you do you.
My point was: if you want to be honest with yourself, look at the facts no matter where they come from. If you don't, that's obviously on you, but there is no reason to tell others that the facts you don't like are just opinions. Just saying. 
