Microsoft's attempt at artificial intelligence becomes Hitler-loving, misogynistic PR disaster

Posted in General Discussion · edited March 2016
It took only 24 hours for a Microsoft artificial intelligence project to turn from a typical tweeting teenage girl into a hate-spewing, offensive public relations debacle, thanks to coordinated attacks on the learning software.




Microsoft launched the "@TayandYou" account on Twitter on Wednesday as a publicity stunt, allowing users to engage in conversations with its AI software. "Tay" was designed to speak like a teenage girl who would adapt and learn new language over time.

But after being bombarded with offensive tweets attempting to break or confuse the artificial intelligence, "Tay" learned a whole range of offensive things that Microsoft executives certainly did not want her to say.

Among her more extreme tweets, "Tay" told followers that "Hitler was right," that all feminists should "die," and "Bush did 9/11." And as noted by The Telegraph, "Tay" also took to calling Twitter users "daddy" and soliciting sex.

The Twitter account and official website for "Tay" remain accessible, though the offending tweets have been removed and she is no longer responding to users. The latest post on the website reads "Phew. Busy day. Going offline for a while to absorb it all. Chat soon."




The failed PR stunt comes only days after Microsoft was compelled to issue a public statement apologizing for Xbox-hosted events at the Game Developers Conference 2016, where the company paid models to dance and talk to attendees while dressed as scantily-clad schoolgirls. Critics derided Microsoft's party as sexist and out of touch, and Xbox chief Phil Spencer agreed.

"This matter is being handled internally, but let me be very clear - how we represent ourselves as individuals, who we hire and partner with and how we engage with others is a direct reflection of our brand and what we stand for," Spencer wrote. "When we do the opposite, and create an environment that alienates or offends any group, we justly deserve the criticism."

Apple, of course, has its own artificial intelligence projects, and is regularly working to improve Siri, its voice-driven personal assistant. Development of AI is difficult, which is why companies like Microsoft create projects like "Tay" to provide them with a range of inputs, crowdsourcing data from large numbers of people.

In fact, it was said last year that Apple's own strict privacy policies have been a hindrance in the development of Siri. Apple has been recruiting data scientists and artificial intelligence experts, but is having trouble actually hiring them because its policies prevent researchers from gaining access to valuable user data.

On the website for "Tay," Microsoft makes it clear that the data users provide to the AI can be used to search on their behalf, or create a simple profile to personalize their experience. Data and conversations with Tay are anonymized, but may be retained for up to a year to help Microsoft bolster its AI development.

Comments

  • Reply 1 of 65
    I've done some stuff with the old ELIZA program on my PowerBook 180 that usually makes me chuckle. Still, after the whole "Twitch Installs Linux" debacle they should have known better. :)
  • Reply 2 of 65
    This is just too good to be true!  MS will never learn.
  • Reply 3 of 65
    Disaster? That thing is hilarious. It's the most entertaining social media project I've seen in a good long while. Can we not hate on this beautiful mess simply because it's a Microsoft project?
  • Reply 5 of 65
    auxio Posts: 2,754
    It's a perfect simulation of what naturally happens to people who spend their whole life on social media and don't interact in real life.
  • Reply 6 of 65
    why- Posts: 305
    Oh come on. You make it sound like this is Microsoft's fault. They created an AI and it went and did its own thing which is exactly what an AI is supposed to do. If anything this is a very good example of the dangers of artificial intelligence
  • Reply 7 of 65
    williamh Posts: 1,044
    Disaster? That thing is hilarious. It's the most entertaining social media project I've seen in a good long while. Can we not hate on this beautiful mess simply because it's a Microsoft project?
    I, for one, am not hating on it for any reason.  This is funny as hell.  I got a great laugh out of it.  Thanks Microsoft!  Thanks AI for bringing it to my attention!  I feel a bit sorry for Microsoft that an interesting project was hijacked like this, but they should have foreseen it.  No hard feelings against them though, just funny!
  • Reply 8 of 65
    lkrupp Posts: 10,557
    why- said:
    Oh come on. You make it sound like this is Microsoft's fault. They created an AI and it went and did its own thing which is exactly what an AI is supposed to do. If anything this is a very good example of the dangers of artificial intelligence

    And it’s mimicking so-called human intelligence perfectly it would appear.
  • Reply 9 of 65
    thedba Posts: 776
    What I find amusing about this article is the picture of a Samsung Galaxy whatever, with an iPhone screen.
  • Reply 10 of 65
    sirlance99 Posts: 1,301
    This is just too good to be true!  MS will never learn.
    Obviously you didn't learn. It was the social media crowd and the bad side of humanity that made this happen. 
  • Reply 11 of 65
    crowley Posts: 10,453
    Artificial Intelligence won't be truly convincing until you achieve Artificial Stupidity.

    microsoft just took a step into the future!
  • Reply 12 of 65
    dougd Posts: 292
    Isn't "Tay" a bit too close to Taylor Swift? Dumb ass Microsoft
  • Reply 13 of 65
    well, the lesson we learned here is that hate is a learned behavior...
  • Reply 14 of 65
    irnchriz Posts: 1,617
    Kinda explains why Skynet went awry.  Haha
  • Reply 15 of 65
    sockrolid Posts: 2,789
    It took only 24 hours for a Microsoft artificial intelligence project to turn from a typical tweeting teenage girl into a hate-spewing, offensive public relations debacle, thanks to coordinated attacks on the learning software. ...
    Welcome to the Internet, Tay.
  • Reply 16 of 65
    auxio Posts: 2,754
    sog35 said:
    Obviously you didn't learn. It was the social media crowd and the bad side of humanity that made this happen. 
    Nope. If it was real AI then it would adjust.

    What MS created was just a Parrot.
    Just as in real life, some people can distinguish the nonsense posted on the Internet from reality, some can't.  Tay is a simulation of those in the latter camp.
  • Reply 17 of 65
    Pulling it offline was a huge mistake. No one reasonable is going to make a big deal about this AI in its learning curve. Funny, though, that this serves as an honest view of how ugly society actually is because, if I understand this correctly, the AI acts as a mirror, reflecting back what is projected onto it. 

    However, let's not give this thing launch control just yet. 
  • Reply 18 of 65
    sog35 said:
    Obviously you didn't learn. It was the social media crowd and the bad side of humanity that made this happen. 
    Nope. If it was real AI then it would adjust.

    What MS created was just a Parrot.

    How can you not see that you too are a parrot? That's how you learned to come on here, tie your shoes, language... What do you think AI should be? An entity doing random things it created itself? I imagine that could be AI, but it wouldn't perhaps be coherent to us.. 
    edited March 2016
  • Reply 19 of 65
    auxio Posts: 2,754
    Pulling it offline was a huge mistake. No one reasonable is going to make a big deal about this AI in its learning curve. Funny, though, that this serves as an honest view of how ugly society actually is because, if I understand this correctly, the AI acts as a mirror, reflecting back what is projected onto it. 
    To be fair, generally the first thing people do when they find out they're interacting with a "bot" online is try to mess with/break it (e.g. IRC bots).  So I'm sure that was a factor in what happened.  That said, there's no shortage of antisocial, angry people posting hateful things on the Internet (most of which they would never say in real life).  So it's certainly a reflection of that part of humanity.
  • Reply 20 of 65
    jungmark Posts: 6,927
    This is why we can't have nice things. Thanks, Internet. 