Wozniak, Musk & more call for 'out-of-control' AI development pause

Posted in General Discussion, edited March 2023
Steve Wozniak, Elon Musk, and over 1,000 others have signed an open letter asking for an immediate six-month halt on training AI technology more powerful than GPT-4.

Google Bard is rolling out to users.


This has been the year of artificial intelligence agents such as ChatGPT and Google Bard going mainstream. Despite all AI companies describing their products as experiments, or effectively beta releases, their language-processing features are being integrated into Microsoft 365 and search engines including Bing.

Now the Future of Life Institute is asking for a "public and verifiable" pause that includes "all key actors" in the field. "If such a pause cannot be enacted quickly," it adds, "governments should step in and institute a moratorium."

The Future of Life Institute aims to "steer transformative technology towards benefitting life and away from extreme large-scale risks."

This 600-word letter, aimed at all AI developers, argues that a pause is needed because "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one -- not even their creators -- can understand, predict, or reliably control."

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," it continues. "These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt."

"This does not mean a pause on AI development in general," says the open letter, "merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

At time of writing, the institute's open letter has 1,123 signatories. Those include high-profile ones such as:

  • Elon Musk, CEO of SpaceX, Tesla & Twitter

  • Steve Wozniak, Co-founder, Apple

  • Jaan Tallinn, Co-Founder of Skype

  • Evan Sharp, Co-Founder, Pinterest

Both Google and Microsoft have been working to implement technology tools based on new AI chatbot systems. The initial launch of Microsoft's Bing implementation fared poorly and Google's Bard made a factual error in its first demo.


Comments

  • Reply 1 of 25
DAalseth Posts: 3,084 member
    And nobody will pay any attention to it. 
  • Reply 2 of 25
22july2013 Posts: 3,744 member
Are they calling for only American companies to stop development, or also for Chinese companies like Alibaba, which are working on it too? Do they expect China to honor their request to stop development, or are they really only calling for the US to unilaterally halt AI development?

    https://www.cnbc.com/2023/02/08/chinese-tech-giant-alibaba-working-on-a-chatgpt-rival-shares-jump.html 

Wasn't Elon Musk one of the founders of OpenAI, who then sold his stock (because he didn't think it would make any profit?), and now suddenly he's calling for them to pause development? Isn't he guilty of starting OpenAI in the first place, and isn't he planning to put brain implants in people as we speak?

  • Reply 3 of 25
omasou Posts: 646 member
Oh no, stop now before Skynet becomes self-aware. /s
  • Reply 4 of 25
lkrupp Posts: 10,557 member
    DAalseth said:
    And nobody will pay any attention to it. 
    Absolutely correct. Once the scientists at Los Alamos in 1945 realized what they had created only then did they start warning against its use. It was too late then and it's too late now. Imagine a Boston Dynamics combat robot powered by AI and armed with a 50 cal machine gun. It's probably already on some military drawing board somewhere.

    Developers seem to think this can be controlled. Maybe it can but maybe it can't.
edited March 2023
  • Reply 5 of 25
    omasou said:
Oh no, stop now before Skynet becomes self-aware. /s
I heard the Terminator remake will no longer feature robots, but AI.
  • Reply 6 of 25
    Let me guess: the "beneficial" role of AI is to reduce corporate budgets by eliminating human jobs and ransacking human IP. The "non-beneficial" role that must be prevented at all costs is AI usurping the role of know-it-all human billionaires dictating how society works. 
edited March 2023
  • Reply 7 of 25
hmlongco Posts: 589 member
    “it is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair
  • Reply 8 of 25
DAalseth Posts: 3,084 member
    lkrupp said:
    DAalseth said:
    And nobody will pay any attention to it. 
    Absolutely correct. Once the scientists at Los Alamos in 1945 realized what they had created only then did they start warning against its use. It was too late then and it's too late now. Imagine a Boston Dynamics combat robot powered by AI and armed with a 50 cal machine gun. It's probably already on some military drawing board somewhere.

    Developers seem to think this can be controlled. Maybe it can but maybe it can't.
    Oh hell they’re way past that. They’ve had drones for a while now that will fly over an area and wait for a particular car to destroy. When a white car turns X corner, away go the HARMs. All without an operator having to do anything. Officially they “always keep a person in the loop”, but I have a feeling that’s just for show. 
  • Reply 9 of 25
auxio Posts: 2,771 member
    hmlongco said:
    “it is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair
    When you structure your entire society around the goal of attaining money, and laud those who achieve it no matter how they did it, guess what activity you'll see more of? Simple reward-based behaviour modification.
  • Reply 10 of 25
chadbag Posts: 2,029 member
    DAalseth said:
    lkrupp said:
    DAalseth said:
    And nobody will pay any attention to it. 
    Absolutely correct. Once the scientists at Los Alamos in 1945 realized what they had created only then did they start warning against its use. It was too late then and it's too late now. Imagine a Boston Dynamics combat robot powered by AI and armed with a 50 cal machine gun. It's probably already on some military drawing board somewhere.

    Developers seem to think this can be controlled. Maybe it can but maybe it can't.
    Oh hell they’re way past that. They’ve had drones for a while now that will fly over an area and wait for a particular car to destroy. When a white car turns X corner, away go the HARMs. All without an operator having to do anything. Officially they “always keep a person in the loop”, but I have a feeling that’s just for show. 
Technically, unless the white car is emitting RF, the HARMs won't do much. HARMs are anti-radiation missiles (radiation meaning radiating RF). They are used to knock out enemy radars. A Hellfire would be a better choice to unleash on the white car in your scenario.
  • Reply 11 of 25
    Sounds like hysteria.
  • Reply 12 of 25
DAalseth Posts: 3,084 member
    chadbag said:
    DAalseth said:
    lkrupp said:
    DAalseth said:
    And nobody will pay any attention to it. 
    Absolutely correct. Once the scientists at Los Alamos in 1945 realized what they had created only then did they start warning against its use. It was too late then and it's too late now. Imagine a Boston Dynamics combat robot powered by AI and armed with a 50 cal machine gun. It's probably already on some military drawing board somewhere.

    Developers seem to think this can be controlled. Maybe it can but maybe it can't.
    Oh hell they’re way past that. They’ve had drones for a while now that will fly over an area and wait for a particular car to destroy. When a white car turns X corner, away go the HARMs. All without an operator having to do anything. Officially they “always keep a person in the loop”, but I have a feeling that’s just for show. 
Technically, unless the white car is emitting RF, the HARMs won't do much. HARMs are anti-radiation missiles (radiation meaning radiating RF). They are used to knock out enemy radars. A Hellfire would be a better choice to unleash on the white car in your scenario.
    Quite true. I stand corrected.
  • Reply 13 of 25
bloggerblog Posts: 2,537 member
Microsoft is integrating the tech into their next version of Windows, and the Copilot demos show how insanely useful this tech can be. This is a turning point in computer history. 46% of code on GitHub is already created by AI, and they just released Copilot X; VS Code Copilot is already out too. Microsoft is in a situation where, if they play their cards right, they'll quickly end up on top again. I would never try installing Edge again because it is spyware, but it is being adopted fast by others and may surpass Chrome in the coming years. Bard is not up to par with GPT, not even close.
  • Reply 14 of 25
waveparticle Posts: 1,497 member
    They did not see what is possible, just fear. Someone will integrate AI into robots. It will be very soon, probably through Raspberry Pi. 
  • Reply 15 of 25
Microsoft is integrating the tech into their next version of Windows, and the Copilot demos show how insanely useful this tech can be. This is a turning point in computer history. 46% of code on GitHub is already created by AI, and they just released Copilot X; VS Code Copilot is already out too. Microsoft is in a situation where, if they play their cards right, they'll quickly end up on top again. I would never try installing Edge again because it is spyware, but it is being adopted fast by others and may surpass Chrome in the coming years. Bard is not up to par with GPT, not even close.
    "Microsoft on top again"?
    Only until the business users discover just how much of their company-sensitive data is being sent to the ChatGPT controllers.
    The year of Linux on the desktop may well be here at last if that happens. Oh... sorry, I should have said...
    The year of MacOS on the desktop (sic)
  • Reply 16 of 25
Appleish Posts: 719 member
    Woz and Musk? Hmm. I was kinda worried about the acceleration of AI. Now that a has-been and a nutcase megalomaniac have chimed in, I say full steam ahead, AI!
  • Reply 17 of 25
Marvin Posts: 15,512 moderator
    Sounds like hysteria.
    They are never specific about the threat beyond anecdotes.

    If there was a computer program smarter than a human, great. Ask it to solve difficult problems. Let it teach people's kids.
    If it spits out misinformation, turn it off.
    If it is attached to a time-traveling murder robot, turn it off.

The research papers linked from the letter discuss the same kinds of anecdotes, like misinformation being spread to cause political divisions that eventually lead to nuclear war:

    https://arxiv.org/pdf/2209.10604.pdf
    https://arxiv.org/pdf/2206.13353.pdf
    https://arxiv.org/pdf/2303.12712.pdf

    Also things like becoming dependent on AI. This would be similar to what happens due to relying on computers for everything. When there's a power cut, everything grinds to a halt because nobody knows how to do anything without power or a network connection any more. If AI is eventually tasked with a lot of the complex jobs and humans aren't trained to do it any more, there would be some bad consequences.

    The threat of automating a lot of jobs is only a threat because of a flawed economy that doesn't guarantee people's welfare. That's flawed without AI getting involved. If AI pushes it over the edge, good, maybe people will put some effort into fixing it.
  • Reply 18 of 25
jdw Posts: 1,463 member
    Sounds like hysteria.
    No, but this does...


  • Reply 19 of 25
jdw Posts: 1,463 member
    omasou said:
Oh no, stop now before Skynet becomes self-aware. /s
    Interestingly, it wouldn't have been possible for Skynet to do anything prior to 2019 when the US nuclear arsenal was "protected" by floppy drive tech...

    https://nypost.com/2019/10/20/pentagon-scraps-obsolete-floppy-disk-system-controlling-us-nuclear-arsenal/

    Not sure what they replaced it with, but hopefully it is just as unbreachable as the Gerald Ford era tech was.
  • Reply 20 of 25
jdw Posts: 1,463 member
And while all these folks are going nuts worrying about AI, we still don't have fully autonomous cars and Siri still stinks.

    Rather than worry about AI becoming sentient or taking over / destroying the world, what many of these people are probably worried about is a human reliance on AI tech rather than their own brains.  The world has a drug problem now, so we can see AI as a new kind of drug that people will get hooked on.  That doesn't mean the tech is bad or that it needs a halt.  It means human beings need a kind of help that no AI can provide.