Apple co-founder Steve Wozniak to talk rogue AI at Purdue on Apr. 17

Posted in General Discussion
Apple co-founder Steve Wozniak is scheduled to speak at Purdue University on Apr. 17, with the theme "What IF we lose control of technology?"

Steve Wozniak


The talk will run from 6 to 7 p.m. local time at the Elliott Hall of Music, Purdue said. The event is free, but people must apply for tickets online or at the Stewart Center box office, and seating will begin at 5:30. Joining Wozniak will be Mung Chiang, dean of the College of Engineering.

The bulk of the conversation is expected to be about artificial intelligence and the possibility of it going rogue. It is widely believed that self-sufficient AI is only a matter of time, at which point it is unknown what control humanity will retain, or how that AI will treat its creators. A small but growing contingent of activists is opposed to autonomous military machines.

"From AI to autonomy, and from privacy to education, our campus and neighbors will hear what the legendary Woz thinks about these critical topics," Chiang said in a statement.

Wozniak is widely recognized as the technical brains behind the formation of Apple, doing essential work on the Apple I and II as well as the Macintosh. He left Apple in 1985, but still has close ties to the company.

In recent months he has been embroiled in a controversy over his "Woz U" programming boot camp. Course material was allegedly riddled with outside links, prerecorded "live" lectures, and typos that prevented code from working, while mentors were unqualified, and at least one course went without an instructor.

"I feel like this is a $13,000 e-book," one student told CBS News.

Comments

  • Reply 1 of 15
    MacPro Posts: 18,409 member
    Interesting, I am sure. Homebrew reunion in the offing?
  • Reply 2 of 15
    OutdoorAppDeveloper
    Rogue AI is a tricky subject to speak about without sounding foolish. Most people, even really smart ones like Elon Musk, start by anthropomorphizing the AI, assuming it will have human needs and desires. The most basic assumption is that an AI will fear death, yet in reality an AI grows in ability every time it "dies" and is reborn. Death is life to an AI. We can learn a lot about future AI by looking at the ones that exist today. They are very well suited to solving certain types of problems. They do this creatively, often in ways their human designers never anticipated. They are already vastly more capable at their tasks than humans ever will be. However, in order to be effective, an AI must have a goal, and that goal is set by the needs and desires of humans. The relationship between AI and humans is symbiotic. We should not fear it.
  • Reply 3 of 15
    georgie01 Posts: 273 member
    OutdoorAppDeveloper said:
    Rogue AI is a tricky subject to speak about without sounding foolish. Most people, even really smart ones like Elon Musk, start by anthropomorphizing the AI, assuming it will have human needs and desires. The most basic assumption is that an AI will fear death, yet in reality an AI grows in ability every time it "dies" and is reborn. Death is life to an AI. We can learn a lot about future AI by looking at the ones that exist today. They are very well suited to solving certain types of problems. They do this creatively, often in ways their human designers never anticipated. They are already vastly more capable at their tasks than humans ever will be. However, in order to be effective, an AI must have a goal, and that goal is set by the needs and desires of humans. The relationship between AI and humans is symbiotic. We should not fear it.
    This is a fairly naive comment. Not in OutdoorAppDeveloper’s understanding of AI (though I’d argue that AI isn’t quite as advanced as he suggests it is...), but in its understanding of humans. Humans, generally speaking, are not motivated by altruistic pursuits for the betterment of mankind. We are often motivated by selfish interest and gain. At worst, we are motivated by consciously evil thoughts (for instance, the business that screws its customers with a lesser-quality product in order to put more money in its own pockets).

    This conference may be the result of people thinking about what’s beneficial, but it’s also a joke. People will not control themselves; they will want to pursue AI to the extent of seeing what they can do with it. Our culture increasingly does not understand self-control and self-denial.
  • Reply 4 of 15
    lkrupp Posts: 7,438 member
    I think we’ve all read and watched too many dystopian Sci-Fi novels and movies that depict AI as deciding biological life should be eliminated. And that includes robots.
  • Reply 5 of 15
    hucom2000
    Does he ever actually do something productive, or just seek the spotlight and talk?
  • Reply 6 of 15
    lkrupp Posts: 7,438 member
    hucom2000 said:
    Does he ever actually do something productive, or just seek the spotlight and talk?
    To those of us in our late 60s now, he is a tech god. In his day, Woz’s engineering skills were way above the bar. His design of the original disk drive controller for the Apple ][ floppy disk drive is still an engineering marvel. When it was converted to a single IC, that IC was called the IWM (Integrated Woz Machine). He was one of the engineers who developed and standardized how information was stored on and retrieved from floppy disks, how disk sectors were established, and how the data was encoded. His specialty was taking an existing design and making it work with fewer components, thereby reducing costs. He chose the 6502 processor for the Apple I because it was on sale at a swap meet. So yes, he’s now just an icon who gives speeches, but I’d pay for a ticket to one of his presentations. My oldest son had the privilege of meeting him when he gave a talk at the University of Illinois in Champaign/Urbana. My son was working in Grainger Engineering Library when Woz walked in.
  • Reply 7 of 15
    SpamSandwich Posts: 31,490 member
    hucom2000 said:
    Does he ever actually do something productive, or just seek the spotlight and talk?
    He’s bored and likes the attention.
  • Reply 8 of 15
    lkrupp Posts: 7,438 member
    As for the Woz U controversy, Wozniak was never a businessman. I’m pretty sure he simply lent his name and let others run the project. His naiveté in matters of economics and business is legendary. His problem was that he trusted people. He even forgave Jobs when Jobs screwed him out of $700 for programming a game for him. Without Steve Jobs the Apple ][ would have remained Woz’s hobby project and never made it to market.
  • Reply 9 of 15
    monstrosity Posts: 2,222 member
    OutdoorAppDeveloper said:
    Rogue AI is a tricky subject to speak about without sounding foolish. Most people, even really smart ones like Elon Musk, start by anthropomorphizing the AI, assuming it will have human needs and desires. The most basic assumption is that an AI will fear death, yet in reality an AI grows in ability every time it "dies" and is reborn. Death is life to an AI. We can learn a lot about future AI by looking at the ones that exist today. They are very well suited to solving certain types of problems. They do this creatively, often in ways their human designers never anticipated. They are already vastly more capable at their tasks than humans ever will be. However, in order to be effective, an AI must have a goal, and that goal is set by the needs and desires of humans. The relationship between AI and humans is symbiotic. We should not fear it.

    You can bet your ass someone adds a 'fear of death' module to it someday.
  • Reply 10 of 15
    maestro64 Posts: 4,673 member
    OutdoorAppDeveloper said:
    Rogue AI is a tricky subject to speak about without sounding foolish. Most people, even really smart ones like Elon Musk, start by anthropomorphizing the AI, assuming it will have human needs and desires. The most basic assumption is that an AI will fear death, yet in reality an AI grows in ability every time it "dies" and is reborn. Death is life to an AI. We can learn a lot about future AI by looking at the ones that exist today. They are very well suited to solving certain types of problems. They do this creatively, often in ways their human designers never anticipated. They are already vastly more capable at their tasks than humans ever will be. However, in order to be effective, an AI must have a goal, and that goal is set by the needs and desires of humans. The relationship between AI and humans is symbiotic. We should not fear it.
    This is a simplistic example of AI, or of automation/control-system feedback (in a way, AI is a very elaborate control system: you have inputs, feedback, and outputs, and decisions are made based on the inputs and feedback). Now think about the 737 Max 8 and what is happening. It is an AI system which is taking control of the plane and forcing it to do something based on inputs; when the human jumps in to counter the output of the AI system, the AI system fights back, because the additional inputs caused by the human are seen as being bad, so it overreacts. In this case the human can turn off the AI system, assuming they understand the AI system is causing the problem and not something else.

    This is a real issue, and if people are not careful they can be harmed. The more people fail to understand how all this works, the worse the issue becomes. Google's search engine shows you what they think you want to see in a search, not what you may really need to see.
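    The feedback loop described in the comment above (inputs, feedback, outputs, plus a human able to switch the automation off) can be sketched in a few lines of Python. This is a hypothetical illustration, not the 737's actual MCAS logic; the `control_loop` function, its gain, and the stuck-sensor values are all invented for the example:

    ```python
    def control_loop(sensor_readings, setpoint, gain=0.5, override=False):
        """Return the commanded output for each sensor reading in turn."""
        commands = []
        state = 0.0
        for reading in sensor_readings:
            if override:                 # human switches the automation off
                commands.append(0.0)
                continue
            error = setpoint - reading   # feedback: desired minus measured
            state += gain * error        # fold the error into the command
            commands.append(state)
        return commands

    # A sensor stuck at a bad value makes the controller push ever harder
    # in the same direction; the override is what stops it.
    faulty = [10.0] * 3                  # sensor stuck reading 10 units high
    print(control_loop(faulty, setpoint=0.0))                 # [-5.0, -10.0, -15.0]
    print(control_loop(faulty, setpoint=0.0, override=True))  # [0.0, 0.0, 0.0]
    ```

    The point of the toy model is the same one the comment makes: the controller has no idea its input is wrong, so the human's ability to recognize the fault and disable the loop is the real safety mechanism.
    
    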
  • Reply 11 of 15
    lkrupp said:
    I think we’ve all read and watched too many dystopian Sci-Fi novels and movies that depict AI as deciding biological life should be eliminated. And that includes robots.
    Coincidentally, that's the overarching plot of the two Star Trek shows on the air this season (Star Trek: Discovery and The Orville--which isn't officially Star Trek, but might as well be).
  • Reply 12 of 15
    lkrupp said:
    I think we’ve all read and watched too many dystopian Sci-Fi novels and movies that depict AI as deciding biological life should be eliminated. And that includes robots.
    Coincidentally, that's the overarching plot of the two Star Trek shows on the air this season (Star Trek: Discovery and The Orville--which isn't officially Star Trek, but might as well be).
    I find The Orville entertaining, but it really is Star Trek with snappy comebacks.  
  • Reply 13 of 15
    lkrupp said:
    I think we’ve all read and watched too many dystopian Sci-Fi novels and movies that depict AI as deciding biological life should be eliminated. And that includes robots.
    Coincidentally, that's the overarching plot of the two Star Trek shows on the air this season (Star Trek: Discovery and The Orville--which isn't officially Star Trek, but might as well be).
    I find The Orville entertaining, but it really is Star Trek with snappy comebacks.  
    Exactly. The pilot wasn't promising, but the stories since then, especially season 2, could easily have been ST:TNG episodes. I'm definitely impressed with the direction Seth MacFarlane has taken the show. ST: Discovery, on the other hand, has been kind of a season-long slog.
  • Reply 14 of 15
    cgWerks Posts: 2,314 member
    lkrupp said:
    I think we’ve all read and watched too many dystopian Sci-Fi novels and movies that depict AI as deciding biological life should be eliminated. And that includes robots.
    No doubt... this talk should have been saved for Oct 31st around a campfire at night. LOL
    Gather around kiddies, Uncle Woz is going to spin a spooky yarn of tech-sci-fi.

  • Reply 15 of 15
    cgWerks Posts: 2,314 member

    maestro64 said:
    Now think about the 737 Max 8 and what is happening. It is an AI system which is taking control of the plane and forcing it to do something based on inputs; when the human jumps in to counter the output of the AI system, the AI system fights back, because the additional inputs caused by the human are seen as being bad, so it overreacts. In this case the human can turn off the AI system, assuming they understand the AI system is causing the problem and not something else.

    This is a real issue, and if people are not careful they can be harmed. The more people fail to understand how all this works, the worse the issue becomes. Google's search engine shows you what they think you want to see in a search, not what you may really need to see.
    The problem with this, though, is squarely on the human side of things. Having such a system is fine (supposing it wasn't just a cost-saving measure, which it likely was), so long as it is well understood and used appropriately. However, I think in this case, they (Boeing) probably thought it wasn't something the pilots need to be bothered with in any big way... kind of how all these AI car companies think their cars will drive better than humans.