Who wants to ride the HyperLoop?

Posted in AppleOutsider · edited June 2015

Comments

  • Reply 1 of 13

    High-speed rail in California, this thing… I don’t know; isn’t there a reason people don’t care much for trains these days?

    Or rather, why does it take decades for anything to even pretend to happen on this front?
  • Reply 2 of 13
    Marvin Posts: 15,585 · moderator
    isn’t there a reason that people don’t care too much for trains these days?

    They require a lot of people to be going to roughly the same place, which makes them of limited use: you need supplementary transport, like walking or a taxi, to get where you actually want to go. Plus there's the wait for each one to arrive. If they can do a long journey in 30 minutes, that helps with the waiting times, but operators gradually want to add stops in the middle. Hundreds of miles of track need maintenance too. Air travel for long distances can be just as quick without the track maintenance. Maybe Elon Musk should use his rocket tech instead: just fire a rocket up, release the passengers part of the way there, and they'll glide down to the destination.
  • Reply 3 of 13
    Marvin wrote: »
    They require a lot of people to roughly be going to the same place […] Maybe Elon Musk should use his rocket tech instead, just fire a rocket up and then release the passengers part of the way and they'll glide down to the destination.

    Like most of what Musk is working on, I believe the Hyperloop is something he ultimately envisions as being Mars-based. He's working out the bugs here first, to reuse and employ for his Mars colonization plan. All of his activities are focused on that long-range goal.
  • Reply 4 of 13
    Originally Posted by SpamSandwich View Post

    …something he ultimately envisions as being Mars-based. All of his activities are focused on that long range goal.

    We need more stubborn children with the capital to see their dreams come true.
  • Reply 5 of 13
    Marvin Posts: 15,585 · moderator
    …something he ultimately envisions as being Mars-based. All of his activities are focused on that long range goal.

    We need more stubborn children with the capital to see their dreams come true.

    He seems to have a bit of a hang-up about dangerous robots:

    http://www.theverge.com/2015/1/15/7551687/elon-musk-donates-10-million-to-prevent-killer-ai

    "Elon Musk is worried that AI will destroy humanity, and so he's decided to donate $10 million toward research into how we can keep artificial intelligence safe. Musk, the CEO of Tesla and SpaceX, has previously expressed concern that something like what happens in The Terminator could happen in real life. He's also said that AI is "potentially more dangerous than nukes." The purpose of this donation is to both prevent that from happening and to ensure that AI is used for good and to benefit humanity.

    "It's best to try to prevent a negative circumstance from occurring than to wait for it to occur and then be reactive," Musk says. "This is a case where the range of negative outcomes, some of them are quite severe. It's not clear whether we'd be able to recover from some of these negative outcomes. In fact, you can construct scenarios where recovery of human civilization does not occur. When the risk is that severe, it seems like you should be proactive and not reactive."
  • Reply 6 of 13
    Originally Posted by Marvin View Post

    He seems to have a bit of a hang-up about dangerous robots:

     

    Yeah, and he’s right to. AI’s the worst decision we could possibly make.

  • Reply 7 of 13
    ascii Posts: 5,936 · member

    There is a city near my city that is right on the boundary of being too close for a flight but too far away to drive. I have done both and it just feels suboptimal whichever you choose. A Hyperloop would be perfect.

     

    People think the AI apocalypse is far-fetched because they think evil genius robots taking over the world is far-fetched, and they're right. But that's not what the AI people are warning about.

     

    What they are warning about is the danger of writing a very intelligent computer program (not evil, not self-aware, just a program that is really good at figuring out how to achieve goals), giving it a goal, and having it take the goal too literally. Elon Musk's example was: we write an AI program and give it the task of removing spam from the Internet, and it decides (purely out of logic, not out of evil intent) that removing all the people from the Earth is the quickest way to stop all the spam emails.

     

    The issue I have with these warnings is that they require a computer program that is smart enough to solve almost any problem, but simultaneously dumb enough to take its instructions totally literally.
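    The "literal goal" failure mode being described can be sketched in a few lines of toy Python. This is purely illustrative: the action names and spam scores are made up, and a real system would be nothing like this simple. The point is just that an optimizer scoring actions only by the stated objective, with no notion of side effects, will happily pick the drastic option.

    ```python
    def remaining_spam(action, spam_count):
        """Spam left after taking the action (invented toy numbers)."""
        if action == "filter_keywords":
            return spam_count // 2    # catches about half
        if action == "train_classifier":
            return spam_count // 10   # much better
        if action == "remove_all_users":
            return 0                  # no users, no spam
        return spam_count

    def choose_action(actions, spam_count):
        # Optimizes the stated objective only: minimize remaining spam.
        # Side effects (like losing every user) never enter the score.
        return min(actions, key=lambda a: remaining_spam(a, spam_count))

    actions = ["filter_keywords", "train_classifier", "remove_all_users"]
    print(choose_action(actions, 1000))  # prints "remove_all_users"
    ```

    Nothing here is smart or malicious; the drastic action simply scores best on the literal metric, which is the whole concern.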

  • Reply 8 of 13
    Marvin Posts: 15,585 · moderator
    ascii wrote: »
    Elon Musk's example was, we write an AI program and give it the task of removing spam from the Internet and it decides (purely out of logic, not out of evil intent) that removing all the people from the Earth is the quickest way to stop all the spam emails.

    The issue I have with these warnings is that they require a computer program that is smart enough to solve almost any problem, but simultaneously dumb enough to take its instructions totally literally.

    The issue I have with that is it requires a spam filter system to be able to connect with a system capable of inflicting injury on human beings on a massive scale. Such a system would be more vulnerable to malicious human intent than AI. Hawking's getting in on it too:

    http://www.bbc.com/news/technology-30290540

    There's also the issue of compulsion/motivation. AI simulates the thought processes of the mind but not the vulnerability of the body. What drives emotional response is the body's variable state of energy, awareness, hunger, desire and so on, things that would only have to be programmed into a computer model if there were an intent to introduce human weakness and purposefully make a potentially destructive system.

    The point of failure would always be a system capable of inflicting massive injury, though, such as internet-connected self-driving vehicles like a Hyperloop or a Tesla. As I say, any system like that can also be taken over by humans:

    http://www.cnet.com/news/car-hacking-code-released-at-defcon/

    Creating malicious intent doesn't require intelligence; only finding a vulnerability does. Intelligent people are finding vulnerabilities all the time and malicious people are exploiting them all the time. AI will pose no more of a threat than humans.
  • Reply 9 of 13
    ascii Posts: 5,936 · member
    Quote:

    Originally Posted by Marvin View Post

    The issue I have with that is it requires a spam filter system to be able to connect with a system capable of inflicting injury on human beings on a massive scale. […] AI will pose no more of a threat than humans.

     

    In the Terminator movies the AI took over automated nuke launch systems. Automated military systems in general would be another thing to worry about, along with transport systems.

     

    I agree that humans can hack such systems too, so we already face that danger. But separately, if AIs became a threat, could we hack software written by them? There's no way to know; it could be programmed so perfectly as to have no security vulnerabilities. So I wouldn't count on computer hacking as a weapon against the AI (if it came to that).

  • Reply 10 of 13
    Marvin Posts: 15,585 · moderator
    ascii wrote: »
    if AIs became a threat, could we hack software written by them? There's no way to know; it could be programmed so perfectly as to have no security vulnerabilities. So I wouldn't count on computer hacking as a weapon against the AI (if it came to that).

    Whatever will we do? Oh I know:

    [image: power switch]

    We wouldn't have to hack an AI program; all programs run on computer hardware that can be shut down. Network infrastructure can be closed off. While it's being fixed (e.g. drives wiped), we just revert to outdated, primitive tech. Sony went back to BlackBerrys when they got hacked:

    http://bgr.com/2014/12/31/sony-hack-blackberry-devices/

    "once Sony realized the massive scope of the hack, it was forced to dig up “a handful of old BlackBerrys, located in a storage room in the Thalberg basement” and give them to key executives."
  • Reply 11 of 13
    ascii Posts: 5,936 · member

    Unless this:

    Quote:

    Originally Posted by Marvin View Post

    [image: power switch]

    Is being guarded by this:

    [image: killer robot]
  • Reply 12 of 13
    Marvin Posts: 15,585 · moderator
    ascii wrote: »
    Unless this (switch) is being guarded by this (killer robot)

    The criticism there wouldn't be of AI but of the fact that people designed a killer robot. Weaponized drones would be more effective too, but even then there needs to be fuel; Terminators had nuclear fuel cells. Human beings would have to set up all the conditions for this to take place with evil intent. It's not a scenario where someone would go 'oh no, we built all of these killer self-fuelling robots but now they've gone rogue, they were only meant to deliver groceries'. When we get to a stage where this is even remotely possible, then we can talk about countermeasures.
  • Reply 13 of 13

    Hahaha, that's right. I think so too.

    by: tempat wisata
