US House unanimously approves measures to speed up autonomous car testing

Posted in Future Apple Hardware
The U.S. House of Representatives voted unanimously on Wednesday to approve a proposal that could accelerate the testing and deployment of self-driving cars, though not without safety concerns from some parties.




The bill still has to go to the Senate -- where a group of senators has been working on its own legislation -- but in its first year it would let automakers deploy up to 25,000 vehicles that bypass normal safety standards, Reuters reported. By the third year, the cap would grow to 100,000 vehicles.

Various companies -- such as GM, Waymo, and Volkswagen -- have pressured Congress to speed up their projects, worried that state regulations could interfere, particularly ones proposed in California. Federal rules would normally block vehicles without human drivers, and the House legislation still excludes large trucks.

A key point in the bill is the elimination of pre-market approval for technologies, although automakers will still have to submit safety assessments, and prove their cars are at least as safe as current models. Vehicles may also have to meet certain headlight standards, and warn people to check rear seats in case children are left behind.

Concerns have been raised throughout the bill's history. "The autonomous vehicle bill just passed by the House leaves a wild west without adequate safety protections for consumers. It pre-empts any state safety standards, but there are none at the national level," Consumer Watchdog said in a statement following the bill's approval.

Apple is currently engaged in limited public testing of a self-driving car platform. The company appears to have scaled back its ambitions from selling its own vehicle to developing a corporate shuttle, and possibly becoming involved with ride-hailing services.

Comments

  • Reply 1 of 20
    Good thing, because with the massive aging population it will soon be too unsafe to be on the roads otherwise and those aging would be cut off from being able to lead a normal life for their remaining years.
  • Reply 2 of 20
    Good thing, because with the massive aging population it will soon be too unsafe to be on the roads otherwise and those aging would be cut off from being able to lead a normal life for their remaining years.
    I'm less concerned with the aging population than the burgeoning idiot population. I can't walk a single block without seeing some fool breaking the law with their driving, often egregiously. The sooner we get humans out of the driver's seat, the better. One day, people will look back at cars with drivers the way we look back at programming computers with punch cards.
  • Reply 3 of 20
    Glad to see this. Occasionally some semblance of sense does prevail among these low-life lotus-eaters. The last major move along these lines was the commitment not to tax the internet, which I think provided a huge boost to internet commerce.
    edited September 2017
  • Reply 4 of 20
    This is a great way the government can be an "enabler".  Too often it is the exact opposite.  Exciting times.
  • Reply 5 of 20
    Good thing, because with the massive aging population it will soon be too unsafe to be on the roads otherwise and those aging would be cut off from being able to lead a normal life for their remaining years.
    I'm less concerned with the aging population than the burgeoning idiot population. I can't walk a single block without seeing some fool breaking the law with their driving, often egregiously. The sooner we get humans out of the driver's seat, the better. One day, people will look back at cars with drivers the way we look back at programming computers with punch cards.
    I know of a number of aged and aging adults who are eagerly looking forward to self-driving cars. It'll be like they will have access to a large population of personal taxi drivers or chauffeurs at their beck and call.
    edited September 2017
  • Reply 6 of 20
    tyler82 Posts: 1,101 member
    Good thing, because with the massive aging population it will soon be too unsafe to be on the roads otherwise and those aging would be cut off from being able to lead a normal life for their remaining years.
    I feel much safer driving next to a 60-something than a teen or 20-something.
  • Reply 7 of 20
    tshapi Posts: 370 member
    Why is everyone so eager to drive cars that can be hacked?
  • Reply 8 of 20
    cgWerks Posts: 2,952 member
    What a bunch of friggin' morons!
  • Reply 9 of 20
    cgWerks Posts: 2,952 member
    charlesatlas said:
    I'm less concerned with the aging population than the burgeoning idiot population.
    No doubt. And, not just for driving!
    I think I'll take my chances that the elderly will have enough sense to stop driving when appropriate, over people distracted in all sorts of ways, or doing stupid stuff like trying to pass in dangerous areas/ways (a huge problem in Northern BC).
    And, I'd probably take them over FI (faked intelligence) vehicles roaming around out there as well.
    But, I guess I'm mostly concerned with a bunch of greedy and corrupt morons running the country who can't think their way out of a paper sack.

    tshapi said:
    Why is everyone so eager to drive cars that can be hacked?
    Makes it easy to add a new one to the list... hot tubs, small aircraft, two to the head with gun in the left hand, etc.
  • Reply 10 of 20
    We can't even have automated monorails, light rail, trains, etc. that don't crash... and they're on tracks. The variables and logistics of fully autonomous driving are just far too complex. I don't think it will happen nearly as soon as people think, if at all. Many of the test cars have had even higher accident rates than human drivers, though those accidents were technically the fault of the human drivers and not the autonomous cars. However, this points to the fact that a human will always be better at avoiding accidents by predicting the human behavior of other drivers. Avoiding accidents is more than just not being at fault. Thus it stands to reason that unless every car on the road were autonomous, they won't really be that much safer. There's not really a way to just suddenly make that switch, either. Furthermore, I don't even want an autonomous car, and I am not alone in that by any means. Lane assist, parking assist, trailer backing, etc.? Sure, but fully autonomous? No. I want to be able to drive off road, or out in a field, back country and unexplored territory. I love the freedom that being in the driver's seat brings. No autonomous vehicle will ever be able to match this.
  • Reply 11 of 20
    People criticizing the idea and rapidly advancing implementations apparently have no concept of how quickly artificial intelligence will be imbued in all kinds of electronic products. 

    When Elon Musk says they'll have an autonomous vehicle capable of self-driving coast-to-coast by 2019, I take him at his word because he's in a position to know!
  • Reply 12 of 20
    cgWerks Posts: 2,952 member
    pigybank said:
    ... However this points to the fact that a human will always be better at avoiding accidents by predicting the human behavior of other drivers.  Avoiding accidents is more than just not being at fault.  Thus it stands to reason that unless every car on the road were autonomous, they won't really be that much safer.  There's not really a way to just suddenly make that switch either.  Furthermore, I don't even want an autonomous car, and I am not alone in that by any means.  Lane assist, parking assist, trailer backing, etc?  Sure, but fully autonomous?  No. ...
    I pretty much agree with what you've said, but autonomous cars will be better at avoiding certain kinds of accidents that humans aren't good at avoiding. The problem is 'on the whole' and it's questionable what will happen until it's 100% autonomous, as you say. Just the other day, I nearly hit a guy (probably on drugs) who was kind of half-jogging down the sidewalk, who suddenly decided to cross the street (he never even looked back, just swerved right out across the street).

    Now, possibly, an AI car might (if the sensors properly detect the problem, and the decision tree has been properly programmed & filled with data) stop more quickly and efficiently than I could. On the other hand, just seeing this as I approached, I 'read' that there wasn't something quite right with the situation, so I was in some sense anticipating, or I'd have hit him (I had already unconsciously been slowing).

    But, I agree, I'm a car enthusiast and I don't want an AI car. Some AI assists maybe, but not full AI (if it were even possible for it to work well).

    The other problem is that distracted driving and the ability of people to reason and be responsible has been so impaired in recent years that as a society, we're starting to outrun (in stupidity) the incredible safety advances of our vehicles. As much as I oppose the lawmakers on this, maybe they are simply doing the math and have decided 'on the whole' AI would save lives vs trying to actually teach people how to drive and be responsible (which might be a hopeless cause). In that case, that guy above might just be a statistic outweighed by some other life saved because the AI stopped a car where the driver was texting.

    That's not the kind of world I want to live in.... but it might be where we're headed. :( As my dad wisely says when we talk about some of these things... 'yea, but I'm not the average driver'. Of course, I remind him that this doesn't mean he's safe from the other idiots, but the point is a good one. If the average is low enough, you can actually be safer than that average through some experience and diligence.

    People criticizing the idea and rapidly advancing implementations apparently have no concept of how quickly artificial intelligence will be imbued in all kinds of electronic products. 

    When Elon Musk says they'll have an autonomous vehicle capable of self-driving coast-to-coast by 2019, I take him at his word because he's in a position to know!
    Elon might have a great deal of technical understanding, but he's operating off a false view from which this future of AI is envisioned.
  • Reply 13 of 20
    foggyhill Posts: 4,767 member
    cgWerks said:
    pigybank said:
    ... However this points to the fact that a human will always be better at avoiding accidents by predicting the human behavior of other drivers.  Avoiding accidents is more than just not being at fault.  Thus it stands to reason that unless every car on the road were autonomous, they won't really be that much safer.  There's not really a way to just suddenly make that switch either.  Furthermore, I don't even want an autonomous car, and I am not alone in that by any means.  Lane assist, parking assist, trailer backing, etc?  Sure, but fully autonomous?  No. ...
    I pretty much agree with what you've said, but autonomous cars will be better at avoiding certain kinds of accidents that humans aren't good at avoiding. The problem is 'on the whole' and it's questionable what will happen until it's 100% autonomous, as you say. Just the other day, I nearly hit a guy (probably on drugs) who was kind of half-jogging down the sidewalk, who suddenly decided to cross the street (he never even looked back, just swerved right out across the street).

    Now, possibly, an AI car might (if the sensors properly detect the problem, and the decision tree has been properly programmed & filled with data) stop more quickly and efficiently than I could. On the other hand, just seeing this as I approached, I 'read' that there wasn't something quite right with the situation, so I was in some sense anticipating, or I'd have hit him (I had already unconsciously been slowing).

    But, I agree, I'm a car enthusiast and I don't want an AI car. Some AI assists maybe, but not full AI (if it were even possible for it to work well).

    The other problem is that distracted driving and the ability of people to reason and be responsible has been so impaired in recent years that as a society, we're starting to outrun (in stupidity) the incredible safety advances of our vehicles. As much as I oppose the lawmakers on this, maybe they are simply doing the math and have decided 'on the whole' AI would save lives vs trying to actually teach people how to drive and be responsible (which might be a hopeless cause). In that case, that guy above might just be a statistic outweighed by some other life saved because the AI stopped a car where the driver was texting.

    That's not the kind of world I want to live in.... but it might be where we're headed. :( As my dad wisely says when we talk about some of these things... 'yea, but I'm not the average driver'. Of course, I remind him that this doesn't mean he's safe from the other idiots, but the point is a good one. If the average is low enough, you can actually be safer than that average through some experience and diligence.

    People criticizing the idea and rapidly advancing implementations apparently have no concept of how quickly artificial intelligence will be imbued in all kinds of electronic products. 

    When Elon Musk says they'll have an autonomous vehicle capable of self-driving coast-to-coast by 2019, I take him at his word because he's in a position to know!
    Elon might have a great deal of technical understanding, but he's operating off a false view from which this future of AI is envisioned.
    Maybe you think people were better drivers in the 1980s or 1970s, but the stats seem to say otherwise.
    One thing to remember also: there are massively more cars on the road than in, say, the 1960s, and thus driving these days takes a lot more situational awareness than it did then.
    You could maybe survive being a true dumbass driver with a terrible 1970s car with the crummiest of handling (almost none, even in a straight line).
    Cars have also improved immensely, and not just because of driving assists. Even the lamest of cars these days has proper handling compared to the really terrible handling of pre-1990s cars.
  • Reply 14 of 20
    cgWerks said:
    ...SNIP...
    People criticizing the idea and rapidly advancing implementations apparently have no concept of how quickly artificial intelligence will be imbued in all kinds of electronic products. 

    When Elon Musk says they'll have an autonomous vehicle capable of self-driving coast-to-coast by 2019, I take him at his word because he's in a position to know!
    Elon might have a great deal of technical understanding, but he's operating off a false view from which this future of AI is envisioned.
    Just limit his prognostication to self-driving vehicles (his area of expertise) and you'll get a good idea with regard to timelines.
  • Reply 15 of 20
    badmonk Posts: 1,293 member
    Really, parents need a warning to make sure they remove their children from the backseat?
  • Reply 16 of 20
    cgWerks Posts: 2,952 member
    foggyhill said:
    Maybe you think people were better drivers in the 1980s or 1970s, but the stats seem to say otherwise.
    One thing to remember also: there are massively more cars on the road than in, say, the 1960s, and thus driving these days takes a lot more situational awareness than it did then.
    You could maybe survive being a true dumbass driver with a terrible 1970s car with the crummiest of handling (almost none, even in a straight line).
    Cars have also improved immensely, and not just because of driving assists. Even the lamest of cars these days has proper handling compared to the really terrible handling of pre-1990s cars.
    Oh, I agree cars have gotten dramatically better, both in terms of performance (including handling) and safety. I'm pretty sure that has much more to do with any decreased traffic fatality/injury stats you might cite than improved driving capabilities. While the roads when I was growing up weren't very safe after dark, due to the near glorification of drunk driving... I think people were - on the whole - more attentive to their driving before the age of as many gadgets and cell phones (and way more than the current smartphone days). So, yes, I'd say drivers on the whole were better a decade ago and prior.

    There also seems to be a rash of utter lack of common sense and judgement. Unless it's just where I live, I've had several close calls, and have witnessed accidents, where people will turn left blindly across oncoming traffic and stuff like that. I don't recall seeing that kind of stuff a whole lot before the last 5-10 years. (Sadly, so many people have gotten killed at a few intersections like that, here, in the last 3-4 years that they've rebuilt the lights/intersections recently. I was getting kind of gun-shy of these intersections, having had close calls a few times. Just today, in fact, while taking my son to school, a car blindly turned left in front of us at one of the non-upgraded intersections, though not close enough to lay rubber and miss by feet/inches, fortunately. )

    SpamSandwich said:
    Just limit his prognostication to self-driving vehicles (his area of expertise) and you'll get a good idea with regard to timelines.
    Yea, I'm going more off stuff he's written about the future and his fear of AI. I'm sure he understands technically how the current 'AI' works in his Teslas, but his understanding of it in the big picture seems more sci-fi than anything. And, if he's counting on those kind of advances to ultimately make his automated car dreams come true, then he's going to be disappointed.

    My fear - with someone like this - is that he might be trying to push for legal advancement based on those dreams.... so we can hurry up and meet the future. I've heard conflicting reports, already, between the PR they are putting out about the state of Autopilot and the actual reality. Some of these 'future types' end up actually believing the smoke they are blowing enough to try and assume the responsibility of leading the masses into their imagined future.
    edited September 2017
  • Reply 17 of 20
    cgWerks said:
    foggyhill said:
    Maybe you think people were better drivers in the 1980s or 1970s, but the stats seem to say otherwise.
    One thing to remember also: there are massively more cars on the road than in, say, the 1960s, and thus driving these days takes a lot more situational awareness than it did then.
    You could maybe survive being a true dumbass driver with a terrible 1970s car with the crummiest of handling (almost none, even in a straight line).
    Cars have also improved immensely, and not just because of driving assists. Even the lamest of cars these days has proper handling compared to the really terrible handling of pre-1990s cars.
    Oh, I agree cars have gotten dramatically better, both in terms of performance (including handling) and safety. I'm pretty sure that has much more to do with any decreased traffic fatality/injury stats you might cite than improved driving capabilities. While the roads when I was growing up weren't very safe after dark, due to the near glorification of drunk driving... I think people were - on the whole - more attentive to their driving before the age of as many gadgets and cell phones (and way more than the current smartphone days). So, yes, I'd say drivers on the whole were better a decade ago and prior.

    There also seems to be a rash of utter lack of common sense and judgement. Unless it's just where I live, I've had several close calls, and have witnessed accidents, where people will turn left blindly across oncoming traffic and stuff like that. I don't recall seeing that kind of stuff a whole lot before the last 5-10 years. (Sadly, so many people have gotten killed at a few intersections like that, here, in the last 3-4 years that they've rebuilt the lights/intersections recently. I was getting kind of gun-shy of these intersections, having had close calls a few times. Just today, in fact, while taking my son to school, a car blindly turned left in front of us at one of the non-upgraded intersections, though not close enough to lay rubber and miss by feet/inches, fortunately. )

    SpamSandwich said:
    Just limit his prognostication to self-driving vehicles (his area of expertise) and you'll get a good idea with regard to timelines.
    Yea, I'm going more off stuff he's written about the future and his fear of AI. I'm sure he understands technically how the current 'AI' works in his Teslas, but his understanding of it in the big picture seems more sci-fi than anything. And, if he's counting on those kind of advances to ultimately make his automated car dreams come true, then he's going to be disappointed.

    My fear - with someone like this - is that he might be trying to push for legal advancement based on those dreams.... so we can hurry up and meet the future. I've heard conflicting reports, already, between the PR they are putting out about the state of Autopilot and the actual reality. Some of these 'future types' end up actually believing the smoke they are blowing enough to try and assume the responsibility of leading the masses into their imagined future.
    I'm not entirely sure what is driving his concern/fear with regard to AI, especially since strong AI is a ways off.
  • Reply 18 of 20
    Marvin Posts: 15,322 moderator
    cgWerks said:
    foggyhill said:
    Maybe you think people were better drivers in the 1980s or 1970s, but the stats seem to say otherwise.
    One thing to remember also: there are massively more cars on the road than in, say, the 1960s, and thus driving these days takes a lot more situational awareness than it did then.
    You could maybe survive being a true dumbass driver with a terrible 1970s car with the crummiest of handling (almost none, even in a straight line).
    Cars have also improved immensely, and not just because of driving assists. Even the lamest of cars these days has proper handling compared to the really terrible handling of pre-1990s cars.
    Oh, I agree cars have gotten dramatically better, both in terms of performance (including handling) and safety. I'm pretty sure that has much more to do with any decreased traffic fatality/injury stats you might cite than improved driving capabilities. While the roads when I was growing up weren't very safe after dark, due to the near glorification of drunk driving... I think people were - on the whole - more attentive to their driving before the age of as many gadgets and cell phones (and way more than the current smartphone days). So, yes, I'd say drivers on the whole were better a decade ago and prior.

    There also seems to be a rash of utter lack of common sense and judgement. Unless it's just where I live, I've had several close calls, and have witnessed accidents, where people will turn left blindly across oncoming traffic and stuff like that. I don't recall seeing that kind of stuff a whole lot before the last 5-10 years. (Sadly, so many people have gotten killed at a few intersections like that, here, in the last 3-4 years that they've rebuilt the lights/intersections recently. I was getting kind of gun-shy of these intersections, having had close calls a few times. Just today, in fact, while taking my son to school, a car blindly turned left in front of us at one of the non-upgraded intersections, though not close enough to lay rubber and miss by feet/inches, fortunately. )

    SpamSandwich said:
    Just limit his prognostication to self-driving vehicles (his area of expertise) and you'll get a good idea with regard to timelines.
    Yea, I'm going more off stuff he's written about the future and his fear of AI. I'm sure he understands technically how the current 'AI' works in his Teslas, but his understanding of it in the big picture seems more sci-fi than anything. And, if he's counting on those kind of advances to ultimately make his automated car dreams come true, then he's going to be disappointed.

    My fear - with someone like this - is that he might be trying to push for legal advancement based on those dreams.... so we can hurry up and meet the future. I've heard conflicting reports, already, between the PR they are putting out about the state of Autopilot and the actual reality. Some of these 'future types' end up actually believing the smoke they are blowing enough to try and assume the responsibility of leading the masses into their imagined future.
    I'm not entirely sure what is driving his concern/fear with regard to AI, especially since strong AI is a ways off.
    Elon Musk talks about AI in the following video (1:16:25):



    He references the advances in things like AI beating human players:

    https://www.newscientist.com/article/2132086-deepminds-ai-beats-worlds-best-go-player-in-latest-face-off/

    Typically the people expressing caution over AI avoid describing scenarios, which leads people to just dismiss them as fear-mongering and irresponsible. Elon gave an example there about an AI controlling information systems. When you look at how the world ticks, in its entirety it's driven by information, and it's all information; the term encompasses everything. The way these comments respond to Elon Musk right now is because of information; you never met him, you've never heard him say these things in person, but even in person it's just information signals. An AI in a network system can manipulate information, and it can figure out how to find security vulnerabilities faster than a human. China uses machine learning on its firewall to dynamically shut down attempts to bypass it.

    Elon gave an example of an AI being tasked with getting the best stock prices, and hacking into airline control networks to change the navigation of a flight over restricted airspace to start a war. Ultimately the AI would be designed by someone, and there are laws to cover malicious computer use, but there could be no preventative legislation for making the AI, just as for hacking or most crimes. The laws are just there to clean up afterwards. The best they can really do is to regulate the systems an AI can run on or has access to.

    One mention in the video concerning autonomous vehicles was a system-wide hack across a manufacturer's vehicle control software. They want to ensure that humans have the ultimate override, e.g. an emergency button that shuts off the network and allows the vehicles to continue operating safely. This needs partitioning of control systems. The same applies to sensitive data, like in the Equifax hack. The iPhone uses partitioned systems with the Secure Enclave.

    "Communication between the Secure Enclave and the application processor is isolated to an interrupt-driven mailbox and shared memory data buffers."

    That kind of regulation reduces the possibility for sensitive control systems to be compromised.

    I think some people are exaggerating the feasibility of AI software alone causing a lot of damage; it's easy to extrapolate into science fiction scenarios, which ignore the fact that people are observing and monitoring systems all the time. It would have to get out of control very quickly, but AI is adaptive. There was a case of a computer trading system going wrong quite quickly due to a bug; it took 45 minutes ( https://www.sec.gov/news/press-release/2013-222 ). If it were an AI operating on trading systems, it could cause a lot of economic disruption, especially between countries, but again, people would be monitoring and able to clean up a lot of this.

    There are much simpler scenarios that would be much more damaging. A solar-powered drone could fly forever potentially:

    http://www.iflscience.com/technology/solar-powered-drone-could-fly-nonstop-five-years/

    Imagine a gun was attached pointing downwards with 100 bullets. Someone puts on control software with a camera to detect humans from above. They deploy 1000 drones across an entire country with an AI that locates and fires at people from above. That could wipe out 100,000 people in a matter of hours before anyone could react, more than a nuclear weapon. They refill the drones and keep on going. I think that's a lot more dangerous than software because software doesn't immediately hurt people, it just influences them, AI-driven hardware can hurt people directly.

    AI-driven hardware can also be safer than human-driven hardware. That's always the responsibility that comes with advancing technology, to make sure that it does more good than harm.
  • Reply 19 of 20
    Marvin said:
    cgWerks said:
    foggyhill said:
    Maybe you think people were better drivers in the 1980s or 1970s, but stats seem to say otherwise.
    One thing to remember also, there are massively more cars on the road than say the 1960s and thus driving these days takes a lot more situational awareness than then.
    You could maybe survive being a true dumbass driver with a terrible 1970s car with the crumiest of handling (almost none even in a straight line).
    Cars also have improved immensely and not just because of driving assist. Even the lamest of cars these days has proper hand handling
    compared to the really terrible handling of pre 1990s cars.
    Oh, I agree cars have gotten dramatically better, both in terms of performance (including handling) and safety. I'm pretty sure that has much more to do with any decreased traffic fatality/injury stats you might cite than improved driving capabilities. While the roads when I was growing up weren't very safe after dark, due to the near glorification of drunk driving... I think people were - on the whole - more attentive to their driving before the age of as many gadgets and cell phones (and way more than the current smartphone days). So, yes, I'd say drivers on the whole were better a decade ago and prior.

    There also seems to be a rash of utter lack of common sense and judgement. Unless it's just where I live, I've had several close calls, and have witnessed accidents, where people will turn left blindly across oncoming traffic and stuff like that. I don't recall seeing that kind of stuff a whole lot before the last 5-10 years. (Sadly, so many people have gotten killed at a few intersections like that, here, in the last 3-4 years that they've rebuilt the lights/intersections recently. I was getting kind of gun-shy of these intersections, having had close calls a few times. Just today, in fact, while taking my son to school, a car blindly turned left in front of us at one of the non-upgraded intersections, though not close enough to lay rubber and miss by feet/inches, fortunately. )

    SpamSandwich said:
    Just limit his prognostication to self-driving vehicles (his area of expertise) and you'll get a good idea with regard to timelines.
    Yea, I'm going more off stuff he's written about the future and his fear of AI. I'm sure he understands technically how the current 'AI' works in his Teslas, but his understanding of it in the big picture seems more sci-fi than anything. And, if he's counting on those kind of advances to ultimately make his automated car dreams come true, then he's going to be disappointed.

    My fear - with someone like this - is that he might be trying to push for legal advancement based on those dreams.... so we can hurry up and meet the future. I've heard conflicting reports, already, between the PR they are putting out about the state of Autopilot and the actual reality. Some of these 'future types' end up actually believing the smoke they are blowing enough to try and assume the responsibility of leading the masses into their imagined future.
    I'm not entirely sure what is driving his concern/fear with regard to AI, especially since strong AI is a ways off.
    Elon Musk talks about AI in the following video (1:16:25):



    He references the advances in things like AI beating human players:

    https://www.newscientist.com/article/2132086-deepminds-ai-beats-worlds-best-go-player-in-latest-face-off/

    Typically the people expressing caution over AI avoid describing scenarios, which leads people to just dismiss them as fear-mongering and irresponsible. Elon gave an example there about an AI controlling information systems. When you look at how the world ticks in its entirety, it's driven by information; the term encompasses everything. The way people in these comments respond to Elon Musk is itself a product of information: you've never met him or heard him say these things in person, and even in person it's just information signals. An AI in a networked system can manipulate information, and it can figure out how to find security vulnerabilities faster than a human can. China uses machine learning on its firewall to dynamically shut down attempts to bypass it.

    Elon gave an example of an AI being tasked with getting the best stock prices and hacking into airline control networks to change the navigation of a flight over restricted airspace to start a war. Ultimately the AI would be designed by someone, and there are laws covering malicious computer use, but there could be no preventative legislation against building such an AI, just as there isn't for hacking or most crimes. The laws are just there to clean up afterwards. The best regulators can really do is control the systems an AI can run on or has access to.

    One mention in the video concerning autonomous vehicles was a system-wide hack across a manufacturer's vehicle control software. They want to ensure that humans have the ultimate override, e.g. an emergency button that shuts off the network and allows the vehicles to continue operating safely. This requires partitioning of control systems. The same applies to sensitive data, as in the Equifax hack. The iPhone uses partitioned systems with the Secure Enclave.
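    The partitioning-plus-override idea could be sketched roughly like this. Everything here is illustrative (the class, the speed limits, the "safe mode" behavior are all made up); the point is just that the network partition can only suggest commands through one narrow, validated interface, and a human-triggered override cuts that path while the local control loop keeps running:

    ```python
    # Hypothetical sketch: a network-isolated safety partition with a
    # human-operated emergency override. All names/values are illustrative.

    class SafetyPartition:
        """Local control loop; keeps operating even with the network cut off."""
        def __init__(self):
            self.network_enabled = True
            self.speed = 30.0  # current speed, arbitrary units

        def receive_network_command(self, target_speed):
            # The only path in from the network: narrow and validated.
            if not self.network_enabled:
                return False  # override active: network input is ignored
            if not (0.0 <= target_speed <= 70.0):
                return False  # reject out-of-range commands
            self.speed = target_speed
            return True

        def emergency_override(self):
            # The "emergency button": disconnect the network partition
            # and degrade to a safe local operating mode.
            self.network_enabled = False
            self.speed = min(self.speed, 20.0)

    car = SafetyPartition()
    car.receive_network_command(55.0)             # accepted while connected
    car.emergency_override()
    accepted = car.receive_network_command(70.0)  # now rejected
    print(accepted, car.speed)                    # False 20.0
    ```

    A compromised network partition in this sketch can, at worst, send bad suggestions that get validated or ignored; it can't reach the local loop directly.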

    "Communication between the Secure Enclave and the application processor is isolated to an interrupt-driven mailbox and shared memory data buffers."

    That kind of regulation reduces the possibility for sensitive control systems to be compromised.

    I think some people are exaggerating the feasibility of AI software alone causing a lot of damage. It's easy to extrapolate into science-fiction scenarios, which ignore the fact that people are observing and monitoring systems all the time. It would have to get out of control very quickly, though AI is adaptive. There was a case of a computer trading system going wrong quite quickly due to a bug; the episode lasted 45 minutes ( https://www.sec.gov/news/press-release/2013-222 ). If an AI were operating on trading systems, it could cause a lot of economic disruption, especially between countries, but again, people would be monitoring and able to clean up a lot of this.
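    The monitoring point can itself be automated. A minimal sketch, with entirely made-up thresholds and names, of a watchdog that halts a runaway trading process when its order rate spikes far beyond normal, rather than waiting the better part of an hour for a human to notice:

    ```python
    # Hypothetical watchdog/kill-switch sketch for an automated trading
    # system. The class name and threshold are illustrative assumptions.

    class TradingWatchdog:
        def __init__(self, max_orders_per_second=100):
            self.max_rate = max_orders_per_second
            self.halted = False

        def observe(self, orders_this_second):
            # Halt permanently once an anomalous burst is seen;
            # a human has to investigate before trading resumes.
            if orders_this_second > self.max_rate:
                self.halted = True
            return not self.halted  # True -> trading may continue

    watchdog = TradingWatchdog(max_orders_per_second=100)
    print(watchdog.observe(40))    # normal load: True
    print(watchdog.observe(5000))  # runaway burst: False (halted)
    print(watchdog.observe(10))    # stays halted: False
    ```

    Real exchanges use circuit breakers in this spirit; the sketch just shows why the damage window depends on how fast the monitor trips, not on how fast a human can react.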

    There are much simpler scenarios that would be far more damaging. A solar-powered drone could potentially fly forever:

    http://www.iflscience.com/technology/solar-powered-drone-could-fly-nonstop-five-years/

    Imagine a gun attached pointing downwards with 100 bullets. Someone puts on control software with a camera to detect humans from above. They deploy 1,000 drones across an entire country with an AI that locates and fires at people from above. That could kill 100,000 people in a matter of hours, before anyone could react; more than a nuclear weapon. Then they refill the drones and keep going. I think that's a lot more dangerous than software alone, because software doesn't immediately hurt people, it just influences them; AI-driven hardware can hurt people directly.

    AI-driven hardware can also be safer than human-driven hardware. That's always the responsibility that comes with advancing technology, to make sure that it does more good than harm.
    No matter how much regulation you throw at it, someone, or some country, that won't abide by regulations will be out there using A.I. for a purpose no one anticipates. IMO, prepare for every imaginable thing to go wrong in the digital realm.
  • Reply 20 of 20
    cgWerks Posts: 2,952member
    SpamSandwich said:
    I'm not entirely sure what is driving his concern/fear with regard to AI, especially since strong AI is a ways off.
    Strong AI is a category error. What makes us think we can program a computer to do what a human brain does, when we don't even know what a human brain does to begin with? It's silliness, based on false presuppositions. You don't hook sensors up to a computer, and poof, out comes consciousness.

    Marvin said:
    He references the advances in things like AI beating human players:
    ...
    Typically the people expressing caution over AI avoid describing scenarios, which leads people to just dismiss them as fear-mongering and irresponsible. Elon gave an example there about an AI controlling information systems.
    ...
    An AI in a network system can manipulate information, it can figure out how...
    ...
    Elon gave an example of an AI being tasked with getting the best stock prices and hacking into airline control networks to change the navigation of a flight over restricted airspace to start a war.
    ...
    Imagine a gun was attached pointing downwards with 100 bullets. Someone puts on control software with a camera to detect humans from above. They deploy 1000 drones across an entire country with an AI that locates and fires at people from above.
    Beating humans at chess or Go might be called 'AI' but it's not the kind of AI that Elon keeps lapsing into in his 'what if' scenarios. Please give me the steps in his scenario of how this computer goes from calculating stocks to hacking into a defense system. It's baloney, unless someone programmed it to do that. It doesn't 'figure out how' unless that's what it's designed to do.

    So, ultimately, you come to the real danger that isn't intelligent at all, but is still a huge danger. We don't need AI to make advanced machines that could be designed (on purpose, or possibly by accident) to hurt or kill people, or cause a lot of destruction. But it won't be because the algos suddenly become self-aware and 'decide' to do something.

    Every time I hear someone talking about AI... it goes from here's-what-it's-doing to - incredible logic leap - then XYZ happens. And the magic inserted seems to come down to... faster computers and time. I can plug all the sensors in the world into my computer, make it a zillion times faster, and give it eternity, and it won't ever have a thought of its own... even if it's running some fancy skip-logic OS.

    And that brings me back to the car problem. I think a lot of people imagine that AI cars work by actually making some kind of decisions (even if artificial), but in reality, what all this fancy stuff depends on is building a big enough database of what-ifs and solutions. The problem is that there is a nearly infinite number of what-ifs. As more data is collected and programmers keep tweaking the routines, these AI cars will seem smarter and smarter... that is, until they run into some situation they don't accurately 'read' or have a good solution to.

    Then, something bad will happen. The bet these guys are making is that fewer bad things will happen - on the whole - than would happen due to human error (or, more likely, negligence... especially in a society that prioritizes not missing a Facebook update over keeping themselves or others alive). I don't think that's a trade I'd like to make. I welcome all sorts of assistive technology that can see the moose with IR or radar heading to cross the road I'm driving down, or maybe even auto-braking that kicks in when I don't seem to recognize an obstacle I'm going to hit (though I can imagine problem scenarios with this). But I want actual conscious, thinking beings to ultimately be in charge, not some half-baked computer routines masquerading as 'intelligence.'