Apple grows self-driving test fleet to 55 vehicles as project remains in shadows


Comments

  • Reply 21 of 36
    StrangeDays Posts: 12,877 member
    cgWerks said:
    The company has yet to apply for a permit to test without backup drivers, but those only became available in California last month.
    Boy, am I glad I don't live there anymore. Maybe just hide in your homes and offices.
    Of course, even WITH backup drivers, once they stop paying attention, you get what happened in Arizona (though the media and police will help cover it up).

    What a friggin' $$$ grab with little regard for human safety.
    Are you similarly panicked over the possibility of a human driver killing you? Why not? Chances are much higher. 

    You may not be aware, but not long ago city streets were for people and horses. When autos were invented there was similar panic and pushback about keeping them off the streets for human safety. 
    edited May 2018
  • Reply 22 of 36
    cgWerks Posts: 2,952 member
    franklinjackcon said:
    37,000 deaths on the roads in the US per year. Which state do you live in where you don't need to hide in your home or office?
    It would be nice to make the number go down, not up.

    StrangeDays said:
    Are you similarly panicked over the possibility of a human driver killing you? Why not? Chances are much higher. 

    You may not be aware, but not long ago city streets were for people and horses. When autos were invented there was similar panic and pushback about keeping them off the streets for human safety. 
    Oh sure, a human driver could kill me too. But, at least they *should* be thinking and trying to avoid me. Plus, I'm all for getting any humans that don't, off the road.

    And, this isn't any kind of fear of technology or advancement... it's a fear because of knowing the limitations of AI and that those pushing forward with it don't know or care.
  • Reply 23 of 36
    fastasleep Posts: 6,417 member
    cgWerks said:
    The company has yet to apply for a permit to test without backup drivers, but those only became available in California last month.
    Boy, am I glad I don't live there anymore. Maybe just hide in your homes and offices.
    Of course, even WITH backup drivers, once they stop paying attention, you get what happened in Arizona (though the media and police will help cover it up).

    What a friggin' $$$ grab with little regard for human safety.

    Apple may not be the First but they will likely be the Best!
    From the company that brought you Siri.... 
    You’re far more likely to be killed by a human driving a car than a self-driving car. 
  • Reply 24 of 36
    cgWerks Posts: 2,952 member
    fastasleep said:
    You’re far more likely to be killed by a human driving a car than a self-driving car. 
    Can you explain your reasoning there?
  • Reply 25 of 36
    fastasleep Posts: 6,417 member
    cgWerks said:
    fastasleep said:
    You’re far more likely to be killed by a human driving a car than a self-driving car. 
    Can you explain your reasoning there?
    Uh, I’m surprised that I have to. It’s very simple probability. There are something like 40,000 auto deaths at the hands of humans a year in the US. A fairly large percentage of those are in California due to population density. There has been, what, a small handful of deaths related to autonomous vehicles anywhere at all to date. You have a massively higher likelihood of getting killed by a human driver in California than an autonomous test vehicle. Not to mention drunk and distracted driving statistics to counter your “they should be thinking and trying to avoid me”, as if autonomous cars aren’t. How exactly do you think that many people die every year? 


  • Reply 26 of 36
    cgWerks Posts: 2,952 member
    fastasleep said:
    Uh, I’m surprised that I have to. It’s very simple probability. There are something like 40,000 auto deaths at the hands of humans a year in the US. A fairly large percentage of those are in California due to population density. There has been, what, a small handful of deaths related to autonomous vehicles anywhere at all to date. You have a massively higher likelihood of getting killed by a human driver in California than an autonomous test vehicle. Not to mention drunk and distracted driving statistics to counter your “they should be thinking and trying to avoid me”, as if autonomous cars aren’t. How exactly do you think that many people die every year? 
    It's a really big country with a lot of vehicles and drivers, driving under all kinds of crazy conditions (and, as you say, with some of the drivers impaired in various ways... i.e.: they shouldn't be driving... easy solution there). Meanwhile, there are only a few AI vehicles driving under the most ideal conditions in places that are insanely highly mapped in comparison to anywhere else, and they are still having problems.

    If they really cared about lowering that death rate, they'd do more to get the above people causing the majority of the accidents off the road.

    And, it isn't whether the AI car is trying to avoid me or not. It isn't. It's just following its program. Just like it was doing when it ran the Apple employee into the barrier, or when it decided a bicyclist crossing the street was likely a blowing plastic bag.

    Will they improve as the database gets bigger and more defined? Sure. But they'll never catch the edge cases. So, you're basically betting statistic against statistic (i.e.: we'll trade those edge case deaths for some of those drunk and distracted deaths, instead of just getting them off the road), hoping the *overall* number is eventually lower. I like quality and good solutions, not lowest-common-denominator type solutions.

    But, what really ticks me off, is that this is being sold to the general public (and our lawmakers) as some kind of magic fairy-dust type solution. They think these AI vehicles will be better 'drivers' than humans, not that they'll simply be better than a texting teenager on crack.

    In some cases, they will be better. But, then let's put that kind of stuff into AI-assistive type technologies, not take the good, the bad, and the really, really bad all in one package, because taxi, trucking, and delivery mega-industries are champing at the bit to replace humans with these train-wrecks.
  • Reply 27 of 36
    fastasleep Posts: 6,417 member
    cgWerks said:
    fastasleep said:
    Uh, I’m surprised that I have to. It’s very simple probability. There are something like 40,000 auto deaths at the hands of humans a year in the US. A fairly large percentage of those are in California due to population density. There has been, what, a small handful of deaths related to autonomous vehicles anywhere at all to date. You have a massively higher likelihood of getting killed by a human driver in California than an autonomous test vehicle. Not to mention drunk and distracted driving statistics to counter your “they should be thinking and trying to avoid me”, as if autonomous cars aren’t. How exactly do you think that many people die every year? 
    It's a really big country with a lot of vehicles and drivers, driving under all kinds of crazy conditions (and, as you say, with some of the drivers impaired in various ways... i.e.: they shouldn't be driving... easy solution there). Meanwhile, there are only a few AI vehicles driving under the most ideal conditions in places that are insanely highly mapped in comparison to anywhere else, and they are still having problems.

    If they really cared about lowering that death rate, they'd do more to get the above people causing the majority of the accidents off the road.

    And, it isn't whether the AI car is trying to avoid me or not. It isn't. It's just following its program. Just like it was doing when it ran the Apple employee into the barrier, or when it decided a bicyclist crossing the street was likely a blowing plastic bag.

    Will they improve as the database gets bigger and more defined? Sure. But they'll never catch the edge cases. So, you're basically betting statistic against statistic (i.e.: we'll trade those edge case deaths for some of those drunk and distracted deaths, instead of just getting them off the road), hoping the *overall* number is eventually lower. I like quality and good solutions, not lowest-common-denominator type solutions.

    But, what really ticks me off, is that this is being sold to the general public (and our lawmakers) as some kind of magic fairy-dust type solution. They think these AI vehicles will be better 'drivers' than humans, not that they'll simply be better than a texting teenager on crack.

    In some cases, they will be better. But, then let's put that kind of stuff into AI-assistive type technologies, not take the good, the bad, and the really, really bad all in one package, because taxi, trucking, and delivery mega-industries are champing at the bit to replace humans with these train-wrecks.
    I’m not going to argue numbers with you because you don’t seem to understand how probability works, and I’m really unsure what you’d propose as an “easy solution” to get impaired and distracted drivers off the road. But as far as your other points go, I 100% disagree. Nobody is saying it’s magic fairy dust; it’s a nascent technology that will inevitably improve to the point that it is demonstrably safer than a human behind the wheel. That is what will get dangerous human drivers away from the wheel. You’re being overly pessimistic about new technology, but that seems to be a running theme. You know airliners are largely automated now and are safer than ever?
  • Reply 28 of 36
    cgWerks Posts: 2,952 member
    fastasleep said:
    I’m not going to argue numbers with you because you don’t seem to understand how probability works, and I’m really unsure what you’d propose as an “easy solution” to get impaired and distracted drivers off the road. But as far as your other points go, I 100% disagree. Nobody is saying it’s magic fairy dust; it’s a nascent technology that will inevitably improve to the point that it is demonstrably safer than a human behind the wheel. That is what will get dangerous human drivers away from the wheel. You’re being overly pessimistic about new technology, but that seems to be a running theme. You know airliners are largely automated now and are safer than ever?
    I understand probability, but I'm not a statistic. By not drunk driving, not texting and driving, being aware (as a pedestrian), etc., I can increase my 'probabilities' significantly. This includes driver training (I used to race SCCA), having driven a lot in some really poor conditions (think Northern BC winters, or 600 miles per week through the mid-West USA snow-and-ice belt for years, etc.), and my choice of much-better-than-average vehicles.

    Over this time, I've also learned to 'read' tendencies in other human traffic so I often know when I'm around a bad driver and can be more cautious. I'm not sure if I could 'read' an AI vehicle in the same manner, as there is no rationality or behavior to read (aside from them currently driving like a 1st day drivers-ed student).

    Also, as far as getting bad drivers off the road... the police (with the use of camera technology, etc.) could stop focusing so much on stuff like speed-traps and truly catch the dangerous drivers. The penalties could get quite stiff... like one strike and you're out, or maybe two. Get caught drunk driving, or evidence of texting and driving... and you're done driving for a good long while. I think they'd learn quite quickly compared to a small fine.

    But, yes you have nailed the difference here in your response. You're AI-faithful, while I'm not. You see it as a quantitative problem (as tech advances, it will advance past humans) while I see it as a qualitative problem (no matter how good it gets, it will always be different, with advantages and disadvantages). It's artificial. It WILL NEVER actually think or reason, not even if computers get a million trillion times more powerful.

    Instead of eliminating the true danger, you propose to replace one danger with another (that you see as a lesser danger, which you think will eventually improve so it no longer is).

    And, I'm no luddite when it comes to technology at all. I just have enough exposure to philosophy and philosophy of mind (combined with my engineering background... i.e.: look for problems!) that I can see the problems many tech people can't. I've also talked to people at parties who've worked in the industry and have read a number of articles by industry pioneers who are starting to actually admit the foundation is crumbling (at least in getting to what many see as the goal of AI).
  • Reply 29 of 36
    SpamSandwich Posts: 33,407 member
    cgWerks said:
    fastasleep said:
    Uh, I’m surprised that I have to. It’s very simple probability. There are something like 40,000 auto deaths at the hands of humans a year in the US. A fairly large percentage of those are in California due to population density. There has been, what, a small handful of deaths related to autonomous vehicles anywhere at all to date. You have a massively higher likelihood of getting killed by a human driver in California than an autonomous test vehicle. Not to mention drunk and distracted driving statistics to counter your “they should be thinking and trying to avoid me”, as if autonomous cars aren’t. How exactly do you think that many people die every year? 
    It's a really big country with a lot of vehicles and drivers, driving under all kinds of crazy conditions (and, as you say, with some of the drivers impaired in various ways... i.e.: they shouldn't be driving... easy solution there). Meanwhile, there are only a few AI vehicles driving under the most ideal conditions in places that are insanely highly mapped in comparison to anywhere else, and they are still having problems.

    If they really cared about lowering that death rate, they'd do more to get the above people causing the majority of the accidents off the road.

    And, it isn't whether the AI car is trying to avoid me or not. It isn't. It's just following its program. Just like it was doing when it ran the Apple employee into the barrier, or when it decided a bicyclist crossing the street was likely a blowing plastic bag.

    Will they improve as the database gets bigger and more defined? Sure. But they'll never catch the edge cases. So, you're basically betting statistic against statistic (i.e.: we'll trade those edge case deaths for some of those drunk and distracted deaths, instead of just getting them off the road), hoping the *overall* number is eventually lower. I like quality and good solutions, not lowest-common-denominator type solutions.

    But, what really ticks me off, is that this is being sold to the general public (and our lawmakers) as some kind of magic fairy-dust type solution. They think these AI vehicles will be better 'drivers' than humans, not that they'll simply be better than a texting teenager on crack.

    In some cases, they will be better. But, then let's put that kind of stuff into AI-assistive type technologies, not take the good, the bad, and the really, really bad all in one package, because taxi, trucking, and delivery mega-industries are champing at the bit to replace humans with these train-wrecks.
    I’m not going to argue numbers with you because you don’t seem to understand how probability works, and I’m really unsure what you’d propose as an “easy solution” to get impaired and distracted drivers off the road. But as far as your other points go, I 100% disagree. Nobody is saying it’s magic fairy dust; it’s a nascent technology that will inevitably improve to the point that it is demonstrably safer than a human behind the wheel. That is what will get dangerous human drivers away from the wheel. You’re being overly pessimistic about new technology, but that seems to be a running theme. You know airliners are largely automated now and are safer than ever?
    And it'll be insurance rates that hammer the final nail into the coffin of cars without automated systems, once those systems achieve a demonstrated record of success in most driving scenarios.
  • Reply 30 of 36
    cgWerks Posts: 2,952 member
    SpamSandwich said:
    And it'll be insurance rates that hammer the final nail into the coffin of cars without automated systems, once those systems achieve a demonstrated record of success in most driving scenarios.
    Yeah, this is a concern for sure. I'm hoping it will fail before it gets this far, but there is a ***LOT*** of money and power behind this movement. I'm sure they won't let trivial things like lives and public safety get in the way. (They have a proven track record of that, now, on so many other fronts.)
  • Reply 31 of 36
    fastasleep Posts: 6,417 member
    cgWerks said:
    I understand probability, but I'm not a statistic. By not drunk driving, not texting and driving, being aware (as a pedestrian), etc., I can increase my 'probabilities' significantly. This includes driver training (I used to race SCCA), having driven a lot in some really poor conditions (think Northern BC winters, or 600 miles per week through the mid-West USA snow-and-ice belt for years, etc.), and my choice of much-better-than-average vehicles.

    Over this time, I've also learned to 'read' tendencies in other human traffic so I often know when I'm around a bad driver and can be more cautious. I'm not sure if I could 'read' an AI vehicle in the same manner, as there is no rationality or behavior to read (aside from them currently driving like a 1st day drivers-ed student).

    Also, as far as getting bad drivers off the road... the police (with the use of camera technology, etc.) could stop focusing so much on stuff like speed-traps and truly catch the dangerous drivers. The penalties could get quite stiff... like one strike and you're out, or maybe two. Get caught drunk driving, or evidence of texting and driving... and you're done driving for a good long while. I think they'd learn quite quickly compared to a small fine.
    Small fine? DUIs are extremely expensive, and you usually aren't able to drive for some time. Depends on the jurisdiction. Police can't monitor more than a tiny fraction of traffic, regardless of "camera technology". And you might be a perfect driver, but that doesn't stop someone from nodding off and swerving into your lane and killing everyone in your car. I don't know what you mean by "I'm not a statistic" — all I'm saying is you have a much, much higher probability of getting harmed by a human driver than an AI driver, regardless of where you live. So saying "glad I don't live there" because CA was about to start permitting driverless testing, when we're literally talking about a small handful of AI-related accidents in contrast to the 40,000 human-related deaths already happening every year, is kinda silly. :)

    cgWerks said:
    But, yes you have nailed the difference here in your response. You're AI-faithful, while I'm not. You see it as a quantitative problem (as tech advances, it will advance past humans) while I see it as a qualitative problem (no matter how good it gets, it will always be different, with advantages and disadvantages). It's artificial. It WILL NEVER actually think or reason, not even if computers get a million trillion times more powerful.

    Instead of eliminating the true danger, you propose to replace one danger with another (that you see as a lesser danger, which you think will eventually improve so it no longer is).

    And, I'm no luddite when it comes to technology at all. I just have enough exposure to philosophy and philosophy of mind (combined with my engineering background... i.e.: look for problems!) that I can see the problems many tech people can't. I've also talked to people at parties who've worked in the industry and have read a number of articles by industry pioneers who are starting to actually admit the foundation is crumbling (at least in getting to what many see as the goal of AI).
    To me, you sound like the people who didn't think humans could ever travel faster than a horse. :)
  • Reply 32 of 36
    cgWerks Posts: 2,952 member
    fastasleep said:
    ... we're literally talking about a small handful of AI-related accidents in contrast to the 40,000 human-related deaths already happening every year, is kinda silly. :)
    ... To me, you sound like the people who didn't think humans could ever travel faster than a horse. :)
    There are only a handful of AI cars, too, compared to millions of human-driven cars.
    Maybe closer to, humans won't ever travel faster than the speed of light.
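    The disagreement at this point is really about normalizing by exposure rather than comparing raw counts. Below is a minimal sketch of that normalization in Python; it reuses the thread's own figures plus two assumed denominators (the ~3.2 trillion annual US vehicle-miles and the ~10 million cumulative autonomous test miles are rough illustrative assumptions, not sourced statistics).

```python
# Rough exposure-normalized comparison: fatalities per 100 million miles driven.
# All inputs are approximate; the point is only that raw counts (40,000 vs.
# "a handful") can't be compared without a mileage denominator.

human_deaths_per_year = 40_000      # annual US road deaths, as cited in the thread
human_miles_per_year = 3.2e12       # assumed: ~3.2 trillion US vehicle-miles per year

av_deaths_to_date = 4               # "3 or 4 deaths" cited later in the thread
av_test_miles_to_date = 10e6        # assumed: ~10 million cumulative autonomous test miles

def rate_per_100m_miles(deaths: float, miles: float) -> float:
    """Fatalities per 100 million miles of driving."""
    return deaths / miles * 100e6

print(f"Human drivers:  {rate_per_100m_miles(human_deaths_per_year, human_miles_per_year):.2f} per 100M miles")
print(f"AV test fleets: {rate_per_100m_miles(av_deaths_to_date, av_test_miles_to_date):.2f} per 100M miles")
```

    With an autonomous sample this small, the second number is mostly noise, which is what the RAND excerpt quoted later in the thread is getting at.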
  • Reply 33 of 36
    fastasleep Posts: 6,417 member
    cgWerks said:
    fastasleep said:
    ... we're literally talking about a small handful of AI-related accidents in contrast to the 40,000 human-related deaths already happening every year, is kinda silly. :)
    ... To me, you sound like the people who didn't think humans could ever travel faster than a horse. :)
    There are only a handful of AI cars, too, compared to millions of human-driven cars.
    Maybe closer to, humans won't ever travel faster than the speed of light.
    Yes, that's exactly what I was saying. Never mind though, the original point's been lost here. Regardless, there will be millions of them on the road in the near future. 

    You should really give this a read — I think you're giving humans far too much credit and far too little credit to autonomous cars' abilities to use probabilistic reasoning, pattern recognition, and machine learning even at this stage, and underestimating how close the tech is to superseding human drivers as far as safety goes.

    http://www.driverless-future.com/?page_id=774
  • Reply 34 of 36
    cgWerks Posts: 2,952 member
    fastasleep said:
    Yes, that's exactly what I was saying. Never mind though, the original point's been lost here. Regardless, there will be millions of them on the road in the near future. 

    You should really give this a read — I think you're giving humans far too much credit and far too little credit to autonomous cars' abilities to use probabilistic reasoning, pattern recognition, and machine learning even at this stage, and underestimating how close the tech is to superseding human drivers as far as safety goes.

    http://www.driverless-future.com/?page_id=774
    Interesting article, and I think I agree mostly about how automated cars will be introduced and progress. I'm not sure about the last few points, though.
    That said, here are a few articles that address the state of AI, as well as cover some of the challenges this technology faces, including our safety discussion. I'll put the links and then one pertinent paragraph from one on the safety aspect.

    Artificial intelligence pioneer says we need to start over
    https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html

    Is AI Riding a One-Trick Pony?
    https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony/

    5 big challenges that self-driving cars still have to overcome
    https://www.vox.com/2016/4/21/11447838/self-driving-cars-challenges-obstacles

    We don't understand AI because we don't understand intelligence
    https://www.engadget.com/2016/08/15/technological-singularity-problems-brain-mind/

    After Peak Hype, Self-Driving Cars Enter The Trough Of Disillusionment
    https://www.wired.com/story/self-driving-cars-challenges/


    re: safety, from the vox article:

    Kalra laid this all out in a recent paper for RAND. As noted above, drivers in the United States currently get into fatal accidents at a rate of about one for every 100 million miles driven. Ideally, we'd want self-driving cars to be at least that safe. But it's unlikely we'll be able to prove that any time soon. Google only drove its cars 1.3 million miles total between 2009 and 2015 — not nearly enough to draw rigorous statistical conclusions about safety. It would take many decades to drive the hundreds and hundreds of millions of miles needed to prove safety.

    "My hunch is that by the time automakers are ready to sell these things, we still won't know how safe they are," says Kalra. "We're going to have to make these decisions under uncertainty."
  • Reply 35 of 36
    fastasleep Posts: 6,417 member
    cgWerks said:
    fastasleep said:
    Yes, that's exactly what I was saying. Never mind though, the original point's been lost here. Regardless, there will be millions of them on the road in the near future. 

    You should really give this a read — I think you're giving humans far too much credit and far too little credit to autonomous cars' abilities to use probabilistic reasoning, pattern recognition, and machine learning even at this stage, and underestimating how close the tech is to superseding human drivers as far as safety goes.

    http://www.driverless-future.com/?page_id=774
    Interesting article, and I think I agree mostly about how automated cars will be introduced and progress. I'm not sure about the last few points, though.
    That said, here are a few articles that address the state of AI, as well as cover some of the challenges this technology faces, including our safety discussion. I'll put the links and then one pertinent paragraph from one on the safety aspect.

    Artificial intelligence pioneer says we need to start over
    https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html

    Is AI Riding a One-Trick Pony?
    https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony/

    5 big challenges that self-driving cars still have to overcome
    https://www.vox.com/2016/4/21/11447838/self-driving-cars-challenges-obstacles

    We don't understand AI because we don't understand intelligence
    https://www.engadget.com/2016/08/15/technological-singularity-problems-brain-mind/

    After Peak Hype, Self-Driving Cars Enter The Trough Of Disillusionment
    https://www.wired.com/story/self-driving-cars-challenges/


    re: safety, from the vox article:

    Kalra laid this all out in a recent paper for RAND. As noted above, drivers in the United States currently get into fatal accidents at a rate of about one for every 100 million miles driven. Ideally, we'd want self-driving cars to be at least that safe. But it's unlikely we'll be able to prove that any time soon. Google only drove its cars 1.3 million miles total between 2009 and 2015 — not nearly enough to draw rigorous statistical conclusions about safety. It would take many decades to drive the hundreds and hundreds of millions of miles needed to prove safety.

    "My hunch is that by the time automakers are ready to sell these things, we still won't know how safe they are," says Kalra. "We're going to have to make these decisions under uncertainty."
    Yeah, the article I linked to goes into all that last bit. But the thing is, we also don’t know how much safer they might be. So there is some risk in not pursuing autonomous vehicles faster, if there’s a chance that the tech might save lives. 

    http://www.driverless-future.com/?page_id=983
  • Reply 36 of 36
    cgWerks Posts: 2,952 member
    fastasleep said:
    Yeah, the article I linked to goes into all that last bit. But the thing is, we also don’t know how much safer they might be. So there is some risk in not pursuing autonomous vehicles faster, if there’s a chance that the tech might save lives. 
    It's too hard to project from such small numbers, but what are we up to, like 3 or 4 deaths already with a pretty small number of miles under their belts (and human supervision)? It's not looking too good so far.