AI 'drivers' will gain same legal status as human drivers in autonomous vehicles, NHTSA rules

Comments

  • Reply 41 of 74
    So, sue the car... er, Google.
  • Reply 42 of 74
    linkman said:
    Right now, if a human is driving in the USA and breaks a law, there is criminal liability. An individual is held responsible. Extend this concept to AI driving, and either one of the riders is now responsible or someone who owns/operates/designed/engineered the software/etc. can go to jail. I think that someone should definitely be held responsible for poorly implemented vehicles -- I don't want to get caught behind 3-mile traffic jams every day because some stupid AI-operated cars are being too cautious and driving slower than the minimum allowed speed.

    This basically happened a few months ago in California: http://www.theguardian.com/technology/2015/nov/13/google-self-driving-car-pulled-over-driving-too-slowly
    If you can't compute, slow down. That's the best policy.
    The link is hilarious and reminds me of the story of a (car) driver who never had an accident in 30 years but then made up for that in a terminal way. He wondered why he had this fatal accident; but God (or another onlooker) knew the answer: he had no accidents all those years only because of the quality of the other drivers, and he was in fact a very bad driver without knowing it.

    Edit: The other cause of accidents with GCars is a rear-end collision, and that is caused by the fact that the GCars hit the brakes at totally unexpected moments, when no living thing (not even a bird) can see a cause (not even a remote one) for the action. Even very skilled drivers can be caught off guard by that. 
    Of course the GCars should be blamed (where I live that is the case; hitting the brakes for no reason isn't allowed).
    edited February 2016
  • Reply 43 of 74
    melgross said:
    If you were a non-vehicle operating passenger in an autonomous vehicle that you did not own, it'd be no different from you taking a cab.

    If you owned the vehicle, presumably you'd still need insurance.
    This is very confusing. I'd need to read the whole thing to get a better idea of what they mean here. But, if I own the car, and I'm not responsible for an accident, then who is? We can't take the car to court. We can't fine it. Can we take the license away? What license, and how would that work, as every car from that manufacturer would presumably have the same software and hardware combo? So would all the cars need to get off the road? This is a real can of worms. Does the manufacturer have to pay for the accident? What about insurance? Normally we incur penalties if we have an accident. How would that work here?

    It seems to me that a lot needs to be worked out.
    This doesn't answer all of your questions but I thought this article was interesting:

    http://www.bbc.com/news/technology-34475031
  • Reply 44 of 74
    alandail said:
    maestro64 said:

    Think about these self-driving cars interacting with these unpredictable humans; who do you think will win? I'll give you an example: in most DUI accidents, who lives and who dies most of the time? Computers today still do not have the ability to deal with the unpredictable. Humans have the ability to respond and react to things they have not dealt with in the past. I have personally avoided bad situations when I realized others around me were not driving the way they should; it is subtle things, but it adds up, and the self-driving cars lack this ability.
    You underestimate the AI. Computers will react far faster than a person is capable of, and the software running them has more driving experience than any single person does.
    You must be right; it's like Apple's 'Genius' song selection, which will get better as more and more people use it. Only now it's advertised that 'Genius' is hand-picked by ... wait for it ... actual humans.

    Edit: That reminds me of a DARPA robot car running in a driverless off-road contest: all was fine on a bright day, and the car was driving a perfectly visible road in a straight line, when it suddenly steered sharply to the right and hit a tree somewhat further off the road. What happened: the AI (a neural network) was trained on pictures of roads, but what it 'learned' was that a 'road' is whatever part of the image is darker, so it interpreted the shadow of a tree as road and ended up a total loss against that tree.
    Lesson learned: you never know what a neural net actually learns, and you also never know whether it improves.
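    To make the point concrete, here is a toy sketch of how a learner can latch onto "darker = road" and then happily classify a tree's shadow as road. It is nothing like the actual DARPA system; the single brightness feature and all the numbers are invented for illustration.

    ```python
    # Toy sketch only: a one-feature "road detector" trained on patch brightness.
    # Not the real DARPA vehicle; every number here is made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Mean brightness of training patches (0 = black, 1 = white).
    # The asphalt happens to be darker than the surrounding terrain.
    road_patches    = rng.normal(0.30, 0.05, 200)   # label 1
    terrain_patches = rng.normal(0.70, 0.05, 200)   # label 0
    X = np.concatenate([road_patches, terrain_patches])
    y = np.concatenate([np.ones(200), np.zeros(200)])

    # Fit a one-feature logistic regression with plain gradient descent.
    w, b = 0.0, 0.0
    for _ in range(5000):
        p = 1.0 / (1.0 + np.exp(-(w * X + b)))
        w -= 0.5 * float(np.mean((p - y) * X))
        b -= 0.5 * float(np.mean(p - y))

    def looks_like_road(brightness: float) -> bool:
        return 1.0 / (1.0 + np.exp(-(w * brightness + b))) > 0.5

    print(looks_like_road(0.30))  # True  -- genuine road
    print(looks_like_road(0.10))  # True  -- a tree's shadow is even darker, so it "is" road
    print(looks_like_road(0.70))  # False -- the sunlit ground right next to the shadow
    ```

    The classifier never saw a shadow during training, so nothing in its training told it that "dark" and "road" are not the same thing.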
    edited February 2016
  • Reply 45 of 74
    jony0 Posts: 380 member
    maestro64 said:
    jony0 said:
    I trust that the AI manufacturers and the NHTSA will go over a lot of case scenarios and testing before certifying every single AI car design. To answer your question in particular, I would imagine that the obstacle-avoidance algorithms would have to, and will, be able to detect whether an obstacle is human or otherwise by assessing size, speed and general physical attributes, assess whether the other lane has oncoming traffic or is free to go around the obstacle, all while immediately applying every braking scheme available, and swerve only if it's safe to do so. As for Bambi specifically, I live in deer country and am always mindful of my dad's early advice: slam the brakes immediately, brace yourself for impact, and stay in your lane unless you are absolutely sure. There have been too many stories here of people going into the ditch or into oncoming traffic and killing themselves to avoid wildlife. I would certainly hope this would also be the chosen algorithm.


    Yeah, you would hope, unless it is a person who jumps out versus an animal. Then it kills you instead of the stupid person who jumped out in front of your car. The problem is there are so many scenarios they cannot cover them all. Say there is a person crossing the road illegally, oncoming traffic the other way, and an innocent pedestrian standing to your right; who gets killed? Keep in mind, we humans have the ability to forgive someone for an unavoidable accident, and they do happen, but we humans do not extend that forgiveness to objects, and less so to the companies behind those objects. This is the paradigm we are dealing with, and the designers of these kinds of systems fail to understand it; they just think it's neat to let a car drive itself. Also remember these systems were developed by the military, and they are in the business of killing people, so they did not worry about collateral damage; they just did not want US soldiers being killed.


    BTW, I was not looking for someone like you to answer; I was looking for Google and the other companies involved in this technology to tell me who dies in these cases. Unless Google can tell me who dies, I am not interested in putting my faith in them to get it right in all cases, and guess what, I bet I would change my mind on what is acceptable as time goes on.

    I responded (Reply 37) to one of your previous comments while you typed this one, so I’ll try not to repeat myself too much here. As I’ve mentioned, and alandail as well, the AI is an order of magnitude faster, probably more. Whatever the circumstance, the computer will outperform us by a wide margin; that is an undeniable fact. The problem is not that they can't cover all scenarios, because neither can we. In your pedestrian example, you ask a question (who gets killed) that no one can answer from the simple conditions stated; we can only prioritize options and execute the best possible outcome across varying scenarios of speed, distance, traction, visibility and so on. What we can answer is that the machine will analyze with better precision most of these parameters and will certainly execute much faster. What you’re really asking is if sensors will be able to be as good as our senses and if AI can apply the same priorities that we would. I am quite confident that both will be as good and will have to be up to the task before these vehicles are certified.
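    To make "prioritize options and execute the best possible outcome" concrete, here is a minimal sketch of the kind of brake-first, stay-in-lane-unless-clearly-safe ordering being discussed. The class names, thresholds and deceleration figures are illustrative assumptions, not any manufacturer's certified algorithm.

    ```python
    # Illustrative sketch of a brake-first priority ordering; all numbers are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        distance_m: float         # range to the obstacle
        closing_speed_mps: float  # how fast we are closing on it

    @dataclass
    class WorldState:
        obstacle: Obstacle
        adjacent_lane_clear: bool  # verified free of oncoming traffic and pedestrians
        road_friction: float       # 0..1, from traction estimation

    def stopping_distance(speed_mps: float, friction: float, base_decel: float = 8.0) -> float:
        """Rough stopping distance v^2 / (2a), with deceleration scaled by friction."""
        a = max(base_decel * friction, 1.0)
        return speed_mps ** 2 / (2.0 * a)

    def choose_action(state: WorldState) -> str:
        obs = state.obstacle
        can_stop_in_lane = stopping_distance(obs.closing_speed_mps, state.road_friction) < obs.distance_m
        if can_stop_in_lane:
            return "BRAKE_HARD_STAY_IN_LANE"     # priority 1: stop in our own lane
        if state.adjacent_lane_clear:
            return "BRAKE_HARD_AND_SWERVE"       # priority 2: swerve only if the escape path is verified clear
        return "BRAKE_HARD_BRACE_FOR_IMPACT"     # priority 3: stay in lane and shed as much speed as possible

    # Example: deer 40 m ahead, closing at 15 m/s, dry road, oncoming traffic in the other lane.
    print(choose_action(WorldState(Obstacle(40.0, 15.0), adjacent_lane_clear=False, road_friction=0.8)))
    ```

    The point is only the ordering: braking always happens first, and swerving is considered only when the escape path is known to be clear, which matches the deer-country advice quoted above.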

    If an accident is truly unavoidable, then there is nothing to forgive; shit happens. Keep in mind that if it was avoidable, not all humans have your ability to forgive a human more than a company, and I suspect that for most litigious people it would make no difference: they would sue whomever or whatever is responsible regardless. If your neighbour accidentally shot you, do you really have the ability to forgive him but not forgive the gun and its manufacturer?

    I have not had the privilege of speaking to any designers of these kinds of systems, so I won’t speak for them, but I disagree with your assumption that they fail to understand the value of human life and that their endeavours exist simply because they think it’s neat. And contrary to popular belief and cynicism, the military is not in the business of killing people but of protecting the ones who sent them, sometimes by killing bad people. That’s irrelevant anyway, since there are many technologies that were initiated with military funds but crossed over to peaceful civilian applications.

    BTW, I can’t speak for Google or other AI developers, and although I understand that you weren’t expecting an answer from me, I would think that their purpose is to save lives first if at all possible and, if not, to minimize casualties, just as we humans would want to do, only much more efficiently.

  • Reply 46 of 74
    volcan Posts: 1,799 member
    jony0 said:

    What we can answer is that the machine will analyze with better precision most of these parameters and will certainly execute much faster. What you’re really asking is if sensors will be able to be as good as our senses and if AI can apply the same priorities that we would. I am quite confident that both will be as good and will have to be up to the task before these vehicles are certified.
    While I wholeheartedly agree, humans do have intuition, and although machines can be programmed with countless algorithms, there is usually a limited scope of possibilities that they will analyze. 

    Here is just an example where I might be better at making a decision than a machine:

    There is a curb at a crosswalk near my house where the irrigation system has a slight leak, the curb has moss growing on it, and it is slick. I know because I have almost slipped on it. (By the way, I've called the city about it and they have done nothing.) Anyway, as I was driving and preparing to turn right at that intersection, there was a mother holding the hand of a very young girl while they were waiting for the walk light. Right as I was approaching, the mother let go of the girl's hand for a second, and they were standing quite close to the street and very near that slippery spot. I immediately slowed to less than 5 MPH because I knew there was a potentially dangerous situation that I'm pretty sure the camera and decision-making software would not have picked up.
  • Reply 47 of 74
    volcan said:
    jony0 said:

    What we can answer is that the machine will analyze with better precision most of these parameters and will certainly execute much faster. What you’re really asking is if sensors will be able to be as good as our senses and if AI can apply the same priorities that we would. I am quite confident that both will be as good and will have to be up to the task before these vehicles are certified.
    While I wholeheartedly agree, humans do have intuition, and although machines can be programmed with countless algorithms, there is usually a limited scope of possibilities that they will analyze. 

    Here is just an example where I might be better at making a decision than a machine:

    There is a curb at a crosswalk near my house where the irrigation system has a slight leak, the curb has moss growing on it, and it is slick. I know because I have almost slipped on it. (By the way, I've called the city about it and they have done nothing.) Anyway, as I was driving and preparing to turn right at that intersection, there was a mother holding the hand of a very young girl while they were waiting for the walk light. Right as I was approaching, the mother let go of the girl's hand for a second, and they were standing quite close to the street and very near that slippery spot. I immediately slowed to less than 5 MPH because I knew there was a potentially dangerous situation that I'm pretty sure the camera and decision-making software would not have picked up.
    You must also understand the rate of development and acceleration of power in the field of artificial intelligence. What we have right now is impressive, but in a mere three years from now, it'll be exponentially more so. Facebook's Mark Zuckerberg has publicly committed to throwing their massive resources at the problem and Amazon, Google and Apple are certainly also committing billions of dollars to the same. Around the year 2045 the so-called "Singularity" is theorized to happen, which represents a single desktop computer having the combined processing power of ALL human brains!

    https://en.wikipedia.org/wiki/Technological_singularity
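    For what it's worth, the arithmetic behind that kind of projection is easy to sketch. Every constant below is a loud assumption (estimates of brain "compute" vary by orders of magnitude), so treat it as an illustration of the math, not a prediction.

    ```python
    # Back-of-envelope sketch of "one desktop = all human brains combined".
    # All constants are assumptions for illustration; brain-compute estimates vary wildly.
    import math

    ops_per_brain = 1e16    # assumed operations/sec for one human brain (a common rough guess)
    num_brains    = 8e9     # roughly the world population
    desktop_2016  = 1e13    # assumed ops/sec for a strong 2016 desktop (~10 TFLOPS GPU)

    target = ops_per_brain * num_brains                    # ~8e25 ops/sec
    doublings_needed = math.log2(target / desktop_2016)    # ~43 doublings

    print(f"target: {target:.1e} ops/sec")
    print(f"doublings needed from a 2016 desktop: {doublings_needed:.1f}")
    for years_per_doubling in (1.0, 1.5, 2.0):
        print(f"  at one doubling every {years_per_doubling} years: "
              f"~{doublings_needed * years_per_doubling:.0f} years out")
    ```

    The answer swings by decades depending on the assumed doubling cadence and brain estimate, which is exactly why singularity dates get argued about.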
    edited February 2016
  • Reply 48 of 74
    volcan Posts: 1,799 member
    SpamSandwich said:

    Around the year 2045 the so-called "Singularity" is theorized to happen, which represents a single desktop computer having the combined processing power of ALL human brains!
    Sure, science fiction says that eventually a computer will be able to read a human's mind. Perhaps everyone will be forced to have a chip in their head that calls home a thousand times a millisecond. I hope I'm gone by that time.

    I'm pretty sure when it comes to things like compassion, desires, love, intuition, fun and excitement, computers will always be a big fail. The best things in life are not ones and zeros, although some here might argue otherwise.
  • Reply 49 of 74
    tzeshan Posts: 2,351 member
    melgross said:
    If you were a non-vehicle operating passenger in an autonomous vehicle that you did not own, it'd be no different from you taking a cab.

    If you owned the vehicle, presumably you'd still need insurance.
    This is very confusing. I'd need to read the whole thing to get a better idea of what they mean here. But, if I own the car, and I'm not responsible for an accident, then who is? We can't take the car to court. We can't fine it. Can we take the license away? What license, and how would that work, as every car from that manufacturer would presumably have the same software and hardware combo? So would all the cars need to get off the road? This is a real can of worms. Does the manufacturer have to pay for the accident? What about insurance? Normally we incur penalties if we have an accident. How would that work here?

    It seems to me that a lot needs to be worked out.
    According to NHTSA the driver of the car is either the manufacturer or the owner of the operating system.  
  • Reply 50 of 74
    tzeshan Posts: 2,351 member
    Self-driving cars should have to pass a road test. In that case they would be required to understand orders given by the human tester. Hahaha! I don't think Google's engineers are smart enough to pass this.  
  • Reply 51 of 74
    tzeshan Posts: 2,351 member
    tzeshan said:
    melgross said:
    This is very confusing. I'd need to read the whole thing to get a better idea of what they mean here. But, if I own the car, and I'm not responsible for an accident, then who is? We can't take the car to court. We can't fine it. Can we take the license away? What license, and how would that work, as every car from that manufacturer would presumably have the same software and hardware combo? So would all the cars need to get off the road? This is a real can of worms. Does the manufacturer have to pay for the accident? What about insurance? Normally we incur penalties if we have an accident. How would that work here?

    It seems to me that a lot needs to be worked out.
    According to NHTSA the driver of the car is either the manufacturer or the owner of the operating system.  
    In other words, the manufacturer or the designer of the operating system should obtain insurance for every self-driving car. Hahaha!
  • Reply 52 of 74
    volcan said:
    SpamSandwich said:

    Around the year 2045 the so-called "Singularity" is theorized to happen, which represents a single desktop computer having the combined processing power of ALL human brains!
    Sure, science fiction says that eventually a computer will be able to read a human's mind. Perhaps everyone will be forced to have a chip in their head that calls home a thousand times a millisecond. I hope I'm gone by that time.

    I'm pretty sure when it comes to things like compassion, desires, love, intuition, fun and excitement, computers will always be a big fail. The best things in life are not ones and zeros, although some here might argue otherwise.
    In this case, it's not science fiction... it's the math that says so. Gordon Moore (the originator of Moore's Law) and Ray Kurzweil both have impressive track records. 
  • Reply 53 of 74
    melgross Posts: 33,723 member
    tenly said:
    melgross said:
    I think that saying every car from that manufacturer is the same is an over-simplification - not exactly the same, but close to saying that all humans are the same because we are all equipped with the same sensors, i.e. hands, eyes, ears, etc. With the cars, the brains might be identical - but might not be if there are different versions of firmware available - and as far as the other sensors are concerned, it's possible that they would be different models, or have different firmware, tolerances, wear or states of disrepair (maybe some mud partially obstructing an optical sensor, or a piece of metal debris affecting another sensor). In any case, sensor placement would be different on the different car models from a manufacturer - but even within a specific year and model, I don't think they could all be considered "exactly" the same.

    Of course, having said all that - I don't disagree at all with your conclusion that "a lot needs to be worked out!"
    This software would need to be approved in some way. Manufacturers are going to have upgrades to the software that will be mandatory. Every car of a particular model will be far more alike than different. Cars won't be like people. People are either very competent, competent, or not competent. They also have differing personalities. Cars won't be like that either.

    If a manufacturer makes a million cars of a particular model in one year, all of those cars will be the same. They will react the same way in any particular situation. That's the entire point.
  • Reply 54 of 74
    melgross Posts: 33,723 member
    techlover said:
    melgross said:
    This is very confusing. I'd need to read the whole thing to get a better idea of what they mean here. But, if I own the car, and I'm not responsible for an accident, then who is? We can't take the car to court. We can't fine it. Can we take the license away? What license, and how would that work, as every car from that manufacturer would presumably have the same software and hardware combo? So would all the cars need to get off the road? This is a real can of worms. Does the manufacturer have to pay for the accident? What about insurance? Normally we incur penalties if we have an accident. How would that work here?

    It seems to me that a lot needs to be worked out.
    This doesn't answer all of your questions but I thought this article was interesting:

    http://www.bbc.com/news/technology-34475031
    Ok, that's interesting. But what will be needed are government regulations so that every manufacturer, and car owner, will be equally responsible for what happens. This can't be done manufacturer by manufacturer.
  • Reply 55 of 74
    melgross Posts: 33,723 member
    knowitall said:
    alandail said:
    You underestimate the AI. Computers will react far faster than a person is capable of, and the software running them has more driving experience than any single person does.
    You must be right; it's like Apple's 'Genius' song selection, which will get better as more and more people use it. Only now it's advertised that 'Genius' is hand-picked by ... wait for it ... actual humans.

    Edit: That reminds me of a DARPA robot car running in a driverless off-road contest: all was fine on a bright day, and the car was driving a perfectly visible road in a straight line, when it suddenly steered sharply to the right and hit a tree somewhat further off the road. What happened: the AI (a neural network) was trained on pictures of roads, but what it 'learned' was that a 'road' is whatever part of the image is darker, so it interpreted the shadow of a tree as road and ended up a total loss against that tree.
    Lesson learned: you never know what a neural net actually learns, and you also never know whether it improves.
    In the last year of the DARPA tests of autonomous cars, which went through a very complex series of open-road tests, city driving tests, tests with obstructions in the road, etc., EVERY car passed. Every one. If you read about how these tests were constructed, you would be impressed. They were very complex. What they did lack were other drivers, because DARPA was interested in military uses. But I followed those tests in my journals. They were very impressive. What you read must have been from an early test.
  • Reply 56 of 74
    melgross Posts: 33,723 member

    volcan said:
    jony0 said:

    What we can answer is that the machine will analyze with better precision most of these parameters and will certainly execute much faster. What you’re really asking is if sensors will be able to be as good as our senses and if AI can apply the same priorities that we would. I am quite confident that both will be as good and will have to be up to the task before these vehicles are certified.
    While I wholeheartedly agree, humans do have intuition, and although machines can be programmed with countless algorithms, there is usually a limited scope of possibilities that they will analyze. 

    Here is just an example where I might be better at making a decision than a machine:

    There is a curb at a crosswalk near my house where the irrigation system has a slight leak, the curb has moss growing on it, and it is slick. I know because I have almost slipped on it. (By the way, I've called the city about it and they have done nothing.) Anyway, as I was driving and preparing to turn right at that intersection, there was a mother holding the hand of a very young girl while they were waiting for the walk light. Right as I was approaching, the mother let go of the girl's hand for a second, and they were standing quite close to the street and very near that slippery spot. I immediately slowed to less than 5 MPH because I knew there was a potentially dangerous situation that I'm pretty sure the camera and decision-making software would not have picked up.
    These cars have a dozen cameras and a number of other sensors. They don't read just in the visible light range either; some are radar, etc. They are designed to have no blind spots.

    But this is still in the early test phase. We'll see how this works in four or five years. We can be sure it will be much better, and in another five years, much better yet. At some point in time, people won't be writing about this any more.
  • Reply 57 of 74
    melgross Posts: 33,723 member
    volcan said:
    While I wholeheartedly agree, humans do have intuition, and although machines can be programmed with countless algorithms, there is usually a limited scope of possibilities that they will analyze. 

    Here is just an example where I might be better at making a decision than a machine:

    There is a curb at a crosswalk near my house where the irrigation system has a slight leak, the curb has moss growing on it, and it is slick. I know because I have almost slipped on it. (By the way, I've called the city about it and they have done nothing.) Anyway, as I was driving and preparing to turn right at that intersection, there was a mother holding the hand of a very young girl while they were waiting for the walk light. Right as I was approaching, the mother let go of the girl's hand for a second, and they were standing quite close to the street and very near that slippery spot. I immediately slowed to less than 5 MPH because I knew there was a potentially dangerous situation that I'm pretty sure the camera and decision-making software would not have picked up.
    You must also understand the rate of development and acceleration of power in the field of artificial intelligence. What we have right now is impressive, but in a mere three years from now, it'll be exponentially more so. Facebook's Mark Zuckerberg has publicly committed to throwing their massive resources at the problem and Amazon, Google and Apple are certainly also committing billions of dollars to the same. Around the year 2045 the so-called "Singularity" is theorized to happen, which represents a single desktop computer having the combined processing power of ALL human brains!

    https://en.wikipedia.org/wiki/Technological_singularity
    I don't really believe in the Singularity. At least, not in that time frame. It assumes straight-line progress, which as we know isn't possible. Around the early 2020s, we will lose the silicon war. When something else will come along to replace it, we don't know. Intel has just said that by the early 2020s, processors will even slow down in pursuit of lower power, and that newer technologies, such as spintronics, are inherently slower.

    When will carbon nanotubes, boron nanotubes and sheets work in a manufacturable form that challenges the current technology? We don't know. Some think not until the late 2020s, and possibly not until the early 2030s. So right now, we just don't know how fast things will progress past 2020. 2017 will be the year that 10nm first appears. 7nm may come out in 2020, assuming everything works. Many experts think that 5nm isn't achievable. If not, then we're stuck: no major improvements for years afterwards, just smaller advances.
  • Reply 58 of 74
    melgross Posts: 33,723 member
    tzeshan said:
    melgross said:
    This is very confusing. I'd need to read the whole thing to get a better idea of what they mean here. But, if I own the car, and I'm not responsible for an accident, then who is? We can't take the car to court. We can't fine it. Can we take the license away? What license, and how would that work, as every car from that manufacturer would presumably have the same software and hardware combo? So would all the cars need to get off the road? This is a real can of worms. Does the manufacturer have to pay for the accident? What about insurance? Normally we incur penalties if we have an accident. How would that work here?

    It seems to me that a lot needs to be worked out.
    According to NHTSA the driver of the car is either the manufacturer or the owner of the operating system.  
    This is just a preliminary ruling so that testing can proceed. What will happen when these cars are actually available for sale to the public is something else. But state rules can supersede these. State rules can be stricter.
  • Reply 59 of 74
    tenly Posts: 710 member
    melgross said:
    tenly said:
    I think that saying every car from that manufacturer is the same is an over-simplification - not exactly the same, but close to saying that all humans are the same because we are all equipped with the same sensors, i.e. hands, eyes, ears, etc. With the cars, the brains might be identical - but might not be if there are different versions of firmware available - and as far as the other sensors are concerned, it's possible that they would be different models, or have different firmware, tolerances, wear or states of disrepair (maybe some mud partially obstructing an optical sensor, or a piece of metal debris affecting another sensor). In any case, sensor placement would be different on the different car models from a manufacturer - but even within a specific year and model, I don't think they could all be considered "exactly" the same.

    Of course, having said all that - I don't disagree at all with your conclusion that "a lot needs to be worked out!"
    This software would need to be approved in some way. Manufacturers are going to have upgrades to the software that will be mandatory. Every car of a particular model will be far more alike than different. Cars won't be like people. People are either very competent, competent, or not competent. They also have differing personalities. Cars won't be like that either.

    If a manufacturer makes a million cars of a particular model in one year, all of those cars will be the same. They will react the same way in any particular situation. That's the entire point.
    With a possible exception due to the sensors - with millions of them out in differing weather and road conditions in the real world, some will be broken, partially broken, worn or obstructed. Given the same input data, of course the computer brain will make the same decisions - however, with sensors that provide differing input data (for any number of reasons), the action taken by the computer brain can and will vary.

    (Much like the reflexes of individual people vary: differences in vision quality, peripheral vision, response to audio stimuli, etc.)
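    A tiny sketch of that point, with invented numbers: the decision code below is byte-for-byte identical in both cars, but a biased range reading from an obstructed sensor flips the outcome.

    ```python
    # Illustrative only: identical decision logic, differing sensor input, different action.
    def braking_decision(measured_gap_m: float, speed_mps: float) -> str:
        """Same code in every car of the model: brake if the gap is inside a 2-second margin."""
        return "BRAKE" if measured_gap_m < 2.0 * speed_mps else "CRUISE"

    true_gap_m = 45.0
    speed_mps  = 25.0   # about 90 km/h, so the 2-second threshold is 50 m

    clean_reading      = true_gap_m          # well-calibrated sensor reports the true gap
    obstructed_reading = true_gap_m * 1.30   # mud on the lens biases the range estimate long

    print(braking_decision(clean_reading, speed_mps))       # BRAKE  (45.0 m < 50 m)
    print(braking_decision(obstructed_reading, speed_mps))  # CRUISE (58.5 m reads as safe)
    ```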

    edited February 2016
  • Reply 60 of 74
    melgross said:
    tzeshan said:
    According to NHTSA the driver of the car is either the manufacturer or the owner of the operating system.  
    This is just a preliminary ruling so that testing can proceed. What will happen when these cars are actually available for sale to the public is something else. But state rules can supersede these. State rules can be stricter.
    The type of vehicle Google wants to make won't even be possible under current California law, regardless of this Federal rule change.