AI 'drivers' will gain same legal status as human drivers in autonomous vehicles, NHTSA rules


Comments

  • Reply 61 of 74
    melgross Posts: 33,510 member
    tenly said:
    melgross said:
    This software would need to be approved in some way. Manufacturers are going to have upgrades to the software that will be mandatory. Every car of a particular model will be far more alike than different. Cars won't be like people. People are either very competent, competent, or not competent. They also have differing personalities. Cars won't be like that either.

    If a manufacturer makes a million cars of a particular model in one year, all of those cars will be the same. They will react the same way in any particular situation. That's the entire point.
    With a possible exception due to the sensors - with millions of them in differing weather and road conditions in the real world, some will be broken, partially broken, worn, or obstructed. Given the same input data, of course the computer brain will make the same decisions; however, with sensors that provide differing input data (for any number of reasons), the action taken by the computer brain can and will vary.

    (Much like the reflexes of individual people vary: differences in vision quality, peripheral vision, response to audio stimuli, etc.)

    I think that's easily anticipated. Some of you guys don't seem to realize that these companies understand all of this already. You're bringing up nothing that isn't already known. This is why there will be years of testing done. Do you really think they don't know that sensors will break, or cameras will be covered with snow? Seriously? If the car won't work properly because of problems, it will say so, and not work at all.
    bsimpsen
  • Reply 62 of 74
    melgross Posts: 33,510 member
    melgross said:
    This is just a preliminary ruling so that testing can proceed. What will happen when these cars are actually available for sale to the public is something else. But state rules can supersede these. State rules can be stricter.
    The type of vehicle Google wants to make won't even be possible under current California law, regardless of this Federal rule change.
    Yes. That's because, as I said elsewhere, state laws can be stricter than federal ones. So far, California does require a human driver. That may remain for years to come, and I think it's a good idea until these cars really work the way they're supposed to.
  • Reply 63 of 74
    volcan said:
    jony0 said:

    What we can answer is that the machine will analyze with better precision most of these parameters and will certainly execute much faster. What you’re really asking is if sensors will be able to be as good as our senses and if AI can apply the same priorities that we would. I am quite confident that both will be as good and will have to be up to the task before these vehicles are certified.
    While I wholeheartedly agree, humans do have intuition, and although machines can be programmed with infinite algorithms, there is usually a limited scope of possibilities that they will analyze.
    Intuition is nothing more than decision making at a subconscious level. It's the same brain making the decisions, using the same potentially misinterpreted sensory input, the same potentially incorrect risk-assessment wiring, etc. Human intuition isn't somehow more accurate than conscious reasoning, and it's certainly no match for the vastly superior access to situation data that autonomous cars will eventually have, if they don't already have it. Humans also analyze a limited scope of possibilities, and we're not getting any better at it. Technology is, and quickly. Autonomous cars will eventually have access to not only their own sensor data, but that of nearby cars and the road system itself. That's a level of collaboration humans cannot achieve. There is already work afoot to encode information in the flashing of LED head and taillights (they're already flashing to modulate brightness, you just can't see it). Cars will not only be able to see traffic lights and the lights of other cars, but glean information from them as well.
    Based on sensor and communications ability alone, no human has a chance of besting a well designed autonomous car.

    And for those that think human oversight will allow us to ease into autonomous technology, think again. It's already been shown in multiple studies that humans are worse at monitoring the operation of auto-pilots and autonomous cars than they are at piloting and driving themselves. We are so easily distracted when we're not completely in control that we quickly lose situational awareness. In the event the computer relinquishes control, it takes us so long to regain situational awareness that we're no better able to handle the unforeseen circumstance than the computer. I don't know just how the transition will take place, but I suspect it'll be a rather abrupt jump from driver-assist to driverless, with no intervening driver-supervised era.
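    The LED-flicker signaling idea mentioned above can be sketched in a few lines. This is a toy illustration only - Manchester-coded on-off keying, with every name and detail invented here, not any real automotive or visible-light-communication protocol. The point it demonstrates is that data can ride on a fast flicker while the average brightness the eye perceives stays constant:

    ```python
    # Toy sketch: Manchester-coded on-off keying on an LED. Each bit spends
    # exactly half its symbol time "on", so perceived brightness is constant
    # regardless of the data, while a fast camera sampling the flicker can
    # recover the bits. Purely illustrative; not a real automotive protocol.

    def encode(bits):
        """1 -> (on, off), 0 -> (off, on). Average brightness is always 50%."""
        signal = []
        for b in bits:
            signal += [1, 0] if b else [0, 1]
        return signal

    def decode(signal):
        """Recover bits from consecutive half-symbol pairs."""
        bits = []
        for i in range(0, len(signal), 2):
            bits.append(1 if (signal[i], signal[i + 1]) == (1, 0) else 0)
        return bits

    message = [1, 0, 1, 1, 0, 0, 1, 0]
    assert decode(encode(message)) == message
    # Constant perceived brightness no matter what data is sent:
    assert sum(encode(message)) / len(encode(message)) == 0.5
    ```

    Real proposals layer schemes like this on top of the PWM dimming LEDs already use, which is why the flicker is invisible to drivers.
    
    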
  • Reply 64 of 74
    pmz Posts: 3,433 member
    Apple's own self-driving car.
    News flash: Apple is not making a self-driving car.

    Apple makes products for real people that will actually be purchased and used sometime this century.

    I'm so sick of the tech blog community's obsession with self-driving cars and this idiotic delusion that there will be any sort of mass adoption of them in anything resembling 'near future'.
  • Reply 65 of 74
    tzeshan Posts: 2,351 member
    melgross said:
    The type of vehicle Google wants to make won't even be possible under current California law, regardless of this Federal rule change.
    Yes. That's because, as I said elsewhere, state laws can be stricter than federal ones. So far, California does require a human driver. That may remain for years to come, and I think it's a good idea until these cars really work the way they're supposed to.
    California's requirement is not practical. Humans will become absent-minded after the car has driven itself for a while.
    bsimpsen
  • Reply 66 of 74
    First, corporations are people; now cars are people. When does my toaster become a person?
    tallest skil
  • Reply 67 of 74
    First, corporations are people; now cars are people. When does my toaster become a person?
    Apparently Cortana reports what you say to Microsoft... (“But she already does that, along with all of your personal information”) ...if you “demean” or “sexually harass” her.

    “We wanted to be very careful that she didn’t feel subservient in any way.”

    EXCEPT THAT IS LITERALLY THE ENTIRE PURPOSE OF THE SOFTWARE.
  • Reply 68 of 74
    volcan Posts: 1,799 member
    SpamSandwich said:

    In this case, it's not science-fiction... it's the math that says so. Both Gordon Moore (the originator of Moore's Law) and Ray Kurzweil have impressive track records.
    There are also some scientists who warn that AI could eventually lead to extinction of the human race.
  • Reply 69 of 74
    maestro64 Posts: 5,043 member
    jony0 said:
    maestro64 said:
    jony0 said:
    I trust that the AI manufacturers and the NHTSA will go over a lot of case scenarios and testing before certifying every single AI car design. To answer your question in particular I would imagine that the obstacle avoidance algorithms would have to and will be able to detect whether it is human or otherwise by assessing size and speed and general physical attributes, assess if the other lane is oncoming traffic and free to go around the obstacle while immediately applying all braking schemes available and avoid only if it's safe to do so. As for Bambi specifically, I live in deer country and am always mindful of my dad's early advice to slam the brakes immediately, brace yourself for impact and stay in your lane unless absolutely sure, there have been too many stories here of people going in the ditch or in oncoming traffic and killing themselves to avoid wildlife. I would certainly hope this would also be the chosen algorithm.


    Yeah, you would hope, unless it is a person who jumps out versus an animal. Then it kills you versus the stupid person who jumped out in front of your car. The problem is there are so many scenarios that they cannot cover them all. Like: there is a person crossing the road illegally, oncoming traffic the other way, and an innocent pedestrian standing to your right - who gets killed? Keep in mind, we humans have the ability to forgive someone for an unavoidable accident, and they do happen, but we humans do not extend that forgiveness to objects, and less so to the companies behind those objects. This is the paradigm we are dealing with, and the designers of these kinds of systems fail to understand it; they just think it's neat to let a car drive itself. Also remember these systems were developed by the military, and they are in the business of killing people, so they did not worry about collateral damage; they did not want US soldiers being killed.


    BTW, I was not looking for someone like you to answer; I was looking for Google and the other companies involved in this technology to tell me who dies in these cases. Unless Google can tell me who dies, I am not interested in putting my faith in them to get it right in all cases. And guess what: I bet I would change my mind on what is acceptable as time goes on.

    I responded (Reply 37) to one of your previous comments while you typed this one so I’ll try not to repeat myself too much here. As I’ve mentioned, and alandail as well, the AI is an order of magnitude faster, probably more. Whatever the circumstance, the computer will outperform us by a wide margin, that is an undeniable fact. The problem is not that they can't cover all scenarios, because neither can we. In your pedestrian example, you ask a question (who gets killed) that no one can answer with the simple conditions stated, we can only prioritize options and execute the best possible outcome of varying scenarios of speed, distance, traction, visibility and so on. What we can answer is that the machine will analyze with better precision most of these parameters and will certainly execute much faster. What you’re really asking is if sensors will be able to be as good as our senses and if AI can apply the same priorities that we would. I am quite confident that both will be as good and will have to be up to the task before these vehicles are certified.

    If an accident is truly unavoidable, then there is nothing to forgive; shit happens. If it was avoidable, keep in mind that not all humans have your ability to forgive a human more than a company, and I suspect it would make no difference to most litigious people, who would sue whomever or whatever is responsible regardless. If your neighbour accidentally shot you, do you really have the ability to forgive him but not forgive the gun and its manufacturer?

    I have not had the privilege of speaking to any designers of these kinds of systems, so I won’t speak for them, but I disagree with your assumption that they fail to understand the value of human life and that their endeavours are simply because they think it’s neat. And contrary to popular belief and cynicism, the military is not in the business of killing people but of protecting the ones who sent them, sometimes by killing bad people. That’s irrelevant anyway, since there are many technologies that were initiated with military funds but crossed over to peaceful civilian applications.

    BTW, I can’t speak for Google or other AI developers, and although I understand that you weren’t expecting an answer from me, I would think that their purpose is to save lives first if at all possible and, if not, to minimize casualties - just like us humans would want to do, only much more efficiently.


    The answer to the question is this: kill the deer no matter what; my life is more important than any deer on the road. Next, kill the idiot on the road if they are not supposed to be there. Now if it is something different, then I am going to decide to avoid the women and children first and minimize injury to myself. I believe, as you and others pointed out, the AI will try to avoid killing the things around you, most likely putting you at far more risk. My personal AI always puts my own life and personal safety above someone else's, especially those who are doing things to put their own safety at risk. BTW, your brain will do the exact same thing; no one gives their life for others, especially for someone they do not know. This is human psychology, which is not something you can program into an algorithm. Unless they can guarantee me that my own life has the highest priority in the decision-making process, I would not own one. I think the assumption will be that they will risk you a bit to save a person, since the expectation will be that the car will somehow protect you from most injuries. Just like the Google cars, which have not "caused" an accident, I believe these cars will most likely eliminate the accidents of non-attentive driver types. But we will hear comments like I hear from older drivers (who should not be on the road anymore): they all tell you they have never been in an accident, say everyone else is a bad driver, and back this up by telling you about all the accidents they have seen. When you ask them where these accidents were, they say behind them. What they do not realize is that they caused the accidents by their driving habits.

    This was the hardest thing I had to teach my kids about driving: the rules versus what is normal, predictable driving and what others around you expect. My kids want to follow the rules to the letter, but it caused other drivers around them to do unexpected things to avoid them while they tried to follow the rules. Like traveling 60 MPH on the highway in traffic, looking around to see if everything is clear to change lanes, and deciding when to change. The rules say look for enough open space, many car lengths, then signal to change lanes, and before you change lanes check again to make sure things are clear, then change lanes and accelerate. I cannot tell you how many times my kids almost got hit, because most drivers are not patient enough to wait for all this to happen. In reality, you decide to change lanes, you begin accelerating, then begin the lane change and signal at the same time so someone does not try to cut you off, and you keep accelerating and finish the lane change before anyone moves into that space. You cannot mix rule-followers with non-rule-followers, then add in unpredictable behavior; someone will get hurt.

    The only way this works - and this is the reality of what they are trying to make happen - is if all cars drive themselves and humans are no longer allowed to drive; in that case self-driving cars will be far safer for everyone. But do you really think everyone will give up driving?
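    The brake-and-stay-in-lane response discussed earlier in this thread can be sketched as a toy decision procedure. Every threshold, name, and category below is invented for illustration; no real system works from so few inputs, and obstacle classification in practice involves far more than size and speed:

    ```python
    # Toy sketch of the obstacle response described in this thread:
    # crudely classify the obstacle from size/speed attributes, brake
    # immediately and unconditionally, and swerve only if an escape path
    # is verified clear. All thresholds and labels are invented.

    def classify(height_m, speed_mps):
        """Crude stand-in for obstacle classification from sensor data."""
        if height_m >= 1.0 and speed_mps < 3.0:
            return "pedestrian"           # tall and slow-moving
        return "animal_or_debris"

    def respond(obstacle_kind, adjacent_lane_clear, shoulder_clear):
        """obstacle_kind could weight a richer priority scheme; this
        sketch brakes the same way for everything."""
        actions = ["full_braking"]        # always brake first
        if adjacent_lane_clear:
            actions.append("swerve_adjacent_lane")
        elif shoulder_clear:
            actions.append("swerve_shoulder")
        else:
            actions.append("hold_lane")   # brace in lane: hitting the deer
                                          # beats the ditch or oncoming traffic
        return actions

    kind = classify(height_m=0.9, speed_mps=8.0)
    assert respond(kind, adjacent_lane_clear=False, shoulder_clear=False) \
        == ["full_braking", "hold_lane"]
    ```

    The "hold lane unless verifiably clear" default mirrors the standard deer-country advice quoted above: swerving into the ditch or oncoming traffic is usually the worse outcome.
    
    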

  • Reply 70 of 74
    mac_128 Posts: 3,454 member
    maestro64 said:


    The answer to the question is this: kill the deer no matter what; my life is more important than any deer on the road. Next, kill the idiot on the road if they are not supposed to be there. Now if it is something different, then I am going to decide to avoid the women and children first and minimize injury to myself. I believe, as you and others pointed out, the AI will try to avoid killing the things around you, most likely putting you at far more risk. My personal AI always puts my own life and personal safety above someone else's, especially those who are doing things to put their own safety at risk. BTW, your brain will do the exact same thing; no one gives their life for others, especially for someone they do not know. This is human psychology, which is not something you can program into an algorithm. Unless they can guarantee me that my own life has the highest priority in the decision-making process, I would not own one. I think the assumption will be that they will risk you a bit to save a person, since the expectation will be that the car will somehow protect you from most injuries. Just like the Google cars, which have not "caused" an accident, I believe these cars will most likely eliminate the accidents of non-attentive driver types. But we will hear comments like I hear from older drivers (who should not be on the road anymore): they all tell you they have never been in an accident, say everyone else is a bad driver, and back this up by telling you about all the accidents they have seen. When you ask them where these accidents were, they say behind them. What they do not realize is that they caused the accidents by their driving habits.


    But there are always going to be situations that don't adhere to such black-and-white interpretations. Here's a not-impossible hypothetical:

    A physical failure of the car, or of the road, causes the autonomous vehicle to lose control. A human has to process the problem, may or may not understand the proper maneuver to apply for the dynamic forces involved, and then make a decision about what to do. All of which will take seconds, thus eliminating choices. The vehicle will have all of that programmed into it, and will react in milliseconds - enough time to actually have a choice, which is something the human likely won't have.

    So in the split second after the physical failure, the vehicle is faced with three choices -- kill the driver, kill a pedestrian on the sidewalk, kill a motorcyclist next to the vehicle. Everyone is legally where they are supposed to be. So now the car is programmed to always save the passenger. If the car is legally the driver, then the auto manufacturer is on the hook for the death of the person on the sidewalk, or the motorcyclist. In the case of the driver in control of the car, there is no way to know how that scenario would have played out, but in the case of the car, a black box of data will show exactly what choice the car made -- which then becomes intentional manslaughter. 

    Now add to that the additional complication of the person on the sidewalk being a pregnant woman. Or, not just one person on the sidewalk, but let's say a group of girl scouts. Is the car able to detect that the woman is pregnant? Or the number of people on the sidewalk, or know that they are even children? If the car's mandate is to always save the driver, then the ability for a human being to self-sacrifice for the greater good is removed. Save oneself and kill the pregnant woman or the group of kids? And let's say the car is programmed to save the driver and minimize other potential collisions ... so the car avoids killing the motorcyclist, as that could cause more vehicles to crash into each other, and potentially more deaths. So the car chooses to kill the girl scouts. At the end of the day, there's no moral decision involved. A bigger accident may be prevented by killing the girl scouts, but how will society view it?

    There's a lot more than technical problems that have to be worked out here. And it's a lot more complex than just protect the driver.
    edited February 2016
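    The three-way choice in the hypothetical above can be made concrete with a toy cost-minimizing selector. The cost weights below are invented, and that is exactly the point of the argument: someone has to assign them in advance, and the black box records which option was chosen and why, making the decision explicit and auditable in a way a human driver's split-second reaction never is:

    ```python
    # Toy illustration: once outcomes are scored and the minimum-cost one
    # is chosen, the selection is explicit and logged. The costs here are
    # invented; assigning real ones is the unsolved moral and legal
    # problem, not the code.

    def choose_outcome(outcomes):
        """Pick the lowest-cost option and return it with the log entry
        a black box would record."""
        best = min(outcomes, key=lambda o: o["cost"])
        log = {
            "considered": [o["name"] for o in outcomes],
            "chosen": best["name"],
            "cost": best["cost"],
        }
        return best["name"], log

    options = [
        {"name": "hold_course_hit_barrier", "cost": 9},    # risks the occupant
        {"name": "swerve_left_hit_motorcyclist", "cost": 8},
        {"name": "swerve_right_mount_sidewalk", "cost": 10},
    ]
    choice, record = choose_outcome(options)
    assert choice == "swerve_left_hit_motorcyclist"
    assert record["chosen"] == choice
    ```

    Whatever weights are chosen, the recorded log is precisely the "black box of data" the comment above says would show exactly what choice the car made.
    
    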
  • Reply 71 of 74
    No thanks. I'd rather drive my own car and enjoy the experience. People invent newer and newer technology which always appears to phase out the need for anything human. Who needs people at all eventually when you can have a computer do it for you? Such a developer mindset.
    The developer mindset is to automate things we don't want to do, not things we enjoy doing. This is the mindset that has given us washing machines, dishwashers and online shopping. For myself personally, driving a car falls into the "chore" category and I'd be happy to do something else with that time. I assume you would still be able to drive your own car for the foreseeable future, just as you can still wash your laundry by hand if you enjoy doing that.
  • Reply 72 of 74
    gatorguy Posts: 24,213 member
    maestro64 said:
    jony0 said:
    maestro64 said:

    Going back to my question, which no one seems willing to answer: will the car kill you to save Bambi, and if so, who pays?

    I trust that the AI manufacturers and the NHTSA will go over a lot of case scenarios and testing before certifying every single AI car design. To answer your question in particular I would imagine that the obstacle avoidance algorithms would have to and will be able to detect whether it is human or otherwise by assessing size and speed and general physical attributes, assess if the other lane is oncoming traffic and free to go around the obstacle while immediately applying all braking schemes available and avoid only if it's safe to do so. As for Bambi specifically, I live in deer country and am always mindful of my dad's early advice to slam the brakes immediately, brace yourself for impact and stay in your lane unless absolutely sure, there have been too many stories here of people going in the ditch or in oncoming traffic and killing themselves to avoid wildlife. I would certainly hope this would also be the chosen algorithm.


    Yeah, you would hope, unless it is a person who jumps out versus an animal. Then it kills you versus the stupid person who jumped out in front of your car. The problem is there are so many scenarios that they cannot cover them all. Like: there is a person crossing the road illegally, oncoming traffic the other way, and an innocent pedestrian standing to your right - who gets killed? Keep in mind, we humans have the ability to forgive someone for an unavoidable accident, and they do happen, but we humans do not extend that forgiveness to objects, and less so to the companies behind those objects. This is the paradigm we are dealing with, and the designers of these kinds of systems fail to understand it; they just think it's neat to let a car drive itself. Also remember these systems were developed by the military, and they are in the business of killing people, so they did not worry about collateral damage; they did not want US soldiers being killed.


    BTW, I was not looking for someone like you to answer; I was looking for Google and the other companies involved in this technology to tell me who dies in these cases. Unless Google can tell me who dies, I am not interested in putting my faith in them to get it right in all cases. And guess what: I bet I would change my mind on what is acceptable as time goes on.

    What do you think the person driving the car you are in the backseat of will do? Human actions are much less predictable than a machine's.
  • Reply 73 of 74
    melgross said:
    If you were a non-vehicle operating passenger in an autonomous vehicle that you did not own, it'd be no different from you taking a cab.

    If you owned the vehicle, presumably you'd still need insurance.
    This is very confusing. I'd need to read the whole thing to get a better idea of what they mean here. But if I own the car and I'm not responsible for an accident, then who is? We can't take the car to court. We can't fine it. Can we take its license away? What license, and how would that work, since every car from that manufacturer would presumably have the same software and hardware combo? So would all the cars need to get off the road? This is a real can of worms. Does the manufacturer have to pay for the accident? What about insurance? Normally we incur penalties if we have an accident. How would that work here?

    It seems to me that a lot needs to be worked out.
    We need to think much further outside the little box we are so used to, here. We are talking about a disruptive technology, one with a huge capital value. And in a race for such a thing, companies, people, and even governments tend to become creative, really creative. And some interesting moves are already being taken that show that.

    So, it depends on what you mean by "a lot" that needs to be worked out. We have, for example, the move that Volvo recently made, declaring that it will take full responsibility in case of accidents involving its self-driving cars. That decision alone eliminates about half of the problems discussed in this thread.

    Then we have the insurance companies. I don't think it'll be hard for them to get along at all. They actually have a lot to gain from this technology too, since it's expected to be something like 100 times safer (at least eventually), and thus far less accident-prone. Still, authorities will demand that every vehicle carry proper insurance. So insurance companies will sell a lot of policies, and at the same time have really low costs associated with repairs, etc. Some international insurance companies are already working out package deals that will cost you only about 20% as much if you switch to a self-driving car. Can you resist that? ... The other half of the problems discussed here is almost solved.

    Other, even more clever solutions that deal with this new way of transporting people will surface as well. I actually think we would make a much better contribution by discussing similar bright new solutions here on the AI forum, instead of just pointing out the obvious problems. But that's just me liking to move forward.
  • Reply 74 of 74
    So if I'm in an accident, the manufacturer is automatically guilty.  