AI 'drivers' will gain same legal status as human drivers in autonomous vehicles, NHTSA rules
The artificial intelligence in a self-driving vehicle can be treated as the "driver" from a legal standpoint, the U.S. National Highway Traffic Safety Administration said in a decision that could set an important precedent for Apple's own self-driving car.
The position was announced in a letter to Google, published to the NHTSA's website, Reuters reported on Wednesday. The missive was written in response to a design submitted by Google on Nov. 12, which specified a vehicle that has "no need for a human driver."
The chief counsel for the NHTSA informed Google that the agency "will interpret 'driver' in the context of Google's described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants." Current vehicle regulations are based on the assumption of human control, which has created problems in establishing a legal framework for self-driving technology.
In January, U.S. Transportation Secretary Anthony Foxx announced that the federal government would be bending its interpretation of the rules in some cases to accelerate self-driving car development. "Best practices" guidelines should be established in the first half of the year, and the NHTSA will exempt up to 2,500 vehicles from safety standards for the sake of testing.
Although officially still under wraps, Apple is widely believed to be working on an electric car for launch in 2019 or 2020. While the first model may potentially lack self-driving systems, the company is at least thought to be working on the concept for subsequent vehicles.
Comments
If you owned the vehicle, presumably you'd still need insurance.
There's also the question of how such an insurance check-in would work. I can't imagine any two insurance companies agreeing on a protocol. They're worse than banks, and we've seen the nonsense involved in dealing with banks in the Apple Pay rollout.
It seems to me that a lot needs to be worked out.
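Just to make the "protocol" point concrete, here's a rough sketch (in Python) of the kind of verification payload any two insurers would have to agree on before a policy check-in could be automated. This is entirely hypothetical; every field name here is invented for illustration, and no such standard exists.

```python
# Hypothetical insurer-to-insurer policy verification message.
# Not a real standard; all field names are invented for illustration.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class PolicyCheck:
    vin: str                 # vehicle identification number
    policy_number: str
    insurer_id: str          # issuing insurer
    covers_ai_driver: bool   # does the policy name the self-driving system as the driver?
    liability_holder: str    # "owner", "manufacturer", or "software_licensor"
    effective: str           # ISO dates -- presumably insurers could at least agree on that
    expires: str

def verify(check: PolicyCheck, today: date) -> bool:
    """Minimal validity test: policy window covers today and an AI driver is covered."""
    return (date.fromisoformat(check.effective) <= today <= date.fromisoformat(check.expires)
            and check.covers_ai_driver)

check = PolicyCheck(
    vin="1HGCM82633A004352",
    policy_number="AV-0001",
    insurer_id="insurer-a",
    covers_ai_driver=True,
    liability_holder="manufacturer",
    effective="2016-01-01",
    expires="2017-01-01",
)
print(json.dumps(asdict(check), indent=2))
print("valid today:", verify(check, date(2016, 2, 10)))
```

Even in a toy version like this, the hard questions (who is the liability holder, what counts as the "driver") show up as fields two companies would have to agree on.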
Interesting point I hadn't thought about. Today, insurance covers the car and the driver; if you are not driving, you are not required to have insurance. Also, GM and other car companies have jumped on the DRM bandwagon, claiming that you do not own the software that operates the car and so are not allowed to modify it; the software and control systems in the car are only licensed to you for use, not owned. If you do not own the control system, then legally you arguably should not be required to insure it. And since you are not driving the car, you are not liable for its actions; you can bet Apple and Google will not let you own the control system, so you should not have to insure the car for liability for hitting someone else. Now, if the car is damaged through no fault of yours or the control system's, you would still have to insure against that, especially if you still owe money on the car.
Since the NHTSA is saying the car is a legal person, just as a corporation is considered a legal person, this means you can sue your car, or whoever controls it.
Going back to my question, which no one seems willing to answer: will the car kill you to save Bambi, and if so, who pays?
Or will it just continue to be allowed to endanger people and remain in vehicles?
-kpluck
This is equivalent to Boeing being responsible for a crash caused by autopilot error rather than by the Pilot in Command.
And I'm also dying to know what companies will be offering insurance on them, and at what cost.
(https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_in_U.S._by_year)
So, if the threshold for licensing is "better than a human", your estimate is off by six decimal places.
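To put a number on the human baseline an autonomous system would have to beat, here's a quick back-of-the-envelope calculation (approximate 2014 figures from the Wikipedia list linked above):

```python
# Back-of-the-envelope US road fatality rate, using approximate 2014 figures
# from the Wikipedia list linked above (~32,700 deaths, ~3.03 trillion miles).
deaths_per_year = 32_700
vehicle_miles_per_year = 3.03e12

deaths_per_100m_miles = deaths_per_year / vehicle_miles_per_year * 1e8
print(f"~{deaths_per_100m_miles:.2f} deaths per 100 million vehicle-miles")
# -> roughly 1.08, i.e. about one fatality per ~93 million miles driven.
```

That rate, roughly one death per hundred million miles, is the bar "better than a human" actually sets.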
A reasonable metric for licensing should be accidents per driven mile. Google's cars already beat humans by that metric. Granted, Google has limited its autonomous cars to relatively benign environments, and I suspect the licensing will restrict operation to proven operating envelopes. But the rate of improvement of the technology will expand the envelope rapidly over the next decade. I think the outcome of the experiment will eventually be humans losing their right to drive, hopefully after losing their desire to.
This basically happened a few months ago in California: http://www.theguardian.com/technology/2015/nov/13/google-self-driving-car-pulled-over-driving-too-slowly
Of course, having said all that, I don't disagree at all with your conclusion that "a lot needs to be worked out!"