US House unanimously approves measures to speed up autonomous car testing
The U.S. House of Representatives voted unanimously on Wednesday to approve a proposal that could accelerate the testing and deployment of self-driving cars, though not without safety concerns from some parties.
The bill still has to go to the Senate -- where a group has been working on its own legislation -- but in its first year it would let automakers deploy up to 25,000 vehicles that bypass normal safety standards, Reuters reported. By the third year, the cap would grow to 100,000 vehicles.
Various companies -- such as GM, Waymo, and Volkswagen -- have pressured Congress to speed up their projects, worried that state regulations, particularly ones proposed in California, could interfere. Federal rules would normally block vehicles without human drivers, and the House legislation still doesn't cover large trucks.
A key point in the bill is the elimination of pre-market approval for technologies, although automakers will still have to submit safety assessments and prove their cars are at least as safe as current models. Vehicles may also have to meet certain headlight standards and warn people to check rear seats in case children are left behind.
Concerns have been raised throughout the bill's history. "The autonomous vehicle bill just passed by the House leaves a wild west without adequate safety protections for consumers. It pre-empts any state safety standards, but there are none at the national level," Consumer Watchdog said in a statement following the bill's approval.
Apple is currently engaged in limited public testing of a self-driving car platform. The company appears to have scaled back its ambitions from selling its own vehicle to developing a corporate shuttle, and possibly getting involved with ride-hailing services.
Comments
I think I'll take my chances that the elderly will have enough sense to stop driving when appropriate, over people distracted in all sorts of ways, or doing stupid stuff like trying to pass in dangerous areas/ways (a huge problem in Northern BC).
And, I'd probably take them over FI (faked intelligence) vehicles roaming around out there as well.
But, I guess I'm mostly concerned with a bunch of greedy and corrupt morons running the country who can't think their way out of a paper sack.
Makes it easy to add a new one to the list... hot tubs, small aircraft, two to the head with gun in the left hand, etc.
When Elon Musk says they'll have an autonomous vehicle capable of self-driving coast-to-coast by 2019, I take him at his word because he's in a position to know!
Now, possibly, an AI car might (if the sensors properly detect the problem, and the decision tree has been properly programmed and filled with data) stop more quickly and efficiently than I could. On the other hand, just seeing this as I approached, I 'read' that there was something not quite right with the situation, so I was, in some sense, anticipating, or I'd have hit him (I had already unconsciously been slowing).
But, I agree, I'm a car enthusiast and I don't want an AI car. Some AI assists, maybe, but not full AI (if it were even possible for it to work well).
The other problem is that distracted driving has grown, and people's ability to reason and be responsible has been so impaired in recent years, that as a society we're starting to outrun (in stupidity) the incredible safety advances of our vehicles. As much as I oppose the lawmakers on this, maybe they are simply doing the math and have decided that 'on the whole' AI would save lives versus trying to actually teach people how to drive and be responsible (which might be a hopeless cause). In that case, that guy above might just be a statistic outweighed by some other life saved because the AI stopped a car whose driver was texting.
That's not the kind of world I want to live in.... but it might be where we're headed. As my dad wisely says when we talk about some of these things... 'yea, but I'm not the average driver'. Of course, I remind him that this doesn't mean he's safe from the other idiots, but the point is a good one. If the average is low enough, you can actually be safer than that average through some experience and diligence.
Elon might have a great deal of technical understanding, but he's operating off a false view from which this future of AI is envisioned.
One thing to remember also: there are massively more cars on the road than in, say, the 1960s, and thus driving these days takes a lot more situational awareness than it did then.
You could maybe survive being a true dumbass driver back then, even in a terrible 1970s car with the crummiest of handling (almost none, even in a straight line).
Cars also have improved immensely, and not just because of driving assists. Even the lamest of cars these days has proper handling compared to the really terrible handling of pre-1990s cars.
There also seems to be a rash of utter lack of common sense and judgement. Unless it's just where I live, I've had several close calls, and have witnessed accidents, where people will turn left blindly across oncoming traffic and stuff like that. I don't recall seeing that kind of thing a whole lot before the last 5-10 years. (Sadly, so many people have been killed at a few intersections like that here in the last 3-4 years that they've rebuilt the lights/intersections recently. I was getting kind of gun-shy at these intersections, having had close calls a few times. Just today, in fact, while taking my son to school, a car blindly turned left in front of us at one of the non-upgraded intersections, though fortunately not so close that I had to lay rubber and miss by mere feet or inches.)
Yea, I'm going more off stuff he's written about the future and his fear of AI. I'm sure he understands technically how the current 'AI' works in his Teslas, but his understanding of the big picture seems more sci-fi than anything. And if he's counting on those kinds of advances to ultimately make his automated-car dreams come true, then he's going to be disappointed.
My fear - with someone like this - is that he might be trying to push for legal changes based on those dreams... so we can hurry up and meet the future. I've already heard conflicting reports between the PR they're putting out about the state of Autopilot and the actual reality. Some of these 'future types' end up believing the smoke they're blowing enough to try to assume the responsibility of leading the masses into their imagined future.
He references the advances in things like AI beating human players:
https://www.newscientist.com/article/2132086-deepminds-ai-beats-worlds-best-go-player-in-latest-face-off/
Typically the people expressing caution over AI avoid describing scenarios, which leads people to dismiss them as fear-mongering and irresponsible. Elon gave an example there about an AI controlling information systems. When you look at how the world ticks, in its entirety it's driven by information, and it's all information; the term encompasses everything. The way these comments respond to Elon Musk right now is because of information: you've never met him, you've never heard him say these things in person, and even in person it's just information signals. An AI in a networked system can manipulate information, and it can figure out how to find security vulnerabilities faster than a human. China uses machine learning on its firewall to dynamically shut down attempts to bypass it.
Elon gave an example of an AI being tasked with getting the best stock prices and hacking into airline control networks to change the navigation of a flight over restricted airspace, starting a war. Ultimately the AI would be designed by someone, and there are laws covering malicious computer use, but there could be no preventative legislation against building such an AI, just as there isn't for hacking or most crimes. The laws are just there to clean up afterwards. The best they can really do is regulate the systems an AI can run on or has access to.
One mention in the video concerning autonomous vehicles was a system-wide hack of a manufacturer's vehicle control software. They want to ensure that humans have the ultimate override, e.g. an emergency button that shuts off the network and allows the vehicles to continue operating safely. This requires partitioning of control systems. The same applies to sensitive data, like in the Equifax hack. The iPhone uses partitioned systems with the Secure Enclave.
"Communication between the Secure Enclave and the application processor is isolated to an interrupt-driven mailbox and shared memory data buffers."
That kind of regulation reduces the possibility for sensitive control systems to be compromised.
I think some people are exaggerating the feasibility of AI software alone causing a lot of damage; it's easy to extrapolate into science-fiction scenarios, which ignore the fact that people are observing and monitoring systems all the time. It would have to get out of control very quickly, but AI is adaptive. There was a case of a computer trading system going wrong quite quickly due to a bug; it took 45 minutes ( https://www.sec.gov/news/press-release/2013-222 ). If it were an AI operating on trading systems, it could cause a lot of economic disruption, especially between countries, but again, people would be monitoring and able to clean up a lot of this.
There are much simpler scenarios that would be far more damaging. A solar-powered drone could potentially fly forever:
http://www.iflscience.com/technology/solar-powered-drone-could-fly-nonstop-five-years/
Imagine a gun attached pointing downwards, with 100 bullets. Someone installs control software with a camera that detects humans from above, then deploys 1,000 drones across an entire country with an AI that locates and fires at people. That could wipe out 100,000 people in a matter of hours, more than a nuclear weapon, before anyone could react. They refill the drones and keep going. I think that's a lot more dangerous than software, because software doesn't immediately hurt people, it just influences them; AI-driven hardware can hurt people directly.
AI-driven hardware can also be safer than human-driven hardware. That's always the responsibility that comes with advancing technology, to make sure that it does more good than harm.
Beating humans at chess or Go might be called 'AI', but it's not the kind of AI that Elon keeps lapsing into in his 'what if' scenarios. Please give me the steps in his scenario of how this computer goes from calculating stocks to hacking into a defense system. It's baloney, unless someone programmed it to do that. It doesn't 'figure out how' unless that's what it was designed to do.
So, ultimately, you come to the real danger that isn't intelligent at all, but is still a huge danger. We don't need AI to make advanced machines that could be designed (on purpose, or possibly by accident) to hurt or kill people, or cause a lot of destruction. But it won't be because the algos suddenly become self-aware and 'decide' to do something.
Every time I hear someone talking about AI, it goes from 'here's what it's doing' to - incredible logic leap - 'then XYZ happens.' And the magic inserted seems to come down to faster computers and time. I can plug all the sensors in the world into my computer, make it a zillion times faster, and give it eternity, and it won't ever have a thought of its own, even if it's running some fancy skip-logic OS.
And that brings me back to the car problem. I think a lot of people imagine that AI cars work by actually making some kind of decisions (even if they are artificial), but in reality, what all this fancy stuff depends on is building a big enough database of what-ifs and solutions. The problem is that there is a nearly infinite number of what-ifs. As more data is collected and programmers keep tweaking the routines, these AI cars will seem smarter and smarter... that is, until they run into some situation they don't accurately 'read' or have a good solution for.
Then, something bad will happen. The game these guys are playing is that they think fewer bad things will happen, on the whole, than would happen due to human error (or more likely negligence, especially in a society that values not missing a Facebook update over keeping themselves or others alive). I don't think that's a trade I'd like to make. I welcome all sorts of assistive technology that can see, with IR or radar, the moose heading across the road I'm driving down, or maybe even auto-braking that kicks in when I don't seem to recognize an obstacle I'm about to hit (though I can imagine problem scenarios with this). But I want actual conscious, thinking beings to ultimately be in charge, not some half-baked computer routines masquerading as 'intelligence.'