“From December 2018 through November 2019?” Have we gotten so technically advanced that we can predict future statistics, or did I sleep through another year? I hate when that happens.
This story lacks a substantial amount of background information. The program only asks companies to self-report the number of disengagements; there is no information on how that data is audited or verified by third parties, etc.
It's tenuous to draw conclusions about best or worst performance from disengagement figures, since there is no standard for when to disengage, and companies like Apple (and some of the other brands with high disengagement counts) are going to be much more protective of their brand.
That said, I wouldn't expect Apple to be leading the pack, since other competitors have been working in this space for longer.
But when reading disengagement figures, keep in mind the number and seriousness of incidents. Apple has had just two, neither its fault (including one where the AI wasn't even in play). Now contrast that with companies such as Uber, where the AI didn't detect a pedestrian, resulting in a fatality.
The whole reason for Apple being in this arena (at the cost/investment of the Apple customer) is that, coming from an IT perspective, they'd improve on the car industry, so car companies would be queueing up to partner with them. Well, that's just a hilarious idea for several reasons, performance being just one of them.
“On the other end of the spectrum was Google's Waymo, which managed an impressive 0.09 disengagements per 1,000 miles, or one disengagement every 11,154 miles.”
This is not a meaningful statistic. It’s dependent on what phase of training the system is in and the training theory under which the data is being compiled.
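Meaningful or not, the quoted figure is at least internally consistent. Here's a quick back-of-the-envelope check (plain Python, using only the numbers in the quote):

```python
# Invert the quoted rate to get miles per disengagement.
rate_per_1k_miles = 0.09                        # disengagements per 1,000 miles, as quoted
miles_per_disengagement = 1_000 / rate_per_1k_miles
print(f"~{miles_per_disengagement:,.0f} miles per disengagement")  # ~11,111
```

The small gap between ~11,111 and the article's 11,154 is just rounding: 0.09 is the rounded rate, 11,154 comes from the unrounded one.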
Waiting for the slew of posts calling this story troll-bait, uninformed, etc...
I have no idea how accurate this is, but given the fact that Apple is later to the self-driving car game than other players, it would not be surprising if it's true. They're just further back on the development curve.
It's completely accurate, coming from mandatory reports filed with the DMV.
b) Are there uniform standards for disengagement, or is Apple simply more careful during testing, to avoid negative press and/or keep the competition in the dark about its progress?
The first question I would ask is: what are the criteria for disengagement? I have no idea what they are, but if Apple's criteria were less forgiving, then that could be a sign of them holding themselves to a higher standard.
I love how the media idiotically conflates fewer disengagements with safer. It just means the car is racking up miles and not learning anything. For the fastest development of an AI, you actually need new events it can learn from, not repetitions of things it already knows how to do.
Very conservative parameters would also result in frequent disengagements.
Exactly. I was just going to post 'Or the safest system so far,' but you said it another way. It could just be that the other systems take way too many risks and trust the AI too much at this stage of the game. Apple's AI is still learning, and I'd guess it will end up the safest (and won't share your private info... haha).
If I had made my decisions about investing in AAPL based on the constant, droning narrative of failure I read on AI and other Apple-centric tech blogs, I wouldn't have invested at all. Thankfully I didn't take any of that into consideration, and I have since increased the value of my portfolio because of the AAPL factor in it. It seems the real world behaves more like I do. I wonder whether any of the AI staff have AAPL in their portfolios, and if so, why?
Nope, there are uniform standards: the criteria are set by the State of California, and all companies report under the same set of rules.
I suspect that Apple is interpreting the regulation very, very differently from everyone else.
Apple reported 69,510 disengagements. The other companies, combined, reported 4,040.
Given that the company has to provide a written description of each disengagement, my theory is that most companies use a restrictive definition of "when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle" (emphasis mine). I theorize that Apple set up an automated reporting process to submit a report every time the driver took control for any reason, essentially "spamming" the system with tens of thousands of reports. In other words, I expect the reports filed by Apple compared to those filed by, for example, GM Cruise (86 reports with 3 times the cars and 5 times the miles of Apple) are, forgive me, apples to oranges.
Frankly, I expect Apple is the one reporting "wrong," by being too loose with its definition of "failure" and of when "safe operation requires immediate human intervention." But I suppose it's theoretically possible that Apple is both the most fastidious about reporting and the absolute worst in terms of "failures" and safety. I just doubt it.
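To put a rough number on how far apart those filings are, here's a minimal sketch using only the figures cited above (Apple's absolute mileage is unknown, but it cancels out of the ratio, so it isn't needed):

```python
# Compare per-mile disengagement rates using only the cited figures.
apple_reports = 69_510     # Apple's reported disengagements (cited above)
cruise_reports = 86        # GM Cruise's reports (cited above)
cruise_miles_factor = 5    # Cruise drove roughly 5x Apple's miles (cited above)

# (apple_reports / M) / (cruise_reports / (cruise_miles_factor * M)); M cancels.
rate_ratio = apple_reports * cruise_miles_factor / cruise_reports
print(f"Apple filed ~{rate_ratio:,.0f}x more reports per mile")  # ~4,041x
```

A per-mile rate roughly four thousand times Cruise's is much easier to explain by a far looser definition of "disengagement" than by a genuine four-thousand-fold gap in capability.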
That’s a seriously impressive stat if accurate.
https://www.dmv.ca.gov/portal/wcm/connect/d48f347b-8815-458e-9df2-5ded9f208e9e/adopted_txt.pdf?MOD=AJPERES
https://www.dmv.ca.gov/portal/wcm/connect/0b712ccd-1e7e-41eb-84fc-f16af0ce6265/ol311r.pdf?MOD=AJPERES&CVID=
I was going to copy and paste the text of the relevant section (227.46), but I'm PDF-challenged on my Windows PC. I'll try later on my Mac.
Look at the summary of the raw data in the last table on this page: https://thelastdriverlicenseholder.com/2019/02/12/disengagement-reports-2018-preliminary-results/