YouTube drop tests are a terrible indication of iPhone durability
YouTubers have already flocked to Australia and other territories to perform drop tests on the new iPhone 15 -- but given that there is usually zero scientific rigor in the testing, they're just done for entertainment.
When dropped from a height, an iPhone will break -- image credit Sam Kohl
It's already begun, like every year. YouTubers with money to burn have flown to Australia to try to demonstrate iPhone durability, or what they perceive to be the lack thereof.
Some of the testers have a conclusion in mind before they even begin. They're pretty obvious to spot, and they aren't really who we're talking about. We also aren't talking about Friday's video by Sam Kohl, or other YouTubers who do the best that they can with what they have.
The more pernicious ones are those adamant that they're doing scientific testing while dropping the device, typically right outside the Apple Store where they got it. In actuality, there are too many variables that they cannot control -- or choose not to -- to call these valid tests.
Ultimately, those are still just for entertainment to placate a ravenous YouTube audience who probably just want their opinions validated.
Early drop testing starts within minutes of availability
Quick iPhone drop testing fresh out of the box starts innocuously enough. Two-foot drops onto a sofa or similar. In every case, those tests are fine.
Testing rapidly escalates with four-foot drops onto grass, six feet onto dirt, and ten feet onto concrete. The end goal is always to demonstrate that a metal-framed rectangle with a lot of glass will break at some point.
In nearly every case, for tests performed within hours of the devices going on sale, there is zero scientific rigor. Phones are dropped side by side onto uneven surfaces whose high spots act as point contacts, concentrating the force of the drop onto a small area. There is also typically no regard for impact angle.
There are no force meters involved, nor any consistency in the testing beyond drop height -- and even that is sometimes shaky.
And, the testing almost always reuses the same device for every "test," which isn't great, since drop damage is cumulative. It's not like the iPhone will heal itself between impacts.
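For a sense of scale, here's a rough back-of-the-envelope sketch of why both drop height and contact surface matter. The phone mass, peak force, and contact areas are illustrative assumptions rather than measured values; the point is only that impact energy scales with height, and that a pebble-sized point contact turns the same force into vastly higher stress than a flat slab does.

```python
# Back-of-the-envelope sketch (illustrative numbers, not Apple specs):
# impact energy scales linearly with drop height, and the same peak force
# produces far higher stress when concentrated onto a tiny contact area.

G = 9.81  # gravitational acceleration, m/s^2

def impact_energy(mass_kg: float, height_m: float) -> float:
    """Kinetic energy at the moment of impact, ignoring air resistance."""
    return mass_kg * G * height_m

def contact_stress(force_n: float, contact_area_m2: float) -> float:
    """Average stress (Pa) when a force is spread over a contact area."""
    return force_n / contact_area_m2

phone_mass = 0.187  # kg, roughly an iPhone-class device (assumed)
for feet in (2, 4, 6, 10):
    height = feet * 0.3048
    print(f"{feet} ft drop: ~{impact_energy(phone_mass, height):.2f} J at impact")

# Same hypothetical 500 N peak force: flat slab vs. a pebble-sized point contact
flat = contact_stress(500, 1e-3)   # ~10 cm^2 of glass meeting a flat surface
point = contact_stress(500, 1e-6)  # ~1 mm^2 high spot on an uneven surface
print(f"A point contact concentrates stress ~{point / flat:.0f}x higher")
```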
Believe it or not, there are specs for drop testing
All of this is completely contradictory to the actual drop testing standards.
One of the more standard tests is MIL-STD-810G. Without delving into the lengthy instruction that covers just about every possible durability test, in short, five test units -- not one -- are dropped in controlled circumstances, a total of eight times on the corners, six times on the faces, and 12 times on the edges.
All of this is done on specified flat surfaces selected so the target doesn't absorb kinetic energy by bending. Testing is also done in strictly defined temperatures. And, there are force meters in use.
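As a rough illustration of how different that is from a one-handset free-for-all, here's a short sketch of what that 26-drop schedule looks like when spread across five units. How the drops are divided among the units, and the 122 cm transit-drop height used here, are assumptions for illustration, not a restatement of the standard.

```python
# A minimal sketch of the 26-drop schedule described above
# (6 faces, 12 edges, 8 corners), spread across five test units.
# The split between units and the drop height are illustrative assumptions.

from itertools import cycle

FACES   = [f"face-{i}"   for i in range(1, 7)]    # 6 face drops
EDGES   = [f"edge-{i}"   for i in range(1, 13)]   # 12 edge drops
CORNERS = [f"corner-{i}" for i in range(1, 9)]    # 8 corner drops

DROP_HEIGHT_CM = 122  # commonly cited transit-drop height (assumed here)

units = cycle([f"unit-{i}" for i in range(1, 6)])  # five test units, not one
schedule = [(next(units), orientation) for orientation in FACES + EDGES + CORNERS]

for unit, orientation in schedule:
    print(f"{unit}: drop on {orientation} from {DROP_HEIGHT_CM} cm onto a rigid, non-yielding surface")

print(f"Total drops: {len(schedule)}")  # 26
```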
How do I know all this? It's because I've seen it every year for over a decade.
We have zero support from Apple in any way other than a periodic press release about environmental matters or the like about 15 minutes before the release, and we prefer it that way. Still, we manage to have one new Apple device review on the day of release, and we can do this because I know a guy.
Before one of the five test units is dropped, folded, spindled, or mutilated for the US government with scientific rigor, I get to use the unit for a few days. I've been doing this job privately since before the iPhone was introduced, and publicly on the internet for well over a decade.
In all those years after my use, I've witnessed more actual, controlled drop tests than I can count at this point. The results are always different from those that pop up on the internet.
Drop testing without rigor applied is a universal problem
This isn't just about the iPhone. Many of the same folks are about to do the same thing with the new Pixel and the Samsung devices. They aren't hard to find.
It shouldn't be shocking that glass breaks when the applied stress exceeds its tensile strength. It shouldn't be surprising that variances in engineering between models might shift the vulnerable points to a different location from one year to the next.
A titanium frame is just that -- a frame made of titanium. Without delving into the different modes of failure in materials and how they differ between glass, stainless steel, and titanium, the materials will react very differently depending on force, surface, and angle of incidence.
Without scientific rigor or standards applied, there's no value in any of these "tests," other than to the YouTuber. If you enjoy the destruction porn, go nuts. Otherwise, don't believe any conclusion that model X is more or less durable than model X+1 or model X-1.
The only takeaway for the average user is that, depending on their own level of clumsiness, they should either buy a case or some kind of warranty for their steel- or titanium-framed glass rectangle. And maybe don't stand on a ladder and drop an iPhone onto a concrete floor.
Comments
That said, I usually try to do at least some zoo monkey testing in addition to formal or scripted testing for software that has some degree of user interface complexity. The reason is that formal and scripted testing often guide the tester along a well-traveled, predictable path that emulates how you "think" your customer is going to use your product. But oftentimes customers go a little zoo monkey on you, or have preconceived notions of how the software should work regardless of how it does work, or your tests truly aren't indicative of real-world behavior, so they end up deviating from the predicted path and your software craps itself. If this testing blows up your software, it may identify areas where you need to constrain the workflow or add more exception handling, error handling, or parameter validation.
Of course, this type of unscripted testing is not truly "zoo monkey," because you do ask your testers, the zoo monkey surrogates, to at least record or remember the sequences that caused your software to fill its diaper. If it were totally random, it wouldn't have much value, because it may be unrepeatable. Keystroke recorders and video capture can help with establishing a repeatable pattern in these unscripted cases. Just like YouTube, watching recordings of zoo monkey testing can be both entertaining and embarrassing at the same time.
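The repeatability part is the whole trick. Here's a minimal sketch of the idea: drive the random inputs from a recorded seed so any crash-inducing sequence can be replayed. The drive_app() target, the action list, and the simulated bug are placeholders made up for illustration, not any real UI automation API.

```python
# A minimal sketch of seeded "zoo monkey" testing: random inputs that stay
# reproducible because the sequence is derived from a recorded seed.
# drive_app() and ACTIONS are placeholders, not a real UI automation API.

import random

ACTIONS = ["tap", "swipe", "rotate", "background", "paste-garbage"]

_last_action = None

def drive_app(action: str) -> None:
    """Stand-in for whatever actually delivers the input to the app under test."""
    global _last_action
    # Simulated bug: this hypothetical app crashes on one specific two-step sequence.
    if _last_action == "rotate" and action == "paste-garbage":
        raise RuntimeError("software filled its diaper")
    _last_action = action

def monkey_run(seed: int, steps: int = 1000) -> list:
    rng = random.Random(seed)  # the seed is effectively the recording
    history = []
    for _ in range(steps):
        action = rng.choice(ACTIONS)
        history.append(action)
        try:
            drive_app(action)
        except Exception as exc:
            # Log the seed and the trailing actions so the failure can be
            # replayed later, instead of being an unrepeatable one-off.
            print(f"Failure after {len(history)} steps (seed={seed}): {exc}")
            print("Last actions:", history[-5:])
            return history
    print(f"Survived {steps} random actions (seed={seed})")
    return history

monkey_run(seed=2023)
```

Rerunning monkey_run with the same seed reproduces the same action sequence, which is what separates useful chaos from noise.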
https://www.howtogeek.com/769688/what-is-mil-spec-drop-protection/
Waste of bandwidth, and a wasted device that someone could have used for a few years.