tallest skil said: damn_its_hot said: They do - in both the HI Guidelines and sample code (e.g., controls).
sflocal said:mtbnut said:Loading up Gmail takes about 5-9 seconds; iCloud Mail takes about 20-30 seconds. I don't even bother browsing my photo library using iCloud Photos; what a lethargic POS. Google Photos? A luxurious, snappy breeze to use.
Apple is so far behind in the cloud game it's embarrassing.
On my iMac -
Gmail - 1.5 seconds.
iCloud Mail - Nearly instantaneous
Photo library - Fired it up for the first time just now, it synced 2500 +/- photos in about a minute.
Maybe it's not Apple that's so far behind; maybe it's your Internet connection.
9secondkox2 said: taniwha said: 9secondkox2 said: You have to wonder about the CONTENT that was being delivered by the websites CR tested.
That much inconsistency is scientifically impossible.
Unless... certain pages were chosen that load random or varied content (with video, etc. - yes, that would imply conspiracy), which would negate the accuracy of these tests.
There seem to be a few people around here who think that because CR is testing for consumers, there is no need for explanation or rigour, because a user wouldn't do that either. That's okay, but then a user would also not constantly download unspecified web pages from a private server until the battery was drained, so let's not use "but we are looking for something a user might experience" as an excuse for not doing your job properly. There is science and methodology in testing, whether you are doing it for consumers or not.
I read consumer testing when I was looking for a new microwave oven.
"We experienced a failure in our test unit, so we bought two more from different outlets and tested them again. Sadly, one of those units failed after eight days, and the other failed two days later. We cannot recommend this brand based on this test."
I would hope that having seen such bizarre results, CR would have gone back and tested other non-Apple laptops to make sure those were still giving expected results.
Many years ago, a couple of us were involved in testing a huge change to a web-based procurement system. We had a test rig that we used for regression testing, and when we ran the new software through it, it told us that none of the changes had caused a problem, and that the whole system was running just as it should. Yup, everything was hunky-dory; you're safe to go live.
My colleague and I looked at each other, looked at the project managers patting themselves on the back for having made such a large-scale change run flawlessly, and then we told them that we were going to take the test rig apart.
Because it was unlikely that such a massive change to such an old code base wouldn't cause a problem somewhere.
We took the test logs apart, and when that didn't show up any problems, we started on the build scripts. As it turned out, the problem was a simple labelling mistake: the build hadn't actually included the new code, so the rig was just testing the old code base against itself. (On reflection, we should have spotted this sooner, but we assumed that we were looking at a complicated problem and so it must have a complicated answer.) Once we corrected the mistake and ran the regression test again, it failed so badly that the whole upgrade was cancelled and the code base was never touched again. (And again, if they could abandon the project after less than half an hour's discussion, I have to ask if it was ever needed in the first place.)
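The lesson from that story can be sketched as a simple pre-flight check: before trusting a clean regression run, confirm the artifact under test actually contains the new build. This is just an illustrative sketch; all names here (EXPECTED_BUILD, the "build:" label format, preflight_check) are hypothetical, not anything from the actual procurement system.

```python
# Hypothetical sketch: verify the deployed artifact is the build you meant
# to test, so a green regression run against stale code fails loudly instead.

EXPECTED_BUILD = "procurement-2.0-rc1"  # assumed label for the new build

def read_build_label(artifact_path):
    """Read a 'build:' label embedded in the artifact (hypothetical format)."""
    with open(artifact_path) as f:
        for line in f:
            if line.startswith("build:"):
                return line.split(":", 1)[1].strip()
    return None

def preflight_check(artifact_path):
    """Raise if the rig is about to regression-test the wrong build."""
    actual = read_build_label(artifact_path)
    if actual != EXPECTED_BUILD:
        raise RuntimeError(
            f"Mislabelled build: expected {EXPECTED_BUILD!r}, found {actual!r}. "
            "A clean pass against the old code base proves nothing."
        )
    return actual
```

Had something like this run before the regression suite, the mislabelled build would have been caught in seconds rather than after the celebration.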
The point is that we looked at the initial result, and common sense told us this was unlikely, so we looked into the possibility that the problem was with the testing.
Now, I'm not saying that CR's methodology was wrong, or that if it is down to their test cases they should change them to cater for Apple (if other laptops can run the pages without wildly fluctuating battery drain, then so should Apple kit). What I am saying is that in the face of such bizarre results, I would have worked with Apple to understand what was happening, before risking the reputation of the CR publication by making a recommendation based on a faulty test. I would have said: "Look Phil; these results don't make any sense. Get someone to look them over, and if we don't get a reply from you in the next few days then we're going to give you a duff recommendation."
They probably suspect that the problem is being caused by Safari (indeed, they alluded to this), but that wouldn't make such a great headline, as it's something that could be fixed with an update before the headline made any real impact. (And yes, they make money from page hits, so this will influence how they write their headlines.)
metrix said: My guess is they ran one test using Safari and the others using Chrome. Chrome seems to act like an application with a memory leak. Just my 2 cents.
It really makes no sense, since Apple has dominated battery life in laptops for years, and I mean years.