I got drawn here after this report reached me through several different channels, and I'm willing to say it outright:
It's horribly biased.
And I'm not just saying that. Before everyone flies into a rage, looks up my username elsewhere on the net and dubs me an Android fanboy: "we" have been unfairly compared on multiple counts, and I'm willing to take up the bat in the hope that we may finally get a real test of devices and mobile OSes, one that users can read and mobile-development enthusiasts can discuss without having to check the source to judge its validity.
First of all, allow me to quote the following statement:
According to Pfeiffer, the test methodology attempts to eliminate as many subjective variables (like brand loyalty or perception) as possible, looking only at objective "aspects that have a direct impact on the day-to-day user experience of an average, non-technical user."
The comparison pits stock iOS releases against Samsung's implementation of Android.
If we are not taking brand loyalty into account, then why:
- Do we also include a previous version of iOS?
- Do we include a Samsung-customised OS? (If someone is, by definition, not brand-loyal, we could just as well assume it would be the Google implementation running on the device.)
- Do we include a previous OS version when the claim is to compare current OSes, yet not include different vendor implementations of the other OS (e.g. Sony's Android customisation or the Google Edition)?
In my opinion it would only have been fair to:
either compare against the Google/AOSP version instead, or against both (even if only to rule out the possibility of bias).
The next problem with the test:
Every page tells you to read the PDF for the full details, but all I see are result graphs; no test methodology or actual criteria are given.
The problem with this: the results cannot be verified or reproduced by a third party.
The next problem I have with this test:
Why include a Customisation sub-category, and then let that result drag down Windows Phone's UXF score but not the other phones', even though every phone differs in its level of customisation in some respect? UXF should only cover issues where a user could reasonably expect different behaviour, and Windows Phone's lack of customisability is even better known than that of the much older iOS versions, so a user would not expect that behaviour to begin with.
Also, counting accessibility options as a plus under Customisation is a bias in itself; they are supposed to fall under the Efficiency and Integration rating (Efficiency, specifically). Not listing which functions were counted as present or absent, or how they were weighted, doesn't help in the least either; it biases the result. Besides, I thought we were comparing for the average user, which means accessibility options should only have been compared separately, if at all.
Another problem I have with the test:
Under Efficiency and Integration, the following is written as if it were a negative:
"Samsung's Android implementation offers mature but slightly overwhelming efficiency and integration options."
However, having more efficiency and integration options can't be a downside when that is exactly what this category is measuring.
The remark about direct access to the camera further down that section is especially hard to swallow, since the Google Edition and AOSP versions -do- have this functionality; that single point, along with the cognitive-load argument, makes me suspect even more strongly that those versions were deliberately left out.
Example: if Android lacked voice commands but iOS lacked text-to-speech, which is more important? Exactly: that depends on the user.
And the last problem I have with the test:
The test is designed to deliver a particular outcome, not to research a difference. To illustrate this more clearly, I'll abstract it:
This test reads along the lines of "We are going to research how many more calories sugar contains compared to other additives" (except that here the "calories" are the handful of specific values the report chose to measure).
An unbiased test would read along the lines of "We are going to research how sugar and other additives differ in their impact on one's health."
In the first test, sugar will turn out worst, because it has more calories than pretty much any other (artificial) sweetener.
In the second, sugar would turn out "not so bad": it has more calories, which could make you fat more quickly, but some of the sweeteners have more severe side effects.
As with the first test, fans of iOS 7 get "proof" that their choice is better while remaining uninformed.
With the second kind of test, however, we would actually have something to measure the different platforms against, and even understand why the results come out the way they do.
Now, I hope more users are willing to put this in big, noticeable places so we might finally get reports that are worth something beyond the funding companies' marketing money. Post it on Facebook or wherever, quote me, even challenge whether what I've stated here is true (go ahead; if I'm wrong, I'd rather find out), but let's set things in motion.
For comparison, I also looked into their 3ds Max PDFs, since I've been a hobbyist 3D illustrator for quite a while (and while I own a copy of Max, I'm more a "fan" of Cinema 4D) and can therefore more easily attempt to verify their authenticity. Those articles seem a lot less biased, as they at least show a way in which they can be checked and verified by a third party...
Also, I wonder if I'm the only person so sick and tired of biased "research" everywhere that I think there should be a way to properly flag it as fraudulent, so that in due time those sources would no longer be visited, having ruined their own name.
Much like the touch-screen test, which was essentially a measurement of the iPhone's hardware-level reaction time against another touchscreen sensor's hardware (admittedly), yet blatantly compared against an Android device running the Java implementation, thereby including the UI (which is still software) in the benchmarked time...
They did correctly state this difference in the article (after a few early birds pointed it out), but comments from users asking why native C hadn't been used for the Android test have been left unanswered or removed, as far as I can find...
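
To make that point concrete, here's a minimal sketch (my own illustration, not the methodology used in that article) of the kind of software overhead a Java-level measurement drags in: it compares the timestamp Android attaches to the touch event with the moment the Java listener actually runs, and that gap alone is dispatch time a hardware-level measurement would never include.

    // Sketch only: measures the Java-side dispatch overhead of a touch event,
    // i.e. software time that a hardware-based measurement would not include.
    import android.os.SystemClock;
    import android.util.Log;
    import android.view.MotionEvent;
    import android.view.View;

    public class DispatchLatencyProbe implements View.OnTouchListener {
        @Override
        public boolean onTouch(View v, MotionEvent event) {
            if (event.getActionMasked() == MotionEvent.ACTION_DOWN) {
                // getEventTime() is stamped when the event was generated, on the
                // same clock as SystemClock.uptimeMillis(), so the difference is
                // purely framework/UI dispatch time.
                long dispatchMs = SystemClock.uptimeMillis() - event.getEventTime();
                Log.d("TouchProbe", "Java-side dispatch overhead: " + dispatchMs + " ms");
            }
            return false; // don't consume the event
        }
        // Usage inside an Activity: someView.setOnTouchListener(new DispatchLatencyProbe());
    }

And even this only captures part of the software path; drawing the visual response adds more on top, which is exactly why comparing such a number against a hardware-only one is apples to oranges.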
Since things are headed in that direction: how can we, the common users, fight this bias for our own sake?
So, in good hope, let's start the constructive criticism!