I think the survey tells us what we already knew - that most people are very happy with their iPads. I can tell that by watching people I know use them.
I think the statistical discussion was a bit misguided. For each of the statistics there is an unknown true value and a calculated value, which is published here. The question is 'how near to the true value is the calculated value?', and statistics lets us make statements like 'there is a 95% chance that the calculated value is within 5% of the unknown true value' or 'there is a 60% chance that it is within 3% of the true value'. We are dealing with a Gaussian distribution of calculated values around the unknown true value, and a Gaussian distribution extends to infinity, so there is always a small chance that the sampling error is larger still. Saying it is 'within ± 3%' is just too simplistic. Within ± 3% at 95% confidence, or at 60% confidence? Of course, the larger the sample size, the better the chance of making accurate estimates.
If the sample was not randomly selected, then the results would be unreliable too. I don't see any indication that it was random - in fact, far from it.
"The survey was distributed to the 9000+ subscribers of Usability News (www.usabilitynews.org) as well as various email lists and Facebook groups. Completion time was approximately 20 minutes. Responses were collected during a 2-month time span."
A relatively insignificant 52 responses out of 9000+ distributed survey requests, primarily targeting professional computer interface designers, isn't going to yield results that can be shown to be representative of iPad buyers/users at large.
Sure, iPad owners are generally happy with their purchase. Yes, some have a lot of apps, some probably have very few. Are the numbers in this study any indication whatsoever of the actual numbers? Not very likely. A survey of AppleInsider members would be just as valid an indicator.
Fun to discuss but statistically flawed if intended to represent the iPad user group in general.
You're missing the point that 52 users is too small to get a statistically significant sample.
Technically, you're wrong. 52 can be a statistically significant sample - but with a large margin of error.
You also left out confidence intervals from your discussion. With 52 people sampled, you cited an error margin of +/- 14%. That number depends on the confidence interval. Typically, statisticians use 95%, but there's no requirement for that. If you were content with a 90% confidence interval, the error margin would be smaller.
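To make that concrete, here's a quick sketch of the standard normal-approximation formula for a proportion's margin of error (assuming, as is usual when no result is specified, the worst case p = 0.5 - the survey itself doesn't say how its figure was derived). It reproduces the cited roughly +/- 14% at 95% confidence for n = 52, and shows the margin shrinking at 90%:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(n, confidence, p=0.5):
    """Margin of error for a sample proportion via the normal
    approximation. p = 0.5 gives the widest (worst-case) margin."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return z * sqrt(p * (1 - p) / n)

n = 52
print(f"95% confidence: +/- {margin_of_error(n, 0.95):.1%}")  # about +/- 13.6%
print(f"90% confidence: +/- {margin_of_error(n, 0.90):.1%}")  # about +/- 11.4%
```

Same data, same sample size - the margin of error only looks smaller because you've accepted being wrong more often.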
Quote:
Originally Posted by Porchland
You said that the results would likely be the same in a much larger sample, and that is simply not true. You are assuming that the margin of error for 52 users is much smaller than it actually is.
The results COULD be the same, although the error margin would be smaller.
Quote:
Originally Posted by Porchland
I'm serious. I have a degree in this shit.
Also, the margins can and have been proven. You can count out populations of M&Ms, colored beans, etc., draw them out randomly, and hit the margin of error for the measured populations more than 90% of the time.
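The bean-counting experiment is easy to sketch in code. This is a simulation (not the poster's actual experiment): it draws repeated random samples of 52 from a population with a known proportion and counts how often the sample proportion lands inside the worst-case 95% margin of error - which it should do well over 90% of the time:

```python
import random
from math import sqrt

random.seed(1)

TRUE_P = 0.6   # known fraction of, say, red beans in the jar (an arbitrary choice)
N = 52         # the sample size under discussion
MOE = 1.96 * sqrt(0.5 * 0.5 / N)   # worst-case 95% margin of error, ~13.6%

trials = 10_000
hits = 0
for _ in range(trials):
    # draw N beans at random and compute the sample proportion of reds
    sample_p = sum(random.random() < TRUE_P for _ in range(N)) / N
    if abs(sample_p - TRUE_P) <= MOE:
        hits += 1

print(f"{hits / trials:.1%} of samples fell within the margin of error")
```

Because the sampling here is genuinely random, the coverage comes out as advertised - which is exactly the property a self-selected survey sample can't claim.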
Then you ought to recognize that an even larger problem than the sample size is the randomness factor. Without knowing how the sample was selected, one can't really know if it was random.
Besides that, there are basic math errors. As someone else pointed out, they cited 83.65% satisfied - which works out to 43.5 of 52 respondents, a fractional number of people. Aside from that, of course, they clearly don't understand significant figures.
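A two-line check makes the point: no whole-number count of satisfied respondents out of 52 rounds to 83.65%.

```python
# Every satisfaction percentage achievable with 52 respondents,
# rounded to two decimals as in the published figure
percentages = [round(100 * k / 52, 2) for k in range(53)]

print(83.65 in percentages)            # False
print(percentages[43], percentages[44])  # 82.69 and 84.62 bracket it
```

The nearest achievable values are 43/52 = 82.69% and 44/52 = 84.62%, so 83.65% cannot have come from 52 responses.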
Quote:
The results COULD be the same, although the error margin would be smaller.
I think it would be funny as shit if somehow or other 52,000 people were actually surveyed and the results were 81% satisfied...
... but, apparently, that is mathematically not possible.