I'm all for contact privacy, but also for my jailbreak. I'll just keep using ContactPrivacy from Cydia and not update to 5.1 or 5.0.2, whichever fixes this.
No, they don't, especially when you consider that the educated Android user doesn't exist. They use Android without knowing what Android is. To put it simply, they buy the only available $200 smartphone that isn't a Nokia, and that's it.
Is it your opinion that it's better that an app doesn't let you know what services or information it's accessing? It's clear that the App Store has become so large that Apple can't possibly thoroughly vet every app and every process it's accessing.
Better not to know about it then?
That was the policy, but Path showed Apple that developers can't be trusted to follow it on their own, so now it will be enforced by the OS.
Indeed, this has always been against the developer ToS -- Path either didn't care or just didn't RTFM, and has now ruined it for the rest of us.
No, they don't, especially when you consider that the educated Android user doesn't exist. They use Android without knowing what Android is. To put it simply, they buy the only available $200 smartphone that isn't a Nokia, and that's it.
The issue is less of an issue with Android than iOS because most Android users aren't accessing the Android Market. The devices are the new feature phones for many.
It's about bloody time as this issue has been in the media for several minutes now¡
Exactly what I was thinking. I may have to take you off ignore mode Solipsism.
Dang.
Indeed, this has always been against the developer ToS -- Path either didn't care or just didn't RTFM, and has now ruined it for the rest of us.
That's beside the point. Apple shouldn't allow any sensitive data to be accessible by third parties. They should have been proactive, not reactive, about this issue.
What's interesting is that this will get much less press than the location issue, which was only sent to Apple, was anonymous, and only contained WiFi hotspot and cellular tower data, not your specific location.
So the closed-garden control of the App Store doesn't always work, after all?
I'd say that after all these years and hundreds of thousands of apps, Apple has a pretty good track record. I'd trust them over the Android store any day of the week.
Amen!
That's beside the point. Apple shouldn't allow any sensitive data to be accessible by third parties. They should have been proactive, not reactive, about this issue.
Were they actively allowing it, Solipsism? If so, then I just changed my opinion on whether Apple is responsible and should get the blame.
As I understood it, Apple didn't approve it to begin with but was simply unaware. Possibly they were overwhelmed by the volume of apps submitted to the App Store, coupled with developer pressure to get their apps approved? I've seen a few recent AI references to apps that should never have made it into the App Store, leaving some Apple users scratching their heads.
What are you talking about? Apple invented the permission-based app model. Or perfected it. Or made it popular. Or... forget it, Apple is making billions from the permission-based app model like it's nobody's business! /s
Trouble is, up until now, permission to access the contents of your address book was "always on" at runtime. Apple's customer privacy guidelines with respect to the address book were enforced at the source code level, as Apple vetted apps being submitted for inclusion in the App Store. If any snippet of code violated Apple's guidelines regarding the privacy of the customer's address book but managed to slip through Apple's vetting process during submittal to the App Store, then that code could run completely unfettered, without the need to notify the user or obtain the user's permission.
It was a trust-based permission, in that the customer had no choice whatsoever but to trust that Apple was doing an adequate job of vetting the source code.
Now we're apparently moving to an enforcement-based permission system. If they do it right, that will be the better system.
Here's what I would want to see: a pop-up shows up the first time an App tries to access any particular protected API, describing the nature of the information the App is trying to access and asking the user to decide explicitly whether to allow the App to continue or abort the access. It would also include an option to "remember the decision," so that the user can choose to be notified on each future attempt to access data of the same nature, or else give the App perpetual permission to access that type of data without the repeated nagging.
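The decision flow in that proposal boils down to a few lines. This is only an illustrative sketch of the prompt-and-remember logic, in Python for brevity; `PermissionGate` and its prompt callback are invented names, not any real iOS API.

```python
# Hypothetical sketch of the proposed first-use permission prompt.
# The prompt callback stands in for the OS dialog: it returns the
# user's decision and whether that decision should be remembered.

class PermissionGate:
    def __init__(self, prompt):
        self.prompt = prompt   # prompt(kind) -> (allowed, remember)
        self.remembered = {}   # kind -> allowed, for "remember the decision"

    def request(self, kind):
        """Return True if the app may access this kind of data."""
        if kind in self.remembered:
            # A remembered decision suppresses the repeated nagging.
            return self.remembered[kind]
        allowed, remember = self.prompt(kind)
        if remember:
            self.remembered[kind] = allowed
        return allowed
```

A denied-and-remembered decision would then block every later access attempt without ever showing the dialog again, while a plain denial would re-prompt on the next attempt.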
Here's what I would want to see: a pop-up shows up the first time an App tries to access any particular protected API, describing the nature of the information the App is trying to access and asking the user to decide explicitly whether to allow the App to continue or abort the access. It would also include an option to "remember the decision," so that the user can choose to be notified on each future attempt to access data of the same nature, or else give the App perpetual permission to access that type of data without the repeated nagging.
That's what Android does now. It won't ask again unless the permissions change. Even better, it won't allow automatic updating of any app whose permissions have changed since you installed it. You have to update manually and agree that you've noted and accepted the permission changes.
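Taken literally, the update rule described here (no automatic update if the permission set has changed at all) reduces to a single comparison. This is an illustrative sketch of that stated rule, not Android Market's actual code:

```python
def can_auto_update(installed_perms, update_perms):
    """Allow automatic updating only when the update requests exactly
    the permissions the user already agreed to at install time; any
    change forces a manual update and explicit re-approval."""
    return set(update_perms) == set(installed_perms)
```

Under this rule even an update that drops a permission requires manual approval, since the set the user originally agreed to has changed.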
Were they actively allowing it, Solipsism? If so, then I just changed my opinion on whether Apple is responsible and should get the blame.
As I understood it, Apple didn't approve it to begin with but was simply unaware. Possibly they were overwhelmed by the volume of apps submitted to the App Store, coupled with developer pressure to get their apps approved? I've seen a few recent AI references to apps that should never have made it into the App Store, leaving some Apple users scratching their heads.
As I understand it Apple didn't prevent anyone from getting access, they simply put up written rules that said you were not allowed. Laws are guidelines, not security. This is why I say Apple should have been proactive to prevent this from ever being an issue.
I hope the day comes soon when people realize that all this sharing is doing nothing to improve their lives and, if anything, keeps them chained in service to a computer, whether mobile or at home, wasting time when they could be out in reality. Everything you post online will be used against you someday. The more info you willingly give away to software companies and governments, the less human you become.
I don't tweet, have no Facebook or LinkedIn page, and have a healthy social life in spite of it all.
Thanks for sharing, Kerry Buckley.
...
It was a trust-based permission, in that the customer had no choice whatsoever but to trust that Apple was doing an adequate job of vetting the source code.
Many here will testify that most users will be happy with that model and defend it even when it fails.
Now we're apparently moving to an enforcement-based permission system. If they do it right, that will be the better system.
Here's what I would want to see: a pop-up shows up the first time an App tries to access any particular protected API, describing the nature of the information the App is trying to access and asking the user to decide explicitly whether to allow the App to continue or abort the access. It would also include an option to "remember the decision," so that the user can choose to be notified on each future attempt to access data of the same nature, or else give the App perpetual permission to access that type of data without the repeated nagging.
That's very well thought out, and indeed a step closer to the current Android model, which is far from perfect itself. There has been some research on the topic of information-stealing by smartphone applications. The authors of the linked article suggest that the user should be able not only to control what information to share, but also to replace this information with empty or bogus entries.
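The empty-or-bogus-entries idea amounts to a third option beside allow and deny: the platform keeps the app working but feeds it harmless data. A minimal sketch, with the per-app policy and all names invented for illustration:

```python
# Hypothetical address-book gatekeeper: instead of a flat allow/deny,
# a per-app policy can also return empty or fake data.

REAL_CONTACTS = ["Alice <alice@example.com>", "Bob <bob@example.com>"]

def contacts_for(app, policy):
    """Return the address book an app is allowed to see.
    policy maps app name -> 'allow', 'empty', or 'bogus'."""
    mode = policy.get(app, "empty")  # default: share nothing real
    if mode == "allow":
        return list(REAL_CONTACTS)
    if mode == "bogus":
        # Plausible-looking fake entries keep the app functional
        # without leaking the user's real address book.
        return ["User One <one@invalid.example>"]
    return []  # 'empty': the app sees an empty address book
```

An app fed bogus entries can still run its "find your friends" feature end to end, which avoids the breakage that a hard denial can cause, while the real contacts never leave the device.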
Indeed, this has always been against the developer ToS -- Path either didn't care or just didn't RTFM, and has now ruined it for the rest of us.
I think it's simpler than that. They made a conscious choice to take that data. Only when caught did they find a need to talk about it. At least they didn't try to spin their greed for your data.
As I understand it Apple didn't prevent anyone from getting access, they simply put up written rules that said you were not allowed. Laws are guidelines, not security. This is why I say Apple should have been proactive to prevent this from ever being an issue.
It's a shame that you're demanding Apple treat all of their developers as criminals. Because some act poorly, Apple should punish everybody. What you want is developers with ethics. Perhaps they're rare now.
Here's what I would want to see: a pop-up shows up the first time an App tries to access any particular protected API, describing the nature of the information the App is trying to access and asking the user to decide explicitly whether to allow the App to continue or abort the access.
Sounds like Vista for iOS
Many here will testify that most users will be happy with that model and defend it even when it fails.
I'm not a developer, so I have no first hand knowledge of Apple's app approval process. And as a lazy bum, I've not tried to find any second hand knowledge either.
So, perhaps someone more industrious can help me out. Exactly what is included in Apple's app approval process? Do developers actually submit source code with their apps? Does Apple actually review that code? How would they determine what Path was doing?