Surveillance app stored teens' Apple IDs on unprotected servers
An app meant to let parents monitor teenagers' phone activity was storing the teens' Apple ID passwords in unprotected plaintext form, a report revealed on Sunday.
The information collected by TeenSafe was hosted on Amazon servers, and also included device identifiers and the email addresses of parents, ZDNet said, crediting the discovery to U.K. researcher Robert Wiggins. Those servers have been temporarily pulled offline, and a TeenSafe representative stated that the company has begun notifying anyone who might be impacted.
At least 10,200 records from the past three months contained customer data, though some were duplicates.
TeenSafe markets itself as a secure, encrypted way for parents to track call, Web, and location histories, as well as read text messages, even deleted ones.
Using the app to track a teen's iPhone requires that they have two-factor authentication turned off, though, which means that any hacker who discovered the plaintext passwords could hijack a teen's Apple ID and view private content.
It's not known if any malicious attacks have been launched, but some of the affected customers had already changed their account data prior to being alerted.
Comments
I use it so I know when Mrs Rayz2016 is approaching the house. Gives me plenty of time to clear up the liquor bottles and start mowing the lawn.
This is the reason why I hope Apple incorporates this into iOS directly. Using Family Sharing as a base, Apple could allow for more flexibility on the types of Users.
So if I set up my wife as a parent, she will have all her privacy and I will not be able to monitor her. If I set up my kids as children, then it should provide some kind of monitoring ability.
I am not the kind of parent who would want to read all their kids' messages and emails. I believe in giving them privacy, but if I need to opt for a way to make sure they are not being lured away by something, I'd rather have a stock solution than a third-party one with which I have to share the Apple ID and password.
I've recently reviewed a few of the monitoring and screen time apps, but I've decided against using most of them since they use their own authentication and cloud servers. I have a hard time trusting them with my information.
I see that DayOne (the journaling app), which removed support for iCloud so that it could host journals on its own servers (for an extortionate price, in my opinion), recently had its sync service fail.
They restored backups, got the service running again, and almost immediately began receiving reports that users were seeing entries from other people’s journals.
http://help.dayoneapp.com/day-one-sync/may-2018-day-one-outage-postmortem
Actually, @StrangeDays might be interested in the explanation as to how this happened.
They clearly don’t have a clue.
I'd really like to see Apple incorporate additional sandbox-like testing into its app approval process to detect whether submitted apps "leak" any Apple-attributed private data as part of their data storage and retrieval services, like sending Apple ID credentials in cleartext over any outbound communication connection.

I see no reason why Apple couldn't set up a test environment with something akin to Wireshark to monitor all outbound traffic and see whether anything Apple cares about (like Apple IDs) is leaving the device/machine boundary in unencrypted form. Apps that leak any Apple-attributed data should not be approved, and apps that send unencrypted non-Apple-attributed data, i.e., private data owned by the app itself, should be approved but only with a Big Red Flag in the App Store that warns users the app sends its data in unencrypted form over the internet. Users can then decide whether they want to trust the app with the data it manages.

What I'm not asking for is for Apple to monitor communication transactions on the device itself to detect leakage. I'm only asking Apple to establish a test environment (with test IDs and data) and use external analyzers to detect leakage on-the-wire.
**gets off soapbox**
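The on-the-wire leak check proposed above could be sketched roughly like this: plant known test credentials on the sandbox device, capture all outbound payloads externally, and flag any packet that contains those credentials in cleartext or in a trivially decodable form such as base64 (as used by HTTP Basic auth). This is a minimal illustrative sketch, not Apple's actual review tooling; the credential values and the capture list are hypothetical, and a real harness would pull payloads from a packet-capture tool like Wireshark/tshark rather than a Python list.

```python
import base64

# Hypothetical test credentials planted on the sandbox device. Any
# appearance of them in captured outbound traffic means the app sent
# them unencrypted (properly TLS-encrypted payloads would be opaque).
TEST_APPLE_ID = "reviewer@example.com"
TEST_PASSWORD = "hunter2-sandbox"


def leaks_credentials(payload: bytes) -> bool:
    """Return True if a captured outbound payload contains the planted
    credentials in cleartext or base64-encoded (HTTP Basic auth) form."""
    needles = [TEST_APPLE_ID.encode(), TEST_PASSWORD.encode()]
    # HTTP Basic auth transmits "user:pass" base64-encoded -- encoded,
    # but still unencrypted, so it counts as a leak.
    needles.append(
        base64.b64encode(f"{TEST_APPLE_ID}:{TEST_PASSWORD}".encode())
    )
    return any(n in payload for n in needles)


def scan_capture(payloads):
    """Return the indices of captured packets that leak the credentials."""
    return [i for i, p in enumerate(payloads) if leaks_credentials(p)]
```

An app that passed the scan with clean traffic would be approved; any hit would either block approval (Apple-attributed data) or trigger the warning label (app-owned data).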
Today is not the same as when we were young (if you're older than 40), but I can't tell you where it's gone wrong. I don't know. Maybe we're just trying too hard not to be our parents.
Anyway, I'm not sure why a developer would need to collect the Apple ID and password, or save it on an outside server. I've never used iCloud storage in my apps, but I assume the SDK does the authentication from the info saved in the device settings, so developers don't have to touch the login info. But parental controls apps have to rely on various odd workarounds to do what they do.
Of course, iOS doesn't help anything by popping up dialogs asking for one's Apple ID and password all over the place. It trains users to expect to be asked for it, so they might not even realize whether they are giving it to Apple, to some poorly written app, or to a phisher.
I agree a bit with both of these. There absolutely have to be boundaries and a certain amount of 'monitoring' so you can know if the other parenting work you are doing is working or not, so you can do some course-correction. At the same time, it's impossible to monitor everything, so if the foundations aren't in place, it's a hopeless mission of whack-a-mole until they get old enough to just head down the 'bad path' on their own. Plus, if monitoring is done wrong, it just adds a lack of relationship/trust and encourages a likely more tech-savvy kid to get in a game of cat-and-mouse they will likely win.
My own personal take on this is that it's a parents right/duty to interfere or check on what the kids are doing at any time. Mine is pretty young yet, but he knows if I hear something questionable, I'm having him show me what he was watching and we talk about it. It's also important for them to understand that with maturity (not necessarily age) comes privileges and that something 'bad' for a kid might be not as bad or fine for an adult. For example, something that scares kiddo so he won't sleep, might not scare mom/dad so they don't sleep.
And, then whenever possible, lay a foundation that would make a kiddo think for themselves that something is inappropriate or should be questioned (and brought to an adult) on their own. Actually talk to them about these things and why it is inappropriate. That can lead into talking about the difference between things that are always inappropriate vs things that are age-appropriate.
I'm sure it gets trickier as they get older, but I don't think there is any way to keep them in some kind of bubble. And any efforts I've ever run into to do so tend to lead to worse outcomes. The options aren't (1) keep them in a bubble or (2) let them run wild. Both of those tend to be disasters.
Yes, and it's *really*, *really* NOT just Cambridge Analytica! It's everyone from our own governments (yes, there is an actual well-funded department of the US-gov't that works with Hollywood/media to get messages and content into our media, TV, movies, etc.), to well-meaning people with un-thought-through worldviews, to all sorts of intentionally malicious stuff.
Yeah, other-ditch-itis seems to be a recurring societal tendency.
I have a story about the whole 'hands off' parenting too.
When my wife was in seminary in the SF Bay area, we were at a birthday party for a friend. Another woman was there with her pre-teen daughter. The conversation turned to what my wife was doing and when the mom heard this, she said something like,
"Yea, I should probably teach my daughter some of that stuff. We grew up Lutheran, but we figured we'd just let her [daughter] decide for herself what to believe. But, the other day we were in a jewelry store and she noticed a Catholic crucifix and asked what it was. I was shocked that she didn't seem to know anything about Christianity."
I suppose the atheists out there might think that a positive thing, but to be utterly ignorant and unaware of what like 1/3 of the world believes (over 1/2 w/ Islam), much of the last couple millennia of history, the foundations of the societies in which we live, etc. can't be a good thing. And, it isn't like the girl would now be picking from among the options in a well-informed manner, either. She just gets parented by the world-views of the media, movies, classroom, and culture she happens to run into (no matter how right or wrong they might be).