AppleInsider › Forums › Mobile › iPhone › Apple's approval of 'Jekyll' malware app reveals flaws in App Store review process

Apple's approval of 'Jekyll' malware app reveals flaws in App Store review process

post #1 of 43
Thread Starter 
A group of researchers from Georgia Tech managed to get a malicious app past Apple's review process, finding the company may run only a few seconds' worth of tests before posting an app to the App Store.



Dubbed "Jekyll," the malicious software was uploaded to Apple's App Store in March to test the company's control measures, which dictate what apps are allowed to be distributed through the App Store, reports MIT's Technology Review.

According to the research team responsible for creating the software, Apple was unable to distinguish dormant bits of code that would later be assembled into a malicious app. Once installed on a victim's device, Jekyll, disguised as a news delivery app, was able to post tweets, send email and text messages, access the phone's address book, take pictures, and direct Safari to a malicious website, among other nefarious actions.

"The app did a phone-home when it was installed, asking for commands," said Stony Brook University researcher Long Lu. "This gave us the ability to generate new behavior of the logic of that app which was nonexistent when it was installed."

Jekyll also had code built in that allowed the researchers to monitor Apple's testing process, which reportedly only ran the app for "a few seconds" before letting it go live on the App Store. Lu said the Georgia Tech team deployed Jekyll for only a few minutes, downloading and pointing the app toward themselves for testing. No consumers installed the app before it was ultimately taken down as a safety precaution.

"The message we want to deliver is that right now, the Apple review process is mostly doing a static analysis of the app, which we say is not sufficient because dynamically generated logic cannot be very easily seen," Lu said.

The research team wrote up its results in a paper that was scheduled for presentation on Friday at the Usenix conference in Washington, D.C.

Apple spokesman Tom Neumayr said the company took the research into consideration and has updated iOS to deal with the issues outlined in the paper. The exact specifics of these fixes were not disclosed, and no comment was made on the App Store review process.
post #2 of 43

And so the game of whack-a-mole continues.

"Proof is irrelevant" - Solipsism
post #3 of 43
Unfortunately, the story makes no mention of whether THIS research team had Apple's blessing to engage in this activity. Unlike poor Mr. Balic, the self-described "security researcher" who has been vilified for exposing serious issues with Apple's Dev Center website.

http://appleinsider.com/articles/13/07/22/researcher-admits-to-hacking-apples-developer-site-says-he-meant-no-harm-or-damage

It is sad to think that the Georgia Tech connection is enough to insulate this group, engaged in a potentially malicious activity, from the same criticism that an international programmer received.
post #4 of 43
Quote:
Originally Posted by TeaEarleGreyHot View Post

Unfortunately, the story makes no mention of whether THIS research team had Apple's blessing to engage in this activity. Unlike poor Mr. Balic, the self-described "security researcher" who has been vilified for exposing serious issues with Apple's Dev Center website.

http://appleinsider.com/articles/13/07/22/researcher-admits-to-hacking-apples-developer-site-says-he-meant-no-harm-or-damage

It is sad to think that the Georgia Tech connection is enough to insulate this group, engaged in a potentially malicious activity, from the same criticism that an international programmer received.

 

The difference being that this group used the application only on themselves, not real customers.  They didn't take 100,000 developer email addresses in the name of research.  In theory, they could have also used this opportunity to backdoor Apple while the app was actively running for testing.

post #5 of 43
Quote:
Originally Posted by AppleInsider View Post

According to the research team responsible for creating the software, Apple was unable to distinguish dormant bits of code that would later be assembled into a malicious app. Once installed on a victim's device, Jekyll, disguised as a news delivery app, was able to post tweets, send email and text messages, access the phone's address book, take pictures, and direct Safari to a malicious website, among other nefarious actions.

OK, so there's a way to bypass Apple's security. But notice that the app did not have any malicious code as submitted - the malicious code was reassembled in use. It's pretty hard to figure out how Apple (or anyone) could block an app which doesn't have malicious code when submitted.

I guess they'll have to settle for just being 100,000 times more secure than their competition.
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
post #6 of 43

So a security research firm has to create a scenario to test Apple's defenses.

Unlike Android, which pretty much leaves its door wide open.

I'll gladly accept Apple's "security issues" any day versus what Android does.  

post #7 of 43
I am glad they big-noted themselves at Apple's expense and made a whole pile of wannabe hackers aware of the flaw instead of just telling Apple.
post #8 of 43
So Apple already fixed this according to Neumayr? Updating iOS to make such exploits fail even IF approved makes sense. Hope that's sufficient, because no approval process alone is ever enough. It's a great first line of defense, lacking on Android, but not the last!

(The LAST line of defense is also lacking on Android--as long as devices aren't limited to the Play Store: disabling a malicious app once it's caught.)
post #9 of 43
Note to Apple: This is what some of the 30% is for. Get with the times. Malicious do little sleep need.

When I find time to rewrite the laws of Physics, there'll Finally be some changes made round here!

I am not crazy! Three out of five court appointed psychiatrists said so.

post #10 of 43

So, walled and insecure? 

post #11 of 43
Quote:
Originally Posted by jragosta View Post


OK, so there's a way to bypass Apple's security. But notice that the app did not have any malicious code as submitted - the malicious code was reassembled in use. It's pretty hard to figure out how Apple (or anyone) could block an app which doesn't have malicious code when submitted.

I guess they'll have to settle for just being 100,000 times more secure than their competition.

 

Well, what's interesting about this whole issue and what got overlooked a bit is the fact that the app does not actually gain any increased rights within the system.

 

This demonstration has merely proven that the review process can be tricked, either by technical means such as in this example, or by pure luck, such as we have seen in the past through a number of questionable apps.

 

And while it was possible to get around the approval criteria by injecting additional, potentially malicious code into the application post approval, note that the application itself did not bypass or disable any of iOS's security features. It is therefore subject to the same restrictions as other apps that actually get legitimately approved.

 

The worst that could have happened is access to certain private APIs, but even those don't bypass security and the app would still run within its own sandbox, etc.

 

This is very much unlike the competition. ;-)

post #12 of 43
Quote:
Originally Posted by bulk001 View Post

So, walled and insecure? 

 

The review process got tricked; no one is talking about the system being insecure. In fact, not a single security mechanism of the system itself was cracked.

post #13 of 43
Quote:
Originally Posted by jragosta View Post
 It's pretty hard to figure out how Apple (or anyone) could block an app which doesn't have malicious code when submitted.
 

 

They can't. But they can make sure that any downloaded code can't execute and, more importantly, can't cross the app sandbox.

 

Normally that would be the case for any app. The problem here is that the hack exploited a bunch of iOS bugs to get around that. Apple just needs to patch those bugs.

 

In fact iMore is reporting that this hack already doesn't work on iOS 7.

post #14 of 43
Quote:
Originally Posted by cynic View Post

 

The review process got tricked; no one is talking about the system being insecure. In fact, not a single security mechanism of the system itself was cracked.

Oh please. If this was Android, BB or MS you would be all over how they had failed in their security. Just because it is Apple does not give them a pass. And before you accuse me of trolling, I own a company with over 150 Apple devices. Apple has sold us the walled garden as being more secure, and it turns out that if you know what you are doing, you can get around it. This time it was a security firm with no intent to do harm, but who knows who will "trick" them next time. 

post #15 of 43
Quote:
Originally Posted by bulk001 View Post

Oh please. If this was Android, BB or MS you would be all over how they had failed in their security. 

 

Oh please, Google Play's "security" gets circumvented almost every week. Did you hear about their "Bouncer" that was going to stop everything? It's a total sieve.

 

So yes Apple's walled system is more secure. This has been demonstrated year after year.

 

The thing is, no company in the world offers 100% foolproof security against determined hackers; it's a continuous process.

 

You also have to understand that in this case these people "who know what they are doing" have PhDs in security.

post #16 of 43
Quote:
Originally Posted by bulk001 View Post

Apple has sold us the walled garden as being more secure

Didn't you just answer your own post?

Apple has never claimed 100% security. Nobody does.

However, you're more than happy to try the other camp, if you dare.
Smoke me a kipper. I'll be back for breakfast.
post #17 of 43
Quote:
Originally Posted by jragosta View Post

OK, so there's a way to bypass Apple's security. But notice that the app did not have any malicious code as submitted - the malicious code was reassembled in use. It's pretty hard to figure out how Apple (or anyone) could block an app which doesn't have malicious code when submitted.

It says the malicious code was submitted with the app but was dormant code. They perhaps split a malicious binary into parts and reassembled it to then execute it later or maybe just didn't call the code directly. They said Apple needs to do dynamic rather than static testing of the submitted apps to fix the issue. The fact they don't run apps explains why some apps get through with crazy in app purchases. Perhaps they need to hire some of their factory workers to manually run apps and see what they do in addition to the automated testing.

It says here they get 26,000 submissions a week:

http://articles.latimes.com/2012/mar/14/business/la-fi-tn-apple-26000-20120314

Hire 300 staff, check 20 apps each manually per day. It can even be HQ staff.
Quote:
Originally Posted by jragosta View Post

I guess they'll have to settle for just being 100,000 times more secure than their competition.

Until the review process is made more thorough, this method would allow any app to be submitted with malicious code. They obviously still have tighter security but this is a problem that needs to be fixed.

There is some inherent damage limitation in the stores in that apps are hard to find anyway and apps that don't really do anything useful in the first place won't be installed much but wherever possible, Apple should make things as secure as they can and dynamic testing would help.
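The dormant-code trick is easier to see with a toy sketch. This is a hypothetical, benign Python illustration (not the researchers' actual code, and the names are made up): the payload never exists as one piece until the app "phones home," so a static scan of what was submitted finds nothing suspicious.

```python
# Toy sketch of dynamically assembled logic (benign payload).
# Individually innocuous fragments shipped inside the "app":
FRAGMENTS = ["pri", "nt('phoned", " home')"]

def fetch_command(server_says_go: bool) -> str:
    """Stand-in for the phone-home step. During a short review run,
    server_says_go is False and the payload never materializes."""
    if not server_says_go:
        return ""                  # dormant: no payload string exists yet
    return "".join(FRAGMENTS)      # payload only assembled at runtime

def run(server_says_go: bool) -> str:
    # A real attack would execute the assembled code; here we just
    # return it to show it only exists after the trigger.
    return fetch_command(server_says_go)
```

The static view of the source contains only the harmless fragments; the complete behavior appears only under a runtime condition the reviewer never sees.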
post #18 of 43
Quote:
Originally Posted by Marvin View Post

It says the malicious code was submitted with the app but was dormant code. They perhaps split a malicious binary into parts and reassembled it to then execute it later or maybe just didn't call the code directly. They said Apple needs to do dynamic rather than static testing of the submitted apps to fix the issue. The fact they don't run apps explains why some apps get through with crazy in app purchases. Perhaps they need to hire some of their factory workers to manually run apps and see what they do in addition to the automated testing.
Until the review process is made more thorough, this method would allow any app to be submitted with malicious code. They obviously still have tighter security but this is a problem that needs to be fixed.

There is some inherent damage limitation in the stores in that apps are hard to find anyway and apps that don't really do anything useful in the first place won't be installed much but wherever possible, Apple should make things as secure as they can and dynamic testing would help.

Sure. So there's one security issue that affects iOS - but also would affect Android. And there are 100,000 security issues that affect Android and not iOS.

Which is better?

As for dynamic testing, that's nice in theory. In practice, it would have an enormous impact on system operation. The OS would be far larger and everything would be dog slow. I don't think that's worth the tradeoff. Apple will presumably settle for being 100,000 times more secure rather than 101,000 times more secure.
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
post #19 of 43
Quote:
Originally Posted by Marvin View Post
It says the malicious code was submitted with the app but was dormant code. They perhaps split a malicious binary into parts and reassembled it to then execute it later or maybe just didn't call the code directly. They said Apple needs to do dynamic rather than static testing of the submitted apps to fix the issue. 
Hire 300 staff, check 20 apps each manually per day. It can even be HQ staff.
Until the review process is made more thorough, this method would allow any app to be submitted with malicious code. They obviously still have tighter security but this is a problem that needs to be fixed.

There is some inherent damage limitation in the stores in that apps are hard to find anyway and apps that don't really do anything useful in the first place won't be installed much but wherever possible, Apple should make things as secure as they can and dynamic testing would help.

 

That's a misunderstanding of the problem. It's not possible to do "dynamic testing" of downloadables, because they can change at any time. The bad guys could easily leave "safe" downloadables for a few weeks while Apple reviewed the app and then change to malicious ones when it went live.

 

No amount of staff in the world reviewing apps would change this.

 

What Apple should fix is to make sure iOS does not allow apps to attack themselves via buffer overflows (the method used in this "Jekyll" attack) and trigger dormant code that way.

 

iOS already uses Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) methods that would prevent this, but the researchers found a weakness in Apple's ASLR that allowed them to still guess the addresses of their malicious code.

 

The ASLR protections are exactly what Apple needs to improve to stop this from happening again. If an attacker can't guess the addresses of their malicious code they can't execute it.
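The point about weak ASLR can be made concrete with a back-of-the-envelope simulation. This is a hypothetical Python sketch, not how iOS actually implements ASLR: it simply shows that when the randomized base address has only a few bits of entropy, brute-force guessing succeeds quickly.

```python
import random

def guesses_to_hit(entropy_bits: int, rng: random.Random) -> int:
    """Simulate guessing a randomized base address: with only
    entropy_bits of randomness there are 2**entropy_bits slots."""
    slots = 2 ** entropy_bits
    target = rng.randrange(slots)   # the (secret) randomized address
    guesses = 0
    while True:
        guesses += 1
        if rng.randrange(slots) == target:
            return guesses

rng = random.Random(0)  # fixed seed so the sketch is reproducible
# With weak randomization (say 8 bits), an attacker lands on the
# right address in a few hundred tries on average; each extra bit
# of entropy doubles the expected work.
trials = [guesses_to_hit(8, rng) for _ in range(100)]
avg = sum(trials) / len(trials)
```

The "8 bits" figure is illustrative only; the real weakness the researchers found was in how predictable Apple's layout randomization was, not a specific bit count.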


Edited by jmncl - 8/16/13 at 7:13pm
post #20 of 43

"A group of researchers from Georgia Tech managed to get a malicious app past Apple's review process, finding the company may run only a few seconds' worth of tests before posting an app to the App Store."

And yet . . .

Yeah. That's right.

post #21 of 43

Is it really any surprise that someone is able to make a legitimate-looking app, and bury some code in there that only activates after some time or criteria has passed?  It's called a trojan for a reason.  Famed security researcher Charlie Miller proved the exact same thing (and subsequently got himself banned from the App Store) by writing a fake finance app.

 

The part that IS disturbing to me is that I thought that the Contacts and such were supposed to be more or less firewalled until you explicitly give that app permission.  What's up with that?  It's a bit disturbing to think that a rogue app could start sending emails and such.  What is this, Android?!

post #22 of 43
Quote:
Originally Posted by _Rick_V_ View Post

Is it really any surprise that someone is able to make a legitimate-looking app, and bury some code in there that only activates after some time or criteria has passed?  It's called a trojan for a reason.  Famed security researcher Charlie Miller proved the exact same thing (and subsequently got himself banned from the App Store) by writing a fake finance app.

The part that IS disturbing to me is that I thought that the Contacts and such were supposed to be more or less firewalled until you explicitly give that app permission.  What's up with that?  It's a bit disturbing to think that a rogue app could start sending emails and such.  What is this, Android?!
Nothing in the article indicates it was done without permissions. All they did was execute code that was not in the original app.

The story is a lot more hype than substance. I imagine the user did give permission to access contacts, post tweets, etc. The problem is they did it with unreviewed code, not that they did it without permission.
post #23 of 43
Quote:
Originally Posted by jragosta View Post


Sure. So there's one security issue that affects iOS - but also would affect Android. And there are 100,000 security issues that affect Android and not iOS.

Which is better?

As for dynamic testing, that's nice in theory. In practice, it would have an enormous impact on system operation. The OS would be far larger and everything would be dog slow. I don't think that's worth the tradeoff. Apple will presumably settle for being 100,000 times more secure rather than 101,000 times more secure.

 

I think the "dynamic testing" Marvin refers to would happen on the reviewers' workstations, not the end-user's device. Also, Google Bouncer does do some dynamic analysis, although who knows whether its tests would catch an app like Jekyll.


Edited by d4NjvRzf - 8/16/13 at 10:10pm
post #24 of 43
So... here we are again... Apple is obviously inadequate. They have a more in-depth review process, stricter controls on what data/APIs apps can access, and more stringent built-in OS security than the competition; but because a highly focused security research team was able to sneak past their approval process with INACTIVE code, this is a story. If the competition was held to the same standard as Apple, this kind of news would be tiresome and old hat. Why is there a double standard? Why do Android and Windows Phone get a free pass while people expect Apple to be perfect? Regardless, you'll find considerably less malware in the Apple ecosystem. Anyone who has a clue about security knows that NO SYSTEM IS INFALLIBLE. There will always be something that gets through. Let's just try to keep it in context. Just take a look at how many pieces of Android malware are in the wild vs. iOS malware and then tell me which is the more secure mobile platform.
post #25 of 43
Quote:
Originally Posted by d4NjvRzf View Post

I think the "dynamic testing" Marvin refers to would happen on the reviewers' workstations, not the end-user's device. Also, Google Bouncer does do some dynamic analysis, although who knows whether its tests would catch an app like Jekyll.

Yeah, I just meant running the apps at review, but it seems they already do this and it wouldn't easily find this vulnerability. I can't believe they approve all those apps with crazy in-app purchases after running them manually.
Quote:
Originally Posted by Wovel 
Nothing in the article indicates it was done without permissions. All they did was execute code that was not in the original app.

The story is a lot more hype than substance.

This site has a very detailed run-down of the attack:

http://www.imore.com/jekyll-apps-how-they-attack-ios-security-and-what-you-need-know-about-them

The executable code was in the original app. It was an app that collected data, e.g. articles from a news server. The developers planted a deliberate vulnerability in their app, and the app itself contained blocks of malicious code that were just separated out, which are called "gadgets" in this style of attack:

http://en.wikipedia.org/wiki/Return-oriented_programming

They created a data download on their server that exploited the buffer overflow vulnerability they had planted in their app, which then triggered the dormant code; the downloaded data itself was never executed. They knew where the malicious code was in memory because the address space layout randomisation Apple uses is limited.

Improving ASLR is one thing they can do but also protecting better against buffer overflow vulnerabilities as mentioned by jmncl earlier.

Apple uses an exploit mitigation called Data Execution Prevention (DEP), which meant the data they downloaded from the server could be written into memory but could not be executed. This data, however, changed the execution order of the application so the malicious code that was already in the app became active.

Perhaps they need to have an API that forces data that isn't bundled with the app to be loaded into some quarantined memory location so that it can't overwrite parts of the application binary and reorder the code execution. That way it wouldn't matter if anyone put a buffer overflow vulnerability in their app because it's not that code reading the file directly, they'd have to call Apple's API to read the downloaded file, which can also check for suspicious payloads at runtime.
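The "gadget" idea above can be sketched in miniature. This toy Python example is only an analogy (real return-oriented programming chains machine-code fragments via overwritten return addresses, not named functions, and all names here are hypothetical): the program contains only harmless pieces, but attacker-controlled data picks an ordering that produces behavior nobody wrote as a unit.

```python
# Each "gadget" is individually harmless and part of the shipped app.
def gadget_read(state):
    state["buf"] = state["input"]
    return state

def gadget_upper(state):
    state["buf"] = state["buf"].upper()
    return state

def gadget_send(state):
    state["sent"] = state["buf"]
    return state

GADGETS = [gadget_read, gadget_upper, gadget_send]

def execute_chain(chain, state):
    """'chain' plays the role of the overflowed return addresses:
    it is pure data, yet it dictates the control flow."""
    for idx in chain:
        state = GADGETS[idx](state)
    return state

# The benign order vs. an attacker-chosen order built from the
# very same code already present in the binary:
benign = execute_chain([0, 2], {"input": "hello"})
evil = execute_chain([0, 1, 2], {"input": "hello"})
```

No static scan of the gadget functions flags anything; the "malice" lives entirely in the data-driven ordering, which is why the researchers argue static review alone can't catch it.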
Quote:
Originally Posted by _Rick_V_ 
The part that IS disturbing to me is that I thought that the Contacts and such were supposed to be more or less firewalled until you explicitly give that app permission.

They used private APIs to call the processes directly. This is likely why Apple bans private APIs, but they can still be bundled in a malicious app. Still, you'd think the Camera, Contacts and other processes could protect against this somehow, with, say, verification keys that show the commands came from an authorised process. There could be an internal OS list that has approved processes for certain activities and the storage locations of those processes. They could use hash verification for the binaries, but that might mean having to update the approval every time the app gets updated.

I'd say this kind of attack is fairly complex to pull off as you'd have to know how to write a buffer overflow vulnerability into an app, how to split malicious code into gadgets, how to get the memory addresses, how to then write a payload that will rewrite the execution order and then to actually pull it off to do something worthwhile. It can be trivial if people reuse working code but these researchers don't sound like the kind of people that would share exploit code irresponsibly. Hopefully Apple will implement at least some of the extra security measures where possible but it is quite far from Android's ability to source malware:

http://www.computerworld.com/s/article/9241596/New_Android_malware_is_being_distributed_through_mobile_ad_networks

Malware and advertising together, how will Google deal with that one?
post #26 of 43
Quote:
Originally Posted by jmncl View Post

They can't. But they can make sure that any downloaded code can't execute and, more importantly, can't cross the app sandbox.

 

Normally that would be the case for any app. The problem here is that the hack exploited a bunch of iOS bugs to get around that. Apple just needs to patch those bugs.

 

In fact iMore is reporting that this hack already doesn't work on iOS 7.

That is the real news. The sandbox got exploited. Apple has always known that malicious code can lurk in the background undetected in the review process. One technique is to have date-conditional code that doesn't reveal itself until after the review process. The only way Apple could be 100% sure in review is if developers had to submit the app's source code and Apple compiled it for them after the review, which would be really expensive and time consuming.

 

It just goes to show that you should only download apps from reputable, big-name brands that have a reputation at stake. A small unknown programmer with a marginally useful application should be avoided, especially when the app is brand new and has no recommendations.

Life is too short to drink bad coffee.

post #27 of 43
So Apple has shut this door. Has Android? Now that the word is out, Android represents a very juicy target for this kind of exploit.

Yet another reason to move over to iOS ASAP.
post #28 of 43
@Chazwatson

Love your avatar, man! Brings back memories.
Edited by BARCODE - 8/17/13 at 11:00am
post #29 of 43
Quote:
Originally Posted by Marvin View Post

Malware and advertising together, how will Google deal with that one?
I don't know that either Android or iOS has a way yet to keep Black Hole from potentially exposing web page visitors to malware.
melior diabolus quem scies
post #30 of 43

Where's TEKSTUD when you need him? This thread should have 200+ posts by now.

"Apple should pull the plug on the iPhone."

John C. Dvorak, 2007
post #31 of 43
Quote:
Originally Posted by mstone View Post

The only way Apple could be 100% on the review is if developers had to submit the app source code and Apple compiled it for them after the review which would be really expensive and time consuming.

One does submit their app's source code. The issue here is that not all code behavior can be easily determined. Code can be very complex, even excessively so if the writer chooses. That's why apps with bugs can make it into the App Store to begin with-- not all bugs are found in the first place, even the intended ones. This one was designed to look like it wanted to do something innocent.
post #32 of 43
Quote:
Originally Posted by chazwatson View Post

Quote:
Originally Posted by mstone View Post

The only way Apple could be 100% on the review is if developers had to submit the app source code and Apple compiled it for them after the review which would be really expensive and time consuming.

One does submit their app's source code. 

You submit an .ipa binary file, not your Xcode project. Apple really doesn't want your code, and you don't want anyone else to have it either, especially if it contains proprietary program code and intellectual property.

Life is too short to drink bad coffee.

post #33 of 43
Quote:
Originally Posted by mstone View Post

You submit an .ipa binary file not your xcode project. Apple really doesn't want your code and you don't want anyone else to have it either especially if it contains proprietary program code and intellectual property.

 

It is promptly decompiled for examination.  Sorry, should have been more specific.

post #34 of 43
Quote:
Originally Posted by chazwatson View Post

Quote:
Originally Posted by mstone View Post

You submit an .ipa binary file not your xcode project. Apple really doesn't want your code and you don't want anyone else to have it either especially if it contains proprietary program code and intellectual property.

 

It is promptly decompiled for examination.  Sorry, should have been more specific.

I would argue against that being the case. Do you have any references? They use computers to scan your app to see if you are using any private APIs, but even if they did decompile it, it would be so difficult to read that they would not be able to make any sense of it without days of work by a skilled programmer. The reviewers probably aren't even programmers. Furthermore, the code is your private property, and if they were attempting to read your private code, that would be illegal.

Life is too short to drink bad coffee.

post #35 of 43

My reaction:

So what?

ONE got past Apple.

Hundreds of thousands get into whatever they call the Android app store.

Since Jobs' return to Apple, malware has NEVER been a problem in any meaningful way, whether for OS X or any other Apple OS on an Apple device. Nothing has really ever materialized for anyone to be concerned about. And Apple reached saturation (and is pushing beyond it) years ago.

This is simply not newsworthy. Apple's been notified, and that's it.

post #36 of 43
Of course they got Apple's blessing. To post malware, that is. It looks like a near facsimile of Charlie's hack, so nothing has changed. Most malware apps will be Mr. Hyde.
post #37 of 43
Quote:
Originally Posted by jmncl View Post

Oh please, Google Play's "security" gets circumvented almost every week. Did you hear about their "Bouncer" that was going to stop everything? It's a total sieve.

So yes Apple's walled system is more secure. This has been demonstrated year after year.

The thing is, no company in the world offers 100% foolproof security against determined hackers; it's a continuous process.

You also have to understand that in this case these people "who know what they are doing" have PhDs in security.
Phew. I feel reassured now. Probably a team of PhDs released all the jailbreaks for iPhone. Silly me.
post #38 of 43
Some of the nefarious actions listed here require explicit user approval via tapping yes on a confirmation dialog. The original article, however, does say the app works stealthily. Thus, I don't believe the app has access to the address book, for instance (edit: I was right, the Jekyll app explicitly asks for user permission to access the address book, see the link a few posts above). It can’t access your contacts, your camera roll, your location, etc. without using some previously undisclosed exploit (which I don't think they have, because finding such an exploit is worth way more in security research than creating trojan apps). Thus, the scope of maliciousness of this app is more limited than it may seem.

Apple steps up user privacy with every iOS version, and iOS 7 will dump the unique device identifier in the bin, while adding more fine-grained access controls for personal information. Also, you have to remember that to be successful a malware app has to be installed thousands of times. Good luck achieving that on the App Store with your knock-off functionality! But this isn’t the point.

The original article and AI’s article falsely lead you to believe that because Apple’s app tests are not good enough to stop this kind of malware, Apple can fight it by introducing lengthier, more thorough testing. This is plain bs. Objective-C, the language of iOS, is a highly dynamic language with first-class functions. This means chunks of code can be created on the fly. There’s no way to catch the malicious activity in an app if it isn’t there yet. There’s no way of telling if it will be there tomorrow. And it’s not only Obj-C: the same can be achieved in C as well with a few tricks. All you need in a programming language is direct memory access.

Thus, the only way for Apple to prevent malware on iOS is to give more power to the user in selecting what apps can and cannot do, meaning iOS 7 is a step in the right direction.
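The point about dynamic languages can be shown in a few lines. Here is a hypothetical Python analogy (Objective-C's performSelector: behaves similarly; the class and method names are made up): because the method name arrives as runtime data, no static scan of the call sites reveals which code will actually run.

```python
# Toy sketch: dynamic dispatch defeats a grep-style static review.
class NewsApp:
    def show_article(self):
        return "article"

    def read_contacts(self):
        # Never invoked by name anywhere in the source.
        return "contacts"

def handle(app, command: str):
    # getattr resolves the method at runtime from a string, so a
    # scan of the source finds no 'read_contacts(' call site at all;
    # the string could just as easily arrive from a server.
    return getattr(app, command)()

app = NewsApp()
```

This is why the reviewer can't enumerate an app's behavior from its code alone, and why per-capability user consent (as in iOS 7's finer-grained controls) is the more robust line of defense.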
Edited by ViktorCode - 8/18/13 at 12:30am
post #39 of 43
Clear evidence of selective abstraction, denial and delusional thinking
post #40 of 43
Quote:
Originally Posted by Quadra 610 View Post

My reaction:

So what?

ONE got past Apple.

Hundreds of thousands get into whatever they call the Android app store.

Since Jobs' return to Apple, malware has NEVER been a problem in any meaningful way, whether for OS X or any other Apple OS on an Apple device. Nothing has really ever materialized for anyone to be concerned about. And Apple reached saturation (and is pushing beyond it) years ago.

This is simply not newsworthy. Apple's been notified, and that's it.

 

Prove that only "ONE got past Apple." For all we know, other apps have been exploiting the vulnerability for ages; security services and criminals don't shout about it.

Prove that "Hundreds of thousands get into whatever they call the Android app store." Although what relevance that has to Apple, I'm not sure.

But of course you can't prove it. Here's some free advice: spend less time posting lies on the internet, and one day you may own as many Apple shares as I do.

As a shareholder and long-term Apple user I'm angry. Apple has been caught napping. It needs to beef up its testing; otherwise its only real justification for restricting general distribution of iOS applications is dead.
