Apple's approval of 'Jekyll' malware app reveals flaws in App Store review process

Posted in iPhone · edited January 2014
A group of researchers from Georgia Tech managed to get a malicious app past Apple's review process, finding the company may run only a few seconds' worth of tests before posting an app to the App Store.

Dubbed "Jekyll," the malicious software was uploaded to Apple's App Store in March to test the company's control measures, which dictate what apps are allowed to be distributed through the App Store, reports MIT Technology Review.

According to the research team responsible for creating the software, Apple was unable to distinguish dormant bits of code that would later be assembled into a malicious app. Once installed on a victim's device, Jekyll, disguised as a news delivery app, was able to post tweets, send email and text messages, access the phone's address book, take pictures, and direct Safari to a malicious website, among other nefarious actions.

"The app did a phone-home when it was installed, asking for commands," said Stony Brook University researcher Long Lu. "This gave us the ability to generate new behavior of the logic of that app which was nonexistent when it was installed."

Jekyll also had code built in that allowed the researchers to monitor Apple's testing process, which reportedly only ran the app for "a few seconds" before letting it go live on the App Store. Lu said the Georgia Tech team deployed Jekyll for only a few minutes, downloading and pointing the app toward themselves for testing. No consumers installed the app before it was ultimately taken down as a safety precaution.

"The message we want to deliver is that right now, the Apple review process is mostly doing a static analysis of the app, which we say is not sufficient because dynamically generated logic cannot be very easily seen," Lu said.
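Lu's point, that static analysis cannot easily see dynamically chosen logic, can be illustrated with a toy sketch. This is purely illustrative, not the actual Jekyll code (which relied on memory-corruption tricks far beyond this); all function names here are invented:

```python
# Toy illustration of command-driven "dormant" behavior (invented names).
# The risky capabilities exist as ordinary functions, but nothing in the
# program calls them directly: the target is picked at runtime from a
# string the command server sends, so a static call graph shows no path
# from the benign app logic to the dangerous handlers.

def show_news(arg):
    return f"news: {arg}"            # the benign facade

def read_contacts(arg):
    return "contacts read"           # dormant until commanded

def open_url(arg):
    return f"navigating to {arg}"    # dormant until commanded

HANDLERS = {f.__name__: f for f in (show_news, read_contacts, open_url)}

def phone_home(command, arg):
    # Looks like routine plumbing to a static analyzer; the malicious
    # edge in the call graph only materializes when the server replies
    # with, say, "read_contacts".
    return HANDLERS[command](arg)
```

A reviewer running the app briefly would only ever see `show_news` invoked, since the dangerous branches are taken only when the attacker's server says so.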

The research team wrote up its results in a paper that was scheduled for presentation on Friday at the USENIX conference in Washington, D.C.

Apple spokesman Tom Neumayr said the company took the research into consideration and has updated iOS to deal with the issues outlined in the paper. The exact specifics of these fixes were not disclosed, and no comment was made on the App Store review process.

Comments

  • Reply 1 of 42
    droidftw Posts: 1,009 member


    And so the game of whack-a-mole continues.

  • Reply 2 of 42
    Unfortunately, the story makes no mention of whether THIS research team had Apple's blessing to engage in this activity. Unlike poor Mr. Balic, the self-described "security researcher" who has been vilified for exposing serious issues with Apple's Dev Center website.

    http://appleinsider.com/articles/13/07/22/researcher-admits-to-hacking-apples-developer-site-says-he-meant-no-harm-or-damage

    It is sad to think that the Georgia Tech connection is adequate to insulate this group, involved in a potentially malicious activity, from the same criticism that an international programmer received.
  • Reply 3 of 42

    Quote:

    Originally Posted by TeaEarleGreyHot View Post

    Unfortunately, the story makes no mention of whether THIS research team had Apple's blessing to engage in this activity. Unlike poor Mr. Balic, the self-described "security researcher" who has been vilified for exposing serious issues with Apple's Dev Center website.

    http://appleinsider.com/articles/13/07/22/researcher-admits-to-hacking-apples-developer-site-says-he-meant-no-harm-or-damage

    It is sad to think that the Georgia Tech connection is adequate to insulate this group, involved in a potentially malicious activity, from the same criticism that an international programmer received.


    The difference being that this group used the application only on themselves, not real customers.  They didn't take 100,000 developer email addresses in the name of research.  In theory, they could have also used this opportunity to backdoor Apple while the app was actively running for testing.

  • Reply 4 of 42
    jragosta Posts: 10,473 member
    According to the research team responsible for creating the software, Apple was unable to distinguish dormant bits of code that would later be assembled into a malicious app. Once installed on a victim's device, Jekyll, disguised as a news delivery app, was able to post tweets, send email and text messages, access the phone's address book, take pictures, and direct Safari to a malicious website, among other nefarious actions.

    OK, so there's a way to bypass Apple's security. But notice that the app did not have any malicious code as submitted - the malicious code was reassembled in use. It's pretty hard to figure out how Apple (or anyone) could block an app which doesn't have malicious code when submitted.

    I guess they'll have to settle for just being 100,000 times more secure than their competition.
  • Reply 5 of 42
    sflocal Posts: 4,401 member


    So a security research firm has to create a scenario to test Apple's defenses. Unlike Android, which pretty much keeps its door wide open.

    I'll gladly accept Apple's "security issues" any day versus what Android does.

  • Reply 6 of 42
    I am glad they big-noted themselves at Apple's expense and made a whole pile of wannabe hackers aware of the flaw instead of just telling Apple.
  • Reply 7 of 42
    nagromme Posts: 2,834 member
    So Apple already fixed this according to Neumayr? Updating iOS to make such exploits fail even IF approved makes sense. Hope that's sufficient, because no approval process alone is ever enough. It's a great first line of defense, lacking on Android, but not the last!

    (The LAST line of defense is also lacking on Android--as long as devices aren't limited to the Play Store: disabling a malicious app once it's caught.)
  • Reply 8 of 42
    mhikl Posts: 471 member
    Note to Apple: This is what some of the 30% is for. Get with the times. Malicious do little sleep need. :no:
  • Reply 9 of 42
    bulk001 Posts: 434 member


    So, walled and insecure? 

  • Reply 10 of 42
    cynic Posts: 124 member

    Quote:

    Originally Posted by jragosta View Post

    OK, so there's a way to bypass Apple's security. But notice that the app did not have any malicious code as submitted - the malicious code was reassembled in use. It's pretty hard to figure out how Apple (or anyone) could block an app which doesn't have malicious code when submitted.

    I guess they'll have to settle for just being 100,000 times more secure than their competition.

    Well, what's interesting about this whole issue, and what got overlooked a bit, is the fact that the app does not actually gain any increased rights within the system.

    This demonstration has merely proven that the review process can be tricked, either by technical means such as in this example, or by pure luck, such as we have seen in the past through a number of questionable apps.

    And while it was possible to get around the approval criteria by injecting additional, potentially malicious code into the application post approval, note that the application itself did not bypass or disable any of iOS's security features. It is therefore subject to the same restrictions as other apps that actually get legitimately approved.

    The worst that could have happened is access to certain private APIs, but even those don't bypass security and the app would still run within its own sandbox, etc.

    This is very much unlike the competition. ;-)

  • Reply 11 of 42
    cynic Posts: 124 member

    Quote:

    Originally Posted by bulk001 View Post

    So, walled and insecure?

    The review process got tricked; no one is saying the system is insecure. In fact, not a single security mechanism of the system itself was cracked.

  • Reply 12 of 42
    jmncl Posts: 42 member

    Quote:

    Originally Posted by jragosta View Post

    It's pretty hard to figure out how Apple (or anyone) could block an app which doesn't have malicious code when submitted.

    They can't. But they can make sure that any downloaded code can't execute and, more importantly, can't cross the app sandbox.

    Normally that would be the case for any app. The problem here is that the hack exploited a bunch of iOS bugs to get around that. Apple just needs to patch those bugs.

    In fact, iMore is reporting that this hack already doesn't work on iOS 7.

  • Reply 13 of 42
    bulk001 Posts: 434 member

    Quote:

    Originally Posted by cynic View Post

    The review process got tricked; no one is saying the system is insecure. In fact, not a single security mechanism of the system itself was cracked.

    Oh please. If this was Android, BB or MS you would be all over how they had failed in their security. Just because it is Apple does not give them a pass. And before you accuse me of trolling, I own a company with over 150 Apple devices. Apple has sold us the walled garden as being more secure, and it turns out that if you know what you are doing, you can get around it. This time it was a security firm with no intent to do harm, but who knows who will "trick" them next time.

  • Reply 14 of 42
    jmncl Posts: 42 member

    Quote:

    Originally Posted by bulk001 View Post

    Oh please. If this was Android, BB or MS you would be all over how they had failed in their security.

    Oh please, Google Play's "security" gets circumvented almost every week. Did you hear about their "Bouncer" that was going to stop everything? It's a total sieve.

    So yes, Apple's walled system is more secure. This has been demonstrated year after year.

    The thing is, no company in the world offers 100% foolproof security against determined hackers; it's a continuous process.

    You also have to understand that in this case these people "who know what they are doing" have PhDs in security.

  • Reply 15 of 42
    gtr Posts: 3,231 member
    bulk001 wrote: »
    Apple has sold us the walled garden as being more secure

    Didn't you just answer your own post?

    Apple has never claimed 100% security. Nobody does.

    However, you're more than happy to try the other camp, if you dare.
  • Reply 16 of 42
    Marvin Posts: 14,195 moderator
    jragosta wrote: »
    OK, so there's a way to bypass Apple's security. But notice that the app did not have any malicious code as submitted - the malicious code was reassembled in use. It's pretty hard to figure out how Apple (or anyone) could block an app which doesn't have malicious code when submitted.

    It says the malicious code was submitted with the app but was dormant code. They perhaps split a malicious binary into parts and reassembled it to then execute it later, or maybe just didn't call the code directly. They said Apple needs to do dynamic rather than static testing of the submitted apps to fix the issue. The fact they don't run apps explains why some apps get through with crazy in-app purchases. Perhaps they need to hire some of their factory workers to manually run apps and see what they do in addition to the automated testing.

    It says here they get 26,000 submissions a week:

    http://articles.latimes.com/2012/mar/14/business/la-fi-tn-apple-26000-20120314

    Hire 300 staff, check 20 apps each manually per day. It can even be HQ staff.
    jragosta wrote: »
    I guess they'll have to settle for just being 100,000 times more secure than their competition.

    Until the review process is made more thorough, this method would allow any app to be submitted with malicious code. They obviously still have tighter security but this is a problem that needs to be fixed.

    There is some inherent damage limitation in the stores: apps are hard to find anyway, and apps that don't really do anything useful in the first place won't be installed much. But wherever possible, Apple should make things as secure as it can, and dynamic testing would help.
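    As a quick sanity check on the "300 staff, 20 apps each per day" estimate against the 26,000 weekly submissions cited above (the 5-day review week and 8-hour day are my assumptions, not from the comment):

    ```python
    # Back-of-the-envelope check of the manual-review staffing estimate.
    # Assumptions (mine): reviewers work a 5-day week, 8-hour day.
    submissions_per_week = 26_000
    staff = 300
    apps_per_reviewer_per_day = 20
    workdays = 5

    weekly_capacity = staff * apps_per_reviewer_per_day * workdays
    # Time budget per app at that pace:
    minutes_per_app = (8 * 60) / apps_per_reviewer_per_day
    ```

    Under those assumptions the capacity (30,000/week) does cover 26,000 submissions, but each app gets only about 24 minutes of reviewer time, which puts the "a few seconds of testing" finding in perspective.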
  • Reply 17 of 42
    jragosta Posts: 10,473 member
    Marvin wrote: »
    It says the malicious code was submitted with the app but was dormant code. They perhaps split a malicious binary into parts and reassembled it to then execute it later or maybe just didn't call the code directly. They said Apple needs to do dynamic rather than static testing of the submitted apps to fix the issue. The fact they don't run apps explains why some apps get through with crazy in app purchases. Perhaps they need to hire some of their factory workers to manually run apps and see what they do in addition to the automated testing.
    Until the review process is made more thorough, this method would allow any app to be submitted with malicious code. They obviously still have tighter security but this is a problem that needs to be fixed.

    There is some inherent damage limitation in the stores in that apps are hard to find anyway and apps that don't really do anything useful in the first place won't be installed much but wherever possible, Apple should make things as secure as they can and dynamic testing would help.

    Sure. So there's one security issue that affects iOS - but also would affect Android. And there are 100,000 security issues that affect Android and not iOS.

    Which is better?

    As for dynamic testing, that's nice in theory. In practice, it would have an enormous impact on system operation: the OS would be far larger and everything would be dog slow. I don't think that's worth the tradeoff. Apple will presumably settle for being 100,000 times more secure rather than 101,000 times more secure.
  • Reply 18 of 42
    jmncl Posts: 42 member

    Quote:

    Originally Posted by Marvin View Post

    It says the malicious code was submitted with the app but was dormant code. They perhaps split a malicious binary into parts and reassembled it to then execute it later or maybe just didn't call the code directly. They said Apple needs to do dynamic rather than static testing of the submitted apps to fix the issue.

    Hire 300 staff, check 20 apps each manually per day. It can even be HQ staff.

    Until the review process is made more thorough, this method would allow any app to be submitted with malicious code. They obviously still have tighter security but this is a problem that needs to be fixed.

    There is some inherent damage limitation in the stores in that apps are hard to find anyway and apps that don't really do anything useful in the first place won't be installed much but wherever possible, Apple should make things as secure as they can and dynamic testing would help.

    That's a misunderstanding of the problem. It's not possible to do "dynamic testing" of downloadables, because they can change at any time. The bad guys could easily leave "safe" downloadables for a few weeks while Apple reviewed the app and then change to malicious ones when it went live.

    No amount of staff in the world reviewing apps would change this.

    What Apple should fix is to make sure iOS does not allow apps to attack themselves via buffer overflows (the method used in this "Jekyll" attack) and trigger dormant code that way.

    iOS already uses Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) methods that would prevent this, but the researchers found a weakness in Apple's ASLR that allowed them to still guess the addresses of their malicious code.

    The ASLR protections are exactly what Apple needs to improve to stop this from happening again. If an attacker can't guess the addresses of their malicious code they can't execute it.
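    A toy simulation of why effective ASLR defeats this kind of attack. This is not iOS code; the base address range, page size, offset, and entropy values are all made up for illustration. The idea: if code is loaded at a freshly randomized base on every launch, a hardcoded guess at the address of dormant code almost never lands.

    ```python
    import random

    # Toy model of ASLR (illustrative only, not real iOS internals):
    # a "module" is loaded at a random page-aligned base on each launch,
    # so the absolute address of any code inside it moves around.
    PAGE = 0x1000
    GADGET_OFFSET = 0x4F20   # hypothetical offset of the dormant code

    def load_base(entropy_bits):
        """Pick a page-aligned base from 2**entropy_bits possible slots."""
        slot = random.randrange(2 ** entropy_bits)
        return 0x10000000 + slot * PAGE

    def attack(hardcoded_guess, entropy_bits, launches=10_000):
        """Fraction of launches where a fixed address guess hits the gadget."""
        hits = 0
        for _ in range(launches):
            base = load_base(entropy_bits)
            if hardcoded_guess == base + GADGET_OFFSET:
                hits += 1
        return hits / launches

    # With zero entropy (no ASLR), the attacker's guess works every time;
    # with even 16 bits of randomization it almost never does.
    guess = 0x10000000 + GADGET_OFFSET
    no_aslr = attack(guess, entropy_bits=0)
    with_aslr = attack(guess, entropy_bits=16)
    ```

    The Jekyll researchers' contribution, per the comment above, was finding a way to learn the randomized addresses rather than guess them blindly, which is why strengthening the randomization (and preventing the leak) matters more than adding reviewers.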

  • Reply 19 of 42
    quadra 610 Posts: 6,743 member


    "A group of researchers from Georgia Tech managed to get a malicious app past Apple's review process, finding the company may run only a few seconds' worth of tests before posting an app to the App Store."


     


     


     


    And yet . . . 


     


     


     


     


     


    Yeah. That's right.

  • Reply 20 of 42


    Is it really any surprise that someone is able to make a legitimate-looking app and bury some code in there that only activates after some time has passed or some criterion is met? It's called a trojan for a reason. Famed security researcher Charlie Miller proved the exact same thing (and subsequently got himself banned from the App Store) by writing a fake finance app.

    The part that IS disturbing to me is that I thought the Contacts and such were supposed to be more or less firewalled until you explicitly give an app permission. What's up with that? It's a bit disturbing to think that a rogue app could start sending emails and such. What is this, Android?!
