Transitive supplying Apple with foundation for Rosetta technology

Posted in macOS, edited January 2014
Transitive Technologies confirmed Tuesday that it is providing Apple with technology that allows old Macintosh software programs to run on computers with Intel processors.



The Los Gatos, Calif.-based company of 65 employees has been working with Apple on Rosetta, a dynamic binary translator that runs PowerPC code on Intel-based Macs transparently to the user and without a significant speed decrease.



"Steve was nice enough to recognize a relationship with us,'' Bob Wiederhold, CEO of Transitive, told The Mercury News. "Our efforts involve integrating our technology into their system software.''



Rosetta is an integral piece of Apple's transition to Intel-based Macs because it means consumers won't have to throw away their old software when they buy a new computer from Apple with Intel chips.



Transitive's translation software reportedly consists of three parts: the first is a decoder, which takes the code of the PowerPC binary and converts it into an intermediate format. Next, a "core processing engine" works on that intermediate format, optimizing how fast the older software can run in its new form.



The final part converts the software into a form that runs on the Intel-based Mac. "This software sits on the computer, in this case an Apple computer that uses Intel chips," writes The Mercury News. "Whenever a consumer clicks on an old Apple program loaded onto the computer, the translation software starts. It translates and leaves the final code stored in the system's main memory chips. If the consumer uses that software again, the machine can run the translated code from memory."
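The decode → intermediate format → host code pipeline, with translated code kept in memory for reuse, can be sketched in miniature. Everything below is invented for illustration (the opcodes, the "IR" names, the cache keyed by a block address); a real translator like Rosetta emits native machine code, not Python closures.

```python
from typing import Callable, Dict, List, Tuple

# A hypothetical source "instruction": (opcode, operand, operand)
Instr = Tuple[str, int, int]

def decode(binary: List[Instr]) -> List[Instr]:
    """Stage 1: decoder -- convert the source binary to an intermediate format."""
    return [("IR_" + op.upper(), a, b) for (op, a, b) in binary]

def codegen(ir: List[Instr]) -> Callable[[], int]:
    """Stage 3: convert the intermediate form into something the host can run.
    A real translator emits machine code; here we just build a closure."""
    def run() -> int:
        acc = 0
        for (op, a, b) in ir:
            if op == "IR_ADD":
                acc += a + b
            elif op == "IR_MUL":
                acc += a * b
        return acc
    return run

# "It translates and leaves the final code stored in ... memory."
translation_cache: Dict[int, Callable[[], int]] = {}

def execute(block_addr: int, binary: List[Instr]) -> int:
    """Translate a block on first use, then rerun the cached translation."""
    if block_addr not in translation_cache:
        translation_cache[block_addr] = codegen(decode(binary))
    return translation_cache[block_addr]()

program = [("add", 2, 3), ("mul", 4, 5)]
print(execute(0x1000, program))   # first call translates, then runs: 25
print(execute(0x1000, program))   # second call reuses the cache: 25
```

The point of the cache is the article's last sentence: the cost of decoding and code generation is paid once per block, after which reruns go straight to the translated form.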



Such translation software has reportedly existed for decades, but because memory used to be a scarce commodity, the translation usually slowed the computer significantly.



According to Wiederhold, the technology uses only about 25 percent more memory to run the PowerPC binaries on Intel chips when compared to the original application. He said his team figured out how to perform translation "at roughly 70 percent to 80 percent of the speed at which it ran on the original computer."

Comments

  • Reply 1 of 15
    So the translated app would run, "at roughly 70 percent to 80 percent of the speed at which it ran on the original computer."



    roughly? so it could run at 50 or 40 percent?
  • Reply 2 of 15
Booga Posts: 1,082 member
    Quote:

    Originally posted by Mike Moscow

    So the translated app would run, "at roughly 70 percent to 80 percent of the speed at which it ran on the original computer."



    roughly? so it could run at 50 or 40 percent?




    Or, if it spends most of its time in Apple libraries and Apple allows those to run native, 80-90%.



    Also, considering the speed of Intel chips these days, probably faster than the fastest Mac today.
  • Reply 3 of 15
mithi Posts: 5 member
    http://www.apple.com/quicktime/qtv/wwdc05/



    this is steve jobs telling you about rosetta at his developer conference
  • Reply 4 of 15
    QUOTE: "If the consumer uses that software again, the machine can run the translated code from memory."



    Excellent news!!! So just pump it up to 8 GB of memory! (just kidding... it should be fine with 1 GB?)
  • Reply 5 of 15
Louzer Posts: 1,054 member
    Quote:

    Originally posted by Booga

    Or, if it spends most of its time in Apple libraries and Apple allows those to run native, 80-90%.



    Also, considering the speed of Intel chips these days, probably faster than the fastest Mac today.




    "Apple allows"? What's that to mean. That apple may not allow some software to run that fast? And since this is all marketing, I would easily expect slower than 70%. And average more in the 50-60% the original poster said. I've heard too many speed claims about software or hardware only to realize they're always talking, esp. early on, about theoreticals and best-cases, not real-life.



    And notice they don't say WHAT computer they're saying it will be 80% of: my iMac G5, your dual G4, or some 2.7GHz dual G5? So it might be 80% of the high end, which is 40% or less of the low end.



    And to say it'll be faster than the fastest Mac today, that's iffy at best. I'd first want to see how OS X runs on one of those Intel Macs before making some rash judgment that it's going to be faster because, I don't know, it's got a higher clock speed or something.



    All I know is I have a 3.2GHz Dell on my desk, and the time it takes to do some simple tasks (open Windows Explorer, click on a mail) at times is downright infuriating. And MS has had 10 years to work on optimizations. And while Apple's been keeping OS X dual-platform, does that mean they've spent the time to dual-optimize it? And OS X is slow to begin with (unless you're working with a dual G5, which is a lot faster than a single 3.8GHz Pentium), with a LOT of overhead.



    Oh, and if they were smart, they'd take the translated code and shove it into a cache somewhere, so if you boot up tomorrow, they don't have to re-translate it again.
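    The persistent-cache idea suggested here (and which FX!32, mentioned below, reportedly did) can be sketched as: key each translation by a hash of the original binary and write it to disk, so a reboot doesn't force retranslation. Everything in this sketch is invented for illustration; the "translation" step is just a stand-in transformation.

```python
import hashlib
import json
import tempfile
from pathlib import Path

# A fresh cache directory for this demo run; a real system would use a
# fixed, persistent location so translations survive reboots.
CACHE_DIR = Path(tempfile.mkdtemp(prefix="dbt_cache_demo_"))

def load_translation(binary: bytes):
    """Return (translated, was_cached), reusing a disk entry if one exists."""
    key = hashlib.sha256(binary).hexdigest()
    path = CACHE_DIR / (key + ".json")
    if path.exists():
        return json.loads(path.read_text()), True   # no retranslation needed
    translated = {"code": binary.decode().upper()}  # stand-in "translation"
    path.write_text(json.dumps(translated))
    return translated, False

first, hit1 = load_translation(b"some powerpc block")
second, hit2 = load_translation(b"some powerpc block")
print(hit1, hit2)   # False True -- translate once, then reuse from disk
```

    Keying by a content hash rather than a file path also means an updated binary gets retranslated automatically, since its hash changes.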
  • Reply 6 of 15
nagromme Posts: 2,834 member
    Quote:

    Originally posted by Louzer

    "Apple allows"? What's that to mean. That apple may not allow some software to run that fast?



    He didn't mean "allows" that speed; I think he meant "allows" those components to run natively.



    Quote:

    And notice they don't say WHAT computer they're saying it will be 80% of: my iMac G5, your dual G4, or some 2.7GHz dual G5? So it might be 80% of the high end, which is 40% or less of the low end.



    They didn't say 80% of a current Mac's speed, they said 80% of native Intel speed--at least that's how I read it. It will vary, though.



    Quote:

    And to say it'll be faster than the fastest Mac today, that's iffy at best.



    Yes, it is--that would be a long shot. But it will be USABLY fast--and on the fastest 2007 PowerMac it MAY be faster at many things than current top PowerMacs. And usable is all that's needed, since Rosetta is merely a fallback to tide people over on certain apps that don't go native right away. Most major apps will, however--there's time--and so Rosetta won't be the main way you use your apps at all. Remember that the need for top speed is in Power Macs, and they don't change over until 2007, supposedly.



    Quote:

    All I know is I have a 3.2GHz Dell on my desk, and the time it takes to do some simple tasks (open windows explorer, click on a mail) at times is downright infuriating. And MS has had 10 years to work on optimizations. And while Apple's been keeping OS X dual-platform, does that mean they've spent the time to dual-optimize it?



    On the same hardware, Windows will be faster at some things and OS X at others. But no Mac will have a 2.3GHz Pentium 4--or any Pentium 4 at all, most likely. They'll have what Intel is offering in 2006 and 2007. And your Windows PC may have specific problems, too, that shouldn't be used to judge OS X.



    I'm sure Apple has maintained quality control over OS X on Intel--otherwise they'd simply dig themselves into a 5-year hole. Considering that Steve did all his keynote demos--lots of apps--on Intel, I'd say they didn't do that to themselves. And they have another year to keep optimizing if they need to.



    And don't forget: OS X on Intel has been around LONGER than Windows XP. Apple's had 5 years to optimize OS X on Intel. Microsoft has had less than that. And building on Win 95, ME, etc. isn't necessarily an advantage for speed: that's legacy bloat.
  • Reply 7 of 15
schmidm77 Posts: 223 member
    Wasn't there a system similar to this for the DEC Alpha version of NT 4 that converted x86 software to native Alpha code? If I remember correctly, each time an app was run, the converter would produce something a bit more optimized than the last, until eventually the converted app ran at near-native speed. I remember it also did a permanent conversion so that it didn't have to start over when the machine was rebooted.



    My memory's a little vague right now though.



    edit: just did a google search - yep it was called FX!32.
  • Reply 8 of 15
Macvault Posts: 323 member
    Seems like I read on Slashdot or somewhere just about a week ago that Transitive is going out of business or something like that. Or am I smoking something?
  • Reply 9 of 15
Mr. Me Posts: 3,221 member
    Quote:

    Originally posted by schmidm77

    Wasn't there a system similar to this for the DEC Alpha version of NT 4 that converted x86 software to native Alpha code? If I remember correctly, each time an app was run, the converter would produce something a bit more optimized than the last, until eventually the converted app ran at near-native speed. I remember it also did a permanent conversion so that it didn't have to start over when the machine was rebooted.



    My memory's a little vague right now though.



    edit: just did a google search - yep it was called FX!32.




    FX!32 is supposed to run x86 code at 70% of the host (Alpha) speed.
  • Reply 10 of 15
jarodsix Posts: 8 member
    Quote:

    And don't forget: OS X on Intel has been around LONGER than Windows XP.



    WinXP = Win NT 5.1

    Win2k = Win NT 5.0

    Win NT4



    So I would say M$ had a much longer time to optimize their NT technology (much of today's WinXP is just NT wrapped in new graphics and optimized, plus support for new tech).



    Though NeXTStep and OpenStep were also on x86 back then.
  • Reply 11 of 15
aslan^ Posts: 599 member
    Quote:

    Originally posted by Macvault

    Seems like I read on Slashdot or somewhere just about a week ago that Transitive is going out of business or something like that. Or am I smoking something?



    You'd be thinking of Transmeta--they made cool, energy-efficient chips that nobody wanted.
  • Reply 12 of 15
schmidm77 Posts: 223 member
    Quote:

    Originally posted by Mr. Me

    FX!32 is supposed to run x86 code at 70% of the host (Alpha) speed.



    Significantly better than just running it through an emulator or virtualized environment every time. I wonder why nobody else has done anything with this sort of technology, as there is only so far you can push the performance of hardware emulation.
  • Reply 13 of 15
junkyard dawg Posts: 2,801 member
    Sounds awesome.



    I figure even if they can get Rosetta running PPC code at 50% of PPC performance, that will be fine. Consider this: most Mac users are running between 1 GHz G4s and 2 GHz G5s. Intel's Pentium 4 is at 3.6 GHz a full year before the first MacIntel ships. So for many Mac users who upgrade to a MacIntel, their Intel-based Mac will run their apps faster in emulation than their old 1 GHz G4 Powermac did natively!



    If the talk of dual core Pentiums pans out, then it may very well come to pass that, except for Altivec code, Rosetta is faster than using a PPC-based Mac.



    Also, Transitive and Apple have a full year to further optimize Rosetta. If it's already acceptable now, we might be in for a surprise when MacIntels finally ship!
  • Reply 14 of 15
xool Posts: 2,460 member
    I would expect that a dual core processor could run a translated app pretty fast. One thread could do lookahead translation on one core and the other core could execute the translated code at full speed. Guess we'll just have to wait and see though.
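    The two-core scheme described here--one thread translating ahead while the other executes--has the shape of a producer/consumer pipeline. A toy sketch, using threads in place of dedicated cores; the block contents and the "translation" are invented for illustration:

```python
import queue
import threading

def translator(blocks, ready):
    """'Core 1': translate blocks ahead of execution, feeding a bounded queue."""
    for block in blocks:
        ready.put(lambda b=block: b * 2)   # stand-in for translated code
    ready.put(None)                        # sentinel: nothing left to translate

def executor(ready, results):
    """'Core 2': execute translated blocks as soon as they are available."""
    while True:
        translated = ready.get()
        if translated is None:
            break
        results.append(translated())

ready = queue.Queue(maxsize=4)   # bounded lookahead window
results = []
t = threading.Thread(target=translator, args=([1, 2, 3, 4], ready))
e = threading.Thread(target=executor, args=(ready, results))
t.start(); e.start()
t.join(); e.join()
print(results)   # [2, 4, 6, 8] -- executed in translation order
```

    The bounded queue is the "lookahead" part: translation can run at most a few blocks ahead, so memory stays small while execution rarely stalls waiting for a translation.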
  • Reply 15 of 15
strobe Posts: 369 member
    ABI-level emulation is nothing new. QEMU can do this.



    The main question for me is whether you have to translate all the PowerPC libs as well, or whether there is some magical way for function arguments to be passed in a generic way in Mach-O's ABI. Also it isn't clear whether x86 apps can load PowerPC libs--for example, can Photoshop x86 load PowerPC plugins?



    I suspect not.