MacBidouille posts PPC 970 benchmarks


Comments

  • Reply 541 of 665
    johnsonwax Posts: 462 member
    Quote:

    Originally posted by Ed M.

    You gotta give them the whole skinny in layman's terms, man ;-). You should have also mentioned that the reason most 3D render developers use double-precision FP is that it covers up a lot of shoddy and sloppy algorithms.



    Well, this is quite true.



    Most rendering shouldn't need this kind of precision. Usually it's demanded when the algorithm can't prevent small precision errors from propagating forward through a calculation chain. Avoiding these is usually pretty straightforward - just a little math, a little reasoning.



    I had a physics professor who *loved* putting problems on exams that would overflow your calculator, demanding that you do a little algebra or such just to keep everything in check. You could expect one such problem in every homework set and on every exam. It served as a nice reminder that math != calculation.



    Any programmer who really knows their math should be able to work their way out of these problems with little trouble. In the creative market, SP really should be sufficient.
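    To make that concrete, here's a quick C sketch - toy coefficients of my own, nothing from a real renderer - of how "a little algebra" rescues single precision. The textbook quadratic formula cancels catastrophically on the small root of x^2 - 10000x + 1 = 0, while an algebraic rearrangement of the same formula stays accurate:

    ```c
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Toy coefficients chosen so the small root (~1e-4) is hard on floats. */
        float a = 1.0f, b = -10000.0f, c = 1.0f;
        float d = sqrtf(b * b - 4.0f * a * c);

        /* Textbook formula: subtracts two nearly equal numbers, so nearly all
         * of the significant bits cancel - in single precision this can come
         * out as plain 0. */
        float naive = (-b - d) / (2.0f * a);

        /* "A little algebra": compute the well-conditioned large root first,
         * then get the small one from the product of the roots (x1*x2 = c/a). */
        float big    = (-b + d) / (2.0f * a);
        float stable = c / (a * big);

        printf("naive  root: %g\n", naive);   /* badly wrong                      */
        printf("stable root: %g\n", stable);  /* ~1e-4, good to float's precision */
        return 0;
    }
    ```

    Same data, same single precision - the only difference is rearranging the math so the cancellation never happens.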



    That said, there are whole classes of problems that really do require DP because the data feeding in or out is DP. If Apple wants to get into the scientific/engineering arena - which they do seem to want to - then DP is much more important. Of course, the more I think about it, perhaps it's not critically important, since once you need more than SP, in many cases you need more than DP as well. For those cases you're going to have arbitrary-precision code, which will be slow but will always work. The real question, then, is who needs very fast, cost-effective DP performance but rarely anything beyond DP.
  • Reply 542 of 665
    programmer Posts: 3,503 member
    Quote:

    Originally posted by johnsonwax

    Of course, the more I think about it, perhaps it's not critically important, since once you need more than SP, in many cases you need more than DP as well. For those cases you're going to have arbitrary-precision code, which will be slow but will always work. The real question, then, is who needs very fast, cost-effective DP performance but rarely anything beyond DP.



    Single precision has a 23-bit mantissa (24 significant bits, counting the implicit leading 1), an 8-bit exponent, and the sign bit. That's only about 16.7 million distinct mantissa values across 256 exponents, which really isn't a whole lot when you think about it. Consider measuring distance, for example, if you need accuracy of about a millimeter. The largest distance you could measure to that accuracy with single precision would be about 16.7 kilometers. This might seem pretty good at first, but if you want a 3D model of a city you're already starting to push the limit. There are ways to mitigate this limitation, but representing a whole country at this resolution becomes troublesome and a fair bit of fiddling about.



    Now consider double precision: a 53-bit mantissa (52 stored bits plus the implicit one) and an 11-bit exponent, plus the sign bit. That gives you roughly half a billion (2^29) times as many mantissa values, and 2048 exponents. At the same millimeter precision as above you can now represent a distance of about 9,000,000,000 kilometers. Instead of struggling to represent the size of a country, you can now consider representing our entire solar system (except for a few comets -- http://www.lhs.berkeley.edu/iu/hou/houMS01/SolSys.html).
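    If you want to check those back-of-the-envelope figures, here's a tiny C sketch of my own that just multiplies out the significand widths from <float.h> - the farthest distance each format can count off in whole millimeters:

    ```c
    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* Largest count of millimeters each format can represent exactly:
         * 2^24 for float (24-bit significand), 2^53 for double (53-bit). */
        double float_mm  = (double)(1L  << FLT_MANT_DIG);   /* 16,777,216 mm     */
        double double_mm = (double)(1LL << DBL_MANT_DIG);   /* ~9.0e15 mm        */

        printf("float : %.1f km\n", float_mm  / 1e6);        /* ~16.8 km          */
        printf("double: %.3e km\n", double_mm / 1e6);        /* ~9.0e9 km         */
        return 0;
    }
    ```

    Past those counts the spacing between adjacent values grows beyond a millimeter, which is exactly where the fiddling about starts.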





    Interestingly, this same logic applies to the size of address spaces. People see that the migration from 8-bit to 16-bit to 32-bit address spaces went fairly quickly. The first had 256 bytes and was almost immediately unusable for anything sophisticated. The second got us 65,536 bytes (256 times the 8-bit space) and lasted a while before workarounds were needed (and even then, who could possibly need more than 640K?). The third has put us up to about 4,300,000,000 bytes (64K times the 16-bit space). The naive person naturally speculates that we'll quickly outgrow the 64-bit address spaces that are arriving in desktop machines this year. What they don't understand is that these new machines can address about 18,400,000,000,000,000,000 bytes. This is 4 billion times the current amount of space. That means that if each webpage in Google's cache of the Internet (currently 3 billion pages) was 1 MB then it would have to grow to about 6000 times its current size to fill the address space of a single 64-bit machine. From a 100 MB/sec ATA drive (assuming you had one that big) it would take almost 6,000 years to read all the data in.
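    The arithmetic is easy to redo if you're skeptical - a trivial C sketch (the 3-billion-page and 100 MB/sec figures are just the assumptions above):

    ```c
    #include <stdio.h>

    int main(void) {
        double space64     = 18446744073709551616.0;  /* 2^64 bytes               */
        double space32     = 4294967296.0;            /* 2^32 bytes               */
        double cache_bytes = 3e9 * 1e6;               /* 3 billion pages at 1 MB  */
        double disk_rate   = 100e6;                   /* 100 MB/sec ATA drive     */

        printf("64-bit vs 32-bit space: %.2e times larger\n", space64 / space32);
        printf("growth to fill it     : ~%.0fx Google's cache\n", space64 / cache_bytes);
        printf("time to read it all   : ~%.0f years\n",
               space64 / disk_rate / (365.25 * 24 * 3600));
        return 0;
    }
    ```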
  • Reply 543 of 665
    johnsonwax Posts: 462 member
    Quote:

    Originally posted by Programmer



    Now consider double precision: a 53-bit mantissa (52 stored bits plus the implicit one) and an 11-bit exponent, plus the sign bit. That gives you roughly half a billion (2^29) times as many mantissa values, and 2048 exponents. At the same millimeter precision as above you can now represent a distance of about 9,000,000,000 kilometers. Instead of struggling to represent the size of a country, you can now consider representing our entire solar system (except for a few comets -- http://www.lhs.berkeley.edu/iu/hou/houMS01/SolSys.html).




    Yeah, it's a pretty big improvement, but in physics and engineering it's not really that hard to blow away even those numbers.



    Again, it's the problem of small errors propagating forward, for example in discrete-time simulations. These can be cleverly worked around in controlled cases, but not always. Some areas are particularly sensitive to it, such as discrete-time dynamic systems modeling - computational fluid dynamics, etc. - mainly because you're routinely bashing very small and relatively large values into each other over huge numbers of iterations. Hell, I ran into these problems as an undergraduate 15 years ago. Optimizing the code was lots of fun, but in most cases I never found my way back into DP space.
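    A trivial illustration in C (toy numbers of my own, nothing like real CFD) of how quickly that bites: just accumulating a small time step over a lot of iterations already shows single precision drifting while double barely notices:

    ```c
    #include <stdio.h>

    int main(void) {
        const int    steps = 10000000;   /* ten million time steps              */
        const double dt    = 0.001;      /* 1 ms step - not exact in binary     */
        float  t_f = 0.0f;
        double t_d = 0.0;

        for (int i = 0; i < steps; ++i) {
            t_f += (float)dt;            /* each add re-rounds to 24 bits...    */
            t_d += dt;                   /* ...or to 53 bits                    */
        }

        /* Both should be 10000 seconds.  The float clock typically drifts by a
         * visible amount; the double clock is off only far out in the decimals. */
        printf("float : %f\n", t_f);
        printf("double: %f\n", t_d);
        return 0;
    }
    ```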



    A whole pile of problems will fit nicely in DP but not in SP; I'm just not sure how many of those also spill outside of DP. While some gains can be made with strong hardware support for DP, in the more general case a lot may still need to stay in software. That was my experience, at least.



    So, while DP Altivec might be great for some limited markets, it might really be too limited for the overall scientific community to bother coding for. Perhaps it's better to seek a different solution - running over a cluster, or just buying some big-iron time to get the needed performance - rather than trying to justify the silicon R&D.
  • Reply 544 of 665
    johnsonwax Posts: 462 member
    Quote:

    Originally posted by Gizzmonic

    LBB, how can you argue that a decent-performing PowerPC CPU would be an impetus to migrate to x86? x86 should be (and is) a last-ditch option, for use only if the PowerPC can't keep up at all.



    (snip)



    The 970 will bring Macs into performance parity with PCs for ALL tasks, not just a few Photoshop filters. There's simply no need to see Mac OS on x86. If that happened, Apple would end up like also-rans OS/2 and BeOS.




    No, a decent-performing PPC is a precondition for introducing an additional hardware platform. Keep in mind that right now, Apple probably has one of the larger application groups in the world. To repeat: Apple's model must be one that minimizes profits on systems in order to maximize sales (and therefore profits) on software and accessories (iPods, etc.). Apple is making pretty good money on the PC iPods. Apple could make pretty good money on non-PPC OS X. Note that Apple substantially dropped price points in January.



    Apple cannot, however, afford to lose system sales, and Apple doesn't yet have the sales and profits coming out of the software group to carry the company. OS X for x86-32 simply won't happen. OS X for x86-64 just might, however. It establishes two things:



    1) Apple as the clear owner of the affordable 64-bit space. I guarantee that Apple can move developers faster than MS can.



    2) A low-risk enterprise channel. Enterprise customers can buy x86-64 boxes from Apple and run Windows instead of OS X if need be. Since most of them are site-licensing Windows anyway, there's really little added cost. Apple potentially gets an OS X user it otherwise might not have, and in the worst case it sells a box and never sees the profits from software and accessories. Either way, Apple gets in the door.



    Of course, unless PPC can match the performance, who wouldn't just buy x86-64 once the apps are all there? (PPC will almost always be faster provided the OS takes advantage of Altivec.) Long term, Apple probably doesn't care. This is where Apple can get out of the business of propping up the PPC market because, well, if it dies, so what? It puts the burden on IBM to keep PPC alive against x86-64 and on AMD to keep x86-64 alive against PPC. Intel seems too busy watching the clouds float by... So long as Intel doesn't win the whole pie, Apple is okay. Of course, Apple can't afford to split the market such that both go under either, but this whole thing is pitched as making inroads against Intel, so as long as that happens it'll be fine.



    Ignore x86-32. You're right, that's a lost cause. Consider only x86-64 not as a replacement but as an addition. Xserves, Powermacs, etc. Then reconsider your arguments. Don't be shocked if Opteron systems come out of Apple at WWDC. Don't be shocked if they don't, either. Sooner or later, Apple will make a move like this and this really does look like the best possible time to do it, IMO.
  • Reply 545 of 665
    lemon bon bon Posts: 2,383 member
    Quote:

    Apple doesn't yet have the sales and profits coming out of the software group to carry the company.



    Shrewdly stated in an insightful post.



    Apple is headed towards OS hardware-independence long term. I think four years in the MHz wilderness has probably taught Apple a lesson or two. It can't continue to hang onto the tit of M$, Moto... or IBM long term, for that matter. Apple's many problems... have largely been self-inflicted.



    As a sole supplier, IBM will do fine for the next few years.



    As for 'Wintel'/Apple and 64-bit: if Wintel are starting from scratch on the next floor up... that gives Apple a fine opportunity to enter a huge market.



    There is a huge perception problem surrounding Apple. Once they say they 'support' the Wintel market - in a similar or different manner to how they ratcheted up the iPod for PC... Imagine software and services for Apple from PC sales at the current iPod ratio. Apple would be close to a billion in turnover a year. But Apple aren't at that point yet. They need to secure the PPC position first. Tread carefully. Put the pieces in place, like the iTunes for Wintel music store. These are all parts of the puzzle.



    It is at this nexus of 64-bitness that Apple has finally come to its strongest point in years... and Wintel to its weakest and most divided. It will be interesting to see what Wintel support, if any, Panther has. Macs already get to play nice in Wintel networks. What next?



    RealPC is back on the scene. Strange, that. And they're working verrrry closely with Apple. A dual 970 gives 1 GHz Pentium III performance... I bet. That would be quite good.



    Imagine what you could emulate with dual 980s. Emulation is only a small part of an overall strategy to grow revenue. I've never seen Apple this divergent, and I bet it will continue.



    Apple will no longer cower in the shadow of M$'s threats to withdraw 'Office'. Apple will introduce its own. Apple has shown with the iPod that it can compete with and beat Wintel on their own turf.



    Lemon Bon Bon
  • Reply 546 of 665
    lemon bon bon Posts: 2,383 member
    Quote:

    No, a decent-performing PPC is a precondition for introducing an additional hardware platform. Keep in mind that right now, Apple probably has one of the larger application groups in the world. To repeat: Apple's model must be one that minimizes profits on systems in order to maximize sales (and therefore profits) on software and accessories (iPods, etc.). Apple is making pretty good money on the PC iPods. Apple could make pretty good money on non-PPC OS X. Note that Apple substantially dropped price points in January.



    I liked this bit.



    ...and you're right... they can't intro' x86 support. At least, not yet. The PPC line ain't off life support, but I think Tower sales in early 2004 will prob' look a lot better.



    More likely, an AMD/Intel 64-bit alliance is forged at some point.



    The good thing is that Wintel will do the grunt work to enable Apple to take them on...



    I'm curious how my private bets will play out over the next several years.



    Lemon Bon Bon



    Quote:

    Consider only x86-64 not as a replacement but as an addition. Xserves, Powermacs, etc. Then reconsider your arguments. Don't be shocked if Opteron systems come out of Apple at WWDC. Don't be shocked if they don't, either. Sooner or later, Apple will make a move like this and this really does look like the best possible time to do it, IMO.



    I liked that one too. It's not an 'either/or' proposition. It's choice. It's about expanding the Apple universe and profits. The iPod has shown Apple can do this without compromising the Mac userbase.





    Quote:

    Don't be shocked if Opteron systems come out of Apple at WWDC. Don't be shocked if they don't, either.



    I should imagine that an Opteron could emulate a 1 GHz G4 in its sleep. That could protect a lot of software investments...
  • Reply 547 of 665
    rhumgod Posts: 1,289 member
    Quote:

    Originally posted by Programmer

    That means that if each webpage in Google's cache of the Internet (currently 3 billion pages) was 1 MB then it would have to grow to about 6000 times its current size to fill the address space of a single 64-bit machine. From a 100 MB/sec ATA drive (assuming you had one that big) it would take almost 6,000 years to read all the data in.



    Interesting analogy... I never thought of it in those terms of loading memory from disk (where the obvious bottleneck is). Can you imagine the performance of loading an entire OS (à la Panther) into NVRAM and just running it from there, kind of like a thin client with no disk? You could almost set up the same "file system" in memory that you have on disk, and almost rid yourself of the need for HDDs entirely.



    Problem is, NVRAM is WAAAAAAAY too expensive for an OS that large. I wonder if anyone is developing anything in this direction?
  • Reply 548 of 665
    programmer Posts: 3,503 member
    Quote:

    Originally posted by Rhumgod

    Interesting analogy... I never thought of it in those terms of loading memory from disk (where the obvious bottleneck is). Can you imagine the performance of loading an entire OS (à la Panther) into NVRAM and just running it from there, kind of like a thin client with no disk? You could almost set up the same "file system" in memory that you have on disk, and almost rid yourself of the need for HDDs entirely.



    Problem is, NVRAM is WAAAAAAAY too expensive for an OS that large. I wonder if anyone is developing anything in this direction?




    Considering that consumer camera flash memory cards are expected to hit 8 GB this year or next, I'd say that is one possibility. The OS itself isn't very large, however; it's the data the OS manipulates that can get massive. Fast, solid-state "file system" devices do exist, but it doesn't look like they will catch up with the ol' spinning magnetic disk any time soon.
  • Reply 549 of 665
    programmerprogrammer Posts: 3,503member
    Quote:

    Originally posted by johnsonwax

    Yeah, it's a pretty big improvement, but in physics and engineering it's not really that hard to blow away even those numbers.





    Absolutely, but my point was that double is exponentially better than single precision, and that is sufficient for a very large class of problems, especially if you don't squander your precision in needless ways. It will always be possible to exceed any fixed precision, however, and that is why things like Mathematica use variable-precision numerical representations.
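    "Exponentially better" is meant literally, by the way - every extra significand bit halves the relative rounding error. A couple of lines of C make the gap obvious (the values come straight out of <float.h>):

    ```c
    #include <stdio.h>
    #include <float.h>

    int main(void) {
        printf("float : %2d significand bits, epsilon = %g\n", FLT_MANT_DIG, FLT_EPSILON);
        printf("double: %2d significand bits, epsilon = %g\n", DBL_MANT_DIG, DBL_EPSILON);
        printf("double's relative precision is %.0f times finer\n",
               (double)FLT_EPSILON / DBL_EPSILON);   /* 2^29, about 537 million */
        return 0;
    }
    ```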
  • Reply 550 of 665
    rhumgod Posts: 1,289 member
    Quote:

    Originally posted by Programmer

    Considering that consumer camera flash memory cards are expected to hit 8 GB this year or next, I'd say that is one possibility. The OS itself isn't very large, however; it's the data the OS manipulates that can get massive. Fast, solid-state "file system" devices do exist, but it doesn't look like they will catch up with the ol' spinning magnetic disk any time soon.



    Problem is, the speed at which data flows onto those flash memory cards is way too slow for an OS to run from. Consider how long it takes to capture, say, an 1856x1392 image to a flash card. My Sony DSC-F505V takes about 2 or 3 seconds, and at higher resolutions it's worse.



    Would be neat though.
  • Reply 551 of 665
    gizzmonic Posts: 511 member
    History has taught us that any OS that relies completely on emulation will fail.



    The PPC970 is all the CPU that Apple needs. They can challenge PC performance without giving up their main revenue stream, hardware.



    The release of an x86 Mac would be like the clone saga all over again, only many times worse. If you were a developer, what would you do?



    Continue to support two architectures on a platform with tiny market share, where you can't share (Carbon) code between the two archs, after you just got done making an extensive rewrite for OS X? Or drop OS X and tell users to run the Windows version under emulation?



    I don't want Mac OS X to be another OS/2. I don't want to see Apple software dwindle to QuickTime, FCP for Windows, and iApps for Windows, and Apple hardware dwindle to the iPod. That's what Apple will become if they go to any x86-compatible architecture.
  • Reply 552 of 665
    Quote:

    Originally posted by Lemon Bon Bon

    RealPC is back on the scene. Strange, that. And they're working verrrry closely with Apple. A dual 970 gives 1 GHz Pentium III performance... I bet. That would be quite good.





    I have heard, from various sources in the open source community, that Apple has also been working with the good folks who developed "bochs".



    What bochs does is emulate a complete x86 PC, which you can then run Windows on. What I could see Apple doing is developing a bochs-based environment that would allow you to run Windows apps much like you can run Classic apps. It would behave like the Classic world and would be nearly invisible to the end user.
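    For anyone wondering why emulation is so expensive: at its heart an emulator like bochs runs a loop that fetches, decodes, and executes each guest instruction in software. This toy loop (an imaginary two-instruction machine of my own invention, nothing like bochs's actual code) shows the shape of it - every guest instruction costs a dispatch plus several host instructions:

    ```c
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_ADDI = 0, OP_HALT = 1 };              /* made-up two-op mini-ISA  */

    int main(void) {
        uint8_t  program[] = { OP_ADDI, 5, OP_ADDI, 7, OP_HALT };
        uint32_t acc = 0;
        size_t   pc  = 0;

        for (;;) {                                   /* the emulator's main loop */
            uint8_t op = program[pc++];              /* fetch                    */
            switch (op) {                            /* decode                   */
            case OP_ADDI: acc += program[pc++]; break;               /* execute  */
            case OP_HALT: printf("acc = %u\n", acc); return 0;
            }
        }
    }
    ```

    Smarter emulators translate blocks of guest code into native code instead of interpreting one instruction at a time, but either way raw host speed is what makes the whole thing bearable - hence the interest in the 970.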



    The only problem I have had with this idea is that a G4 (dual or otherwise) would not be able to handle that kind of emulation effectively.



    However, with the 970 just around the corner, and from what we have seen of the benchmarks so far, I feel that might become a real possibility.



    Ok, you can now toast away...
  • Reply 553 of 665
    keyboardf12 Posts: 1,379 member
    toast.



    good news. (but i still have to build a pc soon) i need to compile stuff, so i don't think emulation will cut it for that - but for webby stuff? booyah, 970 bochs.



    I wonder, when you say "working with them", whether it would be in a similar manner to khtml and safari.
  • Reply 554 of 665
    Quote:

    Originally posted by keyboardf12

    toast.



    good news. (but i still have to build a pc soon) i need to compile stuff, so i don't think emulation will cut it for that - but for webby stuff? booyah, 970 bochs.



    I wonder, when you say "working with them", whether it would be in a similar manner to khtml and safari.




    Again, I'm only hearing stuff through the grapevine, but what I mean is that Apple hired several people who were doing bochs development. I believe they are trying to make it as transparent as Classic was (i.e. not a windowed emulation, but a free-standing one).
  • Reply 555 of 665
    programmer Posts: 3,503 member
    Quote:

    Originally posted by Rhumgod

    Problem is, the speed at which data flows onto those flash memory cards is way too slow for an OS to run from. Consider how long it takes to capture, say, an 1856x1392 image to a flash card. My Sony DSC-F505V takes about 2 or 3 seconds, and at higher resolutions it's worse.



    Would be neat though.




    That's for a single chip, though -- in a desktop machine you could hook 8 or 16 of them up in parallel and get much higher throughput. The new xD format is quite fast for a consumer part as well, so the combination should make for an acceptable speed. Future solid-state technologies hold promise as well.
  • Reply 556 of 665
    netromac Posts: 863 member
    I don't think Apple is ever going to make Windows emulation as integrated with the OS as Classic was.

    If we finally get Macs that can run Windows in emulation at decent speeds, and Apple made that kind of emulation environment, wouldn't that make developers think twice before bothering to make an OS X version of their apps? Would MS bother making another Office if Office XP ran decently in emulation mode? This is kinda OS/2 history repeating itself, isn't it?



    No, I think the best solution is still for 3rd parties to develop and sell (with help from Apple) the emulation software. Some of them may find a way to make Windows applications behave like Classic apps, but at least it won't be bundled with the OS, and it won't be supported by Apple. That way software companies will still have a reason to develop specifically for OS X.
  • Reply 557 of 665
    keyboardf12 Posts: 1,379 member
    but if bochs is released and can be used in the same manner as webCore (i forget what they call it), it could be picked up and used by them the same way omni is doing with the new version of their browser.
  • Reply 558 of 665
    netromac Posts: 863 member
    Quote:

    Originally posted by keyboardf12

    but if bochs is released and can be used in the same manner as webCore (i forget what they call it), it could be picked up and used by them the same way omni is doing with the new version of their browser.



    Huh? I'm obviously too stupid to understand what you're talking 'bout here.
  • Reply 559 of 665
    keyboardf12 Posts: 1,379 member
    hi. sorry. no coffee yet.



    as you know, safari is really made up of 2 parts: the cocoa app (windows, menus, etc.) and the html rendering engine. the engine is actually open source - it was started by the kde group and called khtml. apple took khtml, made it better, and gave their changes back to the OS community.



    the html rendering portion is called webCore. this allows 3rd-party developers to "use" the rendering engine and display html in their apps, like omniweb is doing with the next rev of their web browser. they are ditching their old rendering engine and using webcore to get the exact capabilities of safari (plus whatever they add). itunes uses the same thing for the music store.



    if apple did the same with the bochs code (winCore?) then 3rd parties could use winCore and build their own windows emulators.
  • Reply 560 of 665
    Good god, let's not get started with this Bochs/Windows emulation again. Even I started a topic about this subject nearly 6 months ago. Can't we just move on?



    Back to lurking.