
MacBidouille posts PPC 970 benchmarks - Page 14

post #521 of 666
Quote:
Originally posted by Lemon Bon Bon
Has anybody checked out Macrumors? They've got an 'oobie-doobie' link which claims some mystery 'ports' on the 970 motherboard. FireWire 800 x2, of course. Amongst others...

Lemon Bon Bon

Those ports are designed to suck up lemon juice
Mac Pro 2.66, 5GB RAM, 250+120 HD, 23" Cinema Display
MacBook 1.83GHz, 2GB RAM
post #522 of 666
"His assumptions were based on public info about the processor, however. Moki has hinted that this info is, at least to some degree, not strictly accurate, and that there are several key 'surprises' about the chip or it's decendents (typically, he was rather obtuse about this!)

Of course, I have no information to back my theory up, and I don't totally subscribe to it myself, but if Apple thought that with this chip and the rumoured PPC 980 or other 9xx series chips it could offer a decent advantage over any x86-based machine, it could introduce an x86-based version of OS X whilst reducing the likelihood that it would endanger its PPC hardware sales. Whether this would come down to releasing its own x86 machines or offering OS X to third-party manufacturers is another matter."

I think that sooner or later, Apple will have to really go after that 98% and that it can't quite do it while it is viewed as 'proprietary.'

The 9XX roadmap seems to finally give PPC the edge over the next few years. And that buys 'Apple Software Co.' time. For what? If PPC kit outperforms Wintel kit in key markets such as workstation, print and video...they're going to keep buying premium-priced PPC kit. This gives Apple the chance to have a pop at the Wintel consumer market. No way? They already are. iPod. iTunes 4 by the year's end. Apple software on Wintel? No way...? Way?

970 iMacs that might run 'as fast' as Wintel? Gives access to 'X'. A licensed version of 'X' for Intel/AMD? Key players like Dell? Dell are already selling iPods. Dell has a massive customer base. How about an agnostic OS that can run on PPC and Intel hardware?

I don't see this as something they can do overnight. Certainly NOT while they're moving to 64-bit. Not while they're 'breaking even'. Not while the transition to 'X' is incomplete...(and it is incomplete by a mile in terms of Apple's userbase adoption...despite what Apple says...)

They may build in x86 support. If the Wintel market is moving beyond 'x86', then build in support for the 'next gen' of Wintel chips. Encourage key developers to begin 'fat compiling' versions which will run on PPC/Hammer/Itanium. This could take some time...and may not begin until developers' 'frayed' nerves have been soothed with 'X' sales of, e.g., Quark Aqua for a few years.

To me...this is a very long-term Trojan Horse battleplan. And it may be something fundamental to Panther's gameplan. This may be the beginning of something new. If the 970 was made for Apple by a player like IBM...what has been ceded in return? Why did Jobs talk at an Intel conference ('He's Pixar's CEO...')? Yeah. That's right...

Apple already make as much from Software/Services/'Other Products' as they do from 'power'Mac sales. And with more Apple software and 'other services' coming on line, along with 70-plus retail stores by the end of the year...we can expect that number to increase.

We can expect iTunes 4 for Wintel by the end of the year for the 'music store'. If it does for sales what iPod PC has done for 'Mac', sorry, 'Apple' growth then...

This divergent strategy from 'new' Apple hints that you can't go on indefinitely ignoring a hundred million customers with proprietary kit. A lot of PC users are whinging about the fact they can't have a go at 'X' on their computers...or play with the iTunes Music Store...or those great iApps...or the iPod...oops...Apple gave them a version 'just for them' and they sell like crazy.

Apple has been increasingly observant of 'open standards' (on their terms) these last few years. I wonder if they will broaden this to 'non-proprietary' Macs? And broader support for other markets from within 'X'..? 'X' for IBM servers...'X' for Intel renderfarms..., 'X' for cheap Wintel Boxes...'X' for PPC workstation markets (As Amorph would say, 'That's where Apple makes its profits...')

Apple have discovered they can make money in a Wintel market. In fact, they now sell more iPods in PC land than they do in Mac land. And they're dominating market share in that 'music player market'. How about that... (better than the 2% they play with in Mac land). And it appears their 'new-found confidence' is growing...

We have been warned.

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #523 of 666
You're saying that the 970 will come with a straw?



Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #524 of 666
LBB, how can you argue that a decent-performing PowerPC CPU would be an impetus to migrate to x86? x86 should be (and is) a last-ditch option for when the PowerPC can't keep up at all.

Also, how can you argue that an Apple is any LESS proprietary than a Dell? They are equally proprietary, as is every piece of hardware made by Intel, AMD, Western Digital, Micron, etc. Sorry if I sound rude, but it's a real sticking point for me. I've listened to too many morons complaining that "Apple has a monopoly on Apple Macs."

The 970 will bring Macs into performance parity with PCs for ALL tasks, not just a few photoshop filters. There's simply no need to see Mac OS on x86. If that happened, Apple would end up like also-rans OS/2 and BeOS.
Dual 2Ghz G5, Single 2Ghz Xserve G5, Dual 1Ghz QS G4, Single 1.25Ghz MDD G4.
post #525 of 666
Quote:
Originally posted by NETROMac
As you said, nothing new or remotely interesting, not to me at least. I added a few shots below that show what kind of acceleration AltiVec is capable of.

Thanks NETROMac.
post #526 of 666
Quote:
Originally posted by Programmer
Audio and colour spaces are better represented by 64-bit floating point, which 32-bit PowerPCs do just fine.

Well, not if you want performance.

In this case, I'm thinking more of the efficiency with which the 970 will do 64-bit FP than some notion that it's a new capability.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #527 of 666
Quote:
Originally posted by Amorph
Well, not if you want performance.

In this case, I'm thinking more of the efficiency with which the 970 will do 64-bit FP than some notion that it's a new capability.

With an AltiVec implementation the G4 will hold up okay. Depending on the size of your working set we may see little improvement with the 970 @ 1.4 GHz.

But yes, the 970's improvement will be very significant in doing scalar floating point work and anything memory bound.
Providing grist for the rumour mill since 2001.
post #528 of 666
I heard the makers of Vue (a landscape renderer...) say that Apple could really do with beefing up double-precision floating point. For more precise rendering. I didn't quite 'get that'.

Can't the AltiVec unit do a double-precision float? Or is it limited to single precision, which Vue doesn't support? In fact, I always thought that the AltiVec unit could be used as an extra FPU.

Sorry, not a clue what I'm talking about here. Paraphrasing, and thrashing about wildly with, some talk I heard from the makers of Vue.

Personally, I know 3D performance could be improved on all platforms. Especially Apple towers.

Maybe somebody could clear up the Vue thing for me.

On those Macdoobie benches...even the low-end 1.4 970 humbles the 'twice as fast' Pentium 4 at 3 gig. 'Apparently.'

I'm not a techie. (What, you can tell?) But those benches are still wowing me. That's another whole level for the Mac platform... I can see Alias adding to that '25% of customers on Mac' figure.

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #529 of 666
Mr. Lemon

To answer your question about AltiVec

AltiVec can NOT do double precision

Is it a good enough answer?
Mac Pro 2.66, 5GB RAM, 250+120 HD, 23" Cinema Display
MacBook 1.83GHz, 2GB RAM
post #530 of 666
Quote:
AltiVec can NOT do double precision

Thanks for the confirmation Leonis.

So...why did Eon think it a problem? Doesn't it allow colour rendering that's just as accurate? Do you get the odd pixel pop? Renders on a Mac always looked pretty good to me...

S'pose it don't matter if we're getting 970 fpu Hell Hounds!!!

Next dumb question...can the 970 do double precision?



Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #531 of 666
Quote:
Originally posted by Lemon Bon Bon
Thanks for the confirmation Leonis.

So...why did Eon think it a problem? Doesn't it allow colour rendering that's just as accurate? Do you get the odd pixel pop? Renders on a Mac always looked pretty good to me...

S'pose it don't matter if we're getting 970 fpu Hell Hounds!!!

Next dumb question...can the 970 do double precision?

OK, let's get this sorted out:

The G3 and G4 FPUs can do double-precision floating point. They just can't do it very quickly relative to other CPUs. (The G3 is particularly weak at FP at any precision).

The 970 FPU can do double-precision floating point, and it looks to be a significantly better performer than the G4 at this task.

AltiVec can indeed be used as an FPU, but only for single precision (32 bit) floating point. The G4 and the 970 both have dedicated scalar FPUs that can do both single and double precision work.

Double precision FP makes it easier to write rendering routines with a high degree of precision. AltiVec can't accelerate these routines, because it only works in single precision. (This is not to say that it isn't possible to write very accurate single precision renderers, only that hardly any of the commercial companies actually have. They lean on high-performing double-precision hardware instead.) The upshot of this is that if the 970 crunches double-precision floating point numbers significantly faster than the G4 - which has nothing to do with the fact that it's a 64 bit chip - then existing renderers will see a dramatic speedup.

As for 64 bit color, the video card doesn't have to know anything about 64-bit FP color, because the monitor it's driving certainly doesn't. The software could do all the work on the CPU, downsample to 24 bits, and the GPU would be none the wiser. However, if you want to be able to offload any color-dependent processing to the GPU (remember that transparency is a "color"), then it does have to be able to work with 64-bit FP color. No matter what, it'll end up spitting out 24-bit integer color, but the ability to work at a much higher precision and downsample for output will result in noticeably slicker, clearer rendering and compositing output.

The trend now, of course, is to get the GPU to do as much work as possible. I'm not sure which, if any, currently available GPUs support a 64-bit color space, but if they're not here they're coming. John Carmack's been banging this drum for years now, and they listen to him.
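
If you want to see the single/double difference for yourself, here's a trivial C demonstration of my own devising (assuming IEEE single and double precision, as on the PowerPC) - the kind of long accumulation a renderer or compositor does all the time:

Code:
#include <stdio.h>

int main(void)
{
    /* Accumulate one million samples of 0.1, the way a renderer
       accumulates many small light/blend contributions. */
    float  sp = 0.0f;
    double dp = 0.0;
    for (int i = 0; i < 1000000; i++) {
        sp += 0.1f;
        dp += 0.1;
    }
    /* Expected total: 100000. The single precision sum drifts by
       roughly 1%; the double precision sum is good to ~11 digits. */
    printf("single: %f\ndouble: %f\n", sp, dp);
    return 0;
}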
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #532 of 666
Quote:
Originally posted by Amorph
John Carmack's been banging this drum for years now, and they listen to him.

I wonder why
Former WWDC Watchdog.
post #533 of 666
Quote:
Originally posted by Amorph
OK, let's get this sorted out:

A couple of points where I think there is potential confusion...

Quote:

AltiVec can indeed be used as an FPU, but only for single precision (32 bit) floating point. The G4 and the 970 both have dedicated scalar FPUs that can do both single and double precision work.

AltiVec is a vector unit and can only be used for vector operations. Some of those operations work on vectors of 4 32-bit floats, but it cannot be used as a scalar floating point unit, which is what the term "FPU" usually refers to. This means that to use AltiVec the developer must write vector code, and since that is not a standardized process, the code is unique to PowerPC AltiVec processors. Scalar code, on the other hand, is built into C/C++ (and most other languages) and is cross-platform. Most floating point code out there is scalar in nature (i.e. it works on one number at a time) and therefore does not use the AltiVec unit.

There are some tools available to convert scalar code into vector code, but their use is not widespread and they are not tremendously effective.
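
To make the scalar/vector distinction concrete, here's a rough C sketch of my own (the function names are invented, and I'm assuming GCC's AltiVec extensions with -maltivec). The scalar loop is plain portable C and runs on the FPU one float at a time; the AltiVec loop handles four floats per instruction but only compiles for PowerPC:

Code:
#include <altivec.h>

/* Scalar: portable C, one float at a time on the FPU. */
void madd_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] * 2.0f + b[i];
}

/* Vector: AltiVec, four floats per operation. Assumes 16-byte
   aligned arrays and n a multiple of 4. */
void madd_altivec(const float *a, const float *b, float *out, int n)
{
    const vector float two = (vector float){2.0f, 2.0f, 2.0f, 2.0f};
    for (int i = 0; i < n; i += 4) {
        vector float va = vec_ld(i * sizeof(float), a);
        vector float vb = vec_ld(i * sizeof(float), b);
        vec_st(vec_madd(va, two, vb), i * sizeof(float), out); /* a*2+b */
    }
}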

Quote:

Double precision FP makes it easier to write rendering routines with a high degree of precision. AltiVec can't accelerate these routines, because it only works in single precision. (This is not to say that it isn't possible to write very accurate single precision renderers, only that hardly any of the commercial companies actually have. They lean on high-performing double-precision hardware instead.) The upshot of this is that if the 970 crunches double-precision floating point numbers significantly faster than the G4 - which has nothing to do with the fact that it's a 64 bit chip - then existing renderers will see a dramatic speedup.

Keep in mind that there are more uses for numbers in rendering than just the pixels you end up seeing on the screen. The positions of the vertices, the texel coordinates for the texture, the normal on the vertex, the position of the camera, etc etc etc. The colours are actually a relatively small part of the whole process and in some ways are the least sensitive to precision requirements, which is how GPUs have gotten away with relatively low precision (until recently).

Precision requirements usually come out of a long sequence of operations, or from places where really large and really small numbers are being combined. There are techniques available to mitigate these problems, but most developers don't bother to use them (or don't know about them). Floating point math is very often used without being fully understood. Sometimes, however, single precision just isn't enough. Sometimes even double precision isn't enough.

Quote:

As for 64 bit color, the video card doesn't have to know anything about 64-bit FP color, because the monitor it's driving certainly doesn't. The software could do all the work on the CPU, downsample to 24 bits, and the GPU would be none the wiser. However, if you want to be able to offload any color-dependent processing to the GPU (remember that transparency is a "color"), then it does have to be able to work with 64-bit FP color. No matter what, it'll end up spitting out 24-bit integer color, but the ability to work at a much higher precision and downsample for output will result in noticeably slicker, clearer rendering and compositing output.

The need for high precision colour mainly comes from doing many blending and lighting passes, and the amount of that being done has been increasing dramatically as the hardware becomes more powerful. The modern GPUs which support floating point buffers (Radeon 9700 and later, GeForce FX and later) can output full floating point precision, but cannot display it directly. To be displayed it has to be cut down to 8 or 10 bits per RGB channel. The loss of information in this conversion isn't too important, however, because no further blending will be done and the eye's perceptual limit is somewhere around 8-10 bits per channel. Alternatively, the image can be read back from the card at full precision and used elsewhere.

Quote:
The trend now, of course, is to get the GPU to do as much work as possible. I'm not sure which, if any, currently available GPUs support a 64-bit color space, but if they're not here they're coming. John Carmack's been banging this drum for years now, and they listen to him.

Radeon 9700 & later, GeForce FX & later. They actually support 32-bit float per channel, which makes either 96-bit or 128-bit pixels depending on whether the alpha channel is supported/included.
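
For the curious, the 'cut down for display' step is conceptually just a clamp and quantise per channel - a sketch of my own, not how any particular GPU actually implements it:

Code:
/* Reduce one floating point colour channel (nominally 0.0-1.0, but
   possibly outside that range after many blend passes) to 8 bits. */
unsigned char channel_to_8bit(float c)
{
    if (c < 0.0f) c = 0.0f;    /* clamp out-of-range values */
    if (c > 1.0f) c = 1.0f;
    return (unsigned char)(c * 255.0f + 0.5f);  /* round to nearest */
}

A real pipeline would apply gamma or tone mapping before quantising, but the precision loss happens in that last line either way.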
Providing grist for the rumour mill since 2001.
post #534 of 666
That does clear up more ambiguities. Thanks for the followup.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #535 of 666
post #536 of 666
I love all the names everyone has for mac bippy boop.
A good brain ain't diddly if you don't have the facts
post #537 of 666
Quote:
Originally posted by Amorph
The 970 FPU can do double-precision floating point, and it looks to be a significantly better performer than the G4 at this task.

Just to amplify on this point (though I agree with Amorph's post).

1.0 GHz G4+: SPECfp 187

1.8 GHz 970: SPECfp 1051

The 970's double precision floating point will be MUCH better.
post #538 of 666
Quote:
Originally posted by Nevyn
Just to amplify on this point (though I agree with Amorph's post).

1.0 GHz G4+: SPECfp 187

1.8 GHz 970: SPECfp 1051

The 970's double precision floating point will be MUCH better.

Looks like Christmas is coming early this year
Former WWDC Watchdog.
post #539 of 666
http://www.3dfestival.com/story.php?story_id=774

I guess that's why these guys will be buying Apple this year.

Guess the 970's going to be pret-ty good at 3D then.

I can see the power-starved Mac users baying like malnourished wolves at the WWDC...(Steve better be careful...) and outside those Apple retail stores... (Picture of Apple store sales reps throwing the last dual 1.42 G4'e' carcass outside the store to rabid Mac-heads. 'That's all we got...we can't make it go any faster...what do you want from us? Speed?' Incensed by the CPU impotence, the Mac-heads are after blood and proceed to savage the poor defenseless tower on the pavement outside...and after trashing the Ive Plastic MDD masterpiece they proceed to bang on the barricaded store doors...snarling and slabbering...'Jeee....ffiiiivvvvuh...')

Thanks to Amorph and Programmer and Co. for their full-frontal assault on my sporadic questions.

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #540 of 666
Quote:
Originally posted by Lemon Bon Bon
I can see the power-starved Mac users baying like malnourished wolves at the WWDC...(Steve better be careful...) and outside those Apple retail stores... (Picture of Apple store sales reps throwing the last dual 1.42 G4'e' carcass outside the store to rabid Mac-heads. 'That's all we got...we can't make it go any faster...what do you want from us? Speed?' Incensed by the CPU impotence, the Mac-heads are after blood and proceed to savage the poor defenseless tower on the pavement outside...and after trashing the Ive Plastic MDD masterpiece they proceed to bang on the barricaded store doors...snarling and slabbering...'Jeee....ffiiiivvvvuh...')

The only thing that you forgot about was the pitchforks, torches, and somebody asking for a sacrifice.

This summer is going to be very sweet for the Mac Loyalists.

-- Mike Eggleston
-- Mac Fanatic since 1984.
-- Proud Member of PETA: People Eating Tasty Animals
-- Wii #: 8913 3004 4519 2027
post #541 of 666
Quote:
Mr. Lemon

To answer your question about AltiVec

AltiVec can NOT do double precision

Is it a good enough answer?

Nor would we want it to...

Amorph writes:

Quote:
AltiVec can indeed be used as an FPU, but only for single precision (32 bit) floating point. The G4 and the 970 both have dedicated scalar FPUs that can do both single and double precision work.

Double precision FP makes it easier to write rendering routines with a high degree of precision. AltiVec can't accelerate these routines, because it only works in single precision. (This is not to say that it isn't possible to write very accurate single precision renderers, only that hardly any of the commercial companies actually have. They lean on high-performing double-precision hardware instead.) The upshot of this is that if the 970 crunches double-precision floating point numbers significantly faster than the G4 - which has nothing to do with the fact that it's a 64 bit chip - then existing renderers will see a dramatic speedup.

You gotta give them the whole skinny in layman's terms, man ;-). You should have also mentioned that the reason why almost all 3D render developers use double precision FP is that it covers up a lot of shoddy and sloppy algorithms.

Here is a link to a VERY knowledgeable PowerPC programmer. I've run it by Chris Cox from Adobe and a few other extremely sharp PPC programmers; they only confirmed that the person's reasoning is spot on.

Why we wouldn't want double-precision in the AltiVec unit (unless it was 256-bit) and: double precision is mainly used to cover up sloppy/shoddy algorithms.

It's the 19th or 20th post down on that page.

--
Ed
post #542 of 666
Quote:
Originally posted by Ed M.
You gotta give them the whole skinny in layman's terms, man ;-). You should have also mentioned that the reason why almost all 3D render developers use double precision FP is that it covers up a lot of shoddy and sloppy algorithms.

Well, this is quite true.

Most rendering shouldn't need this kind of precision. Usually it is demanded when the algorithm is unable to protect against small precision errors propagating forward through a calculation chain. Avoiding these is usually pretty straightforward - just a little math, a little reasoning.

I had a physics professor who *loved* putting problems on exams that would overflow your calculator, demanding that you do a little algebra or such just to keep everything in check. You could expect one such problem in every homework, on every exam. It served as a nice reminder that math != calculation.

Any programmer who really knows their math should be able to work their way out of these problems with little trouble. In the creative market, SP really should be sufficient.

That said, there are whole classes of problems that really do require DP because the data feeding in or out is DP. If Apple wants to get into the scientific/engineering arena - which they do seem to want to - then DP is much more important. Of course, the more I think about it, perhaps it's not critically important since once you need more than SP, in many cases you need more than DP as well. For those cases, you're going to have arbitrary precision code which will be slow but will always work. The real market is then who needs very fast, cost effective DP performance but not more than DP in most cases.
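
A classic example of that 'little math, a little reasoning' approach is compensated (Kahan) summation, which carries the rounding error of each addition forward so it isn't lost. My own C sketch (it assumes the compiler doesn't reorder the floating point operations):

Code:
/* Kahan compensated summation: keeps single precision sums
   accurate over millions of additions. */
float kahan_sum(const float *x, int n)
{
    float sum = 0.0f;
    float c   = 0.0f;            /* running compensation term     */
    for (int i = 0; i < n; i++) {
        float y = x[i] - c;      /* apply previous correction     */
        float t = sum + y;       /* low-order bits of y are lost  */
        c = (t - sum) - y;       /* recover exactly what was lost */
        sum = t;
    }
    return sum;
}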
The plural of 'anecdote' is not 'data'.
post #543 of 666
Quote:
Originally posted by johnsonwax
Of course, the more I think about it, perhaps it's not critically important since once you need more than SP, in many cases you need more than DP as well. For those cases, you're going to have arbitrary precision code which will be slow but will always work. The real market is then who needs very fast, cost effective DP performance but not more than DP in most cases.

Single precision has a 23-bit mantissa, an 8-bit exponent, and the sign bit. That's a range of only about 16.7 million values with 256 exponents, which really isn't a whole lot when you think about it. Consider measuring distance, for example, if you need accuracy of about a millimeter. The largest distance you could measure with single precision in that case would be about 16.7 kilometers. This might seem pretty good at first, but if you want to have a 3D model of a city you're starting to push the limit. There are ways to mitigate this limitation, but representing a whole country at this resolution becomes troublesome and takes a fair bit of fiddling about.

Now consider double precision. 53-bit mantissa and 11-bit exponent, plus the sign bit. That means you have about a billion times as many values, and 2048 exponents. At the same precision as above you can now represent a distance of about 18,000,000,000 kilometers. Instead of struggling to represent the size of a country, you can now consider representing our entire solar system (except for a few comets -- http://www.lhs.berkeley.edu/iu/hou/houMS01/SolSys.html).
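
You can watch single precision hit exactly that wall with a couple of lines of C (my own illustration, assuming IEEE single precision):

Code:
#include <stdio.h>

int main(void)
{
    /* Distances stored in millimetres. 2^24 mm is about 16.8 km,
       where a 24-bit mantissa can no longer resolve 1 mm. */
    float near_d = 1000.0f;         /* 1 m from the origin       */
    float far_d  = 16777216.0f;     /* ~16.8 km from the origin  */
    printf("%f\n", near_d + 1.0f);  /* 1001.000000 - mm kept     */
    printf("%f\n", far_d + 1.0f);   /* 16777216.000000 - mm gone */
    return 0;
}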


Interestingly, this same logic applies to the size of address spaces. People see that the migration from 8-bit to 16-bit to 32-bit address spaces went fairly quickly. The first had 256 bytes and was almost immediately unusable for anything sophisticated. The second got us 65,536 bytes (256 times the 8-bit space) and lasted a while before workarounds were needed (and even then, who could possibly need more than 640K?). The third has put us up to about 4,300,000,000 bytes (64K times the 16-bit space). The naive person naturally speculates that we'll quickly outgrow the 64-bit address spaces that are arriving in desktop machines this year. What they don't understand is that these new machines can address about 18,400,000,000,000,000,000 bytes. This is 4 billion times the current amount of space. That means that if each webpage in Google's cache of the Internet (currently 3 billion pages) was 1 MB then it would have to grow to about 6000 times its current size to fill the address space of a single 64-bit machine. From a 100 MB/sec ATA drive (assuming you had one that big) it would take almost 6,000 years to read all the data in.
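
Those figures are easy to verify - here's my own back-of-the-envelope check in C, using the same assumptions (1 MB per page, a 100 MB/sec drive):

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double space = ldexp(1.0, 64);   /* 2^64 bytes, exact in a double */
    double pages = space / 1e6;      /* 1 MB per cached page          */
    double years = space / 100e6 / (365.25 * 24 * 3600.0);
    printf("address space: %.4g bytes\n", space);  /* ~1.845e+19  */
    printf("1 MB pages:    %.4g\n", pages);        /* ~1.845e+13  */
    printf("read time:     %.0f years\n", years);  /* ~5845 years */
    return 0;
}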
Providing grist for the rumour mill since 2001.
post #544 of 666
Quote:
Originally posted by Programmer

Now consider double precision. 53-bit mantissa and 11-bit exponent, plus the sign bit. That means you have about a billion times as many values, and 2048 exponents. At the same precision as above you can now represent a distance of about 18,000,000,000 kilometers. Instead of struggling to represent the size of a country, you can now consider representing our entire solar system (except for a few comets -- http://www.lhs.berkeley.edu/iu/hou/houMS01/SolSys.html).

Yeah, it's a pretty big improvement, but in physics and engineering it's not really that hard to blow away even those numbers.

Again, it's the problem of propagating small errors forward, for example in discrete-time simulations. These can be cleverly worked around in controlled cases, but not always. Some areas are particularly sensitive to it, such as much discrete-time dynamic systems modeling - computational fluid dynamics, etc. - mainly because you're routinely bashing very small and relatively large values into each other over huge numbers of iterations. Hell, I ran into these problems as an undergraduate 15 years ago. Lots of fun optimizing code, but I never found my way back into DP space in most cases.

A whole pile of problems will work nicely in the DP space and outside of the SP space, but I'm just not quite sure how many of these will also run outside of the DP space. While some gains can be made by strong hardware support for DP, in the more general case a lot may still need to stay in software. That was my experience, at least.

So, while DP AltiVec might be great for some limited markets, it might really be too limited for the overall scientific community to bother coding for. Perhaps it's better to seek a different solution, like running it over a cluster, or just buying some big-iron time to get the needed performance, rather than trying to justify the silicon R&D.
The plural of 'anecdote' is not 'data'.
post #545 of 666
Quote:
Originally posted by Gizzmonic
LBB, how can you argue that a decent-performing PowerPC CPU would be an impetus to migrate to x86? x86 should be (and is) a last-ditch option for when the PowerPC can't keep up at all.

(snip)

The 970 will bring Macs into performance parity with PCs for ALL tasks, not just a few photoshop filters. There's simply no need to see Mac OS on x86. If that happened, Apple would end up like also-rans OS/2 and BeOS.

No, a decent-performing PPC is a precondition to introducing an additional hardware platform. Keep in mind that right now, Apple probably has one of the larger application groups out there in the world. To repeat: Apple's model must be one that minimizes profits on systems in order to maximize sales (and therefore profits) on software and accessories (iPods, etc.). Apple is making pretty big money on the PC iPods. Apple could make pretty big money on non-PPC OS X. Note that Apple substantially dropped price points in Jan.

Apple cannot, however, afford to lose system sales, and Apple doesn't yet have the sales and profits coming out of the software group to carry the company. OS X for x86-32 simply won't happen. OS X for x86-64 just might, however. It establishes two things:

1) Apple as clear owner of the affordable 64-bit space. I guarantee that Apple can move developers faster than MS can.

2) A low-risk enterprise channel. Enterprise can buy x86-64 boxes from Apple and run Windows instead of OS X if need be. Since most enterprise customers are site-licensing Windows anyway, there's really little added cost. Apple potentially gets an OS X user that it otherwise might not have, and worst case sells a box and never sees the profits from software and accessories. Regardless, Apple gets in the door.

Of course, unless PPC can match the performance, who wouldn't just buy x86-64 once the apps are all there? (PPC will almost always be faster provided the OS takes advantage of AltiVec.) Long term, Apple probably doesn't care. This is where Apple can get out of the business of propping up the PPC market because, well, if it dies, so what? It puts the burden on IBM to keep PPC alive against x86-64, and on AMD to keep x86-64 alive against PPC. Intel seems too busy watching the clouds float by... So long as Intel doesn't win the whole pie, Apple is okay. Of course, Apple can't afford to split the market such that both go under either, but this whole thing is pitched as inroads to Intel, so as long as that happens it'll be fine.

Ignore x86-32. You're right, that's a lost cause. Consider only x86-64 not as a replacement but as an addition. Xserves, Powermacs, etc. Then reconsider your arguments. Don't be shocked if Opteron systems come out of Apple at WWDC. Don't be shocked if they don't, either. Sooner or later, Apple will make a move like this and this really does look like the best possible time to do it, IMO.
The plural of 'anecdote' is not 'data'.
post #546 of 666
Quote:
Apple doesn't yet have the sales and profits coming out of the software group to carry the company.

Shrewdly stated in an insightful post.

Apple is headed towards OS hardware independence long-term. I think four years in the MHz wilderness has probably taught Apple a lesson or two. It can't continue to hang onto the tit of M$, Moto...or IBM long term, for that matter. Apple's many problems...have largely been self-inflicted.

As a sole supplier, IBM will do fine for the next few years.

As for 'Wintel'/Apple 64 bit. If Wintel are starting from scratch on the next floor up...that gives Apple a fine opportunity to enter a huge market.

There is a huge perception problem surrounding Apple. Once they say they 'support' the Wintel market, in a similar or different manner to how they ratcheted up iPod for PC... Imagine software and services for Apple from PC sales at the current iPod ratio. Apple would be close to a billion in turnover a year. But Apple aren't at this point yet. They need to secure the PPC position first. Tread carefully. Put the pieces in place, like the iTunes for Wintel music store. These are all parts of the puzzle.

It is at this nexus of 64-bitness that Apple has finally come to its strongest point in years...and Wintel to its weakest and most divided. It will be interesting to see what, if any, Wintel support Panther has. Macs already get to play nice in Wintel networks. What next?

RealPC is back on the scene. Strange, that. And they're working verrrry closely with Apple. A dual 970 giving 1 gig Pentium 3 performance under emulation...I bet. That would be quite good.

Imagine what you could emulate with dual 980s? Emulation is only a small part of an overall strategy to grow revenue. I've never seen Apple this divergent. And I bet it will continue.

Apple will no longer cower in the shadow of M$'s threats to withdraw 'Office'. Apple will introduce their own. Apple have shown with iPod that Apple can compete and beat Wintel on their own turf.

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #547 of 666
Quote:
No, a decent-performing PPC is a precondition to introducing an additional hardware platform. Keep in mind that right now, Apple probably has one of the larger application groups out there in the world. To repeat: Apple's model must be one that minimizes profits on systems in order to maximize sales (and therefore profits) on software and accessories (iPods, etc.). Apple is making pretty big money on the PC iPods. Apple could make pretty big money on non-PPC OS X. Note that Apple substantially dropped price points in Jan.

I liked this bit.

...and you're right...they can't intro x86 support. At least, not yet. The PPC line ain't off life support, but I think Tower sales in early 2004 will prob' look a lot better.

More likely, an AMD/Intel 64-bit alliance is forged at some point.

The good thing is that Wintel will do the grunt work to enable Apple to take them on...

I'm curious to how my private bets will play out in the next several years.

Lemon Bon Bon

Quote:
Consider only x86-64 not as a replacement but as an addition. Xserves, Powermacs, etc. Then reconsider your arguments. Don't be shocked if Opteron systems come out of Apple at WWDC. Don't be shocked if they don't, either. Sooner or later, Apple will make a move like this and this really does look like the best possible time to do it, IMO.

I liked that one too. It's not an 'either/or' proposition. It's choice. It's about expanding the Apple universe and profits. The iPod has shown Apple can do this without compromising the Mac userbase.


Quote:
Don't be shocked if Opteron systems come out of Apple at WWDC. Don't be shocked if they don't, either.

I should imagine that an Opteron could emulate a 1 gig G4 in its sleep. That could protect a lot of software investments...
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #548 of 666
Quote:
Originally posted by Programmer
That means that if each webpage in Google's cache of the Internet (currently 3 billion pages) was 1 MB then it would have to grow to about 6000 times its current size to fill the address space of a single 64-bit machine. From a 100 MB/sec ATA drive (assuming you had one that big) it would take almost 6,000 years to read all the data in.

Interesting analogy...never thought of it in terms of loading memory from disk (where the obvious bottleneck is). Could you imagine the performance of loading an entire OS (à la Panther) into NVRAM and just running from that, kind of like a thin client with no disk? You could almost set up the same "file system" in memory that you have on disk. You could almost rid yourself of the need for HDDs entirely.

Problem is, NVRAM is WAAAAAAAY too expensive for that large an OS. I wonder if anyone is developing anything in this direction?
...we have assumed control
post #549 of 666
Quote:
Originally posted by Rhumgod
Interesting analogy...never thought of it in terms of loading memory from disk (where the obvious bottleneck is). Could you imagine the performance of loading an entire OS (à la Panther) into NVRAM and just running from that, kind of like a thin client with no disk? You could almost set up the same "file system" in memory that you have on disk. You could almost rid yourself of the need for HDDs entirely.

Problem is, NVRAM is WAAAAAAAY too expensive for that large an OS. I wonder if anyone is developing anything in this direction?

Considering that consumer camera flash memory cards are expected to hit 8 GB this year or next, I'd say that is one possibility. The OS isn't very large, however; it is the data that the OS manipulates that can get massive. Fast, solid state "file system" devices exist, but it doesn't look like they will catch up with the ol' spinning magnetic disk.
Providing grist for the rumour mill since 2001.
post #550 of 666
Quote:
Originally posted by johnsonwax
Yeah, it's a pretty big improvement, but in physics and engineering it's not really that hard to blow away even those numbers.

Absolutely, but my point was that double is exponentially better than single precision, and that is sufficient for a very large class of problems, especially if you don't squander your precision in needless ways. It will always be possible to exceed fixed precision, however, and that is why things like Mathematica use variable-precision numerical representations.
Providing grist for the rumour mill since 2001.
post #551 of 666
Quote:
Originally posted by Programmer
Considering that consumer camera flash memory cards are expected to hit 8 GB this year or next, I'd say that is one possibility. The OS isn't very large, however; it is the data that the OS manipulates that can get massive. Fast, solid state "file system" devices exist, but it doesn't look like they will catch up with the ol' spinning magnetic disk.

Problem is, the speed at which data flows onto those flash memory cards is way too slow for an OS to operate in. Consider how long it takes to capture, say, an 1856x1392 image to that flash card. My Sony DSC-F505V takes about 2 or 3 seconds. And at higher resolutions, it is worse.

Would be neat though.
...we have assumed control
post #552 of 666
History has taught us that any OS that relies completely on emulation will fail.

The PPC970 is all the CPU that Apple needs. They can challenge PC performance without giving up their main revenue stream, hardware.

The release of an x86 Mac would be like the clone saga all over again, only many times worse. If you were a developer, what would you do?

Continue to support two architectures on a platform with tiny market share, where you can't share (Carbon) code between the two archs, after you just got done making an extensive rewrite for OS X? Or drop OS X and tell them to use the Windows version under emulation?

I don't want Mac OS X to be another OS/2. I don't want to see Apple software dwindle to QuickTime, FCP for Windows, and iApps for Windows, and Apple hardware dwindle to the iPod. That's what Apple will become if they go to any x86-compatible architecture.
Dual 2Ghz G5, Single 2Ghz Xserve G5, Dual 1Ghz QS G4, Single 1.25Ghz MDD G4.
post #553 of 666
Quote:
Originally posted by Lemon Bon Bon
RealPC is back on the scene. Strange, that. And they're working verrrry closely with Apple. A dual 970 giving 1 gig Pentium 3 performance under emulation...I bet. That would be quite good.

I have heard, from varying sources in the Open Source community, that Apple has also been working with the good folks who developed "bochs".

What bochs does is emulate x86 PC hardware, which can then run the Windows environment. What I could see Apple doing is developing a bochs that would allow you to run Windows apps much like you can run Classic apps. It would behave like the Classic world, and would seem nearly invisible to the end user.

The only problem I have had with this idea was that the G4 (dual or otherwise) would not be able to handle that kind of emulation effectively.

However, I feel that with the 970 just around the corner, and from what we have seen as far as benchmarks are concerned, that might become a good possibility.

Ok, you can now toast away...

-- Mike Eggleston
-- Mac Fanatic since 1984.
-- Proud Member of PETA: People Eating Tasty Animals
-- Wii #: 8913 3004 4519 2027
post #554 of 666
toast.

good news. (but i still have to build a pc soon) i need to compile stuff so i don't think emulation will cut it for that, but for webby stuff. boyah, 970 bochs.

I wonder, when you say "working with them", if it would be in a similar manner to khtml and safari.
post #555 of 666
Quote:
Originally posted by keyboardf12
toast.

good news. (but i still have to build a pc soon) i need to compile stuff so i don't think emulation will cut it for that, but for webby stuff. boyah, 970 bochs.

I wonder, when you say "working with them", if it would be in a similar manner to khtml and safari.

Again, I'm only hearing stuff through the grapevine, but what I mean is that Apple hired several people who were doing bochs development. I believe they are trying to make it as transparent as Classic was (i.e. not a windowed emulation, but a free-standing one).

-- Mike Eggleston
-- Mac Fanatic since 1984.
-- Proud Member of PETA: People Eating Tasty Animals
-- Wii #: 8913 3004 4519 2027
post #556 of 666
Quote:
Originally posted by Rhumgod
Problem is, the speed at which data flows onto those flash memory cards is way too slow for an OS to operate in. Consider how long it takes to capture, say, an 1856x1392 image to that flash card. My Sony DSC-F505V takes about 2 or 3 seconds. And at higher resolutions, it is worse.

Would be neat though.

That's for a single chip though -- in a desktop machine you could hook 8 or 16 of them up in parallel and get much higher throughput. The new xD format is quite fast for a consumer part as well, so the combination should make for an acceptable speed. Future solid state technologies hold promise as well.
Providing grist for the rumour mill since 2001.
post #557 of 666
I don't think Apple is ever going to make Windows emulation as integrated with the OS as was the case for Classic.
If we finally get Macs that can run Windows in emulation at decent speeds, and Apple made that kind of emulation environment, wouldn't that make developers think twice before bothering to make an OS X version of their apps? Would MS bother making another Office if Office XP ran decently in emulation mode? This is kinda OS/2 history repeating itself, isn't it?

No, I think the best solution is still for 3rd parties to develop and sell (with help from Apple) the emulation software. Some of them can maybe find a way to make Windows applications behave as Classic apps, but at least it will not be bundled with the OS, and it will not be supported by Apple. This way software companies will have a reason to develop specifically for OS X.
Former WWDC Watchdog.
post #558 of 666
but if bochs is released and can be used in the same manner as webCore (i forget what they call it), it can be picked up and used by them in the same way omni is doing with the new version of their browser.
post #559 of 666
Quote:
Originally posted by keyboardf12
but if bochs is released and can be used in the same manner as webCore (i forget what they call it), it can be picked up and used by them in the same way omni is doing with the new version of their browser.

Huh, I'm obviously too stupid to understand what you're talking 'bout here
Former WWDC Watchdog.
post #560 of 666
hi. sorry. no coffee yet.

as you know, safari is really made up of 2 parts: the cocoa app (windows, menus, etc.) and the html rendering engine. this engine is actually open source. it was started by the kde group and called khtml. apple took khtml, made it better, and gave their changes back to the OS community.

the html rendering portion is called webCore. this allows 3rd party developers to "use" this rendering engine and display html in their apps, like omniweb is doing with the next rev of their web browser. they are ditching the old rendering engine and using webcore to give them the exact capabilities of safari (plus whatever they add). itunes uses the same for the music store.

if apple did the same with the bochs code (winCore?) then 3rd parties could use winCore and build their own windows emulators.