
First Intel Macs on track for January - Page 11

post #401 of 452
Quote:
Originally posted by aegisdesign
I didn't imply it would. It'd be as good as running under VT and no better, but have the advantages of integration with its host OS.

Would it be as good as running under a VM?

I remember Windows apps running under OS/2 would run much faster when run in full screen mode. Those "advantages of integration with its host OS" come at a significant cost.

TNSTAAFL.
post #402 of 452
If you need to run something so CPU intensive that VPC won't do, the cheapest and easiest solution will still be to buy a cheap PC. This is especially true for rendering.
post #403 of 452
Quote:
Originally posted by strobe
If you need to run something so CPU intensive that VPC won't do, the cheapest and easiest solution will still be to buy a cheap PC. This is especially true for rendering.

That would surely be the best way now.

But with the Mactels, I don't think that will continue to be true.

If a Mactel is equipped the same as a PC, the power will be the same. Getting a cheap PC will be getting a PC that could be much less powerful than the Mac you have.

It would be good only if you needed to do two intensive tasks at once, or if the PC task didn't require the same power as your Mac task.

Rob Enderle, the guy we love to hate, has been coming around to the view that Apple will really have something with the new machines, which he calls "Aptels". He's written an interesting article about it.

He seems to think that the OS will perform much better on x86 than it has on PPC, because of the x86 roots of FreeBSD. He could be right. Reports I've been getting from friends involved say that the speedups are significant.

Read his article:

http://www.macnewsworld.com/story/nr...Tel-Line.xhtml
post #404 of 452
Quote:
Originally posted by melgross
He seems to think that the OS will perform much better on x86 than it has on PPC, because of the x86 roots of FreeBSD. He could be right. Reports I've been getting from friends involved say that the speedups are significant.

What component of OS X that GUI applications rely on has anything to do with FreeBSD?
post #405 of 452
Quote:
Originally posted by Chucker
What component of OS X that GUI applications rely on has anything to do with FreeBSD?

Multi-processor support, most likely, although FreeBSD has been slower than other OSes in that regard. With 6.x, things do seem better, however. Not that it has any impact on where Apple is now going. I just hope the Finder itself is faster. It has been a dog throughout OS X's lifetime.
...we have assumed control
post #406 of 452
Quote:
Originally posted by Rhumgod
Multi-processor support, most likely, although FreeBSD has been slower than other OSes in that regard. With 6.x, things do seem better, however. Not that it has any impact on where Apple is now going. I just hope the Finder itself is faster. It has been a dog throughout OS X's lifetime.

I really don't see the Finder being impacted much at all by the processor architecture.

Yes, things like multi-processor support are ported from FreeBSD to the XNU kernel; most of OS X's high-level stuff has little to do with FreeBSD at all, however.

The idea seems far-fetched to me.
post #407 of 452
Quote:
Originally posted by Chucker
What component of OS X that GUI applications rely on has anything to do with FreeBSD?

The GUI is only part of the app, remember?

Besides, any GUI works through the underlying OS, which is Darwin, which is Apple's version of FreeBSD.
post #408 of 452
Quote:
Originally posted by Chucker
I really don't see the Finder being impacted much at all by the processor architecture.

Yes, things like multi-processor support are ported from FreeBSD to the XNU kernel; most of OS X's high-level stuff has little to do with FreeBSD at all, however.

The idea seems far-fetched to me.

You are wrong. The reports from developers are that the Finder is much faster running under a 3.6GHz Pentium 4 than on any PPC Mac.
post #409 of 452
Quote:
Originally posted by melgross
Besides, any GUI works through the underlying OS, which is Darwin, which is Apple's version of FreeBSD.

Darwin is much more than "a version of FreeBSD". Most notably, the kernel is not a BSD kernel.
post #410 of 452
Quote:
Originally posted by Chucker
Darwin is much more than "a version of FreeBSD". Most notably, the kernel is not a BSD kernel.

I know it's much more; that's why it's Apple's version. They changed it, and added to it.

We all know that the kernel is Mach.
post #411 of 452
Quote:
Originally posted by melgross
I know it's much more; that's why it's Apple's version. They changed it, and added to it.

We all know that the kernel is Mach.

The kernel is not pure Mach either, nor is it (as Mach would be) a microkernel.

Saying Darwin is Apple's version of FreeBSD is like saying Windows NT is Microsoft's version of VMS.
post #412 of 452
Quote:
Originally posted by Chucker
The kernel is not pure Mach either, nor is it (as Mach would be) a microkernel.

Saying Darwin is Apple's version of FreeBSD is like saying Windows NT is Microsoft's version of VMS.

Microkernels are out. Have been for years.

Darwin is more like what XP is to NT.
post #413 of 452
Quote:
Originally posted by melgross
Microkernels are out. Have been for years.

That's beside the point.

Quote:
Darwin is more like what XP is to NT.

Er. I think I give up.
post #414 of 452
Please forgive my rudeness for just jumping in here!

I'm really excited about the new Intel microprocessors. However, some people I've talked to are worried about whether they will eventually turn into the average, pay-for-what-you-get type of non-elegant PC. What say you?
JAM COMPUTERS-http://www.jamcomputers.net
A forum for users of all computer systems (WE REALLY REALLY NEED MORE MAC PEOPLE!!!!)
post #415 of 452
Quote:
Originally posted by Rhumgod
Multi-processor support, most likely, although FreeBSD has been slower than other OSes in that regard. With 6.x, things do seem better, however. Not that it has any impact on where Apple is now going.

AFAIK, Darwin's MP support comes from Mach, not FreeBSD.
post #416 of 452
Quote:
Originally posted by RazzFazz
AFAIK, Darwin's MP support comes from Mach, not FreeBSD.

Actually, I recall comments at WWDC that the newer FreeBSD base helped improve multiprocessing. I could be wrong.
post #417 of 452
Quote:
Originally posted by Chucker
Actually, I recall comments at WWDC that the newer FreeBSD base helped improve multiprocessing. I could be wrong.

I think this probably was something along the lines of "helped improve performance on MP systems", e.g. through finer-grained locking in the VFS stack etc. This is, however, not something that's architecture dependent, so it won't be inherently faster on x86.
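
To picture what finer-grained locking buys on an MP system, here is a toy sketch in C (mine, not FreeBSD's actual code; the structure and function names are made up). With one giant lock every lookup serializes; with per-bucket locks, threads touching different buckets proceed in parallel.

Code:
#include <pthread.h>

#define BUCKETS 64

/* Hypothetical hash table of vnode reference counts. Each mutex must
 * be set up with pthread_mutex_init() before use.                    */
struct vnode_table {
    pthread_mutex_t giant;                 /* coarse: guards the whole table */
    pthread_mutex_t bucket_lock[BUCKETS];  /* fine: one lock per hash bucket */
    int             refcount[BUCKETS];
};

/* Coarse-grained: every caller contends on the same lock. */
void ref_coarse(struct vnode_table *t, unsigned h)
{
    pthread_mutex_lock(&t->giant);
    t->refcount[h % BUCKETS]++;
    pthread_mutex_unlock(&t->giant);
}

/* Fine-grained: callers hashing to different buckets never contend. */
void ref_fine(struct vnode_table *t, unsigned h)
{
    pthread_mutex_lock(&t->bucket_lock[h % BUCKETS]);
    t->refcount[h % BUCKETS]++;
    pthread_mutex_unlock(&t->bucket_lock[h % BUCKETS]);
}

Nothing in there is architecture-dependent, which is the point: the win shows up on any MP system, PPC and x86 alike.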
post #418 of 452
Quote:
Originally posted by superhall
Please forgive my rudeness for just jumping in here!

I'm really excited about the new Intel microprocessors. However, some people I've talked to are worried about whether they will eventually turn into the average, pay-for-what-you-get type of non-elegant PC. What say you?

Welcome, superhall, and jump right in!

No need to worry about them producing non-elegant machines. Apple can't compete with Dell on price, so there's no way Apple will create cheapie machines. It's not in their blood, anyway.

iPods weren't the cheapest MP3 players when they first came out. The Mac mini is relatively inexpensive, but not as cheap (in all senses of the word) as a Dell.

No, don't worry my child. Apple will produce nice machines... and you'll pay for it.
post #419 of 452
BSD includes file system, communications, and memory management support. If these perform better on x86 hardware you would see a small across-the-board improvement. I think a bigger factor is that GCC has an x86 heritage as well, and has always produced better x86 code than PPC code (plus Apple may provide Intel's compiler, which is quite good). This will affect everything compiled for the x86.
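
One way to see the codegen gap is to feed the same trivial function to gcc on each target and read the assembly. A minimal sketch (the file and function names are my own):

Code:
/* sum.c -- bait for the compiler back-ends.  On each machine:
 *     gcc -S -O2 sum.c -o sum.s
 * then compare the x86 and PPC .s files side by side.            */
long sum(const long *a, int n)
{
    long total = 0;
    int i;

    for (i = 0; i < n; i++)
        total += a[i];
    return total;
}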
Providing grist for the rumour mill since 2001.
post #420 of 452
Quote:
Originally posted by Programmer
BSD includes file system, communications, and memory management support. If these perform better on x86 hardware you would see a small across-the-board improvement.

I still doubt that much of this is really somehow "optimized for x86". AFAIK, only a small part of the FreeBSD kernel is architecture-dependent.


Quote:
I think a bigger factor is that GCC has an x86 heritage as well, and has always produced better x86 code than PPC code (plus Apple may provide Intel's compiler, which is quite good). This will affect everything compiled for the x86.

Yes, I think this is going to be a much more important factor (although I kinda doubt they'll provide icc with OS X).

It's also a pity, really -- generating good PPC code should actually be easier (more registers, more orthogonal instruction set, whatnot), but I guess it's tough to beat the sheer amount of optimization that went (and still goes) into gcc's x86 backend...
post #421 of 452
Quote:
Originally posted by RazzFazz
I guess it's tough to beat the sheer amount of optimization that went (and still goes) into gcc's x86 backend...

Beats the rear-ending Apple has gotten from Motorola/IBM.
...we have assumed control
post #422 of 452
Another optimization factor on the Intel platform will come from the fact that the ABI for Mac OS X was originally defined for the x86 version of NeXTStep. When Apple started work on Rhapsody, they rushed the definition of the PPC ABI of Mac OS X by borrowing much of it from its x86 cousin.

(The ABI is the specification and codification of API calls at the assembly-language level. It also defines the structure of executables and, more generally, the way C function calls are made in machine language.)

An optimization in the ABI will be reflected in every API call (and hidden subcalls). At the end of the day, it could represent a boost of several percent, especially with object-oriented programming, which tends to deepen call nesting.
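
For a feel of how fixed per-call cost compounds with nesting, here is a hypothetical micro-benchmark (my sketch, not anything from Apple): three trivial non-inlined layers stand in for an API call and its hidden subcalls, so the calling convention's prologue/epilogue work dominates the loop.

Code:
/* abi_cost.c -- build with inlining off so the calls actually happen:
 *     gcc -O2 -fno-inline abi_cost.c -o abi_cost                      */
#include <stdio.h>
#include <time.h>

int layer3(int x) { return x + 1; }
int layer2(int x) { return layer3(x) + 1; }
int layer1(int x) { return layer2(x) + 1; }

int main(void)
{
    const long iterations = 100000000L;
    volatile int sink = 0;   /* volatile keeps the loop from being removed */
    long i;
    clock_t start = clock();

    for (i = 0; i < iterations; i++)
        sink = layer1(sink & 0xff);

    printf("%ld calls in %.2f s\n", 3 * iterations,
           (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}

The deeper the call chains get, as they do in object-oriented code, the more any per-call saving in the ABI is worth.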
post #423 of 452
Quote:
Originally posted by superhall
Please forgive my rudeness for just jumping in here!

I'm really excited about the new Intel microprocessors. However, some people I've talked to are worried about whether they will eventually turn into the average, pay-for-what-you-get type of non-elegant PC. What say you?

Apple could have made those while on the PowerPC. They did make them during the beige era. When Jobs came back, he decided to make something different. Having an Intel processor will not make a Mac magically morph into a Dell or HP.
post #424 of 452
Quote:
Originally posted by FrenchMac
Another optimization factor on the Intel platform will come from the fact that the ABI for Mac OS X was originally defined for the x86 version of NeXTStep. When Apple started work on Rhapsody, they rushed the definition of the PPC ABI of Mac OS X by borrowing much of it from its x86 cousin.

(The ABI is the specification and codification of API calls at the assembly-language level. It also defines the structure of executables and, more generally, the way C function calls are made in machine language.)

An optimization in the ABI will be reflected in every API call (and hidden subcalls). At the end of the day, it could represent a boost of several percent, especially with object-oriented programming, which tends to deepen call nesting.

That and 99.99% of all programs are written to be optimized for x86 processors anyway.
post #425 of 452
Quote:
Originally posted by BenRoethig
That and 99.99% of all programs are written to be optimized for x86 processors anyway.

I'm fairly sure the vast majority of programs are not optimized for any particular processor at all (except for the optimizations performed behind the programmer's back by the C/C++ compiler).
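
The exception is when a build explicitly targets one chip. A hedged sketch; the exact flag spellings vary by gcc version and vendor:

Code:
/* saxpy.c -- a loop that only gets chip-specific code if you ask for it.
 * Illustrative builds (check your gcc's docs for the exact spellings):
 *     gcc -O2 saxpy.c                         # generic, runs anywhere
 *     gcc -O2 -march=pentium4 -msse2 saxpy.c  # x86, may use SSE2
 *     gcc -O2 -mcpu=G5 -maltivec saxpy.c      # PPC, may use AltiVec
 */
void saxpy(float *y, const float *x, float a, int n)
{
    int i;

    for (i = 0; i < n; i++)
        y[i] += a * x[i];   /* vectorizable only when the target allows */
}

Most shrink-wrapped software ships with the generic build, which is RazzFazz's point.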
post #426 of 452
Quote:
Originally posted by RazzFazz
I'm fairly sure the vast majority of programs are not optimized for any particular processor at all (except for the optimizations performed behind the programmer's back by the C/C++ compiler).

I'm pretty sure that's true on the Windows side. Even here, many programs that could benefit from a high amount of optimization often don't get more than the minimum.
post #427 of 452
You might want to reconsider that thought. The issue is that integer operations may be as much as three times faster than on the old PPC hardware. For things like the Finder, and many other pieces of software that rely on a processor's integer capabilities, you should see dramatic performance increases. It is one thing that is hard to deny about the Intel hardware relative to PPC. That's from a guy who almost died upon hearing about Apple's move to Intel.

As far as the other stuff, think about how much of the "high-level" stuff actually makes use of AltiVec, which is where PPC really shines. OSes ride best on machines with good integer performance. Applications are a different story, of course, and really depend on the application's domain.

This doesn't even take into account things like dual core and hyperthreading available on Intel hardware. It is too bad we never got to see OS X running on a multi-threaded PPC processor. Even then, though, Intel would still have an advantage through the integer ALU.


Thanks
Dave



Quote:
Originally posted by Chucker
I really don't see the Finder being impacted much at all by the processor architecture.

Yes, things like multi-processor support are ported from FreeBSD to the XNU kernel; most of OS X's high-level stuff has little to do with FreeBSD at all, however.

The idea seems far-fetched to me.
post #428 of 452
Quote:
Originally posted by RazzFazz
I still doubt that much of this is really somehow "optimized for x86". AFAIK, only a small part of the FreeBSD kernel is architecture-dependent.


There is sure to be some optimization for x86, but that really isn't where the advantage is. x86 just has the performance advantage within the CPU where it matters: its integer capability and the ability to do those integer ops very fast.
Quote:
Originally posted by RazzFazz
Yes, I think this is going to be a much more important factor (although I kinda doubt they'll provide icc with OS X).

GCC is really not that bad. In fact, it could be argued that it came a long way with the 4.x.x release.
Quote:
Originally posted by RazzFazz
It's also a pity, really -- generating good PPC code should actually be easier (more registers, more orthogonal instruction set, whatnot), but I guess it's tough to beat the sheer amount of optimization that went (and still goes) into gcc's x86 backend...

I still don't buy the optimization argument. It is part of the advantage, but fast integer ALUs can and do play a big part here. Frankly, this could be seen when the G5s arrived: Apple concentrated on performance numbers that involved either AltiVec or the floating-point unit. To their credit, though, the other numbers were there for people willing to look. Not many people did, and I can remember hearing a number of not-so-kind responses when I tried to point out the biased promotion of the G5.

The G5 can be the ideal processor if it does what you want it to do. If it doesn't, it can be a rather poor performer. That has nothing to do with optimization; rather, it reflects the slow integer unit. It should be noted, though, that slowness is relative: the integer unit on the G5 doesn't do that badly for the clock rate, but all things being equal, the clock rate on the G5 hasn't really kept pace.

Dave
post #429 of 452
Quote:
Originally posted by wizard69
You might want to reconsider that thought. The issue is that integer operations may be as much as three times faster than on the old PPC hardware. For things like the Finder, and many other pieces of software that rely on a processor's integer capabilities, you should see dramatic performance increases. It is one thing that is hard to deny about the Intel hardware relative to PPC. That's from a guy who almost died upon hearing about Apple's move to Intel.

I was responding to the claim that OS X will be faster on Intel because FreeBSD is more optimized on Intel. Whether OS X will be faster on Intel because Intel has better integer performance is a wholly different matter.
post #430 of 452
Quote:
Originally posted by wizard69
There is sure to be some optimization for x86, but that really isn't where the advantage is. x86 just has the performance advantage within the CPU where it matters: its integer capability and the ability to do those integer ops very fast.

What makes you think integer performance is the most important metric? If that were the case, the G4 (which has 4 integer units -- three simple plus one complex, compared to just 2 general-purpose units on the G5, and 2 double-pumped simple and one complex on the P4) should fare a lot better than it seems to do in reality...


Quote:
I still don't buy the optimization argument. It is part of the advantage, but fast integer ALUs can and do play a big part here. Frankly, this could be seen when the G5s arrived: Apple concentrated on performance numbers that involved either AltiVec or the floating-point unit.

Well, HPC for one thing is all about Flops, as are generally a lot of the really computationally intensive applications (e.g. rendering, audio processing). Stuff like the Finder and such isn't really CPU-bound in the first place; you don't get the SBOD (spinning beach ball of death) in Finder because the CPU can't add, subtract or perform logical operations on integers fast enough -- more likely, it's caused by the software architecture (e.g. locking, FS performance, etc.), lack of I/O performance, or a combination of both rather than lack of raw integer processing power.


Quote:
The G5 can be the ideal processor if it does what you want it to do. If it doesn't, it can be a rather poor performer. That has nothing to do with optimization; rather, it reflects the slow integer unit. It should be noted, though, that slowness is relative: the integer unit on the G5 doesn't do that badly for the clock rate, but all things being equal, the clock rate on the G5 hasn't really kept pace.

The relative performance of the integer units doesn't change the fact that GCC generates far better x86 code than PPC code (or code for any other architecture).
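
One crude way to probe the CPU-bound question is to time raw ALU work against the metadata traffic a file browser generates. A sketch (mine, not a real Finder benchmark):

Code:
/* bound.c -- where does the time go: integer work vs. stat() calls?   */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/time.h>

static double now(void)            /* wall-clock seconds */
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    struct stat sb;
    volatile long acc = 0;
    long i;
    double t;

    t = now();
    for (i = 0; i < 50000000L; i++)
        acc += i ^ (i >> 3);       /* raw ALU work */
    printf("integer loop: %.2f s\n", now() - t);

    t = now();
    for (i = 0; i < 200000; i++)
        stat("/tmp", &sb);         /* kernel round-trips, like a browser's */
    printf("stat() loop:  %.2f s\n", now() - t);

    return 0;
}

If the second number barely moves on a faster CPU, that part of the workload was never integer-bound.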
post #431 of 452
Quote:
Originally posted by Rhumgod
Multi-processor support, most likely, although FreeBSD has been slower than other OSes in that regard.

You do realize that Darwin uses the xnu kernel, which in turn is a Frankenstein of the Mach and BSD kernels. Threading is handled by the Mach side of that two-headed monster.

I fail to see what FreeBSD's SMP support has to do with the price of tea in China. A lot of that work has been cleaning up driver code, which, again, Darwin does not adopt.
post #432 of 452
Quote:
Originally posted by melgross
We all know that the kernel is Mach.

We all know you're wrong.

This whole discussion of how much better OS X runs on Intels as compared to PowerPC is meaningless. It's like commenting on the relative size of watermelons and grapes.

Will it run more efficiently on Intel CPUs? Perhaps (mostly due to the moronic Mach-O ABI), but that's meaningless when you only notice overall speed. Who cares if it's 10% LESS efficient so long as the processor is many times faster?
post #433 of 452
BTW I happen to think that Apple has made a real mess of the kernel situation. I don't think I'm alone.

The core problem is Apple decided to adopt Mach (the peak of 1980s technology), then they tried to modernize it, then they bolted some ancient UNIX crap onto it (the peak of 1960s technology) so they get their System V IPC and various ancient BS. It's a Frankenstein with dueling brains and APIs.

If that wasn't enough, they adopted the Mach-O binary format and ABI instead of using PEF, XCOFF, or ELF. Supposedly they did this so they wouldn't have to rewrite some of their toolchain, but they had to rewrite a bunch of that anyway to upgrade to egcs.

I don't usually like to play Armchair Steve (I coined that term, feel free to use it), but WTF were they thinking?! Not only is it loaded with layers of greasy ancient fat; much like dioxins, they'll never be able to work it out of the system. To maintain backward compatibility, every new OS X version will have to support not only both APIs in all their vileness, it will also have to support the old ABI, including references to both Mach and BSD inherent in it. Look at the difficulty in implementing the Mach-O ABI on NetBSD, where Mach-O calls have to be emulated. If Apple wants to write a better kernel (perhaps an exokernel) they will have to emulate both sets of APIs in the ABI.

Perhaps you don't think this is a big deal because faster CPUs will run this bloated sack of crap faster. However, due to the schizophrenic nature of XNU, SMP performance sucks and this will only get worse with multi-core and hyperthreading. Why? The BSD-side isn't thread safe or re-entrant, so Apple had to create two big (Mach) mutex locks to make sure only one BSD thread accessed either the filesystem or network stack at a time.

This is enforced through "funnels" which you have to use if you want to access BSD (which handles network and filesystem). In a way, it's a clever solution because the nature in which funnels are allowed to gain access to the BSD stuff can be changed and improved, thus this can be fixed by hacking the BSD portion more...but WHY? Why keep hacking the same old crap? Why entrench yourself into a mode of hacking old crap instead of building an escape hatch for the future? Why adopt really ancient NeXT-era kernel technology when something else could have been adopted? It's not like Mach made anything easier, like writing IOKit. Apple could have grafted this 1969-era geezer-ware and IOKit on anything else, like a modern microkernel, then provide a fresh and carefully designed API for those kernel functions so it could easily be replaced.

Well anyway, I guess we're stuck until someone engineers a box around this turkey and we can move beyond the mistakes of the past, and the ancient past. Ancient 8-bit 8088 ISAs, APIs from OSes designed to run a game on a PDP-7, and ABIs old enough to grow hair in nasty places like ear holes.

Welcome to the future.
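
As a concrete picture of the funnel scheme, here is a minimal userland sketch in C (not the actual xnu code, which uses Mach locking primitives rather than pthreads): one big mutex in front of the whole BSD side, so only one thread is ever inside it no matter how many CPUs the box has.

Code:
#include <pthread.h>

static pthread_mutex_t kernel_funnel = PTHREAD_MUTEX_INITIALIZER;

/* Every entry into the non-thread-safe BSD layer funnels through here;
 * filesystem and network work is serialized behind one lock.          */
void enter_bsd(void (*bsd_work)(void *), void *arg)
{
    pthread_mutex_lock(&kernel_funnel);    /* all other CPUs now wait */
    bsd_work(arg);
    pthread_mutex_unlock(&kernel_funnel);
}

The standard escape is finer-grained locking inside the BSD layer, which is exactly the Tiger-era work mentioned below.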
post #434 of 452
Quote:
Originally posted by strobe
We all know you're wrong.

This whole discussion of how much better OS X runs on Intels as compared to PowerPC is meaningless. It's like commenting on the relative size of watermelons and grapes.

Will it run more efficiently on Intel CPUs? Perhaps (mostly due to the moronic Mach-O ABI), but that's meaningless when you only notice overall speed. Who cares if it's 10% LESS efficient so long as the processor is many times faster?

So the kernel is not Mach? Then what do you think it is?

And don't say that it is so modified that it's completely different.
post #435 of 452
Quote:
Originally posted by melgross
So the kernel is not Mach? Then what do you think it is?

It is so vastly different from Mach that it's unfair to call it that. It takes quite a few concepts from Mach 3.0, but when you really get down to it, it's not all that similar.

Quote:
And don't say that it is so modified that it's completely different.

That's exactly what it is.
post #436 of 452
Quote:
Originally posted by Chucker
It is so vastly different from Mach that it's unfair to call it that. It takes quite a few concepts from Mach 3.0, but when you really get down to it, it's not all that similar.

That's exactly what it is.

Look, over time, everything is modified, added to and improved (we hope). Sometimes that means that a product, or code, is vastly different from what it started out being. That doesn't mean that the lineage isn't there, as well as the continuation of the name.

They can use version 1 or version 37. We all know that it's changed.

These days RISC has CISC, CISC has RISC, multiple cores, etc. That doesn't mean that the chips can't be traced backwards, and that the naming scheme can't be maintained, if the manufacturer wants it to be.

Today's cars aren't the same ones made 50 years ago. They may be better or worse. But a Caddie is still a Caddie, whatever you may think of it.

These are games of semantics.
post #437 of 452
Quote:
Originally posted by strobe
BTW I happen to think that Apple has made a real mess of the kernel situation. I don't think I'm alone.

The core problem is Apple decided to adopt Mach (the peak of 1980s technology), then they tried to modernize it, then they bolted some ancient UNIX crap onto it (the peak of 1960s technology) so they get their System V IPC and various ancient BS. It's a Frankenstein with dueling brains and APIs.

If that wasn't enough, they adopted the Mach-O binary format and ABI instead of using PEF, XCOFF, or ELF. Supposedly they did this so they wouldn't have to rewrite some of their toolchain, but they had to rewrite a bunch of that anyway to upgrade to egcs.

I don't usually like to play Armchair Steve (I coined that term, feel free to use it), but WTF were they thinking?! Not only is it loaded with layers of greasy ancient fat; much like dioxins, they'll never be able to work it out of the system. To maintain backward compatibility, every new OS X version will have to support not only both APIs in all their vileness, it will also have to support the old ABI, including references to both Mach and BSD inherent in it. Look at the difficulty in implementing the Mach-O ABI on NetBSD, where Mach-O calls have to be emulated. If Apple wants to write a better kernel (perhaps an exokernel) they will have to emulate both sets of APIs in the ABI.

Perhaps you don't think this is a big deal because faster CPUs will run this bloated sack of crap faster. However, due to the schizophrenic nature of XNU, SMP performance sucks and this will only get worse with multi-core and hyperthreading. Why? The BSD-side isn't thread safe or re-entrant, so Apple had to create two big (Mach) mutex locks to make sure only one BSD thread accessed either the filesystem or network stack at a time.

This is enforced through "funnels" which you have to use if you want to access BSD (which handles network and filesystem). In a way, it's a clever solution because the nature in which funnels are allowed to gain access to the BSD stuff can be changed and improved, thus this can be fixed by hacking the BSD portion more...but WHY? Why keep hacking the same old crap? Why entrench yourself into a mode of hacking old crap instead of building an escape hatch for the future? Why adopt really ancient NeXT-era kernel technology when something else could have been adopted? It's not like Mach made anything easier, like writing IOKit. Apple could have grafted this 1969-era geezer-ware and IOKit on anything else, like a modern microkernel, then provide a fresh and carefully designed API for those kernel functions so it could easily be replaced.

Well anyway, I guess we're stuck until someone engineers a box around this turkey and we can move beyond the mistakes of the past, and the ancient past. Ancient 8-bit 8088 ISAs, APIs from OSes designed to run a game on a PDP-7, and ABIs old enough to grow hair in nasty places like ear holes.

Welcome to the future.

This is what may be called a trenchant analysis. What do you propose for a good modern kernel now? Rewriting the OS X kernel, or using the FreeBSD kernel? It doesn't sound like you favour BSD either.
post #438 of 452
Quote:
Originally posted by strobe
Perhaps you don't think this is a big deal because faster CPUs will run this bloated sack of crap faster. However, due to the schizophrenic nature of XNU, SMP performance sucks and this will only get worse with multi-core and hyperthreading. Why? The BSD-side isn't thread safe or re-entrant, so Apple had to create two big (Mach) mutex locks to make sure only one BSD thread accessed either the filesystem or network stack at a time.

This is enforced through "funnels" which you have to use if you want to access BSD (which handles network and filesystem). In a way, it's a clever solution because the nature in which funnels are allowed to gain access to the BSD stuff can be changed and improved, thus this can be fixed by hacking the BSD portion more...but WHY?

Wasn't the locking situation supposed to have been substantially improved in Tiger? IIRC, the situation you describe above with only two huge-ass funnels applies to Panther and older.
post #439 of 452
Quote:
Originally posted by RazzFazz
What makes you think integer performance is the most important metric?
For the purposes of this discussion, it is one of the most important components with regard to OS functions. Very few things that the OS does can be accelerated with a vector unit or FPU. That is not to say, though, that other units such as DMA units aren't important to a machine's performance.
Quote:
If that were the case, the G4 (which has 4 integer units -- three simple plus one complex, compared to just 2 general-purpose units on the G5, and 2 double-pumped simple and one complex on the P4) should fare a lot better than it seems to do in reality...

Actually, the G4 used to fare pretty well until it gave up on the clock-rate race. We can look at how it compared to the same sort of hardware running at the same clock rate, such as an early P4; clearly it would seem like the G4 should do better when evenly matched. Compilers are certainly an issue, but I still contend that the Intel hardware just handles integer workloads better.
Quote:
Originally posted by RazzFazz
Well, HPC for one thing is all about Flops, as are generally a lot of the really computationally intensive applications (e.g. rendering, audio processing).

Well, we aren't talking about HPC; we were talking about OS X's feel on Intel hardware, specifically that it feels much faster.

Even then, HPC isn't always about flops; it really depends on the application's domain.
Quote:
Originally posted by RazzFazz
Stuff like the Finder and such isn't really CPU-bound in the first place; you don't get the SBOD (spinning beach ball of death) in Finder because the CPU can't add, subtract or perform logical operations on integers fast enough -- more likely, it's caused by the software architecture (e.g. locking, FS performance, etc.), lack of I/O performance, or a combination of both rather than lack of raw integer processing power.

I think you are gravely mistaken with respect to the above statements. Certainly the system as a whole comes into play, but that does not excuse the reality that the Finder has an incredible amount of data to deal with and present to the user. In any event, all one has to do is slam a faster processor into a machine running Linux, say, to see how much an increase in processor performance helps the file browser there.

The Mac is much the same: you have a code base that can see considerable improvements from a speedup of the processor. That isn't to imply that the system itself (hardware & software) can't be improved, just that much of it is CPU-bound at the moment.

Taken another way, people who upgrade their machine's processor to get better overall performance are all wet, according to your reasoning. Clearly this isn't the case, as the upgrade market for CPUs is still alive and well.
Quote:
Originally posted by RazzFazz
The relative performance of the integer units doesn't change the fact that GCC generates far better x86 code than PPC code (or code for any other architecture).

The fact that GCC might generate better code for x86 has nothing to do with the real issues here. It's the hardware! More so, the G4 depends to a greater extent on the compiler generating good code than the latest x86 hardware does. x86 just runs sloppy integer code better.

Dave
post #440 of 452
The G4 was good until they stopped developing it. For all intents and purposes, it's the exact same processor it was four years ago.