I guarantee we will see at least 3GHz in 2005... when is a better question. Even if they put out a 2.8, I'm sure they'll make an update in the middle of the year if they have to.
By that time we'll be at a different stage of processor anyway... the 975, correct? This 970FX won't be around much longer in the Power Mac... I hope.
----------------------
Here's to wishing for a new keyboard and mouse on the next Power Mac. This raised one is killing my wrists.
I'm waiting for a 3GHz dual dual-core Antares. That's my dream machine. With 'Tiger' pre-installed. If Apple could deliver an announcement on this by March 2005? They're going to break the 300K tower barrier.
And by that time you will be holding out for eCLipz. Maybe we should just set up PayPal donation accounts to buy Macs for you and hmurchison.
I was so incensed by Apple's performance-lagging implementations of PC graphics cards that I wrote to MAXON support (the Cinebench developers) asking what on earth was causing such a disparity between the same card on Mac and PC.
Quote:
MAXON Support_UK writes:
>Is Apple's OpenGL implementation 'soft' or 'slow'?
Greetings **********,
From what I've seen, Mac's OpenGL support is just lagging. We're doing the best we can with the software as far as using as much of the 'power' available; however, if the water's dripping, the bucket's going to fill very slowly. Apple has to beef up the support for the cards before we can really utilize it within CINEMA.
Regards,
MAXON Technical Support
That's what the third-party developer thinks.
'Beef up' support for the very cards they ship.
The same cards can be over 100% faster on the PC side.
Roll on OpenGL 2 and 'Tiger', and maybe a Core 3D API.
Right now it looks like a C+. Not bad, but they can do way better.
Very interesting. As I stated before, CINEMA was originally written for Mac OS. However, that version never shipped, and the Wintel version shipped first. I don't think it's MAXON's fault, for a number of reasons...
Reason 1, and most important: other apps lag too (games, Maya, etc.).
I hope Apple has the bugs worked out by Tiger and OpenGL 2.0. Many PC cards already support OpenGL 2.0 (my 5900XT doesn't).
I can't wait until OpenGL 2.0 and Tiger... I have a feeling things are going to change BIG TIME!
I'll leave it at that. I have a feeling it has to do with porting OpenGL... as Amorph has stated before, porting code from an optimized system is hard, since a lot of that optimization is system-specific.
Renders faster in what program? In every test I have seen, the Opteron 150 smashes the Xeon. Example:
http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=4
Looks to me like the Opteron 150 is significantly ahead!
Where's your proof?
Look at all of those benchmarks. The Xeon doesn't come close to the Opteron in any of those tests... you can't convince me without proof that the Xeon renders faster.
There is a lot of documentation and general discussion among 3D pros on rendering with these two processors that I read frequently. I did a quick search and grabbed a few articles of interest, including one (split in two) that describes in detail how the two perform under highly demanding operations.
Quote:
In response to: Poster: maninflash
Subject: Re: Opteron vs. Xeon: Render Times @ highend3d.com
I did these render tests last night, using Mental Ray and the scene from ZooRender.com at this link: http://www.zoorender.com/benchmark/Benchmark_Mental.zip
Both systems are running Windows XP with Maya 6.
00:41:75 -- Dual Xeon 3.2GHz, SuperMicro X5DA8 mainboard, 2GB RAM, with Hyper-Threading
00:47:68 -- Dual Opteron 248, Tyan Thunder mainboard, 2GB RAM
00:50:88 -- Dual Xeon 3.2GHz, SuperMicro X5DA8 mainboard, 2GB RAM, without Hyper-Threading
The hard drive for both systems was a Seagate Cheetah 36GB 15K, the primary boot drive with XP and Maya 6 installed. RAM for the Xeon was two Kingston 1GB DDR266 ECC modules; for the Opteron, two Corsair 1GB DDR400 modules.
If anyone has a dual Opteron 250, it would be interesting to see if it can beat the Xeon's time.
Michael
That is a significant difference on a pair of solo work machines, but it's not terribly convincing: it doesn't go into a detailed analysis of each processor's pros and cons, and it lacks any description of the faults of one architecture or the superiority of the other.
Here is a link to a Tom's Hardware article that puts the two processors in a head-to-head shootout. The Opteron is the overall champ, but there is a 3D rendering comparison in the same article (which is a heavy load) that shows the Xeon with much better render times.
http://www.tomshardware.com/cpu/2003...ml#3drendering
More recent testing reveals some significant surprises about these two processors.
Workstation showdown: Xeon vs. Opteron
Intel's Xeon-based workstations are much faster than workstations based on AMD's Opteron when it comes to heavy multitasking
http://infoworld.com/article/04/08/1...station_1.html
Quote:
What we found was eye-opening. The Opteron machine outperformed the Xeons when lightly loaded with minimal multitasking, but once the real work started, the Opteron stopped. It was effectively shut down by the same multitasking load that the two Xeons performed with ease. In the clean environment, it still performed at less than half the speed of the older and allegedly less-capable Xeons.
This is just a further description of their testing process, but it reveals some interesting data about the two x86 competitors.
How we put the workstations under pressure
Quote:
Enter the workstation: designed for concurrent multiprocessing, workstations are rugged and reliable, with multiple, symmetric CPUs and gobs of memory to power through even the toughest workloads. You need to really load these machines down before their relative merits begin to surface, and that means generating concurrent workloads that exercise a variety of OS and application subsystems.
For this review, we did just that. I utilized one of my favorite test tools, Clarity Studio from CSA Research. Using a combination of parallel workloads -- client/server database (specifically, ActiveX Data Objects), workflow (MAPI), Windows Media playback, and Windows Media encoding -- I generated a hailstorm of CPU and memory activity.
I then scaled these workloads on each system, increasing the number of concurrent tasks as well as their complexity, all the while tracking the systems' performance and health through various internal and external metrics counters.
The net result? Despite a great deal of hype, AMD's 2.2GHz Opteron 248 CPU -- as embodied in the IBM IntelliStation A Pro workstation -- doesn't fare well under heavy workloads. When compared head-to-head with last year's Intel Xeon platform, a 3.2GHz/533MHz Front Side Bus model represented here by the MPC NetFrame 600, the Opteron fades as the workload level increases.
In fact, across the range of tests, the Opteron system took an average of 15 percent longer to complete the tasks than the Xeon. In some cases, most notably client/server workflow against a MAPI message store, the Opteron took over 30 percent longer.
An examination of OS metrics data collected by Clarity Studio showed that the Opteron was definitely struggling to juggle all those threads. One metric in particular shed additional light on the results. The Peak CPU Saturation Index, which is calculated from a sampling of the Processor Queue Length counter as exposed by the Windows Performance Data Helper libraries, showed that, on average, the Opteron system had 16 percent more waiting threads in its queue -- a clear indication that the system was in fact CPU-bound and running out of processor bandwidth.
My interpretation: Hyper-threading support on the Xeon allowed it to continue to scale thanks to its ability to execute more than one instruction at a time. Once again, Intel's simultaneous multitasking technology -- where underutilized pipeline resources are shared to create a second, virtual processor image -- is looking like an ace in the hole for the company's workstation strategy.
The story gets worse for AMD when you factor in the newest Xeon processors from Intel. Preliminary results from two systems based on the new 800MHz FSB Xeon show the aforementioned average performance gap widening to nearly 50 percent (the MAPI workload, in particular, is now running 115 percent faster than Opteron), with CPU Saturation now 30 percent higher for Opteron when compared to the next-generation Xeon CPU (watch for our expanded coverage in an upcoming issue).
It compares the Opteron 150 to the Xeon 3.6GHz... the Opteron was clearly faster. I will take AnandTech's word on benchmarking over just about anybody's. If you paid attention to the PC world, you would too... those benchmarks are real. The links you sent me are comparing OLD processors.
Opstone:
Quote:
Since our use of Ubench in the previous article clearly infuriated many people, we are going to kick that benchmark to the side for the time being until we can decide on a better way to implement it.
Even AnandTech later disregarded that test.
http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=6
Aren't the Opteron 248s and 250s newer and faster than the 150s?
Also, aren't those single-processor setups? What's the point of having a Xeon if it's not a dual, especially for rendering? I don't have as much faith in AnandTech's credibility if that's what they are comparing. Sorry.
They could only get one 3.6GHz Xeon, so they had to test single-proc. Dual-proc Opterons are faster than the Xeons in every setup I have seen. Even the quad setups are faster.
Either way, you can look at it this way (single proc): the Opteron 150 is a lot faster than the G5, while the Xeon and G5 are very close.
I give up... I'm too tired to care right now. My brain is numb from programming all day. I'll try to dig up some URLs tomorrow that illustrate how fast the 150 is.
And no, the Opteron 248s aren't faster. They use the Hammer core instead of the SledgeHammer core... SledgeHammer is newer. The 250 is faster yet.
Sorry, this post I just did is worthless. I'm too tired.
Too much of this thread has probably been spent on OpenGL already, but I have had at least one bad experience with Apple's GL implementation (on 10.2.something; I'm not sure if they've fixed it since). Basically, one call that is used to update a small portion of a texture resulted in a full upload of the entire texture, causing a massive slowdown over the same code on the PC. Apple also takes a very different approach to texture management: supposedly you're meant to let it figure out which textures should be loaded onto the card and which shouldn't, instead of "micromanaging" your resources. I think this is partly because of Quartz Extreme; no one program is going to have exclusive use of the 3D card's resources.
I'm glad to see this thread is starting to head back on topic rather than re-hashing discussions of x86 processors. We have our own processors to think about, let alone theirs. I was about to re-direct it back towards "what will be the new specs for the next PM line".
But now that you guys are mentioning graphics programming: I was over at NVIDIA's site, all excited about the new GeForce 6 series SDK released today, and lo and behold, it's an .exe. WTF? What a slap in the face that was.
Well, I know that in both the Cinema and LightWave forums, OpenGL on the Mac is considered a disappointment. I'm not an expert; that is just what I've observed. I don't know about Maya. If I were to make a diagnosis, I would say that right now Apple is all about clever and nifty solutions, not about getting down to it and doing the hard work. I suspect OpenGL driver optimization would be tough work with little immediate reward.
My opinion is that OS X should be the unquestioned premier platform for everything OpenGL. But it is just not.