
What will be the new specs for the next PM line? - Page 3

post #81 of 282
Quote:
Originally posted by onlooker
I can understand that, and why lemon insano is freakin', but wasn't Apple the first to use OGL outside of SGI? I think they should have some serious experience with OGL. We're also forgetting the guys from Raycer Graphics. Those guys were supposed to be the OGL devs from heaven. Didn't they design the Quartz engine?

I doubt they were the first after SGI, but they were certainly early adopters; they were the first PC vendor to ship OpenGL AFAIK.

However, as I mentioned above, a correct and complete OpenGL implementation that is designed for accuracy is going to lose speed benchmarks to a limited implementation optimized for speed at screen resolution, with the difficult routines left off. But if you're actually doing 3D work, you will prefer the former implementation no matter how much slower it is rendering the "Quake subset," because it is accurate and complete.
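To make that accuracy-versus-speed tradeoff concrete, here is a minimal C sketch using only core OpenGL 1.x calls; the glHint() targets and modes are real, and whether a driver actually honours GL_NICEST is exactly the kind of thing that separates a complete implementation from one tuned for the "Quake subset." Context creation is omitted.

Code:
#include <OpenGL/gl.h>   /* <GL/gl.h> on non-Apple platforms */

/* Ask the driver for the most accurate interpolation and smoothing it offers.
 * The spec allows an implementation to ignore these hints entirely. */
static void request_accurate_rendering(void)
{
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
}

/* The benchmark-friendly path: favour raw speed wherever the driver allows. */
static void request_fast_rendering(void)
{
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    glHint(GL_LINE_SMOOTH_HINT, GL_FASTEST);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_FASTEST);
}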

As for the sources of the information I provided, I can only say that it's all public, and it's been gleaned over a few years of reading information on the subject all over the place. The truth is out there.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #82 of 282
Quote:
Originally posted by onlooker
I can understand that, and why lemon insano is freakin', but wasn't Apple the first to use OGL outside of SGI? I think they should have some serious experience with OGL. We're also forgetting the guys from Raycer Graphics. Those guys were supposed to be the OGL devs from heaven. Didn't they design the Quartz engine?

I think that ATI thing is an ATI issue with Macs, but the Cinebench beta is an awful test.

Why don't you tell us about a better test, onlooker.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #83 of 282
I'm not happy about this at all.

IF the 6800 Ultra supports full Pro 3D drivers then surely Apple would mention that, and it would be marketed as a Quadro or 'pro' card.

or...

It would be quite easy to re-badge and activate via OS X as a Quadro card and driver.

Apple can't be 2300 Open GL points behind the equivalent PC card. Hell, Nvidia cards are better at Open GL than ATI cards, aren't they? Don't Nvidia have ex-SGI staff pumping out their excellent GL drivers?

Surely Apple could draw on that to have an excellent GL implementation..?

It's not the CPU. Dual 2.5s see off everything(!) in rendering bar ridiculously overclocked Xeons and quad Xeons...and the G5 is often giving away up to 1.4 gig(!) on each CPU.

It's not the physics test either. Here the dual 2.5 gig and 1.25 gig bus thrape the opposition AGAIN!

But how, HOW can the 'same' card be over 100% slower?

1. Bad Apple GL drivers?
(When people have said Apple has solid, if not outstanding, GL support?)
2. Panther is slower than Windows XP at Open GL.
(Do we seriously believe that, considering how snail-like Explorer is versus Safari?)

Software is WHAT Apple does. They are just about the best. Even IF they are slower, they are always stylish, and there's no way Apple is over 100% slower than the opposition in anything..? So I don't see the above, in theory, being the bugbear.

3. Cinebench. Beta. PC optimised.
(Okay. But can that explain the over-100% crippling of a Mac graphics card which is essentially the same as the PC card? Especially when the CPU and bandwidth do excellently well in the other tests...UNLESS, UNLESS the GL acceleration is the last bit of the bench that is not optimised...can anybody get the email address of the Maxon/Cinebench test authors so I can email them about this? I feel so strongly about this. This issue of Mac graphics cards being slower than PC counterparts has p*********** me off for years...)
4. Games. It's been noted, consistently, that Mac graphics cards do not perform as well as the PC equivalents on 'hot' games.
(Well, for a long while, the 'PC ports' have been CPU bound on the Mac, with the lame G4 trying to power the ATI cards onward. BUT, SURELY with the G5, at 2.5 gig (easily the equivalent of a 3-3.2 gig Pentium 4!), and there's two of them, on 1.25 gig bandwidth, using an industry standard AGP 8x pipe...SURELY this is now a NON-ISSUE..?!?!)

What is causing the problem?

IF it IS a software problem related to Apple giving a full GL implementation, which slows Apple's GL down or makes it appear slower...then...erm...this IS a 3D test and Apple's GL should shine in this very bench....

...perhaps they should split the GL calls into simple implementation for games and precise calls for 3D pro modelling apps.

However, the ATI x800 XT card presumably has simple GL? i.e. not full Fire or Quadro drivers? So why, WHY does it do so well in a 3D GL stress test? Modern consumer cards DO have fuller GL implementations than in the past, where cards such as the Voodoo had 'mini GL' for games. But times have moved on since then. The gap has closed significantly. The consumer cards are now ridiculously powerful and have fuller GL drivers.

So much so that Nvidia once proudly posted benches on their site a while back showing how one of their GeForce cards blew a Wildcat out of the water. That was a few years back though.
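A minimal sketch of how an application can see what any given driver actually exposes: glGetString() is core OpenGL and the ARB extension names below are real, but which ones show up depends entirely on the card and driver, which is the whole consumer-versus-pro question. Assumes a current GL context already exists.

Code:
#include <OpenGL/gl.h>
#include <stdio.h>
#include <string.h>

static void dump_gl_capabilities(void)
{
    /* All strings are provided by the driver for the current context. */
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *version  = (const char *)glGetString(GL_VERSION);
    const char *exts     = (const char *)glGetString(GL_EXTENSIONS);

    printf("Renderer: %s\nGL version: %s\n", renderer, version);

    /* A current consumer driver will usually advertise the ARB shader
     * extensions; an old games-oriented "mini GL" driver will not. */
    printf("ARB vertex programs:   %s\n",
           strstr(exts, "GL_ARB_vertex_program")   ? "yes" : "no");
    printf("ARB fragment programs: %s\n",
           strstr(exts, "GL_ARB_fragment_program") ? "yes" : "no");
}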

Are there any benches on Quadro vs 6800 Ultra?
Mac vs PC ATI 9800 XTs?

Accelerate your Mac? Do they have any dirt to dish on the Mac's dirty secret?

Nothing to do with fast writes not being supported on the Mac? I thought the announcement of the Nvidia GeForce 2MX cleared the fast-writes hurdle way back...

Still scratching my head...I can't see what it is...

In games, Macs used to be CPU bound.
Are G5 rigs STILL cpu bound? Do G5s now achieve parity with the same card in games?

If not, we know it's not just the Cinebench beta that is fishy.

And nobody said it was an Endian issue.

Where's the bottleneck?

Lemon Bon Bon \
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
Reply
post #84 of 282
I'm thinking a bit that Apple is focusing their efforts on getting OpenGL 2.0 into Tiger. The current OpenGL isn't bad, but it's still not as speedy as it could be. OpenGL is important to Apple, particularly the new extensions and shader support. Tiger should focus on the implementation more so than Jaguar or Panther did, IMO.

I'm almost wondering if Apple does indeed have plans to add the Quadro line but is simply going to wait until they can support PCI Express. If I were Apple I'd do this. It wouldn't hurt sales of PCI Express Powermacs, and I could center my driver code around PCI Express, not that it requires a lot of changes.

Rumors had it that Apple and 3Dlabs were talking about the Wildcats, so we know Apple is indeed looking for bigger iron. Well, Apple will have more incentive after they buy Luxology.

It's amazing, but we're still talking first-generation hardware with today's Macs. I'm ready to see the successor and what surprising stuff may show up. I have faith in Apple that they are going to beef up the hardware soon. They made a nice splash at Siggraph and will look to extend that goodwill in the future.
He's a mod so he has a few extra vBulletin privileges. That doesn't mean he should stop posting or should start acting like Digital Jesus.
- SolipsismX
Reply
post #85 of 282
Good post.

My thinking is Tiger is an opportunity to beef up their software and hardware specs for GL.

Open GL 2. A faster speed. Part of Tiger. Antares.

Cards. Quadro. Geforce whatevers also by then. 512 meg cards maybe a reality by then.

And what of a stackable SLI option for Mac Nvidia customers?

IF Apple is going to play with the workstation crowd...then surely 3D is the next natural step for a company with workstations in a closely affiliated Steve Jobs company...

Wouldn't they (Pixar) want Quadros too?

As for Luxology?

Why not buy both them and Newtek?

Newtek are deeply engrained in broadcast graphics.

It's the final piece of Apple's workstation line of software?

That or XSI version 4?

And Maya was just spun off solo for a reported $26 million? Surely not? Apple would have snapped 'em up at this price...?

Either way, Mac graphic cards have got to get better than those GL benches. It's...surprisingly, the weakest link in an otherwise great workstation.

Antares can't come soon enough. I want my 3 gig plus Mac.

And I want it now.

Next Spring seems a long time to wait. And they'd better get quick to it with a Wild Cat or Quadro...

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
Reply
post #86 of 282
Accel-KKR purchased Alias for $56 million.

Perhaps a bit spendy for Apple. Most of Alias' tools are PC. I'm sure Apple doesn't have a problem with a few PC apps, but having too many means more work for them. Luxology wouldn't have this problem: just OS X and Windows, and two products max. Luxology and Newtek would be like inviting the Hatfields and the McCoys to the same party...ouch.

Apple needs some 3D brain power in the company. They get that cheaply with Luxology and Stuart Ferguson, Allen Hastings and Brad Peebler. By OSX 10.5 Apple needs a Core3D API for developers to start integrating 3D into applications.
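Purely as an illustration of the Core3D idea, and nothing more: no such API exists, and every type and function below is hypothetical. A scene-graph-style C interface in the spirit of the other Core frameworks might look something like this sketch.

Code:
#include <stddef.h>

/* Hypothetical opaque handles -- none of these types exist anywhere. */
typedef struct C3DScene    C3DScene;     /* scene graph              */
typedef struct C3DMesh     C3DMesh;      /* geometry                 */
typedef struct C3DRenderer C3DRenderer;  /* hardware-backed renderer */

/* An app builds a scene and hands it off; the OS decides how to map it
 * onto whatever GPU and driver are present. */
C3DScene    *C3DSceneCreate(void);
C3DMesh     *C3DMeshCreateFromFile(const char *path);
void         C3DSceneAddMesh(C3DScene *scene, C3DMesh *mesh);
C3DRenderer *C3DRendererCreateForDisplay(int displayID);
void         C3DRendererDrawScene(C3DRenderer *renderer, const C3DScene *scene);
void         C3DSceneRelease(C3DScene *scene);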

Powermacs with PCI Express should support Nvidia SLI. All you need is a 16x PCIe slot and an 8x. The space and power requirements seem to be the bigger potential issues.

I think the nextgen Powermacs will have either two optical drives or two additional drive bays. My money is on the two drive bays.

I'd like to see dual gigabit ports as well with trunking. Gigabit is now ready for consumer computers so the next step for workstations is trunked gigabit and multiple ports.

And last but not least, how about a fanless power supply to cut noise down drastically.
He's a mod so he has a few extra vBulletin privileges. That doesn't mean he should stop posting or should start acting like Digital Jesus.
- SolipsismX
Reply
post #87 of 282
Quote:
Originally posted by hmurchison
I'd like to see dual gigabit ports as well with trunking. Gigabit is now ready for consumer computers so the next step for workstations is trunked gigabit and multiple ports.

I can see dual gigabit, but trunking is a switch level thing. Why incorporate that into the computer?
...we have assumed control
Reply
post #88 of 282
Quote:
Originally posted by Rhumgod
I can see dual gigabit, but trunking is a switch level thing. Why incorporate that into the computer?

Saves using a PCI slot later to add trunking features, and creates product differentiation between the consumer lines, which should have Gigabit standard IMO at $1k and above, and the workstations, which should have dual Gigabit, thus enabling trunking capability with the right switch and software (Tiger Server). I'm looking long term at the probability of fast switches and $499 Xsan clients linked via multiple trunked Gigabit connections. Cheaper than Fibre Channel either way you look at it.

Granted, it is a bit frivolous, but sometimes the high end needs to be impractical in ways.
He's a mod so he has a few extra vBulletin privileges. That doesn't mean he should stop posting or should start acting like Digital Jesus.
- SolipsismX
Reply
post #89 of 282
Quote:
Core3D API

Dude, now you're talking my language! That is the best freaking idea I've ever heard.


Quote:
Originally posted by hmurchison


Apple needs some 3D brain power in the company. They get that cheaply with Luxology and Stuart Ferguson, Allen Hastings and Brad Peebler. By OSX 10.5 Apple needs a Core3D API for developers to start integrating 3D into applications.

Powermacs with PCI Express should support Nvidia SLI. All you need is a 16x PCIe slot and an 8x. The space and power requirements seem to be the bigger potential issues.

I think the nextgen Powermacs will have either two optical drives or two additional drive bays. My money is on the two drive bays.

I'd like to see dual gigabit ports as well with trunking. Gigabit is now ready for consumer computers so the next step for workstations is trunked gigabit and multiple ports.

And last but not least, how about a fanless power supply to cut noise down drastically.

I just love this post.


Apple buying Luxology would be a wet dream come true. And that has actually happened to me before.
onlooker
Registered User

Join Date: Dec 2001
Location: parts unknown




http://www.apple.com/feedback/macpro.html
Reply
post #90 of 282
Lemon Bon Bon,

I saw you comparing the Xeon to G5s earlier. People keep doing this and I do NOT know why. The top PC procs are the Opteron 150 and 200. The 150 smoked the Xeon 3.6GHz in all tests done by Anandtech; it averaged about twice the speed. The Opteron 150 is $200-300 cheaper than the Xeon 3.6 (when it comes out) and it's faster. You can buy quad-proc mobos for the Opteron as well. I just wanted to point out that you should be comparing the G5 with the Opteron (also a 64-bit proc) instead of the Xeon (a 32-bit proc). And yes, the Opteron is ... don't say it, emig ... gulp ... faster than the G5 in all tests.

IBM is doing a good job in my opinion. It is on par with the Xeon, but unfortunately AMD has us by the nuts.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #91 of 282
Quote:
Originally posted by emig647
Lemon Bon Bon,

I saw you comparing the Xeon to G5s earlier. People keep doing this and I do NOT know why. The top PC procs are the Opteron 150 and 200. The 150 smoked the Xeon 3.6GHz in all tests done by Anandtech; it averaged about twice the speed. The Opteron 150 is $200-300 cheaper than the Xeon 3.6 (when it comes out) and it's faster. You can buy quad-proc mobos for the Opteron as well. I just wanted to point out that you should be comparing the G5 with the Opteron (also a 64-bit proc) instead of the Xeon (a 32-bit proc). And yes, the Opteron is ... don't say it, emig ... gulp ... faster than the G5 in all tests.

IBM is doing a good job in my opinion. It is on par with the Xeon, but unfortunately AMD has us by the nuts.

Xeon still renders faster though.
onlooker
Registered User

Join Date: Dec 2001
Location: parts unknown




http://www.apple.com/feedback/macpro.html
Reply
post #92 of 282
Renders faster in what program? In every test I have seen, the Opteron 150 smashes the Xeon.

Example:
http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=4

Looks to me like the Opteron 150 is significantly ahead!

Where's your proof?

Look at all of those benchmarks. The Xeon doesn't come close to the Opteron in any of those tests... you can't convince me without proof that the Xeon renders faster.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #93 of 282
Quote:
Originally posted by Lemon Bon Bon
2000(!) Open GL points and over behind an x800XT card on the PC.

The Mac's best card is projected at 1,700 on a DUAL 2.5 gig?

WHAT THE HELL IS GOING ON!?!?!?

Apart from all the factors already discussed here, maybe this discussion would serve as a hint as to what else could happen.
post #94 of 282
PB,

ROFLMAO!!!!!!

I'm sure that has a pretty big impact. And as you noticed, a lot of those machines in Cinebench were overclocked... if someone has the wisdom to overclock a PC CPU, I'm sure they are smart enough to use Coolbits or PowerStrip and overclock their GPUs also.

I'm kind of insulted because I bought a 9600XT. If they did that to the 9600XT, I'm sure they did it to the 9800XT too. I wish someone could verify those clock speeds. Really, the 6800 Ultra is the only card that was worth upgrading to. If I were a gamer or a diehard graphics nut I would have got one and waited. Oh well. That's what PCs are for.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #95 of 282
Quote:
Originally posted by Lemon Bon Bon
I'm not happy about this at all.

IF the 6800 Ultra supports full Pro 3D drivers then surely Apple would mention that, and it would be marketed as a Quadro or 'pro' card.

It's up to NVIDIA to say that it's a GeForce or a Quadro. If they don't ship Apple the firmware, test and approve the driver + OpenGL + card + hardware with the major applications, etc., they're not going to call it a Quadro. If Apple tries, they'll jump down Apple's throat.

Quote:
Apple can't be 2300 Open GL points behind the equivalent PC card. Hell, Nvidia cards are better at Open GL than ATI cards, aren't they? Don't Nvidia have ex-SGI staff pumping out their excellent GL drivers?

Apple's NVIDIA drivers are ported PC code (and they started out really shaky, unlike NVIDIA's rock-solid PC drivers), and NVIDIA's SGI veterans are probably working on the full OpenGL implementation, which Apple doesn't use because they have their own.

Quote:
1. Bad Apple GL drivers?
(When people have said Apple has solid, if not outstanding, GL support?)

This is certainly possible. OpenGL is absolutely immense, so optimizing it is no mean feat. In the meantime, as I've been saying, completeness and accuracy trump speed in the pro space.

As for Cinebench, I'd just write that off and look at the performance of 3D apps in actual use. I have a sneaking suspicion that it's not unlike cross-platform Premiere benchmarking...

Quote:
4. Games. It's been noted, consistently, that Mac graphics cards do not perform as well as the PC equivalents on 'hot' games. (Well, for a long while, the 'PC ports' have been CPU bound on the Mac, with the lame G4 trying to power the ATI cards onward. BUT, SURELY with the G5, at 2.5 gig (easily the equivalent of a 3-3.2 gig Pentium 4!), and there's two of them, on 1.25 gig bandwidth, using an industry standard AGP 8x pipe...SURELY this is now a NON-ISSUE..?!?!)

Un-optimized code — or worse, code optimized for a completely different platform — can squander any potential hardware advantage completely. Given that the Mac games market is not generally seen as lucrative by the larger publishers, they don't spend a lot of money on the port, and so you get code optimized for a Pentium 4 running on something it was never designed to run on, and maybe you get lucky and it's not a big deal, and maybe you get hammered. A lot of the tricks for speeding up PPCs involve cache hinting instructions and other processor- and ISA-specific tricks; the PC game code might use the P4's analogs, and the port might strip them out and replace them with nothing, for want of time to do proper profiling and optimizing and (if necessary) refactoring.
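As a concrete sketch of the kind of hint being described (not taken from any particular game): __builtin_prefetch() is a real GCC builtin that compiles down to dcbt-style cache-touch instructions on PowerPC and prefetch instructions on x86. A port that simply strips the hints instead of re-tuning them gives up exactly this benefit. The 64-element lookahead below is illustrative, not tuned.

Code:
/* Sum an array while hinting that data further along will be needed soon. */
static float sum_array(const float *data, long n)
{
    float sum = 0.0f;
    for (long i = 0; i < n; i++) {
        if (i + 64 < n)
            __builtin_prefetch(&data[i + 64], 0 /* read */, 1 /* low temporal locality */);
        sum += data[i];
    }
    return sum;
}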

Quote:
However, the ATI x800 XT card presumably has simple GL? i.e. not full Fire or Quadro drivers? So why, WHY does it do so well in a 3D GL stress test?

Partly because Carmack has been breathing down their necks, and partly because DirectX is getting remarkably full-fledged itself, to the point where the OpenGL ARB has basically released 80% of OpenGL 2.0 already (as extensions) just to keep up.

It used to be true that pro cards were completely different architectures. Now it isn't. Then it used to be true that they were faster. Now it isn't. Now it's true that their OpenGL implementations are more complete. Eventually, that won't be true either. The pro 3D card market is going into the same death spiral that the pro workstation market ($10k+ UNIX desktops) has been in. Nevertheless, for now, it's still true that you are guaranteed a complete OpenGL with the pro cards.

Otherwise, we wouldn't be having this discussion, would we? onlooker and others would be happily working on their high-end consumer GPUs while Quadro owners looked on in envy.

Quote:
Are there any benches on Quadro vs 6800 Ultra?
Mac vs PC ATI 9800 XTs?

If the bench doesn't note differences in precision, and also any tests that one card can run and the other can't, then it's not particularly meaningful. If it uses an app like Maya that intentionally withholds densely optimized OpenGL code from consumer cards, it's even less meaningful, because the 6800 (e.g.) will be twiddling its thumbs.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #96 of 282
I have it on good info that it will be a

Dual G5 4GHz
2-4GB 667MHz DDR2
8x Dual Layer DVD-RW
Silent Water Cooling
ATi X800pro standard
Reply
post #97 of 282
Quote:
I have it on good info that it will be a

WTF???? No 10G Ethernet! No buy!
He's a mod so he has a few extra vBulletin privileges. That doesn't mean he should stop posting or should start acting like Digital Jesus.
- SolipsismX
Reply
post #98 of 282
Quote:
I have it on good info that it will be a

Dual G5 4GHz
2-4GB 667MHz DDR2
8x Dual Layer DVD-RW
Silent Water Cooling
ATi X800pro standard

When?

WHEN!?

Lemon Bon Bon \
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
Reply
post #99 of 282
Quote:
Originally posted by industryMan
I have it on good info that it will be a

Dual G5 4GHz
2-4GB 667MHz DDR2
8x Dual Layer DVD-RW
Silent Water Cooling
ATi X800pro standard

You forgot to add dual bonded Ultra Wide Band and 16x HD-DVD quad layer blue laser.
Apple Fanboy: Anyone who started liking Apple before I did!
Reply
post #100 of 282
Quote:
Originally posted by Lemon Bon Bon
When?

WHEN!?

Lemon Bon Bon \

2006...Come on, IBM hasn't even shipped a 3GHz part, and just recently started shipping 2.5s after a bit of a delay.
post #101 of 282
Yeah, but when is Tiger due anyway? Last I recall, SJ said it was like a year and a day away.
onlooker
Registered User

Join Date: Dec 2001
Location: parts unknown




http://www.apple.com/feedback/macpro.html
Reply
post #102 of 282
Tiger is about 10 months away.

I guarantee we will see at least 3GHz in 2005... when is a better question. Even if they put out a 2.8... I'm sure they'll make an update in the middle of the year if they have to.

By that time we'll be at a different stage of proc anyway... the 975, correct? This 970FX won't be around much longer in the Powermac....... I hope.

----------------------

Here's to wishing for a new keyboard and mouse on the next PM. This raised one is killing my wrists.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #103 of 282
Quote:
Originally posted by Lemon Bon Bon


I'm waiting for a 3 gig dual dual-core Antares. That's my dream machine. With 'Tiger'. Pre-installed. If Apple could deliver an announcement on this by March 2005? They're going to break the 300K tower barrier.

Lemon Bon Bon

We wants it! We needs it, Must haves it!
The Precious!

post #104 of 282
Quote:
Originally posted by Lemon Bon Bon


I'm waiting for a 3 gig dual dual-core Antares. That's my dream machine. With 'Tiger'. Pre-installed. If Apple could deliver an announcement on this by March 2005? They're going to break the 300K tower barrier.

And by that time you will be holding out for eCLipz. Maybe we should just set up Paypal donation accounts to buy Macs for you and hmurchison.
When they said "Think Different", I ran with it.
Reply
post #105 of 282
From the looks of this picture Lemon's wishes just might come true :

It's Better To Be Hated For What You Are Than To Be Loved For What You Are Not
Reply
post #106 of 282
Wow!

An Apple G6!

post #107 of 282
Okay, bring on the "photoshop experts."

Does that really say G6?
iPad2 16 GB Wifi

Who is worse? A TROLL or a person that feeds & quotes a TROLL? You're both idiots.....
Reply
post #108 of 282
Quote:
Originally posted by kcmac
Okay, bring on the "photoshop experts."

Does that really say G6?

It does now. http://forums.appleinsider.com/showt...threadid=45759
When they said "Think Different", I ran with it.
Reply
post #109 of 282


True that, murk!
iPad2 16 GB Wifi

Who is worse? A TROLL or a person that feeds & quotes a TROLL? You're both idiots.....
Reply
post #110 of 282
I was so incensed by Apple's performance-lagging implementations of PC cards that I wrote to Maxon support about Cinebench, re: what on earth was causing such a disparity between the same card on Mac and PC!?!?

Quote:
MAXON Support_UK writes:
>Is Apple's Open GL implementation 'soft' or 'slow'?

Greetings **********,

From what I've seen, Mac's OpenGL support is just lagging. We're doing the best we can with the software as far as using as much of the 'power' available; however, if the 'water's dripping, the bucket's gonna fill very slowly.' Apple has to beef up the support for the cards before we can really utilize it within CINEMA.

Regards,
MAXON Technical Support

That's what the 3rd party developer thinks.

'Beef up' support for the very cards they ship.

Over 100% more support for the cards they ship.

Roll on Open GL 2 and 'Tiger' and maybe a CORE 3D API.

It looks like a C+. Not bad. But they can do wayyyyyyyy better?

Lemon Bon Bon \
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
Reply
post #111 of 282
Thoughts?

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
Reply
post #112 of 282
Quote:
Originally posted by Lemon Bon Bon
Thoughts?

Lemon Bon Bon

Very interesting. As I stated before, Cinema was originally written for Mac OS. However, that version never shipped and the Wintel version shipped first. I didn't think it was Maxon's fault for a # of reasons...

Reason 1, and most important:
Other apps lag also (games, Maya, etc).

I hope Apple has the bugs worked out by Tiger and OGL 2.0. Many PC cards already support OGL 2.0. (My 5900XT doesn't.)

I can't wait until OpenGL 2.0 and Tiger... I have a feeling things are going to change BIG TIME!

I'll leave it at that. I have a feeling it has to do with porting OGL... as Amorph has stated before, porting code from an optimized system is hard to do since a lot of that optimization is system-specific.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #113 of 282
Quote:
Originally posted by emig647
Renders faster in what program? In every test I have seen, the Opteron 150 smashes the Xeon.

Example:
http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=4

Looks to me like the Opteron 150 is significantly ahead!

Where's your proof?

Look at all of those benchmarks. The Xeon doesn't come close to the Opteron in any of those tests... you can't convince me without proof that the Xeon renders faster.

There is a lot of documentation, and general discussion amongst 3D pros, on rendering between these two processors that I read frequently, but I did a quick search and grabbed a few articles of interest, and one (split in two) that describes in detail how these two perform under highly demanding operations.


Quote:
In response to: Poster: maninflash
Subject: Re: Opteron vs. Xeon: Render Times @ highend3d.com



I did these render tests last night, using MentalRay and the scene from ZooRender.com at this link http://www.zoorender.com/benchmark/Benchmark_Mental.zip

Both operating systems are Windows XP with Maya 6.

00:41:75
Dual XEON 3.2GHz, SuperMicro X5DA8 mainboard, 2GB RAM, with Hyper-Thread

00:47:68
Dual Opteron 248, Tyan Thunder mainboard, 2GB RAM

00:50:88
Dual Xeon 3.2GHz, SuperMicro X5DA8 mainboard, 2GB RAM, without Hyper-Thread

Hard drive for both systems were Seagate Cheetah 36GB 15k, the primary bootup drive with XP and Maya6 installed. RAM for XEON was two Kingston 1GB DDR 266 ECC and for Opteron two Corsair 1GB DDR 400.

If anyone has Dual Opteron 250, it would be interesting to see if it can beat Xeon's time.

Michael


That is a significant difference on a pair of solo work machines, but not terribly convincing, as it does not go into a detailed analysis of the processors' pros and cons, and it also lacks descriptions of the faults in the architecture, or the superiority of one over the other.


Here is a link to a Tom's Hardware article/showdown that has the two processors in a head-to-head shootout, and the Opteron is the champ, but there is a 3D rendering comparison in the same article (which is a heavy load) that shows the Xeon with much better render times.

http://www.tomshardware.com/cpu/2003...ml#3drendering

More recent testing reveals some significant surprises about these two processors.

Workstation showdown: Xeon vs. Opteron
Intel's Xeon-based workstations are much faster than workstations based on AMD's Opteron when it comes to heavy multitasking


Quote:
What we found was eye-opening. The Opteron machine outperformed the Xeons when lightly loaded with minimal multitasking, but once the real work started, the Opteron stopped. It was effectively shut down by the same multitasking load that the two Xeons performed with ease. In the clean environment, it still performed at less than half the speed of the older and allegedly less-capable Xeons.
http://infoworld.com/article/04/08/1...station_1.html



This is just a further description of the testing process, but it reveals some interesting data about the two x86 counterparts.


How we put the workstations under pressure


Quote:
Enter the workstation: designed for concurrent multiprocessing, workstations are rugged and reliable, with multiple, symmetric CPUs and gobs of memory to power through even the toughest workloads. You need to really load these machines down before their relative merits begin to surface, and that means generating concurrent workloads that exercise a variety of OS and application subsystems.

For this review, we did just that. I utilized one of my favorite test tools, Clarity Studio from CSA Research. Using a combination of parallel workloads -- client/server database (specifically, ActiveX Data Objects), workflow (MAPI), Windows Media playback, and Windows Media encoding -- I generated a hailstorm of CPU and memory activity.


I then scaled these workloads on each system, increasing the number of concurrent tasks as well as their complexity, all the while tracking the systems' performance and health through various internal and external metrics counters.


The net result? Despite a great deal of hype, AMD's 2.2GHz Opteron 248 CPU -- as embodied in the IBM IntelliStation A Pro workstation -- doesn't fare well under heavy workloads. When compared head-to-head with last year's Intel Xeon platform, a 3.2GHz/533MHz Front Side Bus model represented here by the MPC NetFrame 600, the Opteron fades as the workload level increases.


In fact, across the range of tests, the Opteron system took an average of 15 percent longer to complete the tasks than the Xeon. In some cases, most notably client/server workflow against a MAPI message store, the Opteron took over 30 percent longer.

An examination of OS metrics data collected by Clarity Studio showed that the Opteron was definitely struggling to juggle all those threads. One metric in particular shed additional light on the results. The Peak CPU Saturation Index, which is calculated from a sampling of the Processor Queue Length counter as exposed by the Windows Performance Data Helper libraries, showed that, on average, the Opteron system had 16 percent more waiting threads in its queue -- a clear indication that the system was in fact CPU-bound and running out of processor bandwidth.


My interpretation: Hyper-threading support on the Xeon allowed it to continue to scale thanks to its ability to execute more than one instruction at a time. Once again, Intel's simultaneous multitasking technology -- where underutilized pipeline resources are shared to create a second, virtual processor image -- is looking like an ace in the hole for the company's workstation strategy.


The story gets worse for AMD when you factor in the newest Xeon processors from Intel. Preliminary results from two systems based on the new 800MHz FSB Xeon show the aforementioned average performance gap widening to nearly 50 percent (the MAPI workload, in particular, is now running 115 percent faster than Opteron), with CPU Saturation now 30 percent higher for Opteron when compared to the next-generation Xeon CPU (watch for our expanded coverage in an upcoming issue).

http://infoworld.com/infoworld/artic...tion-sb_1.html

I think that article says enough about the differences in performance between the processors when taking on heavy loads.
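For anyone curious, the "Processor Queue Length" counter that article leans on is readable through the standard Win32 Performance Data Helper API. A minimal sketch (not from the article itself; link against pdh.lib, error handling trimmed to a single check):

Code:
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* System-wide count of threads waiting for a processor. */
    PdhAddCounter(query, TEXT("\\System\\Processor Queue Length"), 0, &counter);
    PdhCollectQueryData(query);
    PdhGetFormattedCounterValue(counter, PDH_FMT_LONG, NULL, &value);

    printf("Processor queue length: %ld\n", value.longValue);

    PdhCloseQuery(query);
    return 0;
}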
onlooker
Registered User

Join Date: Dec 2001
Location: parts unknown




http://www.apple.com/feedback/macpro.html
Reply
post #114 of 282
onlooker, did you even look at the Anandtech link I sent you?

It compares the Opteron 150 to the Xeon 3.6GHz... the Opteron was clearly faster.

I will take Anandtech's word on benchmarking over just about anyone's.

If you paid attention to the PC world you would too... Those benchmarks are real.

The links you sent me are comparing OLD processors.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #115 of 282
Aren't the Opteron 248s and 250s newer and faster than the 150s?

Also, aren't those SP setups? What's the point of having a Xeon if it's not a dual? Especially for rendering. I don't have as much faith in A-tech's credibility if that's what they are comparing. Sorry.
onlooker
Registered User

Join Date: Dec 2001
Location: parts unknown




http://www.apple.com/feedback/macpro.html
Reply
post #116 of 282
Quote:
Opstone

Since our use of Ubench in the previous article clearly infuriated many people, we are going to kick that benchmark to the side for the time being until we can decide a better way to implement it.


Even a-tech later disregarded that test.



http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=6
onlooker
Registered User

Join Date: Dec 2001
Location: parts unknown




http://www.apple.com/feedback/macpro.html
Reply
post #117 of 282
Quote:
Originally posted by onlooker
Aren't the Opteron 248s and 250s newer and faster than the 150s?

Also, aren't those SP setups? What's the point of having a Xeon if it's not a dual? Especially for rendering. I don't have as much faith in A-tech's credibility if that's what they are comparing. Sorry.

They could only get one 3.6GHz Xeon... they had to test single-proc. Dual-proc Opterons are faster than the Xeons in every setup I have seen. Even the quad setups are faster.

Either way, you can look at it this way.

(single proc)
The Opteron 150 is a lot faster than the G5. The Xeon and G5 are very close.

I give up... I'm too tired to care right now. My brain is numb from programming all day. I'll try and dig up some URLs tomorrow that illustrate how fast the 150 is.

And no, the Opteron 248s aren't faster. They are using the Hammer core instead of the SledgeHammer core... SledgeHammer is newer. The 250 is faster yet.

Sorry, this post I just did is worthless. I'm too tired.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #118 of 282
Too much of this thread has probably been spent on OpenGL already, but I have had at least one bad experience with Apple's GL implementation (on 10.2.something; not sure if they've fixed it since). Basically, one call which is used to update a small portion of a texture resulted in a full upload of the entire texture, causing massive slowdown over the same code on the PC. Apple also takes a very different approach to texture management: supposedly you're meant to let it do all the figuring out as to what textures should be loaded onto the card and what shouldn't, instead of "micromanaging" your resources. I think this is partly because of Quartz Extreme; no one program is going to have exclusive use of the 3D card's resources.
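To make the complaint concrete, here is the usual pattern as a sketch (not the poster's actual code). glTexSubImage2D() should only move the small patch; the behaviour described above is the driver acting as if the whole texture had been re-specified. Sizes are illustrative and context setup is omitted.

Code:
#include <OpenGL/gl.h>

/* Upload the full 512x512 RGBA texture once, at load time (~1 MB). */
static void create_texture(GLuint tex, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

/* Per frame, touch only a 32x32 patch at (x, y) -- about 4 KB of data.
 * The slowdown described above is the driver re-uploading the whole ~1 MB. */
static void update_patch(GLuint tex, int x, int y, const void *patch)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 32, 32,
                    GL_RGBA, GL_UNSIGNED_BYTE, patch);
}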
post #119 of 282
Very interesting... glad you pointed that out, since I'm about to get into OpenGL.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.

 

Reply
post #120 of 282
I'm glad to see this thread is starting to head back on topic rather than rehashing discussions of x86 processors. We have our own processors to think about, let alone theirs. I was about to redirect it back towards "what will be the new specs for the next PM line."

But now that you guys are mentioning graphics programming: I was over at Nvidia's site all excited about the new GeForce 6 series SDK released today, and lo and behold, it's an .exe. WTF? What a GD slap in the face that was.
onlooker
Registered User

Join Date: Dec 2001
Location: parts unknown




http://www.apple.com/feedback/macpro.html
Reply