Performance beyond G5
All the discussion on this forum about speed has side-stepped other critical factors and determinants of overall performance. What I'd like to know (and more importantly, learn from informed posters) is what people expect for medium-term architectural improvements to the dual-G5 motherboards. Info or illuminating discussion would be welcome on the prospects for adoption of HT2, PCIe, or RapidIO (for PowerBooks in the Freescale case, if not refactored for HT; and how difficult would such refactoring be if pursued?), memory subsystem improvements, possible modifications to the current SATA scheme, etc.
Also, out of curiosity: does anyone know what the bus multiplier is on the 2.3GHz Xserves at VTech?
Technically-inclined posts most welcome!!!!
Comments
Originally posted by Machead @ Amherst
Technically-inclined posts most welcome!!!!
Try this for a more geeky discussion:
http://episteme.arstechnica.com/eve/...2&f=8300945231
Originally posted by Machead @ Amherst
Also, out of curiosity: does anyone know what the bus multiplier is on the 2.3GHz Xserves at VTech?
2:1 of course.
Originally posted by talksense101
Try this for a more geeky discussion:
http://episteme.arstechnica.com/eve/...2&f=8300945231
most people at arstechnica don't know shit but just pretend that they know shit.
Originally posted by Nr9
most people at arstechnica don't know shit but just pretend that they know shit.
Are you still here, Mr. Insult from IBM?
Arstechnica has plenty of good information. Their breakdown of PCI Express is probably the best article I have read to date on how PCI Express differs from other bus types. I have been very satisfied with arstechnica over the last few years.
Either way, I think the big performance boost is going to come more from software in the next year. Of course hardware will get faster, but I want to address the software side of things.
IBM's PPC assembly / C compiler has shown up on recent Developer CD mailings. It seems Apple is pushing PPC assembly a bit for the 64-bit libraries. If good assembly programmers add inline assembly to CPU-intensive programs (Cinema 4D, Maya, etc.), we could see some programs FLY on G5s.
Fairly soon we should see some Tiger builds being seeded beyond the extremely privileged. Developers will be able to optimize for the new 64-bit libraries included with Tiger... I am anxiously awaiting my copy.
Originally posted by Nr9
most people at arstechnica don't know shit but just pretend that they know shit.
On the other end, we have Nr9, who will continue to be ignored, or not taken as seriously as he would prefer, until he manages to present his knowledge in a way other people can evaluate. I, for one, trust the arstechnica people more than you for exactly that reason.
Now I guess they should have banned me rather than just shut off posting privileges, because kickaha and Amorph definitely aren't going to like being called to task when they thought they had it all ignored *cough* *cough* I mean under control. Just a couple o' tools.
Don't worry, as soon as my work resetting my posts is done I'll disappear forever.
amateur has just posted a very good summary of the problem at ars in the 41 page iMac G5 topic here:
amateur's ars post
At times in the past, posting anything pro-Apple was an invitation for a pile-on by the "regulars" who inhabit the "battlefront". The usual gang of suspects still exists at Ars, but many of them have bought Macs now and are not so sure anymore of how bad Apple computers are.
We both know that even with the same hardware, the Mac's numbers would lag behind... because of the OpenGL implementation on OS X.
Originally posted by emig647
But onlooker,
We both know that even with the same hardware, the macs numbers would lag behind.... because of OpenGL implementation on OS X.
When I say "graphics," I mean that to include new, better-optimized OpenGL.
I don't know why they don't have this licked by now anyway. They have been using OpenGL for years. Why is it so neglected? I'm starting to think this is their biggest issue.
They (Apple) release all these great numbers and statistics, but when their machines are tested by independent outsiders, they turn out sluggish in odd areas and underwhelming where they should have been either neck-and-neck with or outperforming the competition.
They need to straighten this out, and before Core Image ships. If they let Core Image merely make up the difference where they fell short on OpenGL, Core Image will never perform to its full potential either. Like the PowerMacs and the rest of the line, it will be scoffed at by outside independent testers as another Apple gimmick: an under-performing technology that Apple once again exaggerated to boost sales of another mediocre line of computers. What if the OpenGL implementation was what was slowing the majority of things down all along?
Originally posted by Nr9
most people at arstechnica don't know shit but just pretend that they know shit.
This is so, so true.
"We're prosumers!"
Great. I'm a real professional, and I don't seem to have a problem with using dated hardware so long as it gets the job done. The intelligence and knowledge density at AI is much higher than at Ars.
There is one big difference between Ars and this forum. The people on Ars often back up their claims with good sources of information. Here we have people posting things that run contrary to industry trends while refusing to support their positions with good references.
Some positions I've objected to personally, I object to for a number of reasons. Regarding the concerns about 90nm, I look at the industry as a whole, and when I do I don't see everyone going into convulsions over 90nm. IBM may be getting the market ready for when it fattens its chips, but that doesn't mean everybody has to take the same approach. If IBM cannot scale frequency on its 90nm process, all that really means is that Apple hooked up with the wrong CPU maker again. IBM is not the only choice when it comes to custom chips, and may not even be the industry leader many suspect it of being.
Originally posted by Splinemodel
The intelligence and knowledge density at AI is much higher than at Ars.
Well DENSE could be applied to some here.