I believe Raycer was developing an advanced OpenGL rendering engine that sorted out only the visible polygons before they're rendered. If that's the case, I believe it would be no problem at all to put in a special chip that sorts the polygons before they're sent to an ATI/NVIDIA card. The GPU would never know it was being 'tricked' into rendering a greatly reduced-polygon scene.
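Purely as an illustration of the idea (Raycer's actual design was never made public), here's a minimal sketch of pre-GPU visibility culling: throw away polygons the viewer can't see, such as back-facing ones, and only submit the rest.

```python
# Hypothetical sketch of pre-GPU visibility culling (NOT Raycer's real design).
# A polygon is kept only if it faces the viewer: the dot product of its
# normal with the view direction is negative (classic back-face culling).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cull_backfaces(polygons, view_dir):
    """Return only the polygons whose normals face the viewer."""
    return [p for p in polygons if dot(p["normal"], view_dir) < 0]

scene = [
    {"id": 1, "normal": (0, 0, -1)},  # faces the camera -> keep
    {"id": 2, "normal": (0, 0, 1)},   # faces away -> cull
    {"id": 3, "normal": (0, 0, -1)},  # faces the camera -> keep
]

visible = cull_backfaces(scene, view_dir=(0, 0, 1))
print([p["id"] for p in visible])  # -> [1, 3]
```

A real sorting chip would of course also need depth sorting or occlusion testing to drop polygons hidden behind others; this only shows the simplest culling step.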
Or
The G5 is rumored to be 4-5x more efficient in FPU/integer processing than the G4. Now, I'm not sure if that's MHz for MHz, but factor in that the G5 is twice as fast (1.6 GHz) and you could easily see 10-20x speed increases, especially if you're using AltiVec-enhanced code.
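For what it's worth, the arithmetic behind that claim roughly works out. This is a back-of-the-envelope sketch using only the rumored figures above; the 800 MHz baseline is my own inference from "twice as fast (1.6 GHz)":

```python
# Back-of-the-envelope math behind the rumored speedup (all numbers speculative).
# Baseline assumed: an 800 MHz G4, since 1.6 GHz is described as "twice as fast".
per_clock_low, per_clock_high = 4, 5   # rumored per-clock FPU/integer gain
clock_gain = 1600 / 800                # 2x clock speed
low, high = per_clock_low * clock_gain, per_clock_high * clock_gain
print(low, high)   # -> 8.0 10.0, before any AltiVec gains on top
```

So 8-10x before vectorization, which is where the "10-20x with AltiVec-enhanced code" range would come from.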
Some sort of quartz/aqua acceleration hardware would be really useful. Sure, the UI isn't all that slow anymore (here on a DP 533), but have you ever run top and looked at CPU usage? A sheet slides down in Terminal and it jumps to 15%... move a window across the screen, and WindowManager jumps to 80%...
I mean, come on here... this might be ok right now, but I don't want WindowManager sucking 80% CPU when I've got FCP3 rendering in the background.
[quote]Only if you can get DVI throughput. Hard on wireless right now, by an order of magnitude.<hr></blockquote>
I'm just thinking that if there *is* a quartz accelerator, it would play into the discussion in the other thread about sending primitives across the network instead of full screen draws.
Not that I think we'll see this anytime soon, but I don't think it's nearly as farfetched as some other people here.
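To see why primitives beat full screen draws over a network, here's a rough comparison. The screen size and per-primitive sizes are my own assumed numbers, just to show the order of magnitude involved:

```python
# Full-frame update vs. a primitive/display-list update (illustrative numbers).
width, height, bytes_per_pixel = 1024, 768, 4
full_frame = width * height * bytes_per_pixel     # bytes for one full redraw
print(full_frame)   # -> 3145728, about 3 MB per frame

# The same window-move might instead be a few dozen drawing commands
# of, say, ~16 bytes each:
primitives = 32 * 16
print(primitives)   # -> 512 bytes, thousands of times less data
```

That gap is exactly why an X11-style "send the drawing commands" model is so much friendlier to a slow link than shipping finished pixels.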
This new "superwidget" seems like a spiffy idea, except that the expense wouldn't be just the initial design and fabbing of the chips. Every time ATI or NVIDIA introduced a new graphics chip, wouldn't the whole thing need to be QA'd again to make sure the two chips play nice? That'd cost money, not to mention lengthen the delay between the PC launch date of new graphics systems and the Mac launch date. Wouldn't this new chip also increase development costs for makers of add-in video cards?
No, I have never posted under the name pismo-something... I was a member of the old AI boards for a long time, but that was many moons ago.
Hah, no, my lunch with my friend and his Apple friend was a relatively non-alcoholic one.
What I said earlier was the successful part of our conversation (my friend joined in at one point). We both asked about the future of the PowerMac line (referring to the G4 Apollo/G5), which he quickly moved on from. The new iMac was a big concern for my friend (he likes that line, I like the G4), and we asked and got little. Trust me, I realised how potentially big this could have been (I mean, how often do people get to have lunch with a higher-up from the one company whose fans are sometimes referred to as cultists?), and I milked it for what I could.
I can't wait to see how it's gonna work... Aqua really needs the speed, and considering the number of Photoshop users in the Mac market (PLEASE, Macromedia, I hope you make use of this stuff too), it could really help... a lot.
One month left until my friend's friend is proven right, and one month left until I wish I'd asked and badgered more.
Edit: removed some info I shouldn't have said
[ 12-07-2001: Message edited by: Your personal informant ]
Hmmm ... I believe I read somewhere that the G5 roadmap doesn't mention AltiVec. Couldn't this be a sign of some new kind of hardware acceleration ... maybe this insanely great Raycer chip?
[quote]<strong>I'm just thinking that if there *is* a quartz accelerator, it would play into the discussion in the other thread about sending primitives across the network instead of full screen draws.
Not that I think we'll see this anytime soon, but I don't think it's nearly as farfetched as some other people here.</strong><hr></blockquote>
Well, if the chips are integrated into the displays (akin to the sleep and monitor functionality of the ADC), then this would increase Apple display sales, something they've always wanted.
Imagine: buy an Apple monitor and the OS becomes faster, and the design is killer. A bit pricey, yes, but is a 2x-3x increase in GUI performance worth the extra $200? Yes!
[quote]<strong>The PPC roadmap says "Symmetric Processing Capabilities". Alti-vec is specifically a SIMD (Single Instruction Multiple Data) symmetric processing unit. So the more generalized description entails Alti-vec capabilities already.</strong><hr></blockquote>
Nope, this is incorrect. SMP is used in conjunction with multiple CPUs inside one machine, and "SMP support" here probably refers to technologies that enable efficient multi-CPU designs using G5 processors (cache coherency and FSB issues, for example).
SIMD, on the other hand, refers to vector processing inside a single CPU: you have one control unit in charge of multiple ALUs, which can then work on multiple items of data concurrently (but all performing the exact same instruction).
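To illustrate the SIMD idea in plain Python (a conceptual sketch only, not AltiVec code): one "instruction" is applied to every lane of a vector at once, instead of looping over scalars one at a time.

```python
# Conceptual SIMD: one control unit issues one operation, and multiple ALUs
# apply it to different data elements in lockstep. (AltiVec operates on
# 128-bit vectors, e.g. four 32-bit floats per instruction.)

def simd_add(vec_a, vec_b):
    """One 'vector add' instruction acting on four lanes at once."""
    assert len(vec_a) == len(vec_b) == 4
    return [a + b for a, b in zip(vec_a, vec_b)]

# Scalar code would need four separate add instructions; the vector unit
# does the same work with one:
print(simd_add([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]))
# -> [11.0, 22.0, 33.0, 44.0]
```

The key constraint is visible in the code: all four lanes perform the same operation, which is exactly what distinguishes SIMD from the MIMD case mentioned below.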
[quote]<strong>It also leaves the door open when reading between the lines for things such as MIMD (Multiple Instruction Multiple Data) which is potentially more valuable but more difficult to program. No guarantee of that, but I really doubt Alti-vec is going away.</strong><hr></blockquote>
Given the fact that OS X is said to be heavily AltiVec-optimized, and that Apple has never been shy about emphasizing the power of the Velocity Engine, I doubt it too.
Just speculating ... like all of you people here.
Coming soon: Apple Intelligent Displays
Bye,
RazzFazz