" 1. Custom ASIC's would not be unheard of for Apple machines. They have used different types before and outsourced production. Why do you think their R&D budget is so large? If they only used existing stuff they wouldn't be much more innovative then DELL. If the object is handling the 2D Aqua processing the economy of scale comes from 10-15 years of consistently designed hardware and OS combinations.
2. Why buy graphics hardware engineers if not to make graphics hardware? Sure EE/CE's can do lots of different stuff, but you buy the package for it's demonstrated strengths.
3. You only compete with NVidia and ATI if you go 3D. If the chip is optimized for 2D rendering effects it is in a separate niche, one that the big boys have already told Apple they weren't interested in. Hence no current Quartz acceleration. Current accelerators (via drivers) just intercept OpenGL(or like) API calls, shuffling then to te GPU. If the effect you need isn't supported explicitly in the GPU hardware, there is no API call and no way to accelerate the code. Current Vertex/Pixel shading programmability is not able to do Aqua style processing and probably won't be for quite some time, if ever. "
OK, I'll elaborate, and excuse me for breaking it down like this, AirSluf.
Your point 1:
You are correct, custom ASICs have been used in Macs for a very long time. What are these ASICs? Well, the motherboard chipsets come to mind. And how well does Apple do? Consistently behind the times: bus speeds and the types of supported memory are at least a year or more behind what is offered in the PC world. Why do Macs cost more? Well, that larger R&D budget comes to mind (along with the higher profit margins), and again lower economies of scale.
Your point 2:
As far as I know, Raycer never shipped a product, and in my book they have "demonstrated" nothing. They must have been worth something to Apple, and I presume it was for the engineers. Again, IF Apple is making a graphics chip, it would be the whole shebang, like I said. What was Raycer's focus? High-end 3D graphics. Although they demonstrated nothing, if they WERE to be making a graphics chip, don'tcha think it'd be 3D? And if they demonstrated any strength at all, it'd be in 3D graphics.
Your point 3:
Let's just say it is a 2D-only chip. How is it supposed to get to the frame buffer? An Apple-designed 3D card using NVIDIA chips that were never designed to be used with an "Aqua" chip? How about having NVIDIA modify their chips to work with Apple tech? That's likely. Or put it on the system bus, and suffer the same problems the G4 has pushing pixels over the AGP bus? Again, if this hypothetical chip touches a pixel, it will do all the rendering: 2D, 3D, and "Aqua" duties (technically 2D also).
In your point 3 you beautifully demonstrate how you are speaking about that which you do not know.
"Current accelerators (via drivers) just intercept OpenGL(or like) API calls, shuffling then to te GPU. If the effect you need isn't supported explicitly in the GPU hardware, there is no API call and no way to accelerate the code"
OK, so you want to talk about GL? Well, what exactly in Aqua can't be handled by today's 3D hardware? Transparency? No problem. The genie effect? Simple: take the window buffer and use it as a texture on a series of triangles that bend to the desired shape of the window. You must mean the vector nature of Quartz. Well, with rendering to memory, and the hardware behind GL evaluators, Bezier curves CAN be rendered in hardware, and so can the rest of Quartz. The ONLY sticky point is some of the anti-aliasing, but with a little clever programming that can be done in hardware as well, especially on the GF3 generation of HW.
Listen, no offense, but it is important to keep an eye on reality. An Apple Quartz-only chip: not gonna happen. An Apple complete graphics subsystem: maybe, but I highly doubt it. 20X gains would be easy if some of the rendering were offloaded onto the 3D subsystem. I could be wrong, it has happened countless times before, but my experience and education brought me to my conclusions, and I think that those who expect the impossible will be greatly disappointed. So live your lives, stop looking at this computer, and we'll all know the answer when Apple ships their gear.
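For what it's worth, the textured-triangle trick is easy to sketch. The warp function below is purely illustrative (my own invention, not Apple's actual genie math): it maps each point of the window's unit square toward a Dock anchor, and would be evaluated at the vertices of a triangle grid over which the GPU stretches the window-buffer texture.

```c
#include <assert.h>
#include <math.h>

/* Illustrative genie-style warp (invented for this sketch, not Apple's
 * real math). (x, y) is a point in the window's unit square, with
 * y = 1 at the edge nearest the Dock. t runs from 0 (window at rest)
 * to 1 (fully minimized). dock_x is where the window funnels to.
 * A real implementation would evaluate this at every vertex of a
 * triangle grid and let the GPU stretch the window texture over it. */
static void genie_warp(float x, float y, float t, float dock_x,
                       float *out_x, float *out_y)
{
    float pinch = t * y;                    /* rows nearer the Dock pinch first */
    float width = 1.0f - pinch;             /* each row narrows as it funnels   */
    *out_x = dock_x + (x - dock_x) * width; /* squeeze x toward the anchor      */
    *out_y = y + t * (1.0f - y);            /* slide rows toward the Dock edge  */
}
```

Evaluated over a modest grid (say 20 by 20) each frame, this is exactly the kind of per-vertex work a GF3-class part can absorb without breaking a sweat.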
-David</strong><hr></blockquote>
Grad Student, your excellent thread is not coming from a grad student, but from a doctor honoris causa!
[quote]<strong>You can change that yourself... I have.</strong><hr></blockquote>
Hey Sine, so you don't wanna show us what you did and how you did it? Or is there already a topic about this? If not, you might create a new one, huh? Or is there a topic in another forum already? ...Link?
So many questions...
Please. Thanks.
BTW: I hope all these brilliant ideas of a fixed "graphics-whatever-coprocessor" come true... 'cause that would be a damn friggin' cool move.
Back in the days when RAVE was new, Apple made a RAVE accelerator card. You could even put more cards in your Mac for further acceleration.
And with Apple controlling the whole path from CPU to AGP, why couldn't they incorporate Aqua acceleration into Pangea?
To Scheisskopf: "Apple also has the highest margins in the computer industry (>20%), slightly higher than the 0.01% we make in the automotive industry." 20% is just gross profit. Take a look at Apple's last fiscal report.
BTW: I have never heard the word Scheißkopf before. Maybe you meant Scheißkerl?
Correct me if I'm wrong, but how is a custom chip that offloads some GUI calculations from the CPU any different from the MPEG-2 accelerator chips that were used in Macs and PCs until recently?
I don't see why such a chip would be in conflict with the 3D card or anything.
Doesn't the GeForce 3 have a programmable GPU? Why couldn't Apple just write new drivers that allow the use of the GeForce's GPU to accelerate Quartz, or am I totally wrong here?
[quote]The only way to get a 20x boost in performance is to hardware accelerate something that was not hardware accelerated before.<hr></blockquote>
Oh no. As Whisper pointed out upthread, you can get mindboggling increases in speed entirely in software, without even considering things like SIMD. Even high-level architectural decisions can have an enormous impact on overall performance.
That said, stimuli's vision of a z-sort chip sitting between the CPU and the AGP bus would be a godsend. It would offload work from the CPU and greatly reduce traffic on the AGP bus. If it were a coprocessor it would be perched on a very fast dedicated bus to the processor, and it might have access to the processor's cache(s) as well.
And it wouldn't step on ATi or nVIDIA. In fact, they'd probably welcome it.
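To make the z-sort idea concrete, here is a toy software model (purely illustrative, with a single grayscale value standing in for each window's buffer) of the work such a chip would offload: sort the layers back-to-front, then alpha-blend them in order.

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

/* Toy model of back-to-front window compositing: the kind of work a
 * z-sort coprocessor could take off the CPU and the AGP bus. */
struct window_layer {
    int z;        /* stacking depth, larger = nearer the viewer */
    float alpha;  /* 0 = fully transparent, 1 = opaque          */
    float color;  /* one grayscale "pixel" stands in for the buffer */
};

static int by_depth(const void *a, const void *b)
{
    const struct window_layer *wa = a, *wb = b;
    return wa->z - wb->z;   /* ascending: farthest layer first */
}

/* Painter's algorithm: sort back-to-front, then alpha-blend each
 * layer over the accumulated result. */
static float composite(struct window_layer *wins, size_t n, float bg)
{
    float out = bg;
    qsort(wins, n, sizeof *wins, by_depth);
    for (size_t i = 0; i < n; i++)
        out = wins[i].alpha * wins[i].color + (1.0f - wins[i].alpha) * out;
    return out;
}
```

The sort is the cheap-looking step that eats CPU time today; done in dedicated silicon, only the already-ordered layers would need to cross the AGP bus.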
Decurion wrote:
[quote] Doesn't the GeForce 3 have a programmable GPU? Why couldn't Apple just write new drivers that allow the use of the GeForce's GPU to accelerate Quartz, or am I totally wrong here?<hr></blockquote>
It seemed possible to me too. But according to AirSluf, who's actually digging down into OpenGL and 3D acceleration for his thesis, it's not happening. Maybe in a few years, but not now. AirSluf reposted his argument to this effect toward the bottom of page 1 in this thread.
OK, while we've got some great discussion about implementation, I'm curious about something: Aqua currently depends heavily on bitmaps and absolute pixel values. This is probably a good performance optimization, and comforting to Carbon programmers writing for compatibility with older MacOS releases.
However, with some work OS X could support an all-vector interface. That would free Apple to use the highest-resolution displays they could get their hands on, because the absolute sizes of the widgets and text wouldn't change. If the OS knew the physical resolution of the display it could snap subpixel calculations to the nearest physical pixel to keep text and widget borders sharp. (OmniWeb already does this, actually.) The pinstripes and lighting effects would be calculated and drawn rather than just blitted.
Let's say that some form of texturing would still be necessary, since it won't be reasonable to calculate the iApps' brushed metal look any time soon. They could be rendered to a very high resolution.
So, what do we need to do this, hardware- and/or software-wise? Let's say, for the sake of argument, that we're reimplementing Aqua, just so we have something to work with.
That would be so insanely cool. With something like that, Apple could be the first consumer computer manufacturer to boast higher-than-72-dpi monitor compatibility.
And the technology IS out there. At a demonstration I attended, IBM was demoing their new monitor technology. They had a 21" CRT on the table with what looked like black lines being displayed. When asked what the lines were, they handed the person a magnifying glass. It turned out to be perfectly legible *incredibly small* text.
Kinda puts the whole 72 dpi world to shame...
As for this coprocessor we are talking about, I'm not so sure that even at a cost of $20 a machine it would be worth it. My DP 500 runs Aqua faster than I can use it (excluding resizing QuickTime movies while they play). At $20 a machine with 4 million machines a year, that's a good chunk of change that I bet Apple would be hesitant to give up. For the device to be useful, it would need to accelerate much more than Aqua, and it would need to enable us to do things not only faster, but things that we cannot do with existing tech at all.
It was only a matter of time until this became public.
I can confirm, for a FACT, that this whole idea of rapidly accelerated 2D is reality. I had a discussion over lunch not long ago with a friend and his rather highly placed Apple friend. While I choose not to disclose his name (he was a nice guy, dun wanna see anything bad happen 'cuz of some little fact he leaked), I can say it's real.
The topic came up when (surprise, surprise) I turned the conversation around to how Apple is doing in the 3D accelerator market, criticising Apple's usage of the overly underpowered GF2MX in their high-end machines. He knew this was a weak point of theirs, but citing recent OSX/MP800/GF3 scores in other games, he was able to turn it around for the better. When I asked him if he knew of the GF3 Ti range, he merely said "very well" and didn't comment further. The interesting part came next (he seemed eager to move the conversation on): "We are focusing more on the 2D market. As you said before, Aqua can be pretty sluggish on even our newer G4 range... you shall see soon, Photoshop will never have run so fast before, and all the Aqua speed issues will just go away." I was surprised, to say the least, and it was pretty obvious what he was getting at. When he finished, I muttered "Raycer", and he nodded.
Personally, I'd put money on this guy's word; he's trustworthy, my friend said.
I wouldn't be surprised to see a 10x speed improvement in the UI; he seemed pretty damn excited about it.
[quote]Originally posted by Your personal informant:
<strong>...you shall see soon, Photoshop will never have run so fast before, and all the Aqua speed issues will just go away"... I muttered "Raycer", and he nodded.
</strong><hr></blockquote>
Perhaps the role of Adobe's PDF format in Quartz becomes clearer...
Isn't Apple almost exclusively using NVIDIA in their desktop computers?
-Ender
EDIT: I had to go see if I could find a link to what I was talking about. I didn't find the CRT that I had seen, but I found a similar LCD. <a href="http://www.pc.ibm.com/europe/pcnews/accessories/options_op1.html" target="_blank">http://www.pc.ibm.com/europe/pcnews/accessories/options_op1.html</a>
[ 12-03-2001: Message edited by: Ender ]
Eat lunch w/ him more often!
AirSluf, would it be theoretically possible to make a voxel accelerator for a GF3?
As for this quartz chip, haven't we been saying this since 1999?
If it'd been me, I would have gotten this guy ripped and pried him for information about the G5, the new iMacs, and Steve Jobs's sex life!
[quote]<strong>As for this quartz chip, haven't we been saying this since 1999?</strong><hr></blockquote>
Rather than a Quartz/PDF accelerator, why not go all out and do QuickTime-on-a-chip? If some Aqua features are integrated into QuickTime, this chip could power an entire line of iDevices.