Technical Questions about GPUL / ApplePI

Posted in Future Apple Hardware, edited January 2014
I realize answers require speculation, but I'm not interested in "whether this is the G5", when it will be released, or "what it will mean" for Apple. This is about understanding the technologies based on what we know. So please don't clutter this thread with anything other than that, or a lock will be imminent.





Here are my initial questions:



1. The eWeek article (and Ars) have noted that the GPUL could be offered to Apple as either a dual or quad core - meaning two or four processor units per discrete chip, if I understand right. My question is this: would a single GPUL processor with a dual core (say at 1.25 GHz each) run OS X and its apps at a similar pace as the dual 7455 DMM running at 1.25 GHz - all else being equal (I know it *won't* be equal given the ApplePI, but I'm speaking hypothetically)?



OR, [are they] saying that this type of setup will run TWICE as fast? That is, each GPUL core represents the data moving power of the entire 1.25GHz DMM (in this example)?



2. I am going to assume that there will be a major update to OS X roughly every twelve months, and that GPUL will start shipping to Apple in about twelve months from now. To me, this means 10.3 will be a 64 bit OS, though applications might not follow suit until the next rev (after Photoshop 8, GoLive 7, etc.). Anyone disagree with that in principle?



My understanding is that there would really be no reason to re-compile 32-bit apps (out of the normal dev cycle) since the penalty for running them under GPUL will be small - and in any event still much faster than anything we can run them on now - correct?



3. Given the design of the GPUL core, would Apple and IBM be more likely to use Hypertransport or RapidIO as their main data moving technology for ApplePI - or perhaps both, using them in different parts of the system? If the latter, which parts would use what IYO?



I know the G5 supposedly utilizes an onboard memory controller, for example... how would GPUL differ?



Will the ApplePI FSB speed be dictated more by the speed and adoption rates of other technologies, or by the GPUL itself? For example, most big RAM manufacturers still offer only PC2700 DDR - nothing faster. Will it be a chicken and egg thing, where if PC2700 is still the standard a few months from now - that's where ApplePI will end up (initially)?



4. ApplePI - a PowerMac and Xserve only architecture? More modular so that it can be modified between different machine types / market segments?



5. How likely is it that Apple will make sure these things are not PowerLogix or Sonnet upgradeable? Are they going to go gung-ho on this, now that they have much more powerful chips, have spent money and research on ApplePI, etc.?



[ 09-20-2002: Message edited by: Moogs ]

Comments

  • Reply 1 of 42
    THT Posts: 3,243 member
    <strong>Originally posted by Moogs:

    1. ... My question is this: would a single GPUL processor with a dual core (say at 1.25 GHz each) run OS X and its apps at a similar pace as the dual 7455 DMM running at 1.25 GHz - all else being equal... ?</strong>



    The Power4 core on which this GPUL chip is based has two fully pipelined multiply-add FPU units. It should double or better the G4's FPU performance. Integer performance should also improve quite a bit, maybe double, if both of the Power4's integer units can execute all integer instructions.



    On top of this, the interchip bandwidth in the GPUL should be about 5 to 6 times that of the dual G4 config, so there is an added benefit there.
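    THT's 2x FPU estimate can be sanity-checked with peak-throughput arithmetic. A sketch in C, assuming (as above) two fused multiply-add FPUs per Power4-style core versus one for the G4, and counting a PowerPC fmadd as two floating-point operations:

```c
#include <assert.h>

/* Peak floating-point operations per cycle for one core: each fully
 * pipelined FPU can retire one fused multiply-add (fmadd) per cycle,
 * and an fmadd counts as two FLOPs. */
int peak_flops_per_cycle(int fpu_units) {
    return fpu_units * 2; /* 2 FLOPs per fmadd */
}

/* Peak MFLOPS at a given core clock in MHz. */
double peak_mflops(int fpu_units, double clock_mhz) {
    return peak_flops_per_cycle(fpu_units) * clock_mhz;
}
```

    At the same 1.25 GHz, two FPUs give 5000 peak MFLOPS against 2500 for a single-FPU core: exactly the doubling described above. Real code won't sustain peak, but the ratio is the point.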



    <strong>OR, [are they] saying that this type of setup will run TWICE as fast? That is, each GPUL core represents the data moving power of the entire 1.25GHz DMM (in this example)?</strong>



    It's a doubling of performance per core.



    <strong>2. I am going to assume that there will be a major update to OS X roughly every twelve months, and that GPUL will start shipping to Apple in about twelve months from now. To me, this means 10.3 will be a 64 bit OS, though applications might not follow suit until the next rev (after Photoshop 8, GoLive 7, etc.). Anyone disagree with that in principle?</strong>



    The Power4 has a requirement of:



    Maintain binary compatibility for both 32-bit and 64-bit applications with prior PowerPC and PowerPC AS systems: several internal IBM task forces in the first half of the 1990s had concluded that the PowerPC architecture did not have any technical impediments to scaling up to significantly higher frequencies with excellent performance. With no technical reason to change, and in order to keep our customers' software investment intact, we accepted the absolute requirement of maintaining binary compatibility for both 32-bit and 64-bit applications, from a hardware perspective.



    I'm not sure if OS X itself needs to be 64-bit to run on the Power4.



    <strong>3. Given the design of the GPUL core, would Apple and IBM be more likely to use Hypertransport or RapidIO as their main data moving technology for ApplePI - or perhaps both, using them in different parts of the system? If the latter, which parts would use what IYO?</strong>



    Since Apple is on the HT consortium, I presume this interconnect will be HT-based. The interconnect will be an on-chip HT switched fabric with an HT link to an I/O controller. For supporting RapidIO processors, the PI would just have one of its HT links bridge to a RapidIO one. This is all baseless speculation.



    <strong>I know the G5 supposedly utilizes an onboard memory controller, for example... how would GPUL differ?</strong>



    Possibly eliminate the L3 and move the memory controller on-die to reduce packaging costs.



    <strong>Will the ApplePI FSB speed be dictated more by the speed and adoption rates of other technologies, or by the GPUL itself? For example, most big RAM manufacturers still offer only PC2700 DDR - nothing faster. Will it be a chicken and egg thing, where if PC2700 is still the standard a few months from now - that's where ApplePI will end up (initially)?</strong>



    If GPUL or Moto's G5 have on-die memory controllers, the PI doesn't really need to do much except support AGP bandwidth.



    <strong>4. ApplePI - a PowerMac and Xserve only architecture? More modular so that it can be modified between different machine types / market segments?</strong>



    If there is a 0.13u G4, it would be a very nice processor for iBooks, iMacs, eMacs and PowerBooks for the next couple of years.



    <strong>5. How likely is it Apple will make sure these things are not Powerlogix or Sonnet upgradeable? Are they going to go gung-ho on this now that they have much more powerful chips, have spent money and research on ApplePI, etc?</strong>



    Depends on the packaging.
  • Reply 2 of 42
    Kecksy Posts: 1,002 member
    [quote]Originally posted by Moogs:

    <strong>[snip]</strong><hr></blockquote>



    I think eWeek meant that a single GPUL is twice as fast as a single G4. That's still impressive given that two G4s are not twice as fast as one G4.



    My guess too is that 10.3 will support 64-bit CPUs. GPUL will probably require a 64-bit kernel to run, but not 64-bit apps, thank God.



    What is Apple Pi? I don't think it's RapidIO as I'm fairly certain that RapidIO can't hit speeds as high as 6.4GBps. Modified Hypertransport is my guess.



    Memory? Several technologies can hit 6.4GBps. DDR-II is the best option, but if it's not out in time, Apple will likely use dual-channel PC2700 or Quad Band DDR. (See: <a href="http://anandtech.com/chipsets/showdoc.html?i=1709" target="_blank">http://anandtech.com/chipsets/showdoc.html?i=1709</a>.) I'm hoping they'd use Quad Band if DDR-II weren't ready, since it can actually hit 6.4GBps and dual-channel PC2700 would fall short.
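    The bandwidth arithmetic behind these options is simple multiplication; a sketch, using the JEDEC convention that PC2700 means roughly 2700 MB/s per 64-bit channel (333 MT/s x 8 bytes):

```c
#include <assert.h>

/* Peak bandwidth of a DDR configuration in MB/s:
 * (effective transfer rate in MT/s) x (bus width in bytes) x channels.
 * PC2700 = 333 MT/s x 8 bytes = 2664 MB/s per channel. */
int ddr_peak_mbs(int transfers_mts, int bus_bytes, int channels) {
    return transfers_mts * bus_bytes * channels;
}
```

    Dual-channel PC2700 tops out around 5.3 GB/s (333 x 8 x 2 = 5328 MB/s), which is why it falls short of 6.4 GB/s, while a 400 MT/s dual-channel setup would just reach it (400 x 8 x 2 = 6400 MB/s).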



    I don't think we'll see upgrade cards, for several reasons. First, GPUL isn't backward compatible with MPX like the G4 was with 60x. Any GPUL card would need to be PCI, but let's face it, PCI could never deliver enough bandwidth. Perhaps it is possible to design a daughtercard which works over MPX, but again you have to ask yourself the bandwidth question. I don't think either solution would work well.
  • Reply 3 of 42
    Amorph Posts: 7,112 member
    [quote]Originally posted by Moogs:

    <strong>

    1. The eWeek article (and Ars) have noted that the GPUL could be offered to Apple as either a dual or quad core - meaning two or four processor units per discrete chip, if I understand right. My question is this: would a single GPUL processor with a dual core (say at 1.25 GHz each) run OS X and its apps at a similar pace as the dual 7455 DMM running at 1.25 GHz - all else being equal (I know it *won't* be equal given the ApplePI, but I'm speaking hypothetically)?



    OR, [are they] saying that this type of setup will run TWICE as fast? That is, each GPUL core represents the data moving power of the entire 1.25GHz DMM (in this example)?</strong><hr></blockquote>



    Considering that there's some question about whether GPUL will be multicore (probably, but some rumors say no) that's up in the air.



    However, given the GPUL's greater parallelism, and the possibility that it is simply more powerful per clock aside from that, a single core GPUL might well be twice as fast as a 7455, assuming that it's running code that plays to its strengths. Note that twice as fast as a 7455 is faster than a dual 7455 (with the exception of a few boundary cases), since multiprocessing always carries a certain amount of overhead.
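    The multiprocessing-overhead point can be made concrete with an Amdahl-style model; the parallel fraction below is illustrative, not a measured figure:

```c
#include <assert.h>

/* Amdahl's law: effective speedup of n processors when only a
 * fraction p of the work can run in parallel. A core that is
 * genuinely 2x faster gives a clean 2.0; a dual only matches that
 * if p = 1.0 and the MP overhead is zero. */
double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}
```

    With 90% parallelizable code, a dual 7455 yields about 1.82x (1 / (0.1 + 0.45)), so a single core that is twice as fast per clock comes out ahead, matching the boundary-case caveat above.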



    [quote]<strong>2. I am going to assume that there will be a major update to OS X roughly every twelve months, and that GPUL will start shipping to Apple in about twelve months from now. To me, this means 10.3 will be a 64 bit OS, though applications might not follow suit until the next rev (after Photoshop 8, GoLive 7, etc.). Anyone disagree with that in principle?</strong><hr></blockquote>



    Likely. It depends on what "64 bit support" entails. For Photoshop, it would mean whole new color spaces, and so I'd expect it to come with a major release. For Lightwave, it might be as simple as a recompile (well, hardly anything is, but the point is that it might come as a point release).



    [quote]<strong>My understanding is that there would really be no reason to re-compile 32-bit apps (out of the normal dev cycle) since the penalty for running them under GPUL will be small - and in any event still much faster than anything we can run them on now - correct?</strong><hr></blockquote>



    If the structure of the GPUL is significantly different from the 7455 - and it looks like it will be - then even 32 bit applications will benefit at least slightly from a recompile, just to get code optimized to the CPU's peculiarities. The same thing happened with PPC code running slower on 7450s than it did on 7400s, until it was recompiled.



    [quote]<strong>3. Given the design of the GPUL core, would Apple and IBM be more likely to use Hypertransport or RapidIO as their main data moving technology for ApplePI - or perhaps both, using them in different parts of the system? If the latter, which parts would use what IYO?</strong><hr></blockquote>



    RapidIO is a fair bet, since both Mot and IBM are adopting it. Also, since it's aimed at the embedded market, it's inexpensive, it scales nicely, and it's fast, so it can trickle down quickly - if not immediately - to the consumer and portable lines. HyperTransport will probably appear where Apple has to interface with third parties, since ideas like HT graphics cards are being bandied about - but then, those could also go to 3GIO.



    [quote]<strong>I know the G5 supposedly utilizes an onboard memory controller, for example... how would GPUL differ?</strong><hr></blockquote>



    No idea.



    [quote]<strong>Will the ApplePI FSB speed be dictated more by the speed and adoption rates of other technologies, or by the GPUL itself? For example, most big RAM manufacturers still offer only PC2700 DDR - nothing faster. Will it be a chicken and egg thing, where if PC2700 is still the standard a few months from now - that's where ApplePI will end up (initially)?</strong><hr></blockquote>



    Obviously there's little point in having a bus to the CPU that's faster than RAM. Otherwise, dunno.



    [quote]<strong>4. ApplePI - a PowerMac and Xserve only architecture? More modular so that it can be modified between different machine types / market segments?</strong><hr></blockquote>



    If it's a protocol implemented over RapidIO, then as far as I understand RapidIO it should scale up from relatively slow and narrow to fast and wide (via multiple channels), and it can be used for simple interconnects or for fabrics. Ditto HT, except that HT doesn't scale down as far.



    [quote]<strong>5. How likely is it Apple will make sure these things are not Powerlogix or Sonnet upgradeable? Are they going to go gung-ho on this now that they have much more powerful chips, have spent money and research on ApplePI, etc?</strong><hr></blockquote>



    I don't know if it's possible to make a CPU that frustrates upgrades. That's usually the motherboard's job, or the firmware's.



    [ 09-20-2002: Message edited by: Amorph ]
  • Reply 4 of 42
    I'm not sure about the new IBM/Apple processor and architecture, but the rest of the PC industry will be moving to a HyperTransport bus, with "3GIO/PCI-Express" interface cards. Or at least that's what I understand. It might turn into the ISA/PCI thing, where you get a slot or two of each kind, PCI and PCI-Express, or whatever combo Apple feels is best.



    Apple would be smart to follow the rest of the industry with the whole PCI-Express thing.



    *note - PCI-Express is not PCI-X
  • Reply 5 of 42
    THT Posts: 3,243 member
    <strong>Originally posted by Kecksy:

    Memory? Several technologies can hit 6.4GBps. DDR-II is the best option, but if it's not out in time Apple will likely use dual channel PC2700 or Quad Band DDR. (See: <a href="http://anandtech.com/chipsets/showdoc.html?i=1709" target="_blank">http://anandtech.com/chipsets/showdoc.html?i=1709</a>.) I'm hoping they'd use Quad Band if DDR-II wasn't ready since it can actually hit 6.4GBps and Dual channel PC2700 would fall short.</strong>



    Quad channel PC800 DRDRAM can hit 6.4 GByte/s today. Quad channel PC1033 DRDRAM can hit 8.4 GByte/s today. Quad channel PC1200 DRDRAM can hit 9.6 GByte/s tomorrow. If only someone would do it.
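    Those figures follow from the channel width: a Direct Rambus channel is 16 bits (2 bytes) wide and PC800 transfers at 800 MT/s, so per-channel bandwidth is just the transfer rate times two bytes. A sketch:

```c
#include <assert.h>

/* Peak bandwidth of a Direct Rambus (DRDRAM) configuration in MB/s.
 * Each channel is 16 bits (2 bytes) wide; PC800 runs at 800 MT/s,
 * so one channel peaks at 1600 MB/s. */
int drdram_peak_mbs(int transfers_mts, int channels) {
    return transfers_mts * 2 /* bytes per transfer */ * channels;
}
```

    Quad-channel PC800 works out to 800 x 2 x 4 = 6400 MB/s, the 6.4 GByte/s quoted above, and quad PC1200 to 9600 MB/s.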
  • Reply 6 of 42
    Random notes:



    - With some fairly minor tweaks to the kernel's HAL, I expect they could have the 32-bit version of Mac OS X running on 64-bit hardware. A full 64-bit version with 64-bit system libraries and APIs is harder, but shouldn't be too hard... at least for Cocoa, POSIX, and Mach. The Carbon APIs are a more interesting question.



    - At the same clock rate a single GPUL core will be twice as fast as a single G4 core (at least according to eWeek). This feels about right given what we know about the GPUL so far. Since they are talking about up to 2 GHz cores, a dual core GPUL will likely be roughly four times faster than a current dual 1 GHz G4... in virtually all ways except, possibly, computationally bound AltiVec code.



    - Apple Pi has been described as a "processor interconnect". My guess is that this is a fast point-to-point link to connect processor chips, possibly an extension to the HyperTransport spec. If they are going to use per-processor memory controllers, then they need a fast way for processors to access each other's memory. This can be done through an off-processor device which sits on the MPX or RapidIO bus, or through an extra port (or ports) on the processor die itself.



    - Forget about upgrades to the new processor for current MPX-based machines.



    - The POWER4 accesses memory "through the L3 subsystem". This says to me that the L3 controller and the memory controller are combined in that design. It'll be interesting to see if they retain this arrangement.



    - I expect DDR-II since the other alternatives above aren't mainstream or standardized enough for Apple and I doubt they'll go RAMBus.



    - 32-bit code could probably benefit noticeably from a recompile (20+%) on a compiler with an instruction scheduler that understands the GPUL. Even without this, however, I expect the GPUL will stomp all over the 7455, even at the same clock rate.



    - There is a lot of point to having a CPU bus that is faster than RAM... it means you can move to better RAM technologies without updating the processor, talk to memory AND I/O at full speed simultaneously, and (if it's a shared bus) send data between processors at higher speeds.



    - I don't think "the rest of the PC industry" is moving to HyperTransport. Certainly AMD, nVidia, ATI and the rest of the HT consortium are but I haven't seen Intel mention HT and they are the biggest player. Intel is flogging 3GIO but that isn't a direct competitor to HT, it fills a different role (expansion bus vs. chip interconnect). I'm sure Apple will go with whatever the industry's expansion bus direction is, since they're on the PCI & AGP bandwagon already.
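    The "roughly four times faster" note above is just two ratios multiplied together; a sketch, treating eWeek's 2x-per-clock figure as an assumption rather than a known quantity:

```c
#include <assert.h>

/* Combined per-core speedup: (per-clock improvement) x (clock ratio).
 * eWeek's claimed 2x per clock, times a 2 GHz part against a 1 GHz
 * G4, gives the 4x estimate. */
double combined_speedup(double per_clock_ratio, double clock_ratio) {
    return per_clock_ratio * clock_ratio;
}
```

    Comparing dual against dual, the core counts cancel, which is why the same per-core estimate applies to whole machines too.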
  • Reply 7 of 42
    More Random notes

    [quote]Originally posted by Programmer:

    - At the same clock rate a single GPUL core will be twice as fast as a single G4 core (at least according to eWeek). This feels about right given what we know about the GPUL so far. Since they are talking about up to 2 GHz cores, a dual core GPUL will likely be roughly four times faster than a current dual 1 GHz G4... <hr></blockquote>

    And don't forget that each core will (perhaps) share the same cache. This would mean a bonus to SMP aware, multi-threaded apps that can switch processors with little or no context switching penalty, right?

    [quote]- Apple Pi has been described as a "processor interconnect". My guess is that this is a fast point-to-point link to connect processor chips, possibly an extension to the HyperTransport spec.<hr></blockquote>

    I doubt it's an "extension" as much as just a bare-bones implementation. Apple has a long history of only taking what they want from a spec and building implementations specific to their needs at the moment.

    [quote]If they are going to per-processor memory controllers then they need a fast way for processors to access each other's memory. This can be done through an off-processor device which sits on the MPX or RapidIO bus, or through an extra port (or ports) on the processor die itself.<hr></blockquote>

    But they don't really need to access "each other's" memory (I think you mean core cache?) if they can share a fast cache on die that is external to the individual cores.

    [quote]- The POWER4 accesses memory "through the L3 subsystem". This says to me that the L3 controller and the memory controller are combined in that design. It'll be interesting to see if they retain this arrangement.<hr></blockquote>

    I must insist that people remember that the GPUL is not simply a die-shrunk Power4. Yes, it will be very interesting to see which features of the Power4 make it into the GPUL, but some of the discussion assumes too much in this regard, and overlooks details like the impact of adding an AltiVec unit to the mix. This may mean dual- or quad-core packages are more myth than substance, for instance.

    [quote]- 32-bit code could probably benefit noticably from a recompile (20+ %) on a compiler with an instruction scheduler that understands the GPUL. Even without this, however, I expect the GPUL will stomp all over the 7455 even at the same clock rate.<hr></blockquote>

    Well one would hope that doubling the number of instructions per clock would tend to have that effect, but we don't know much about pipeline depth, cache, or other factors that could affect performance one way or another as well.



    But the upshot is that whatever tradeoffs IBM had to make, they aren't likely to come up with a complete dog. Especially not since the title of their MPF session refers to "breaking through computationally intensive barriers". I wouldn't mind Apple breaking through again.
  • Reply 8 of 42
    [quote]Originally posted by dartblazer:

    <strong> would be smart to follow the rest of the industry with the whole PCI-Express thing.</strong><hr></blockquote>

    Why? How many OEMs who are interested in the PCI-Express standard are also interested in providing solutions, drivers, and support to the Mac community?
  • Reply 9 of 42
    Moogs Posts: 4,296 member
    THT, Programmer, Tomb, et al:



    Thanks for the great feedback so far! Sorry I didn't respond a little sooner but I just got a copy of Jaguar (finally) and so have been spending the meantime installing / updating my apps / etc.



    I guess one of the bigger question marks is the ApplePI architecture, as you can make arguments either way: Apple on the HT board, IBM and MOT on the RIO board, PC2700 not being widely adopted, etc.



    I will post back here later this weekend with some more questions. This is the kind of thread that can really benefit people vs. the random speculation we often see (though that can be fun too at times)....



    Thanks again guys. Keep the good ideas coming!



    -Moogs
  • Reply 10 of 42
    AirSluf Posts: 1,861 member
  • Reply 11 of 42
    [quote]Originally posted by Programmer:

    <strong>Random notes:

    - Forget about upgrades to the new processor for current MPX-based machines.</strong><hr></blockquote>

    Maybe it's just me, but I don't consider CPU 'upgrades' to be a worthwhile expense, so this is a big 'so what?' for me.



    [quote]<strong>- The POWER4 accesses memory "through the L3 subsystem". It'll be interesting to see if they retain this arrangement.</strong><hr></blockquote>

    Makes more sense than the 'off-to-one-side' approaches (better staging of the data(?)).



    [quote]<strong>- I expect DDR-II since the other alternatives above aren't mainstream or standardized enough for Apple and I doubt they'll go RAMBus.</strong><hr></blockquote>

    Hard to see how I could have missed this, but I'm drawing a blank here. Got a 25-words-or-less summary of DDR-II's differences?



    [quote]<strong>- There is a lot of point to having a CPU bus that is faster than RAM... it means you can move to better RAM technologies without updating the processor, talk to memory AND I/O at full speed simultaneously, and (if its a shared bus) send data between processors at higher speeds.</strong><hr></blockquote>

    I've been wanting to ask you: how's the new PowerMac arrangement working out in terms of responsiveness/speed under load (disk access, transfers, etc.)?





    Also, TotU brought up the matter of instructions per cycle. This was never an issue for me when I was programming, so I know nothing about it. Is this something that the CPU manages itself (gathering instructions from the top 8 processes, maybe?), or can it be exploited through design & code?





    Interesting stuff - glad there's something to talk about again.



    [ 09-26-2002: Message edited by: Capt. Obvious ]
  • Reply 12 of 42
    Amorph Posts: 7,112 member
    [quote]Originally posted by AirSluf:

    <strong>



    I'd lay odds that Carbon will forever live in 32-bit land. As long as the GPUL runs 32-bit code without penalty, there isn't much motivation for Apple to help devs remain in Carbon land.



    Even apps like Photoshop could stand a little more distancing between the UI and the underlying worker code, following the Model-View-Controller pattern. A jump to features requiring 64-bit code, on both Hammers and GPULs, would require a not-so-insignificant rewrite from the ground up anyway. It would be a perfect opportunity to push the underlying engine forward and clean out years of legacy muck.</strong><hr></blockquote>



    I don't know how true that is. However, I can see Apple deprecating huge swaths of Carbon in the near future - basically, all the OS 9 hangover stuff: WaitNextEvent() and kindred, and everything else that is basically a concession to the older OS'. Getting rid of all that cruft alone would go a long way toward making Carbon easier to migrate to 64 bit. After all, it's served its main purpose by now, which is getting Toolbox apps over to OS X. Its secondary purpose, targeting both Mac OS and OS X, is on its way out, so the part of Carbon geared toward that will be the next on the chopping block (meaning either that the calls will disappear, or that Apple will not blink at altering them in ways that break under Mac OS).



    After what Apple has said, though, I don't think Carbon is going anywhere. Evolving, yes. Vanishing, no.
  • Reply 13 of 42
    [quote]Originally posted by Tomb of the Unknown:

    <strong>

    Why? How many OEMs are who are interested in the PCI-Express standard are interested in providing solutions and drivers and support to the Mac community?</strong><hr></blockquote>



    PCI will eventually be replaced by PCI Express, so Apple doesn't really have a choice here. OEM support will come if Apple sells enough Macs with PCI Express slots.
  • Reply 14 of 42
    [quote]Originally posted by Programmer:

    <strong>

    - Apple Pi has been described as a "processor interconnect". My guess is that this is a fast point-to-point link to connect processor chips, possibly an extension to the HyperTransport spec.</strong><hr></blockquote>



    I think it's basically HT + coherency layer, something like AMD's Coherent HyperTransport technology that will be used to connect multiple Opteron processors.



    [quote]<strong>If they are going to per-processor memory controllers then they need a fast way for processors to access eachother's memory. This can be done through an off-processor device which sits on the MPX or RapidIO bus, or through an extra port (or ports) on the processor die itself.

    </strong><hr></blockquote>



    Sounds like a job for Apple Pi.



    [ 09-21-2002: Message edited by: Analogue bubblebath ]
  • Reply 15 of 42
    moki Posts: 551 member
    [quote]Originally posted by Amorph:

    <strong>



    I don't know how true that is. However, I can see Apple deprecating huge swaths of Carbon in the near future - basically, all the OS 9 hangover stuff: WaitNextEvent() and kindred, and everything else that is basically a concession to the older OS'. Getting rid of all that cruft alone would go a long way toward making Carbon easier to migrate to 64 bit.</strong><hr></blockquote>



    At the kernel level, some work needs doing in order to update the OS to run fully 64 bit, but at the application level, it is trivial.



    All that needs doing is changing the basic pointer data type from UInt32 to UInt64 (which Apple would do in the 64 bit variant of their headers), and then you make sure your code doesn't assume that pointers can only be 32 bits in size when doing pointer arithmetic.



    There really isn't much more to it than that.
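    moki's "don't assume pointers are 32 bits" rule can be sketched in C; the helper names are hypothetical, and the uint32_t pattern mirrors the old habit of stashing pointers in 32-bit fields:

```c
#include <assert.h>
#include <stdint.h>

/* BROKEN on a 64-bit system: storing a pointer in a 32-bit integer
 * drops the high bits, so the round trip may yield a different
 * address. */
const char *round_trip_32(const char *p) {
    uint32_t stored = (uint32_t)(uintptr_t)p; /* truncates */
    return (const char *)(uintptr_t)stored;
}

/* Portable: uintptr_t is defined to be wide enough to hold an
 * object pointer, so the round trip preserves the address. */
const char *round_trip(const char *p) {
    uintptr_t stored = (uintptr_t)p;
    return (const char *)stored;
}
```

    Code like round_trip survives the UInt32-to-UInt64 header change moki describes untouched; code like round_trip_32 has to be found and fixed.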
  • Reply 16 of 42
    bunge Posts: 7,329 member
    [quote]Originally posted by moki:

    <strong>



    There really isn't much more to it than that.</strong><hr></blockquote>



    And then all of your games will double in speed, right?



    To whom it may concern, that was a joke.



    Now if a 64-bit chip is released, the current OS X would run on it without penalty in 32-bit mode. At that point could a third party application take advantage of let's say, the 64-bit addressing without a new kernel?



    I'm assuming no. So if Apple has to update the OS to allow third party companies to take advantage of the GPUL's advanced features, will there have to be two code bases for the OS from that point forward? Two boxes on the shelf? Double the testing to make sure a third party app is bug free? That would be a bummer.
  • Reply 17 of 42
    AirSluf Posts: 1,861 member
  • Reply 18 of 42
    [quote]Originally posted by AirSluf:

    <strong>

    I don't think it will go away completely either, but I don't see much motivation for Apple making it 64-bit clean. There aren't an overwhelming number of applications that will NEED to be 64-bit based, and since none exist today on the Mac or PC platform (no legacy platform porting issues) the potential harm in only offering Cocoa for 64-bit is radically less than it was during the corresponding shift from OS 9 to OS X.

    </strong><hr></blockquote>

    I disagree with this: Adobe is interested in a 64-bit version of Photoshop (64-bit color), and I doubt that Adobe will develop a Cocoa version of Photoshop. They have just Carbonized Photoshop 7.0; making a Cocoa Photoshop is too huge an investment.
  • Reply 19 of 42
    ZoSo Posts: 177 member
    [quote]Originally posted by Powerdoc:

    <strong>

    I disagree with this: Adobe is interested in a 64-bit version of Photoshop (64-bit color), and I doubt that Adobe will develop a Cocoa version of Photoshop. They have just Carbonized Photoshop 7.0; making a Cocoa Photoshop is too huge an investment.</strong><hr></blockquote>



    I'd say you're unfortunately right... I think many (if not all) key applications from the Toolbox days will never be rewritten in Cocoa, especially huge apps like Photoshop. It's a real pity though, Cocoa apps are really a different world compared to Carbon apps.



    The best thing Apple bought from NeXT could very well be the Cocoa environment, but with very few exceptions, so far I've only seen small new apps/shareware developed using it. Either Apple continues improving it like it's been doing (think about anti-aliasing; it only came to Carbon apps with 10.1.5) or it will become--with time--something people will try to avoid as much as possible.



    I'd personally love a Carbon-free OS X!



    But then, maybe it's just me...



    ZoSo
  • Reply 20 of 42
    <a href="http://hankfiles.pcvsconsole.com/answer.php?file=430" target="_blank">http://hankfiles.pcvsconsole.com/answer.php?file=430</a>



    [quote] Instead of looking at the total bits and total color (or colour ;-]) possibilities, it's better to look at the bits per channel to know how smooth shade transitions can be.



    With 24-bit or 32-bit color, the RGB channels have 8 bits each. That allows 256 shades. For example, going from solid black to solid white would include 256 shades (black-0, white-255, and 254 in between). 24-bit = RGB; 32-bit = RGBA (alpha included). I know that I can see the difference in shades here, but maybe not everyone can. It's not very noticeable though.



    There are not 16,777,216 shades of one color, but that's the total number of all shades of all colors, in case anyone is not aware.



    Now when lots of color combining occurs in real-time graphics at 32-bit precision (8 bits per channel), some precision can be lost with each pass of rendering. As graphics get more demanding, more passes will occur too. Eventually, games will use stuff like 64 texture layers per poly (maybe 4 passes and 16 textures per pass), plus FSAA, motion blur, fog, and more color blending effects.



    With 8 bits per channel, even if 2 bits are lost, that's already down to only 64 shades per color channel (only slightly better than 16-bit color). If 4 bits are lost, we're down to 16 shades (below 16-bit quality).



    You can look at 64-bit color as Color Anti-Aliasing. Everything can be processed at 64 bits internally, and the monitor could display it at 24-bit still. Just like how supersampled FSAA renders internally at a resolution larger than the screen res, and motion blur renders internally with more frames than your monitor/eyes can process.



    64-bit color can reduce color banding and can improve sub-pixel accuracy, full-scene anti-aliasing, and color/gamma correction.



    Plus, 64-bit color will be using 16-bit floating point value channels, and that's more friendly for the graphics pipeline. Current 32-bit color uses 8-bit integer values per channel. <hr></blockquote>
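    The shade arithmetic in the quote reduces to powers of two; a small sketch:

```c
#include <assert.h>

/* Distinct shades representable per color channel. */
long shades_per_channel(int bits) {
    return 1L << bits;
}

/* Total RGB colors for a given number of bits per channel. */
long total_rgb_colors(int bits_per_channel) {
    return 1L << (3 * bits_per_channel);
}
```

    8 bits per channel gives 256 shades and 16,777,216 total colors; losing 2 bits of precision in blending leaves only 64 shades per channel, as described above.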



    But now, does the OS have to be 64-bit, or can Photoshop support 64-bit color in a 32-bit app without a major speed penalty?