Temporary Bridge in 970 Processor?

Posted in Future Apple Hardware, edited January 2014
What is the reality behind the Register article that says Panther will not be a 64-bit OS? Is the author a little confused, or have we been making incorrect assumptions? The article also speaks of bridge circuitry in the 970 that will go away once Apple is able to provide a 64-bit OS.



http://www.theregister.co.uk/content/39/31600.html



As I understand it, Panther would be a 64-bit OS if it can run and make full use of 64-bit applications. It does not matter how much of Panther itself is written as 64-bit code. Is this what is confusing the author, or will Panther fall short in running 64-bit applications? Is that bridge thing real, and what is it about?



This is a mix of hardware and OS X. I'll assume that it will get moved if it needs to.

Comments

  • Reply 1 of 75
    paul Posts: 5,278 member
    panther is 32 bit... parts of it and smeagol are special and allow it to address more than 4 gigs of ram...



    the 970 can run both 32- and 64-bit code at full speed - one of the major benefits of RISC processors...



    don't know anything about a bridge...
  • Reply 2 of 75
    rhumgod Posts: 1,289 member
    Quote:

    Originally posted by Paul

    panther is 32 bit... parts of it and smeagol are special and allow it to address more than 4 gigs of ram...



    the 970 can run both 32- and 64-bit code at full speed - one of the major benefits of RISC processors...



    don't know anything about a bridge...




    Not too sure about that. Smeagol will be Mac OS X 10.2.7, the release that ships with the new Power Mac G5s, but that isn't Panther. Panther won't grace us with its presence until fall/winter (Sept-Dec). A lot of the reseller websites I've been visiting say Panther will be here at the "end of the year". True, the 970 can run either natively; I think that is what the author meant by the bridge. Panther, I am figuring, will be "for the most part" 64-bit.
  • Reply 3 of 75
    paul Posts: 5,278 member
    Quote:

    Originally posted by Rhumgod

    Not too sure about that. Smeagol will be Mac OS X 10.2.7, the release that ships with the new Power Mac G5s, but that isn't Panther. Panther won't grace us with its presence until fall/winter (Sept-Dec).



    i know...

    "panther is 32 bit... parts of it (panther) and smeagol (not panther) are special"





    panther is 32 bit... no reason to be any more than that... for the memory thing they will use the same "hack" that they used in smeagol... no reason to recompile the whole OS because not everyone has a G5...
  • Reply 4 of 75
    commodus Posts: 270 member
    As has been said, there's not much incentive to make Panther 64-bit. Developers are only just getting G5s and don't have any immediate reason to go 64-bit with most of their software. You don't need 64-bit precision in Photoshop filters.



    What you do need is access to lots of memory. I've heard that you might not be able to assign more than 4 GB to one app in Jaguar or Panther, but just being able to have memory for a large-scale project AND having memory left over is probably going to mean everything to a pro.
  • Reply 5 of 75
    rhumgod Posts: 1,289 member
    Quote:

    Originally posted by Paul

    panther is 32 bit... no reason to be any more than that... for the memory thing they will use the same "hack" that they used in smeagol... no reason to recompile the whole OS because not everyone has a G5...



    I do think that all pertinent I/O drivers, the kernel, etc. will be 64-bit. That is the part I meant by "most of it". True, there will be some legacy 32-bit stuff involved, but it really won't matter much with a 970 that can deal with both until the OS is completely 64-bit.



    As they said at the WWDC, a recompile of an app is a relatively painless undertaking. They took the trouble to develop it so other developers wouldn't need to spend months/years doing the dirty work, God bless 'em (IBM and Apple).
  • Reply 6 of 75
    snoopy Posts: 1,901 member
    I have a limited understanding of the 970, and I know nothing about this so-called bridge. If it exists, it sounds like a way for 32-bit applications to address more memory. With the bridge, existing 32-bit applications need not be rewritten as 64-bit applications to take advantage of more memory. It may be temporary, since eventually these applications will all be 64-bit applications. I'm asking the question because I'm not sure what is going on.



    Regarding Panther, the author may be confused about what makes a 64-bit OS, but I'm not sure. I understand that there are different 'levels' at which OS X can work with the 970. The following may be oversimplified, which is something I do to help me understand it.



    Level 1: OS X must have certain changes to even work with the 970.



    Level 2: Level 1 plus OS X can handle the memory for a 64-bit system. I believe this is the level at which OS X 10.2.7 (Smeagol) works.



    Level 3: Level 2 plus OS X can handle applications written for a 64-bit system, with 64-bit data and everything else it implies. I believe Panther will work at this level. I believe Panther must work at least at this level, so developers can write 64-bit applications.



    Level 4: Level 3 plus OS X itself is written to make use of 64-bit code in everything that might benefit from this transition. I believe this level will come later.
  • Reply 7 of 75
    rhumgod Posts: 1,289 member
    Quote:

    Originally posted by snoopy

    The following may be oversimplified, which is something I do to help me understand it.



    Those levels sound pretty accurate from everything I've read. Anyone else know anything more in-depth about any of this discussion? Programmer?
  • Reply 8 of 75
    synp Posts: 248 member
    Quote:

    Originally posted by snoopy

    I have a limited understanding of the 970, and I know nothing about this so-called bridge. If it exists, it sounds like a way for 32-bit applications to address more memory. With the bridge, existing 32-bit applications need not be rewritten as 64-bit applications to take advantage of more memory. It may be temporary, since eventually these applications will all be 64-bit applications. I'm asking the question because I'm not sure what is going on.



    Regarding Panther, the author may be confused about what makes a 64-bit OS, but I'm not sure. I understand that there are different 'levels' at which OS X can work with the 970. The following may be oversimplified, which is something I do to help me understand it.



    Level 1: OS X must have certain changes to even work with the 970.



    Level 2: Level 1 plus OS X can handle the memory for a 64-bit system. I believe this is the level at which OS X 10.2.7 (Smeagol) works.



    Level 3: Level 2 plus OS X can handle applications written for a 64-bit system, with 64-bit data and everything else it implies. I believe Panther will work at this level. I believe Panther must work at least at this level, so developers can write 64-bit applications.



    Level 4: Level 3 plus OS X itself is written to make use of 64-bit code in everything that might benefit from this transition. I believe this level will come later.




    Sounds right. I'd say that level 3 also needs some support for "bi-modal" apps, meaning apps that can run as either 32-bit or 64-bit. Perhaps this can be accomplished by having two binaries in the application (which is actually a folder).
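
    Just to illustrate what I mean - the names below are made up, not anything Apple has announced - a "bi-modal" app bundle could simply carry two executables and the system could pick the right one at launch:

        MyApp.app/
            Contents/
                Info.plist
                MacOS/
                    MyApp      (32-bit executable, runs on a G3/G4)
                    MyApp64    (64-bit executable, used on a G5)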



    Level 4 is a long way off, since Apple won't release any OS that depends on 64 bits until about three years after the lowliest iBook has that capability.
  • Reply 9 of 75
    smircle Posts: 1,035 member
    The two main problems with migrating to a 64-bit system are:



    1) being able to expand the address space beyond the 4 GB limit



    2) rewriting the APIs to allow applications to use more than 4 GB



    The first condition has to be met in the system kernel. The memory allocator, the virtual memory subsystem and the task scheduler need to be rewritten to move to 64-bit addresses.

    If you cannot redo the whole kernel in time, with the PPC you could hack 64-bit addressing by switching between 32-bit wide chunks of memory. This is most likely what we will see in Smeagol. A consequence of this approach is that applications cannot easily use more than 4 GB (maybe they cannot at all, I don't know the specifics). This is similar to the 640K limit DOS suffered from for quite a while. It is also most likely what the Reg is referring to as a bridge extension of the PPC instruction set architecture - it's not a chip on the mainboard but extensions to the assembler instructions that allow you to switch between 32-bit banks.

    At this stage you are essentially 32-bit but can use the full address space of the 970.
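
    To make the bank-switching idea concrete, here is a toy C sketch - emphatically not how Smeagol actually does it, and every name in it is invented. The point is just that the code only ever dereferences one small fixed window, while the offsets it works with are 64-bit:

        #include <stdint.h>
        #include <stdio.h>

        #define WINDOW_SIZE 4096u
        static unsigned char window[WINDOW_SIZE];   /* the only memory the 32-bit code "sees" */

        /* Pretend the kernel maps the requested slice of a huge region into
           our fixed window.  Here the "region" is simulated arithmetically. */
        static unsigned char *map_window(uint64_t offset)
        {
            for (unsigned i = 0; i < WINDOW_SIZE; i++)
                window[i] = (unsigned char)((offset + i) & 0xFF);
            return window;
        }

        int main(void)
        {
            /* An offset far beyond anything a 32-bit pointer could reach: */
            uint64_t big_offset = 6ULL * 1024 * 1024 * 1024;   /* 6 GB */
            unsigned char *base = map_window(big_offset & ~(uint64_t)(WINDOW_SIZE - 1));
            printf("byte at 6 GB: %u\n", (unsigned)base[big_offset & (WINDOW_SIZE - 1)]);
            return 0;
        }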



    To move to a native 64-bit implementation you need to get past this and have a fully native 64-bit kernel. Then applications are free to move to 64-bit pointers, and only then can developers write applications that address the full 64 bits without the bridge hack. But those applications will not be able to use a lot of the system APIs if the APIs are not brought up to 64-bit too.



    So the next step will be to provide 32-bit and 64-bit versions of the system APIs (Core Foundation, Cocoa, Carbon). Contrary to Apple's RDF, rewriting an app that is optimized for Mac OS is a very tedious job and will require about as much time as carbonizing took when Mac OS X came about.



    So, my guesstimate is a 4-6 year transition period until the migration is complete - and even then, boatloads of 32-bit apps will remain, since word processors rarely need to address more than 4 GB.
  • Reply 10 of 75
    outsider Posts: 6,008 member
    The hardware supports 36-bit addressing versus the 32-bit addressing of the G4, so at most the hardware can support up to 64 GB, more realistically 16 GB (2 GB DIMMs). It's all in the OS, so Apple MUST be looking to add full memory support in a future OS soon, either with Panther or a Panther point release. So either by the end of the year or Q1 2004 there should be a Mac OS with proper support for at least 8 GB of RAM.
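
    Just to spell out the arithmetic (the eight-slot figure is my own assumption): 2^36 bytes = 64 GB, versus 2^32 bytes = 4 GB for plain 32-bit addressing; and 8 slots x 2 GB DIMMs = 16 GB, which is where the "more realistic" number comes from.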
  • Reply 11 of 75
    programmer Posts: 3,458 member
    Oi. I was afraid this would cause endless confusion. I'll try to explain what I think the situation is. Please bear with me.





    The major feature of software which is compiled as 32-bit is that its memory addresses (called pointers) are 32 bits long. This means they can identify any one of 2^32 bytes (4 GB). When these pointers are stored in memory they, obviously, occupy 32 bits (4 bytes). The processor's internal "registers" (which are the only data PowerPC instructions can operate on directly) are 32 bits when in 32-bit mode. This is why we call it a 32-bit machine. All of the software in Jaguar and previous Apple operating systems back to System 1.0 is 32-bit software (even though in the really early days they made some tricky assumptions that pointers would never actually have values larger than 24 bits, but that's ancient history now).



    The major feature of software which is compiled as 64-bit is that memory addresses (called pointers) are 64 bits long. This means they can identify any one of 2^64 bytes (~16 billion GB). When these pointers are stored in memory they, obviously, occupy 64 bits (8 bytes). The processor's internal "registers" are 64 bits when in 64-bit mode. This is why we call it a 64-bit machine. So far none of Apple's software has been 64-bit software.
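
    If you want to see this for yourself, here's a trivial C program (nothing Apple-specific, just an illustration): built as a 32-bit binary it reports 4-byte pointers, built as a 64-bit binary it reports 8-byte ones.

        #include <stdio.h>

        int main(void)
        {
            /* 4 bytes in a 32-bit build, 8 bytes in a 64-bit build */
            printf("pointer size: %lu bytes\n", (unsigned long)sizeof(void *));
            printf("addressable range: 2^%lu bytes\n", (unsigned long)(sizeof(void *) * 8));
            return 0;
        }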



    An operating system, broadly speaking, does two things. First it manages the hardware resources -- it doles out memory, processor time, disk space, etc etc for the use of applications. Second, it presents system services to the applications so that they can do the common things with those resources -- create files, allocate memory, draw graphics, play audio, etc etc. The first function of the operating system is hidden from the applications, and they get access to it by using the services provided by the operating system. These services are accessed via things called "application programming interfaces" (APIs). An API is a detailed programming spec which the compiler understands and can use when compiling an application to allow the application access to the services provided by the operating system. Quartz, for example, provides an API that applications can "call" to draw things on the screen. There are "functions" in this API to do things like erase a window, create a line that goes from point A to point B, set the colour of the text, etc.



    Since these APIs are software source code they must be compiled just like the operating system code and the application code, and so you must choose a compiler to use. You can either use a 32-bit compiler, or a 64-bit compiler. As a result we have at least 4 pieces of software to consider in each case -- the operating system resource management (which includes drivers), the operating system service (e.g. Quartz, QuickTime, the file system, the memory manager, etc), the API, and the application that wants to use the services. Pre-G5 these are all 32-bit software. The G5 makes this all much more complicated.



    The first requirement in a 64-bit system is how to deal with memory addresses. The underlying resource management in a 32-bit OS is built to deal with 4 GB of (possibly virtual) memory per application. This is worth a little explanation -- pre-OS X the Mac kept all applications and the operating system in a single "address space". That meant that all 32-bit addresses in the system were equivalent and given any 32-bit address you could find any of the up to 4 billion bytes in the machine, including ones which didn't belong to you. Mac OS X brought the Mac into the modern age of operating systems by giving each application its own "address space" so that it could have its own virtual 4 billion bytes of memory, and its addresses would refer only to its own space.



    An aside: Theoretically the G4 could support 36-bit addresses, although Apple never supported it, which meant that the hardware could refer to ~64 billion bytes but a single application still only had a 32-bit address and therefore could only use 4 billion bytes, because it had no way to give the address for any more than that. In theory the operating system could do some fancy management to keep track of multiple applications whose physical memory was in excess of 4 billion bytes (up to 16 billion on the G4). They didn't do this since it wasn't very useful and a bit hard to manage, since the OS itself could only refer to 4 billion bytes directly as well. The virtual memory system can track more memory space than fits physically in the machine, however. Now back to the main topic, in the next message...
  • Reply 12 of 75
    powerdoc Posts: 8,123 member
    Smircle is right; Smeagol only supports 4 GB per application, according to Apple.
  • Reply 13 of 75
    programmer Posts: 3,458 member
    I mention all of this stuff about "address spaces" because now in a 64-bit system it is possible to have an address space which is intended for use by a 32-bit application or a 64-bit application. The OS knows which kind of code the application is, so when it realizes that the application is only 32-bit it knows that the app can't possibly refer to more than 4 billion bytes. On the other hand, when it sees that the app is 64-bit it has to be prepared to deal with the app referring to far more than that, and thus it has to manage the memory resources a bit differently.



    When an application wants to use a system service, part of the information it passes to and from that service is memory addresses. It communicates with the service and says things like "draw a triangle with the data at this memory address", and it provides one of its memory addresses where it has hopefully put the data for the triangle it wants drawn. It should be fairly obvious now that if this is a 32-bit app it will specify the memory address as a 32-bit value, and if it is a 64-bit app it will specify the memory address as a 64-bit value. The software that is the API must be set up to take this value correctly, and it must pass it along to the service's software correctly. The same happens in the opposite direction -- if the system is giving data to the app it will tell the app which memory address to look at to find the data that has been provided.
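
    A quick C sketch of what I mean -- the struct and function here are invented for illustration, not a real Quartz call:

        #include <stdio.h>

        /* A made-up "system service", for illustration only. */
        struct triangle {
            float x[3];
            float y[3];
        };

        /* The app hands the *address* of its triangle data to the service.
           In a 32-bit app that address is a 4-byte value; in a 64-bit app it is
           an 8-byte value, so the API glue on both sides must agree on the width. */
        static void draw_triangle(const struct triangle *data)
        {
            printf("triangle at (%g,%g) (%g,%g) (%g,%g)\n",
                   data->x[0], data->y[0], data->x[1], data->y[1], data->x[2], data->y[2]);
        }

        int main(void)
        {
            struct triangle t = { { 0, 1, 2 }, { 0, 1, 0 } };
            draw_triangle(&t);   /* what actually crosses the API is the address of t */
            return 0;
        }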





    So now we come to Panther.



    First of all consider the memory resource management -- Panther is almost certainly going to have a 64-bit memory manager (Smeagol won't). This will allow 64-bit applications to be created which are assigned a 64-bit address space. Along with this, therefore, it is necessary to provide a 64-bit API for getting this memory.



    Next, consider that Apple has a great deal of software in Mac OS X, and up until now almost all of it has been carefully crafted to be optimal on the G3/G4, which are both 32-bit processors. This software will not instantly convert to 64-bit "for free", so waiting to release the new OS until it has all been converted would significantly delay the release of the OS. The solution is to provide a "bridge". In this context a bridge is a way for 64-bit applications to use 32-bit APIs to reach 32-bit system services. The main thing this bridge has to do is somehow allow software that talks in terms of 64-bit pointers to communicate with software that talks in terms of 32-bit pointers.



    Imagine a similar problem for a moment: how do you talk to somebody with a much smaller vocabulary than yours? You choose your words carefully, and you limit yourself to using words that you know that they know. Apply the same principle to this software problem. A 64-bit app that wants to talk to a 32-bit piece of software knows that it can't use all of its addresses with the 32-bit software because it knows about 4 billion times as many. Instead it limits itself to using only about 4 billion addresses and makes sure that the 32-bit software knows where those particular 4 billion addresses are.



    Apple will provide a new API which gets "32-bit friendly" addresses for use when talking to 32-bit APIs. The operating system's memory resource manager will set up the hardware so that the 32-bit addresses produced by 32-bit software will correctly identify the same bytes that the 64-bit software thinks they identify. Now any time somebody writes 64-bit software that communicates with the older APIs they just use a bit of care to ensure that any memory addresses are understood by the 32-bit software.
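
    Something like this, in rough C terms -- I've invented these function names purely for illustration, they are not Apple's actual API:

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical stand-ins, NOT real Apple calls: */
        static void *alloc_low32(size_t bytes)
        {
            /* Pretend this allocator only hands out addresses below 4 GB. */
            return malloc(bytes);
        }

        static void legacy_draw(uint32_t buffer_addr)
        {
            /* A "32-bit" service that can only accept a 4-byte address. */
            printf("drawing from address 0x%08x\n", (unsigned)buffer_addr);
        }

        int main(void)
        {
            void *buf = alloc_low32(1024);
            /* Narrowing the pointer is only safe because the allocator
               promised an address that fits in 32 bits. */
            legacy_draw((uint32_t)(uintptr_t)buf);
            free(buf);
            return 0;
        }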



    In this way Apple can roll out Panther with a 64-bit memory manager, plus any pieces of software that they have had time to update to 64-bit (including their APIs), but leave all the other 32-bit stuff intact and usable. Thanks to the hardware support in the G5 the software can switch back and forth between 32-bit and 64-bit modes within a single application, even with a 64-bit address space, and the 32-bit software is just oblivious to the existence of the huge amount of extra space -- it can't "see" it because it has no way to describe its address.



    This scheme will work fine in applications and in the kernel with drivers. Apple did exactly this kind of thing (only much harder) during the 68K -> PowerPC transition back in 94/95. There they didn't have to translate memory addresses, they had to switch processor instruction sets, but many of the same ideas apply.



    Since Apple uses FreeBSD, and that software is already supported on various 64-bit hardware, I think there is a good chance we will see 64-bit APIs for that part of the OS (networking, file system, basic standard library support, memory management, etc). The file system has been the focus of a lot of work by Apple lately, and I'd be surprised if they didn't provide 64-bit support for it. This level of functionality is very useful because it covers all of the basic Unix services, which means that 64-bit Unix software will port easily to Panther -- that's a big win for Apple. Things which won't likely be 64-bit are things specific to Apple -- the window manager, Quartz, QuickTime, Core Audio, their OpenGL implementation, etc. This probably won't be a "big deal", and depending on how Apple handles it, this software may just recompile and work by default in 64-bit and require changes only if you want to actually allocate >4 GB.



    I hope this hasn't been too confusing -- I wrote it on the fly after not getting enough sleep. I'll try to answer any questions (at least on this topic ).
  • Reply 14 of 75
    kickaha Posts: 8,760 member
    Excellent rundown.



    Of course, the bottom line regarding this 'bridge' is... it doesn't matter. The 970 runs 32-bit code with zero penalty, unlike the competition, so there's no reason to compile the whole shebang for 64-bit (incurring a space penalty) just to eliminate a phantom problem.



    Heck, you want to know how good Apple's engineers have been?



    Launch a Classic app.



    You're running 68k assembly code in portions of the Classic OS, on a 68k emulator, running inside an OS that's been launched as an application inside another OS for a completely different hardware platform.



    And you can't tell me you can tell the difference.
  • Reply 15 of 75
    These types of articles are funny. Near as I can tell, from the Register's criteria, even Solaris isn't a 64-bit operating system (IIRC X11 and a number of other libraries are 32-bit only).



    A 64-bit operating system is one that allows 64-bit binaries to be compiled and run (where a 64-bit binary is one where sizeof(void*) = 8). Anything above and beyond is entirely optional.
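
    If you want a concrete test of that definition, plain C99 gives you one at compile time (nothing OS X-specific here, just generic C):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
        #if UINTPTR_MAX > 0xFFFFFFFFu
            puts("64-bit binary: sizeof(void*) == 8");   /* pointers wider than 32 bits */
        #else
            puts("32-bit binary: sizeof(void*) == 4");
        #endif
            return 0;
        }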
  • Reply 16 of 75
    cubist Posts: 954 member
    So then would we expect that Panther would include both 32- and 64-bit binaries for things like the file system, since older machines won't be able to run the 64-bit code?



    The Sun Sparc machines have this type of logic today. Some Sparc machines are able to run either 32- or 64-bit OS code; some can only run the 64-bit OS code.



    From your (Programmer's) description, the "bridge" is what people used to Intel coding would call a "thunk", is that so?
  • Reply 17 of 75
    Quote:

    Originally posted by cubist

    So then would we expect that Panther would include both 32- and 64-bit binaries for things like the file system, since older machines won't be able to run the 64-bit code?



    The Sun Sparc machines have this type of logic today. Some Sparc machines are able to run either 32- or 64-bit OS code; some can only run the 64-bit OS code.



    From your (Programmer's) description, the "bridge" is what people used to Intel coding would call a "thunk", is that so?




    A thunk is just a context switch. This bridge is a different thing; it includes logic to let 64-bit apps call into 32-bit libraries.



    Solaris right now ships two sets of libraries and kernels for the basic system; OS X would need to do that too, but it has the advantage of Mach-O fat binaries. I fully expect to see this support used in Panther, if not in Smeagol.



    OS X will potentially always be a hybrid 32/64-bit system. There are far fewer reasons to go from 32-bit to 64-bit than there were to go from 16 to 32. A low-end consumer machine doesn't really need 64-bit support (especially a laptop). And for programs which don't need or benefit from 64-bitness, it will just slow them down slightly.
  • Reply 18 of 75
    kidred Posts: 2,402 member
    Quote:

    Originally posted by Kickaha

    Excellent rundown.



    Of course, the bottom line regarding this 'bridge' is... it doesn't matter. The 970 runs 32bit code with zero penalty, unlike the competition, so there's no reason to compile the whole shebang for 64-bit (incurring a space penalty) just to eliminate a phantom problem.



    Heck, you want to know how good Apple's engineers have been?



    Launch a Classic app.



    You're running 68k assembly code in portions of the Classic OS, on a 68k emulator, running inside an OS that's been launched as an application inside another OS for a completely different hardware platform.



    And you can't tell me you can tell the difference.




    Sure I can, no scroll wheel support or drop shadows
  • Reply 19 of 75
    kidred Posts: 2,402 member
    So Programmer, in idiot terms, is Panther hacked to get access to more memory? I'm not versed enough to understand half your brilliant-looking explanation, but I can see some claiming Panther to be a hack like the QS/MDD were DDR hacks. Can you break it down in less than a few sentences? hehe
  • Reply 20 of 75
    programmer Posts: 3,458 member
    Quote:

    Originally posted by KidRed

    So Programmer, in idiot terms, is Panther hacked to get access to more memory? I'm not versed enough to understand half your brilliant-looking explanation, but I can see some claiming Panther to be a hack like the QS/MDD were DDR hacks. Can you break it down in less than a few sentences? hehe



    I dispute that the QS/MDD are "hacks". Just because the memory throughput exceeds the processor's front side bus speed doesn't make the system a "hack". The bandwidth is put to good use by other parts of the system.



    Interoperability between 32-bit and 64-bit code on the same machine is going to be a permanent feature. If you don't need the larger address space then 32-bit code will run faster, because the pointers you have to move around and store occupy less space. What they are doing to Panther isn't "hacking" it; it is simply architected to support bimodal operation. Anybody who insists on calling this a hack is naive about the realities of software development.
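
    To see why, think about what 64-bit pointers do to a pointer-heavy data structure (a generic C illustration, nothing Panther-specific):

        #include <stdio.h>

        /* In a 32-bit build 'next' is 4 bytes; in a 64-bit build it is 8 (plus
           padding), so every node roughly doubles in size even though the
           payload is identical -- more memory traffic, more cache misses. */
        struct node {
            struct node *next;
            int value;
        };

        int main(void)
        {
            printf("sizeof(struct node) = %lu bytes\n", (unsigned long)sizeof(struct node));
            return 0;
        }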



    That said, Panther will not be nearly as 64-bit as it could be. Over time more and more of the system APIs will be exposed as full 64-bit APIs and the system library implementations running in application space will become 64-bit mode code. This will reduce the amount of switching between modes. A mode switch might be really, really inexpensive, but it has to require some work (i.e. at least 1 instruction) every time you do it. This might be completely insignificant in a practical sense, but it is still there. Plus, having 32-bit code in the 64-bit address space requires using the mechanism Apple is providing to restrict the addresses used. Not having to do this makes it easier to use the APIs.



    My guess is that IBM's AIX and Linux implementations have full 64-bit APIs. The reason I say this is because of the comments that this "bridging" hardware was added in the 970, whereas the POWER3/4 have been running for years without it.