Software issue will make PPC970-based machines "suck"

124 Comments

  • Reply 61 of 92
    It is a bit unfortunate that a representative of AI would spawn an entire thread to make such ignorant statements. Leonis, I'm surprised that you would be this misinformed, given how much time you spend around here. I was expecting to read some valid criticism of the 970, and found only worthless, totally uncorroborated speculation.



    I am grateful for the true computer scientists around here upon whom we can rely for substantiated opinions and analysis. If you're uninformed, however, that's fine -- but don't write as if you're an expert.
  • Reply 62 of 92
    luca Posts: 3,833 member
    EDIT: Whoops, wrong thread.



    [ 12-02-2002: Message edited by: Luca Rescigno ]
  • Reply 63 of 92
    drboar Posts: 477 member
    The thing with 64-bit memory space is that, while today it is in the realm of servers, it won't stay there. Back in the early '90s, 8 MB of RAM and an 80 MB HD were common; in the mid '90s it was 64 MB and a 500 MB HD; by 2000 it was several hundred MB of RAM and many GB of HD space. Suddenly that 4 GB limit is quite close.



    The problem is perception. We tend to think in linear terms, but the growth is exponential. With many computers at 512 MB of RAM, we are only three doublings away from the 4 GB limit. Think about how fast we went from 16 MB to 128 MB of RAM (also three doublings).
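
    A back-of-the-envelope C sketch of that doubling arithmetic (purely illustrative; the sizes are the ones mentioned above):

    ```c
    /* Counts capacity doublings between two RAM sizes (in MB). Illustrative only. */
    #include <stdio.h>

    static int doublings(unsigned long long from_mb, unsigned long long to_mb)
    {
        int n = 0;
        while (from_mb < to_mb) {
            from_mb *= 2;
            n++;
        }
        return n;
    }

    int main(void)
    {
        printf("16 MB  -> 128 MB  : %d doublings\n", doublings(16, 128));    /* 3 */
        printf("512 MB -> 4096 MB : %d doublings\n", doublings(512, 4096));  /* 3 */
        return 0;
    }
    ```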



    So 64-bit CPUs might be "pointless" now, but they will be needed soon, and Apple really cannot afford to be the last to adopt them.



    In theory, the far more efficient memory management of OS X's Unix underpinnings could give lower RAM requirements than OS 9's static allocations. In practice, however, OS X seems to gobble up all the RAM you add.



    Apple will need 64-bit addressing quite soon for the high-end server, and then later for the blade Xserve and the towers. My guess is that already (late) next year we will see a 64-bit server OS, and then the regular OS will get there in 2004 or 2005, assuming the RAM requirements of X grow at the same speed as the classic Mac OS's did. [Chilling]
  • Reply 64 of 92
    amorph Posts: 7,112 member
    More to the point, the 970 is clearly designed to serve two markets: workstations and (low-end) servers. Workstations only need 64-bit capabilities under some circumstances, but servers can use them now (e.g., for Oracle).



    Apple should have by far the most painless migration to 64-bit of any vendor currently on a 32-bit platform. I hope they take advantage of it.
  • Reply 65 of 92
    outsider Posts: 6,008 member
    Just to clarify, the 970 does NOT have 64-bit memory addressing. It's 48-bit (262,144 GB, or 256 TB). And the G4 (745X series) has 36-bit memory addressing, allowing up to 64 GB of real RAM addressing (not virtual). Only the G3 and 7410 G4 have 32-bit memory addressing.
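
    For scale, a minimal C sketch of the capacities those address widths imply (illustrative arithmetic only):

    ```c
    /* Illustrative only: bytes addressable with a given address width. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long GB = 1ULL << 30;
        printf("32-bit: %llu GB\n", (1ULL << 32) / GB);             /* 4 GB (G3, 7410 G4)   */
        printf("36-bit: %llu GB\n", (1ULL << 36) / GB);             /* 64 GB (745X G4)      */
        printf("48-bit: %llu GB (= 256 TB)\n", (1ULL << 48) / GB);  /* 262,144 GB (970)     */
        printf("64-bit: %llu GB\n", 1ULL << 34);                    /* 2^64 bytes = 2^34 GB */
        return 0;
    }
    ```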
  • Reply 66 of 92
    amorph Posts: 7,112 member
    [quote]Originally posted by Outsider:
    Just to clarify, the 970 does NOT have 64-bit memory addressing. It's 48-bit (262,144 GB, or 256 TB). And the G4 (745X series) has 36-bit memory addressing, allowing up to 64 GB of real RAM addressing (not virtual). Only the G3 and 7410 G4 have 32-bit memory addressing.[/quote]



    Thanks for the clarification. I'll clarify in turn that what I said was that the 970 allows each app to have (up to) a 64-bit virtual address space, which can obviously exceed the size of the physical (RAM) address space by a wide margin. Not that I think any application will use more than 48 bits of that potential space for some time to come. 256 TB ought to be enough for anybody.



    Well, except for the Mormon genealogy project. I think that's into the petabytes by now.



    [ 12-02-2002: Message edited by: Amorph ]
  • Reply 67 of 92
    programmer Posts: 3,467 member
    [quote]Originally posted by Outsider:
    Just to clarify, the 970 does NOT have 64-bit memory addressing. It's 48-bit (262,144 GB, or 256 TB). And the G4 (745X series) has 36-bit memory addressing, allowing up to 64 GB of real RAM addressing (not virtual). Only the G3 and 7410 G4 have 32-bit memory addressing.[/quote]



    Quibbling: the 970 has 64-bit logical memory addressing, and 48-bit physical memory addressing. The G4 series has 32-bit logical addressing and 36-bit physical addressing.



    This means that 64-bit software on the 970 can behave like it has an address space which contains 16 billion billion bytes, whereas 32-bit software is limited to 4 billion bytes. This is the logical addressing.



    The physical addressing limits how much actual RAM the processor is capable of talking to. For the G4 this is 64 GB; for the 970 it is 256 TB. Interestingly, this means that the G4 can use more RAM than a single program can use in a single address space, whereas the 970 can use far, far, far less RAM than one of its applications can describe.



    Future 970-series chips can simply enhance the hardware slightly, and 64-bit applications will be able to "directly" access all the new RAM. Depending on the bus interface, this will involve either additional address lines or larger address fields in the packets crossing the bus. The programming model won't have to change, and thus the software won't have to be recompiled. I'd even venture to say that there will never be a need to go to larger addresses: 16 billion billion bytes is quite a bit of memory, and by the time we can build that much we'll probably be using and accessing it quite differently.



    [ 12-02-2002: Message edited by: Programmer ]
  • Reply 68 of 92
    nevyn Posts: 360 member
    [quote]Originally posted by Programmer:
    The G4 series has 32-bit logical addressing and 36-bit physical addressing.[/quote]



    That was succinct, thanks.

    Can you extend that to _virtual_ memory? That's where things seem to get really confusing to me.



    It seems like a function of the memory controller, not the CPU (yet), but I would imagine the G4 has access to (far) more than 4GB (or even 16GB) of virtual memory... otherwise it becomes a lot less useful as a technique, yes?



    In other words, reading what you said: because of the logical address limit, if I were maxing out a G4, my app would be using 4 GB (though getting there is tricky). I could have four of those programs exercising RAM simultaneously (in a hypothetical 1xG4 box with 16 GB of physical RAM).



    Now, can I have a fifth 4 GB app that is entirely swapped out to disk? How much more space is there? (The answer has historically been "as much as your HD can handle" - but now HDs have gotten BIG.)
  • Reply 69 of 92
    amorph Posts: 7,112 member
    [quote]Originally posted by Nevyn:
    That was succinct, thanks.

    Can you extend that to _virtual_ memory? That's where things seem to get really confusing to me.[/quote]



    "Logical addressing" is what defines virtual memory.



    [quote]It seems like a function of the memory controller, not the CPU (yet), but I would imagine the G4 has access to (far) more than 4GB (or even 16GB) of virtual memory... otherwise it becomes a lot less useful as a technique, yes?[/quote]



    The memory controller only deals with real RAM. Virtual memory is an illusion created by the operating system with some help from bits of dedicated hardware, and the illusion affects applications, because applications always use logical addressing - which the OS maps to real addresses behind the scenes. What Programmer is saying is that the G4 can access more actual memory than any single application can be aware of (because applications use logical addressing). This is an unusual arrangement, to say the least. It means that a G4 could address, say, 16 GB of RAM, but applications would be limited to 4 GB (or 2 GB, if the address is signed) of RAM each - this machine could therefore run four (or eight) applications entirely in RAM!
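
    A tiny C sketch of the 4 GB vs. 2 GB point (illustrative only):

    ```c
    /* If an address or offset is treated as signed 32-bit, only half of
     * the 4 GB space is directly usable. Illustrative only. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        printf("unsigned 32-bit max: %u bytes (~4 GB)\n", (unsigned)UINT32_MAX);
        printf("signed 32-bit max:   %d bytes (~2 GB)\n", (int)INT32_MAX);
        return 0;
    }
    ```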



    The 970 has a more conventional arrangement that allows virtual memory (i.e., the amount of memory implied by the size of logical addresses) to be far larger than the amount of physical RAM the chip can address. But then, both numbers are so large that this difference will hardly matter for some time to come.



    [quote]In other words, reading what you said: because of the logical address limit, if I were maxing out a G4, my app would be using 4 GB (though getting there is tricky). I could have four of those programs exercising RAM simultaneously (in a hypothetical 1xG4 box with 16 GB of physical RAM).

    Now, can I have a fifth 4 GB app that is entirely swapped out to disk? How much more space is there? (The answer has historically been "as much as your HD can handle" - but now HDs have gotten BIG.)[/quote]



    OK, it wasn't clear from the paragraph before that you got the logical/real distinction right. Pressing onward...



    Virtual memory works with pages, which are discrete chunks of (logical) RAM, usually 2K or 4K in size. These pages are swapped into real RAM when they're needed, and swapped out when they haven't been accessed in a while and another page needs to be swapped in. Pages are treated independently - in other words, the virtual memory system just looks at how long it's been since anything referred to them, not at which application they happen to belong to. So in a system with a lot of applications running relative to the amount of RAM, bits and pieces of every application will be in RAM, and more bits and pieces will be on disk. In your scenario, the only way an entire application would end up swapped to disk (except for a tiny stub) would be if it had been idling for a long time and the space it occupied in real RAM was needed by more active applications.
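
    A toy C sketch of that page-replacement idea, using a simple least-recently-used policy (purely illustrative; not how OS X's VM is actually implemented):

    ```c
    /* Toy illustration of paging: pages are brought into a small "RAM" on
     * access, and the least recently used page is evicted when RAM is full. */
    #include <stdio.h>

    #define RAM_FRAMES 4
    #define NO_PAGE   -1

    static int frames[RAM_FRAMES];      /* which page occupies each frame */
    static int last_used[RAM_FRAMES];   /* "time" of most recent access   */
    static int clock_tick = 0;

    static void access_page(int page)
    {
        int i, victim = 0;
        clock_tick++;
        for (i = 0; i < RAM_FRAMES; i++) {
            if (frames[i] == page) {            /* already resident: just touch it */
                last_used[i] = clock_tick;
                printf("page %d: hit\n", page);
                return;
            }
        }
        for (i = 0; i < RAM_FRAMES; i++)        /* pick a free or least recently used frame */
            if (frames[i] == NO_PAGE) { victim = i; break; }
            else if (last_used[i] < last_used[victim]) victim = i;

        if (frames[victim] != NO_PAGE)
            printf("page %d: fault, evicting page %d\n", page, frames[victim]);
        else
            printf("page %d: fault, loading into a free frame\n", page);
        frames[victim] = page;
        last_used[victim] = clock_tick;
    }

    int main(void)
    {
        int trace[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2 };  /* made-up access pattern */
        int i;
        for (i = 0; i < RAM_FRAMES; i++) frames[i] = NO_PAGE;
        for (i = 0; i < (int)(sizeof trace / sizeof trace[0]); i++)
            access_page(trace[i]);
        return 0;
    }
    ```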



    As for HDDs, since they're now much larger than a 32-bit logical address space, VM in a 32-bit system will never take up more than a small corner of a modern HDD.



    Now, of course, a 64-bit logical address space completely dwarfs any HDD....



    I hope that came out semi-coherent. It's been a while since my operating systems class.



    [ 12-02-2002: Message edited by: Amorph ]
  • Reply 70 of 92
    [quote]Originally posted by Amorph:
    As for HDDs, since they're now much larger than a 32-bit logical address space, VM in a 32-bit system will never take up more than a small corner of a modern HDD.[/quote]



    That's not correct - you can have at most 4GB HD swap space taken up per process, but any number of such processes might be swapped out at any point in time, so you could basically use any amount of HD space available for swapping. You most certainly would want to avoid that, though.



    Bye,

    RazzFazz
  • Reply 71 of 92
    amorph Posts: 7,112 member
    [quote]Originally posted by RazzFazz:
    That's not correct - you can have at most 4GB HD swap space taken up per process, but any number of such processes might be swapped out at any point in time, so you could basically use any amount of HD space available for swapping. You most certainly would want to avoid that, though.[/quote]



    Damnit, I knew I missed something.



    Thanks for the correction.



    [ 12-02-2002: Message edited by: Amorph ]
  • Reply 72 of 92
    shetline Posts: 4,695 member
    [quote]Originally posted by Barto:
    Bull$hit. So was 640k.

    I can tell the difference between certain colours in 24-bit. 64-bit will be much closer to the limit.

    Also, 64-bit colour spaces ≠ 64-bit final image.

    Working in 64-bit and rendering at 32-bit gives a much higher-quality image, as you have more precision when applying effects.

    Barto[/quote]



    Just because at one time or another someone has said "This is more than enough!" and been wrong about it doesn't mean you can conclude that more and more is always better.



    There are some limits to the resolution of the human eye, and 64-bit color is way, way beyond those limits. 24-bit color is damned close to those limits as a bit resolution for a final image.



    Yes, it's nice to have some overhead bits for processing so that you can perform a lot of image manipulation without round-off errors and without magnifying quantization noise. But do you really need 21 bits per channel? That's over two million discrete levels of intensity. Considering that a mere 256 levels of intensity does a fine job most of the time for a final image, do you really need 8,192 finer gradations within each of those 256 levels to improve upon such images or to edit them?



    Even 16 bits/channel with alpha is overkill; it just happens to be computationally convenient overkill. 64-bit color isn't merely "closer to the limit", it's well past the limit.
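
    A small C sketch of why the overhead bits matter: the same chain of gain adjustments applied once with full floating-point precision and once with 8-bit round-off after every step (the gain values are arbitrary and purely illustrative):

    ```c
    /* Illustrative only: rounding to 8 bits after each editing step throws
     * away information that a higher-precision working space preserves. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double gains[] = { 0.02, 1.1, 45.0 };   /* darken hard, tweak, re-brighten */
        int i, n = sizeof gains / sizeof gains[0];

        double full = 200.0;    /* keep full precision throughout          */
        int    quant = 200;     /* quantize to 8-bit (0..255) after each step */

        for (i = 0; i < n; i++) {
            full *= gains[i];
            quant = (int)lround(quant * gains[i]);
            if (quant > 255) quant = 255;
            if (quant < 0)   quant = 0;
        }
        printf("full-precision result : %d\n", (int)lround(full));  /* 198 */
        printf("8-bit-per-step result : %d\n", quant);              /* 180 */
        return 0;
    }
    ```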



    [ 12-02-2002: Message edited by: shetline ]
  • Reply 73 of 92
    barto Posts: 2,246 member
    The human race could get by in black and f'king white if we had to!



    Don't you see that computers are about doing things faster with better quality, and 64-bit will be much better quality!



    Making something a bit "nicer" is a perfectly good reason for doing something.



    Don't delude yourself otherwise.



    64-bit might be overkill, but it's computationally convenient overkill. It is certainly better than 32-bit. Also (correct me if I'm wrong), 16-bit alpha will probably mean better transparency in Mac OS X/Quartz Extreme.



    Barto
  • Reply 74 of 92
    [quote]Originally posted by Barto:
    Also (correct me if I'm wrong), 16-bit alpha will probably mean better transparency in Mac OS X/Quartz Extreme.[/quote]



    Yeah, for those frequent occasions when 256 levels of transparency are not enough.



    This might become a point once we have support for transparency gradients, but until then, 8-bit alpha will only be a noticeable limit if you either have tons (>256) of transparent windows with different opacity on your screen, or regions where dozens of transparent windows overlap.
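
    For context, a minimal C sketch of 8-bit "over" compositing, where each layer gets one of only 256 opacity levels (illustrative only; the pixel values are made up):

    ```c
    /* Minimal sketch of 8-bit "over" compositing for a single channel. */
    #include <stdio.h>

    /* Blend source over destination; alpha is 0..255 (256 opacity levels). */
    static unsigned char over(unsigned char src, unsigned char dst, unsigned char alpha)
    {
        return (unsigned char)((src * alpha + dst * (255 - alpha) + 127) / 255);
    }

    int main(void)
    {
        unsigned char window = 230, desktop = 40;   /* arbitrary example values */
        printf("25%% opaque window: %u\n", (unsigned)over(window, desktop, 64));
        printf("50%% opaque window: %u\n", (unsigned)over(window, desktop, 128));
        printf("75%% opaque window: %u\n", (unsigned)over(window, desktop, 191));
        return 0;
    }
    ```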



    Bye,

    RazzFazz
  • Reply 75 of 92
    lundy Posts: 4,466 member
    [quote]Originally posted by Nevyn:
    Because of the logical address limit, if I were maxing out a G4, my app would be using 4 GB (though getting there is tricky). I could have four of those programs exercising RAM simultaneously (in a hypothetical 1xG4 box with 16 GB of physical RAM).[/quote]

    The amount of actual RAM you have in the box isn't relevant. The OS uses VM to let each process "think" it has a full 4GB address space. And there is no limit on the number of processes. Each process can ask for and get the whole 4GB of space, and store and load to its heart's content in that space. The OS maps these references to some portion of the real RAM, no matter how much real RAM there actually is. If the portion that the process wants to access is actually paged out to disk, it is brought in. As far as the process knows, all it did was store a value in a memory location.

    [quote]Now, can I have a fifth 4 GB app that is entirely swapped out to disk? How much more space is there? (The answer has historically been "as much as your HD can handle" - but now HDs have gotten BIG.)[/quote]



    Since there is no relation between real and virtual memory spaces, this fifth app would think it had 4 GB just like the other four. Whatever portion of each app is needed is paged in whenever it is needed, and special hardware (the MMU) in the processor maps the app's memory reference to wherever in "real" memory the page got loaded.
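
    A minimal POSIX C sketch of that "each process thinks it has its own address space" point (illustrative only): after fork(), parent and child can print the same virtual address for the same variable yet hold different values, because the OS/MMU maps that address to different physical pages.

    ```c
    /* Two processes, same virtual address, different contents. POSIX only. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 1;

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                       /* child: modify its own copy */
            value = 2;
            printf("child : &value=%p value=%d\n", (void *)&value, value);
            return 0;
        }
        wait(NULL);                           /* parent: unchanged copy     */
        printf("parent: &value=%p value=%d\n", (void *)&value, value);
        return 0;
    }
    ```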
  • Reply 76 of 92
    spooky Posts: 504 member
    [quote]Originally posted by r-0X#Zapchud:
    spooky, great stuff like

    • 900 MHz FSB, which scales with the processors as they get faster
    • more FPUs (which'll double the FP performance)
    • 8/5-way superscalar (?)
    • built for SMP
    • very good performance/power ratio
    • built and manufactured by someone other than Mot, which also led to plans to migrate quickly to a better process (0.09µ)
    • longer pipeline, which combined with the above leads me to believe the processors will scale upwards a lot, quickly

    etc.

    [ 12-01-2002: Message edited by: r-0X#Zapchud ][/quote]



    Does anyone else have a horrible nagging feeling that, true to form, even if the 970 offers killer performance, Apple will find some way of messing up their machine specs? Like having slower bus speeds, drives, RAM, graphics, etc.? Apple seems to be brilliant at taking potential and then producing a mixed-bag package that doesn't quite reach the capabilities the processor might have.
  • Reply 77 of 92
    zapchud Posts: 844 member
    [quote]Originally posted by spooky:
    Does anyone else have a horrible nagging feeling that, true to form, even if the 970 offers killer performance, Apple will find some way of messing up their machine specs? Like having slower bus speeds, drives, RAM, graphics, etc.? Apple seems to be brilliant at taking potential and then producing a mixed-bag package that doesn't quite reach the capabilities the processor might have.[/quote]



    I don't believe Apple will mess up the machine specs with something like a much slower system bus. Apple is behind in raw price/performance, they know it, and they really need to show that their stuff can pull some tricks. So I believe they'll at least triple the bus bandwidth, because it'll both increase performance by leaps and bounds and leave good room for improvement in their next revision (Steve will have the opportunity to roar "We did it again!").



    I disagree that Apple has decided to create a "mixed bag performance" machine; they've used the G4 for all it's worth the whole time (except for the Yikes! series). Every time a new revision of the G4 has been ready (as far as I/we know), Apple has put it in their PowerMacs.



    I think even the first revision of the PPC 970 will be wicked fast!
  • Reply 78 of 92
    outsider Posts: 6,008 member
    Apple will have no choice on the bus speed. The bus speed ratio is locked at 2:1.
  • Reply 79 of 92
    zapchud Posts: 844 member
    [quote]Originally posted by Outsider:
    Apple will have no choice on the bus speed. The bus speed ratio is locked at 2:1.[/quote]



    The FSB will be locked at 2:1, but AFAIK the northbridge->memory bus isn't locked, so Apple can use whatever they want there... please correct me if this is wrong, but this is at least how it's done on P4 motherboards with DDR RAM.
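
    A trivial C sketch of the 2:1 ratio (the core clocks below are just example values, not announced speeds):

    ```c
    /* With a locked 2:1 ratio, the FSB clock simply follows the core clock. */
    #include <stdio.h>

    int main(void)
    {
        int cpu_mhz[] = { 1400, 1600, 1800 };   /* hypothetical core clocks */
        int i;
        for (i = 0; i < 3; i++)
            printf("%d MHz core -> %d MHz FSB (2:1)\n", cpu_mhz[i], cpu_mhz[i] / 2);
        return 0;
    }
    ```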
  • Reply 80 of 92
    xype Posts: 672 member
    [quote]Originally posted by RazzFazz:
    This might become a point once we have support for transparency gradients, but until then, 8-bit alpha will only be a noticeable limit if you either have tons (>256) of transparent windows with different opacity on your screen, or regions where dozens of transparent windows overlap.[/quote]



    The point is rather that Apple can claim to be first to market with 16-bit transparency in their windowing system.



    I think that while 64-bit surely has its uses, most of it will be exploited for marketing by Apple. Which is a good thing!