G5 in Jan - new info


Comments

  • Reply 121 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by macrumorzz:

    <strong>

    (the 7440 is considered to be a 7450 with 0 KB of L3 cache, as far as I can see).

    </strong><hr></blockquote>



    Well, according to Moto's website, it for all intents and purposes *is* a 7450 with the L3 cache tags missing.
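

    To illustrate how software tells these cores apart: PowerPC chips expose a Processor Version Register, with the model number in its upper half. Here's a minimal sketch in C - the PVR values below are from memory of Moto's manuals (so double-check them), and mfpvr is technically a supervisor-level instruction, so treat this as illustrative kernel-side code, not anything Darwin actually ships:

    [code]
    #include <stdio.h>

    /* Read the PowerPC Processor Version Register. mfpvr is
     * supervisor-level, so user-mode code may trap; this is
     * kernel-side code made compilable for illustration. */
    static unsigned int read_pvr(void)
    {
        unsigned int pvr;
        __asm__ volatile("mfpvr %0" : "=r"(pvr));
        return pvr;
    }

    int main(void)
    {
        unsigned int version = read_pvr() >> 16;  /* upper half = model */

        switch (version) {
        case 0x0008: printf("750 (G3)\n");   break;
        case 0x000C: printf("7400 (G4)\n");  break;
        case 0x8000: printf("7450 (G4+)\n"); break;  /* a 7440 should report
                                                        the same family, if I
                                                        recall correctly */
        default:     printf("unknown model 0x%04x\n", version);
        }
        return 0;
    }
    [/code]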





    [quote]<strong>

    Intel x86 (dedicated code setups for i386, i486, i486sx, PentiumPro, PentiumII). Note the lack of support for the original Pentium CPU; this processor is probably considered to behave like a 486, with no support for MMX.

    </strong><hr></blockquote>



    Hm, this is kinda weird, considering that the gap between the i486 and the Pentium is much larger than the one between the i386 and the i486SX (or between an i386 + i387 and an i486).





    [quote]<strong>

    The P6 design has been separated into PPro and PII because of the different L2 configs; PIII and PIV use the same setup as PII.

    </strong><hr></blockquote>



    Weird again - the later P3s (Coppermine and up) have quite a different L2 architecture from the P2 (256 KB full-speed on-die rather than 512 KB half-speed off-die), and the first Celeron models didn't have any L2 at all.

    Furthermore, the P3 has additional registers because of SSE - does that mean SSE is not supported by Darwin (specifically by its context-switching code) at all?
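

    For what it's worth, the detection side of this is the easy part - a kernel can sniff the feature flags with the CPUID instruction. A small sketch in C, using GCC's <cpuid.h> helper purely for illustration (Darwin's actual detection code obviously looks different):

    [code]
    #include <stdio.h>
    #include <cpuid.h>   /* GCC convenience wrapper around CPUID */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1 returns the feature flags in EDX. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;

        printf("MMX: %s\n", (edx & (1u << 23)) ? "yes" : "no");
        printf("SSE: %s\n", (edx & (1u << 25)) ? "yes" : "no");

        /* Detection is the easy part: even when SSE is present, the
         * OS still has to save and restore the XMM registers
         * (FXSAVE/FXRSTOR) on every context switch, or processes
         * would trample each other's SSE state - which is exactly
         * the question above. */
        return 0;
    }
    [/code]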





    [quote]<strong>

    Athlon is considered to be 100% x86 compatible.

    </strong><hr></blockquote>



    Actually, according to the release notes for Darwin 1.4.1, AMD CPUs are specifically not supported, and known not to work.





    [quote]<strong>

    Intel RISC aka i860/i960 (the i960 is considered to be 100% compatible with the i860).

    </strong><hr></blockquote>



    Funny, this is actually in Darwin? I wonder what use it could possibly have - there's hardly any desktop i860 hardware out there, and I can't really imagine Darwin running on a laser printer or on an Intel Paragon.



    Bye,

    RazzFazz
  • Reply 122 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by craiger77:

    <strong>I am not a tech person so don't quite understand all these details of 32 bit vs 64 bit processors, but it seems to me that the trend of late has been for the hardware to be way out ahead of the software.</strong><hr></blockquote>



    Hehe, OS X being the perfect counter-example...



    Bye,

    RazzFazz
  • Reply 123 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by macrumorzz:

    <strong>

    Some weeks ago a company - sorry, forgot the name - demonstrated PPC software (not an entire OS, of course) on an Athlon. Thus the Athlon can emulate a 1 GHz PowerPC (speaking in terms of a G3, I think, and no AltiVec).

    </strong><hr></blockquote>



    From what I have read, this was nothing more than vaporware. There was no actual demonstration, and those claims of being able to emulate a 1GHz PPC on an Athlon were about the only thing they could hand out.



    Given the immense architectural differences, I'm pretty sceptical they'll be able to deliver a working prototype anytime in the near future.
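

    Part of my scepticism: even the cheapest possible interpreter pays a fetch/decode/dispatch tax on every single emulated instruction. A toy sketch in C that handles just two PowerPC opcodes - a real emulator needs hundreds, plus MMU emulation and byte-swapping on every memory access:

    [code]
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t gpr[32];   /* emulated general-purpose registers */

    /* Decode and execute one instruction. Only addi (opcode 14) and
     * add (opcode 31, extended opcode 266) are handled in this toy. */
    static void execute(uint32_t instr)
    {
        uint32_t opcd = instr >> 26;          /* primary opcode */
        uint32_t rd   = (instr >> 21) & 0x1F;
        uint32_t ra   = (instr >> 16) & 0x1F;
        uint32_t rb   = (instr >> 11) & 0x1F;

        if (opcd == 14) {                     /* addi rD,rA,SIMM */
            int32_t simm = (int16_t)(instr & 0xFFFF);
            /* addi treats rA = 0 as the literal value zero */
            gpr[rd] = (ra ? gpr[ra] : 0) + (uint32_t)simm;
        } else if (opcd == 31 && ((instr >> 1) & 0x3FF) == 266) {
            gpr[rd] = gpr[ra] + gpr[rb];      /* add rD,rA,rB */
        }
    }

    int main(void)
    {
        execute(0x38600005);   /* addi r3,0,5   -> r3 = 5  */
        execute(0x38800007);   /* addi r4,0,7   -> r4 = 7  */
        execute(0x7CA32214);   /* add  r5,r3,r4 -> r5 = 12 */
        printf("r5 = %u\n", gpr[5]);
        return 0;
    }
    [/code]

    Dynamic recompilation amortizes the decode cost, but then you're effectively writing an optimizing compiler backend on the fly, which is exactly the part I doubt they have working.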



    Bye,

    RazzFazz
  • Reply 124 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by Programmer:

    <strong>

    nVidia is onboard with HT as well, so CPU <-> GPU communications via HT would give 3D graphics a real kick in the pants. Right now feeding the GPU is a huge bottleneck, and it's just going to get worse as the programmable shader technology really swings into high gear. </strong><hr></blockquote>



    Hmm, aren't programmable shaders actually more likely to *decrease* the amount of bandwidth required between CPU and GPU, since all the texture manipulation that traditionally has been done on the CPU and then sent to the graphics card via AGP is now done locally on the GPU?



    Bye,

    RazzFazz
  • Reply 125 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by Programmer:

    <strong>The nForce is interesting as a low-cost motherboard chipset. Good performance, but fairly low end if I recall correctly.</strong><hr></blockquote>



    Actually, price-wise, currently available nForce mainboards are not very low-cost at all. Performance-wise they are indeed low-end, as long as the internal graphics engine is used. They do have other strengths, though - the highly advanced integrated audio chip comes to mind.



    Bye,

    RazzFazz
  • Reply 126 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by xype:

    <strong>

    the integrated gfx chip is geforce2mx only, but one is free to use any other agp card with it (and I think that also enables the use of two screens).</strong><hr></blockquote>



    Nope, from what I understand, you can only use *either* the integrated graphics *or* a card in the AGP slot (remember, AGP is point-to-point, not a bus like PCI, so it can only accommodate a single device at a time). For a dual-screen setup, you'd have to use a PCI graphics card.





    [quote]<strong>

    it's a nice all-in-one package which would be ideal for a consumer machine. as far as i know it's amd/intel only now.

    </strong><hr></blockquote>



    The nForce currently supports AMD CPUs exclusively, though I think the Xbox uses a variant that connects to a modified mobile Celeron.



    Bye,

    RazzFazz
  • Reply 127 of 141
    programmer Posts: 3,458, member
    [quote]Originally posted by RazzFazz:

    <strong>

    Hmm, aren't programmable shaders actually more likely to *decrease* the amount of bandwidth required between CPU and GPU, since all the texture manipulation that traditionally has been done on the CPU and then sent to the graphics card via AGP is now done locally on the GPU?

    </strong><hr></blockquote>



    Nope... in practice what happens is that the vertex shaders take more input data and do funkier things with it than is traditionally the case. Previously the CPU would do a bunch of work on a larger set of data, and pass the results to the GPU. Now all of the inputs go to the GPU. If you are repeatedly drawing the same thing and the inputs don't change, and you've got lots of free VRAM then it can live on the graphics card... but VRAM is quite precious and other things compete for it (buffers & textures being the leading contenders).
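

    To put rough numbers on the tradeoff - all figures invented but plausible, just back-of-envelope arithmetic in C rather than a benchmark:

    [code]
    #include <stdio.h>

    int main(void)
    {
        const double verts      = 100000.0;        /* vertices per frame */
        const double vert_bytes = 32.0;            /* position+normal+UV */
        const double fps        = 60.0;
        const double mb         = 1024.0 * 1024.0;

        /* CPU transforms the mesh and re-uploads it every frame: */
        double reupload = verts * vert_bytes * fps / mb;   /* ~183 MB/s */

        /* Vertex-shader path: the mesh stays in VRAM and only shader
         * constants (say 64 bone matrices at 64 bytes) cross the bus: */
        double constants = 64.0 * 64.0 * fps / mb;         /* ~0.23 MB/s */

        /* ...but the mesh now occupies VRAM, competing with textures
         * and frame buffers for the same precious megabytes: */
        double resident = verts * vert_bytes / mb;         /* ~3 MB */

        printf("re-upload each frame: %.1f MB/s\n", reupload);
        printf("constants only:       %.2f MB/s\n", constants);
        printf("VRAM cost per mesh:   %.1f MB\n", resident);
        return 0;
    }
    [/code]

    So the bus savings per mesh are huge, but a few dozen resident meshes eat a 64 MB card alive, and then you're back to streaming over AGP.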
  • Reply 128 of 141
    programmer Posts: 3,458, member
    [quote]Originally posted by RazzFazz:

    <strong>



    Actually, price-wise, currently available nForce mainboards are not very low-cost at all. Performance-wise they are indeed low-end, as long as the internal graphics engine is used. They do have other strengths, though - the highly advanced integrated audio chip comes to mind.

    </strong><hr></blockquote>



    Hmmm, I could have sworn that I read a while back that nVidia's intention was to attack the low end of the mobo market, hence the inclusion of what is now their low-end 3D accelerator. I guess that didn't pan out.
  • Reply 129 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by mmicist:

    <strong>

    It's possible, but I know nothing about it.

    One possibility did cross my mind. If Apple use the HyperTransport connection scheme from AMD's Hammer, there is little technical problem in putting both processors on the same motherboard, sharing memory and peripherals. Two distinct 64-bit chips in one computer - it would run damn nearly everything.

    </strong><hr></blockquote>



    Dunno, seems unlikely to me. Hammer has always been said to be drop-in x86 backwards-compatible, so you'd have tons of problems managing both architectures in one machine (endianness, memory models, stack alignment, data alignment, ...). One possible solution would be to keep them separate to some extent, but then you'd probably not gain much over two completely separate machines in the first place.
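

    The endianness item alone would be a killer for shared memory. A tiny demonstration in C - compile and run it on both an x86 and a PPC box:

    [code]
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t value = 0x0A0B0C0D;
        const unsigned char *bytes = (const unsigned char *)&value;

        /* Little-endian x86 prints "0d 0c 0b 0a"; big-endian PPC
         * prints "0a 0b 0c 0d". Any structure written by one CPU and
         * read by the other has every multi-byte field byte-swapped. */
        printf("%02x %02x %02x %02x\n",
               bytes[0], bytes[1], bytes[2], bytes[3]);
        return 0;
    }
    [/code]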



    Bye,

    RazzFazz
  • Reply 130 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by Programmer:

    <strong>

    If you are repeatedly drawing the same thing and the inputs don't change, and you've got lots of free VRAM then it can live on the graphics card... but VRAM is quite precious and other things compete for it (buffers & textures being the leading contenders).

    </strong><hr></blockquote>



    You're right - I was thinking of pixel shaders only; vertex shaders are a different matter.



    Bye,

    RazzFazz
  • Reply 131 of 141
    programmer Posts: 3,458, member
    [quote]Originally posted by RazzFazz:

    <strong>



    Dunno, seems unlikely to me. Hammer has always been said to be drop-in x86 backwards-compatible, so you'd have tons of problems managing both architectures in one machine (endianness, memory models, stack alignment, data alignment, ...). One possible solution would be to keep them separate to some extent, but then you'd probably not gain much over two completely separate machines in the first place.



    Bye,

    RazzFazz</strong><hr></blockquote>



    Whoa. Reminds me of the DEC Rainbow 100 from the early 80s. A Z80 and an 8088 in one box. Never flew in the market. But it dates me, doesn't it?
  • Reply 132 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by Programmer:

    <strong>

    Hmmm, I could have sworn that I read a while back that nVidia's intention was to attack the low end of the mobo market, hence the inclusion of what is now their low-end 3D accelerator. I guess that didn't pan out.</strong><hr></blockquote>



    Well, all I can say is that (at least over here in Germany) most of the time it's cheaper to get a VIA KT266-based board and stick a GF2MX-200 in it than to get an nForce-based one. If you get an ALiMaGiK- or SiS735-based board, you end up paying significantly less.



    Bye,

    RazzFazz
  • Reply 133 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by Programmer:

    <strong>

    Whoa. Reminds me of the DEC Rainbow 100 from the early 80s. A Z80 and an 8088 in one box. Never flew in the market. But it dates me, doesn't it?</strong><hr></blockquote>



    Hehe, I remember seeing an ad for something like that somewhere around 1990. I think it was called the "4860", was manufactured by Hauppauge (the WinTV guys?), and featured an i486 for compatibility, an i860 for incompatibility (well, and for number-crunching power), and a hefty price tag, to say the least.



    Bye,

    RazzFazz



    [ 01-03-2002: Message edited by: RazzFazz ]
  • Reply 134 of 141
    tjm Posts: 367, member
    [quote]Originally posted by Programmer:

    <strong>



    Whoa. Reminds me of the DEC Rainbow 100 from the early 80s. A Z80 and an 8088 in one box. Never flew in the market. But it dates me, doesn't it?</strong><hr></blockquote>



    Me, too! I was in grad school when they came out. I tried to convince my advisor to buy a Mac for the group - my Dad had one and I fell in love with it (the original 128K/no-HD version; I think it was running Mac OS 2). He thought Macs were silly (and that Apple was going to go bankrupt soon), and bought one of those Rainbow 100s. God, it was frustrating trying to write in WordPerfect 1.0 (IIRC), using the embedded dot commands for all the bold-face, italics, and other formatting. It was like having to ride a donkey after having driven a Porsche.



    The Rainbow was a nice computer, though, for what it was (a DOS POS!).



    [ 01-03-2002: Message edited by: TJM ]
  • Reply 135 of 141
    nonsuch Posts: 293, member
    This thread has been an awesome read ... kudos to all the very smart people posting here. And as for that Fahre451 guy, well, he can speak up anytime he wants to.
  • Reply 136 of 141
    powerdoc Posts: 8,123, member
    [quote]Originally posted by Spart:

    <strong>



    4 G4s at 1.4 MHz will result in 6-8x the output of a dual 800 MHz G4... what are you smoking? [Chilling] </strong><hr></blockquote>

    No, 4 G4s at 1.4 MHz would be about a hundred times slower than a dual 800 MHz G4.



  • Reply 137 of 141
    razzfazz Posts: 728, member
    [quote]Originally posted by TJM:

    <strong>

    (the original 128K/no-HD version; I think it was running Mac OS 2).

    </strong><hr></blockquote>



    I thought the term "Mac OS" was only introduced with version 8? Just out of curiosity, what was it actually called back then? "System 2"? And what would the name of the first version have been? Just "System"? Or something completely different?



    Bye,

    RazzFazz
  • Reply 138 of 141
    [quote]Originally posted by RazzFazz:

    <strong>



    I thought the term "Mac OS" was only introduced with version 8? Just out of curiosity, what was it actually called back then? "System 2"? And what would the name of the first version have been? Just "System"? Or something completely different?



    Bye,

    RazzFazz</strong><hr></blockquote>



    It was "System 2", "System 6.0.7", etc. until 7.6 where it became "Mac OS 7.6." Until that time, even Copland was System 8.
  • Reply 139 of 141
    o and a Posts: 579, member
    Just out of curiosity, does anyone know where I can find screenshots of Copland and some really good info?



    I've always been interested in it, but never found anything good on the net about it, and I wasn't an avid Mac fan when I was 13, so help me out if ya can!



    cheers
  • Reply 140 of 141
    nebrie Posts: 483, member
    [quote]Originally posted by mmicist:

    <strong>



    I would be very surprised if AMD were fabbing for Apple, as they are still ramping their Dresden plant and need all their capacity for their own chips at the moment.

    </strong><hr></blockquote>



    Yeah... I do remember Steve Jobs saying at the last analysts' meeting that a year or two ago they had trouble getting space to fab their chips, but now they have plenty of space. When you consider how many semiconductor plants are now idle (probably even at Motorola), Apple has probably negotiated bottom-barrel pricing and is producing as many chips as possible at the highest speed, even if the yield isn't that great. That's one way to help with the mega-hurts war. Just my wild guess.