Intel Core 2 Step Pentium


Comments

  • Reply 21 of 27
marcuk Posts: 4,442 member
AMD's Fusion looks like the winner in the long term from my point of view. Of course, we don't know Intel's plans in this direction...



There is a law of diminishing returns I can make out from reading the current benchmarks of 4- and 8-core systems. No doubt this is partly due to software limitations and also FSB bandwidth issues, but even software that can make use of as many cores as are available doesn't scale linearly as the core count increases.



Until new methods of writing software develop, going above 8 cores seems like quite a waste of resources.



I like the specialized hardware/module approach; for general computing, 8 cores is going to be fine for a very long time to come, but those of us who have specific tasks that we run frequently will benefit much more from a dedicated DSP than we would from doubling our core count.



As for AMD's integrated SoC/graphics modules, I'm quite favourable to the idea of a GPU integrated within the CPU. Remember, CPUs are going to be running at around 3-4 GHz, which is 4-5 times faster than GPUs currently run. If you can make the integrated GPU run at CPU speeds, you can make its die footprint 4-5 times smaller and still achieve the same performance. By doing away with the whole external bus architecture, you might even make its footprint smaller than that. (A rough version of this arithmetic is sketched below.)



Imagine something like a GF7300GT or X1600 running at 4 GHz and talking directly to the CPU as your integrated graphics solution. It sure would beat an Intel GMA950 and be quite a powerful solution for all but the diehard gamers.
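A back-of-the-envelope sketch of the scaling argument above, in Python. The unit counts and clocks are made-up illustrative assumptions, not figures for any real part, and it deliberately ignores memory bandwidth, which is exactly the objection raised in the next reply.

```python
# Rough sketch of the trade-off described above: throughput of a GPU block
# is roughly (parallel units) x (clock). The numbers below are illustrative
# assumptions, not measured figures for any real part.

def relative_units_needed(target_throughput, clock_ghz):
    """Parallel units needed to hit a throughput budget at a given clock."""
    return target_throughput / clock_ghz

# Assume a discrete GPU hits its target with 12 pixel pipelines at ~0.6 GHz.
discrete_units, discrete_clock = 12, 0.6
target = discrete_units * discrete_clock        # arbitrary throughput units

# If an integrated GPU could run at CPU-like 3-4 GHz...
for cpu_clock in (3.0, 4.0):
    units = relative_units_needed(target, cpu_clock)
    print(f"At {cpu_clock} GHz: ~{units:.1f} units vs {discrete_units} "
          f"({discrete_units / units:.1f}x fewer), ignoring memory bandwidth")
```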
  • Reply 22 of 27
zandros Posts: 537 member
    Quote:
Originally Posted by MarcUK


Imagine something like a GF7300GT or X1600 running at 4 GHz and talking directly to the CPU as your integrated graphics solution. It sure would beat an Intel GMA950 and be quite a powerful solution for all but the diehard gamers.



And then imagine the size of the cooler. Anyway, there needs to be dedicated fast memory too, otherwise performance will lag. I really don't see system memory providing the bandwidth of 512-bit 2200 MHz GDDR4 memory just yet (rough numbers are sketched below).



As far as I know, the next Radeon Xpress chipset will include an integrated X700. I doubt low-end users will need more.
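For a rough sense of the gap, a quick bandwidth comparison. The GDDR4 figures are the ones quoted above; the dual-channel DDR2-800 configuration is an assumption about a typical desktop system of the time.

```python
# Peak theoretical bandwidth comparison. The GDDR4 figures come from the
# post above; the system-memory config (dual-channel DDR2-800) is an
# assumed typical desktop setup of the era.

def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

gddr4 = bandwidth_gb_s(512, 2200)   # 512-bit bus, 2200 MHz effective
ddr2  = bandwidth_gb_s(128, 800)    # dual-channel (2 x 64-bit) DDR2-800

print(f"512-bit 2200 MHz GDDR4: ~{gddr4:.1f} GB/s")   # ~140.8 GB/s
print(f"Dual-channel DDR2-800:  ~{ddr2:.1f} GB/s")    # ~12.8 GB/s
print(f"Ratio: ~{gddr4 / ddr2:.0f}x")
```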
  • Reply 23 of 27
marcuk Posts: 4,442 member
    Quote:
Originally Posted by Zandros


And then imagine the size of the cooler. Anyway, there needs to be dedicated fast memory too, otherwise performance will lag. I really don't see system memory providing the bandwidth of 512-bit 2200 MHz GDDR4 memory just yet.



As far as I know, the next Radeon Xpress chipset will include an integrated X700. I doubt low-end users will need more.



    Already thought of that....



An X1600 has 157 million transistors while today's Athlon X2s have 230 million, so a quad core will likely have around 500 million. Integrating an X1600 into the die, it contributes about 1/4 of the total, so any extra cooler requirement is going to be minimal (quick arithmetic below).



I'd even suggest that the GPU's memory be built right into the die, or put in a socket very close to the CPU, clocked at something like a 1:4 ratio, so its own memory runs at about 750-1000 MHz and only needs to be something like 64 MB.



    As an integrated solution, it would eat GMA.
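A quick check of the transistor arithmetic in this post. The X1600 and Athlon X2 counts are the ones quoted above; the quad-core figure is the poster's extrapolation, not a published number.

```python
# Sanity check of the transistor arithmetic above. The X1600 and Athlon X2
# counts are the ones quoted in the post; the quad-core figure is the
# extrapolation made there, not a published number.

x1600_transistors     = 157e6
athlon_x2_transistors = 230e6
quad_core_estimate    = 2 * athlon_x2_transistors   # ~460-500 million

combined  = quad_core_estimate + x1600_transistors
gpu_share = x1600_transistors / combined

print(f"Quad-core estimate:      ~{quad_core_estimate / 1e6:.0f} M transistors")
print(f"With X1600 on die:       ~{combined / 1e6:.0f} M transistors")
print(f"GPU share of the budget: ~{gpu_share:.0%}")   # roughly a quarter
```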
  • Reply 24 of 27
zandros Posts: 537 member
    Quote:
Originally Posted by MarcUK


    Already thought of that....



An X1600 has 157 million transistors while today's Athlon X2s have 230 million, so a quad core will likely have around 500 million. Integrating an X1600 into the die, it contributes about 1/4 of the total, so any extra cooler requirement is going to be minimal.



There are more things to consider than die size, I think, partly because GPUs are massively parallel. In any case, do we see any benefit from integrating the GPU into the processor core instead of the northbridge?



And if they want this to last a long time, bear in mind that a GeForce 8800 GTX has some 600 million transistors.
  • Reply 25 of 27
jvb Posts: 210 member
    Quote:
Originally Posted by Zandros


There are more things to consider than die size, I think, partly because GPUs are massively parallel. In any case, do we see any benefit from integrating the GPU into the processor core instead of the northbridge?



And if they want this to last a long time, bear in mind that a GeForce 8800 GTX has some 600 million transistors.



There is honestly no point in an 8800 GTX for a Mac. A graphics card that intense is needed almost entirely for gaming. Photo and video editing would have to be pushed really, really hard on a Mac to even begin to tap what is available. And besides, the main advantage of an 8800 is the jump to DX10, and Apple does not use DirectX.
  • Reply 26 of 27
zandros Posts: 537 member
    Quote:
Originally Posted by jvb


There is honestly no point in an 8800 GTX for a Mac. A graphics card that intense is needed almost entirely for gaming. Photo and video editing would have to be pushed really, really hard on a Mac to even begin to tap what is available. And besides, the main advantage of an 8800 is the jump to DX10, and Apple does not use DirectX.



Oh, but I was not talking about Macs; I was talking about why I feel that what AMD is doing is odd.



    I think we'll see some use of G80-based graphics cards in Macs sometime at least.
  • Reply 27 of 27
tht Posts: 6,018 member
    Quote:
Originally Posted by JeffDM


    Adding cache supposedly doesn't increase power consumption much, but there's only so much that can do.



The number of transistors on a chip is doubling about every 18 months, and those transistors have to go somewhere. Currently, adding cores is the best way to go. Doubling the clock on a given chip increases the power consumption by roughly a factor of four, because the higher clock generally requires a higher operating voltage as well. Doubling the cores increases the maximum power consumption by only about a factor of two, and idled cores can theoretically be turned off.



    The CMOS power equation is of the form:



Power ≈ (switched capacitance of the transistors) × frequency × voltage² + other terms (leakage)



The power consumption of a semiconductor device grows linearly with the number of transistors (the capacitance) and with the frequency, while it grows quadratically with the operating voltage. Increasing the frequency alone only increases the power linearly, by the ratio of the frequency increase.



The frequency, the voltage, the fab quality, and the circuit design are all linked, and increases or decreases in each parameter don't come for free. I.e., the frequency can't simply be increased at will while the voltage stays the same, nor the voltage decreased while the frequency stays the same. It takes a bit of fab/CPU tuning to produce a crop of parts that can operate within a certain band of frequency and voltage. (A small numeric sketch of these scalings follows below.)
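A minimal numeric sketch of the scalings described above, under the illustrative assumption that doubling the clock also requires roughly 1.4x the voltage; real parts vary.

```python
# Minimal sketch of the CMOS dynamic-power relation quoted above:
#   P ~ C * f * V^2  (plus leakage, ignored here).
# Everything is normalized to 1.0 for the baseline chip; the assumption that
# doubling the frequency also needs ~1.4x the voltage is illustrative, and is
# where the "doubling the clock costs ~4x the power" figure comes from.

def dynamic_power(capacitance, frequency, voltage):
    return capacitance * frequency * voltage ** 2

base = dynamic_power(1.0, 1.0, 1.0)

# Doubling frequency at constant voltage: power only doubles.
same_v = dynamic_power(1.0, 2.0, 1.0)

# Doubling frequency when it also needs ~1.4x voltage: roughly 4x power.
higher_v = dynamic_power(1.0, 2.0, 1.4)

# Doubling cores at the same clock and voltage: switched capacitance
# (transistor count) doubles, so max power roughly doubles, and idle
# cores can be gated off.
two_cores = dynamic_power(2.0, 1.0, 1.0)

print(f"2x frequency, same voltage:   {same_v / base:.1f}x power")
print(f"2x frequency, 1.4x voltage:   {higher_v / base:.1f}x power")
print(f"2x cores, same clock/voltage: {two_cores / base:.1f}x power")
```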



    All this technical stuff is fairly irrelevant now though.



The future is all about the economics of semiconductor manufacturing. It was actually the past and is the present as well. Moore and all the old folks, along with predicting CPU transistor budgets doubling every 18 to 24 months, predicted that the cost of building succeeding fabs/nodes (130 nm, 90 nm, 65 nm, 45 nm, 32 nm, 22 nm) would also double. A 65 nm fab probably costs on the order of 2 to 5 billion dollars now, depending on where you start; a 45 nm fab, 4 to 8 billion; a 32 nm fab, if it is possible, 8 to 16 billion.



Intel made about 35 billion last year. The next highest was Samsung at 17 billion, then AMD at 7.5 billion (with most of it due to the acquisition of ATI). IBM Semi isn't in the top 10. A company has to have enough of a market to recoup and profit from the cost of its fab investment. How is a company going to move to the next node when the cost is approaching its entire revenue? Markets don't grow by integer factors for large companies; if growth is 20%, it's considered fantastic. It's going to be quite difficult to grow at 100% year after year to catch up with Intel. (A rough version of this squeeze is sketched below.)
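A rough illustration of that squeeze. The starting figures are the ballpark ones from the post (a ~$3B 65 nm fab, ~$7.5B AMD revenue); the two-year node cadence and 20% annual growth rate are assumptions for illustration.

```python
# Rough illustration of the squeeze described above: fab cost roughly doubles
# each node, while even "fantastic" revenue growth is ~20% per year. Starting
# figures are the ballpark ones from the post; the node cadence and growth
# rate are assumptions.

fab_cost = 3e9          # ~65 nm fab, midpoint of the $2-5B range quoted
revenue = 7.5e9         # AMD's revenue last year, per the post
years_per_node = 2      # assumed node cadence
growth = 0.20           # optimistic annual revenue growth

for node in ("45 nm", "32 nm", "22 nm"):
    fab_cost *= 2
    revenue *= (1 + growth) ** years_per_node
    print(f"{node}: fab ~${fab_cost / 1e9:.0f}B vs revenue ~${revenue / 1e9:.1f}B "
          f"({fab_cost / revenue:.0%} of revenue)")
```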



A company can increase the lifetime of a node (24 or 36 months). It can collaborate and share investment costs; AMD already does this with IBM. But this comes at the cost of time-to-market and of fab tuning specific to a company's products. This is essentially what is killing Freescale (née Motorola) at 90 nm right now, along with the fact that their market isn't huge. They have to share a fab with Philips and TSMC (?).



AMD's Fusion and Torrenza seem to be moves toward protecting their server share, protecting the low-to-mid end, and moving into a "consumer electronics" market. I don't think they have a choice, because they won't have the money to produce the fastest processors in the personal computer space.