Intel vulnerable? IBM or Motorola to take advantage?

Posted in Future Apple Hardware, edited January 2014
<a href="http://www.nytimes.com/2002/09/29/technology/circuits/29CHIP.html" target="_blank">New York Times link</a>



Of course, how this ties into a company like Apple, which does not have a strong enterprise presence, is another question entirely; any connection is merely hearsay at this point.



snippets:



[quote]Itanium, a joint project of Intel and Hewlett-Packard, Silicon Valley's two largest companies, has been in the laboratory for more than a decade. Itanium is designed to excel at a sweeping array of advanced computing tasks, from solving grand scientific challenges to rendering complex graphics to slicing through vast databases…



…But Google isn't buying…



…Intel still says its commitment to Itanium is unwavering. Its smaller rival, Advanced Micro Devices, has an alternative to Itanium that computer makers are seriously considering. And Intel has a fallback project, called Yamhill, in case Itanium founders. But mainly, Intel is redoubling its bet on Itanium.



It has taken an entire decade, an estimated $5 billion and teams of hundreds of engineers from the two companies to bring the first Itanium chip to market. As the struggles and costs mount for the companies, skeptical technologists say Itanium now has the hallmarks of a bloated project in deep trouble. It is already four years behind schedule, emerging just as companies are in no mood to spend money on technology.



…INCREASINGLY, Intel is facing the risk that it has chosen the wrong path to high-performance computing. It may have looked backward as it developed the microchip equivalent of the behemoth computers of the past.



Eric Schmidt, the computer scientist who is chief executive of Google, told a gathering of chip designers at Stanford last month that the computer world might now be headed in a new direction. In his vision of the future, small and inexpensive processors will act as Lego-style building blocks for a new class of vast data centers, which will increasingly displace the old-style mainframe and server computing of the 1980's and 90's.



It turns out, Dr. Schmidt told the audience, that what matters most to the computer designers at Google is not speed but power: low power, because data centers can consume as much electricity as a city.<hr></blockquote>

Comments

  • Reply 1 of 22
    belle Posts: 1,574 member
    Interesting article. Thank you for posting it.

    [quote]Originally posted by BuonRotto:

    <strong>Eric Schmidt, the computer scientist who is chief executive of Google, told a gathering of chip designers at Stanford last month that the computer world might now be headed in a new direction. In his vision of the future, small and inexpensive processors will act as Lego-style building blocks for a new class of vast data centers, which will increasingly displace the old-style mainframe and server computing of the 1980's and 90's.</strong><hr></blockquote>

    Isn't this also the approach that NVIDIA is taking? The idea that lots of little application-specific processors would be much faster and more efficient than one fat multi-purpose CPU?



    It makes a lot of sense to me.
  • Reply 2 of 22
    telomar Posts: 1,804 member
    I'd expect IBM to benefit most in the years to come, depending on how soon they can get some of the new architecture taped out and into production.



    That said, I also don't believe Intel is all that vulnerable. I believe the Itanium has taken off a little slower than Intel might have liked, but I'm also fairly certain they were prepared for it. I also don't believe they will allow it to fail.



    They had to move away from their current architecture to something more modern, which is the EPIC instruction set. The Itanium has shown signs of steady improvement, and with the first contributions from the ex-Alpha engineers appearing in the next revision, that's when I would expect it to take off a little more.



    [quote] Eric Schmidt, the computer scientist who is chief executive of Google, told a gathering of chip designers at Stanford last month that the computer world might now be headed in a new direction. In his vision of the future, small and inexpensive processors will act as Lego-style building blocks for a new class of vast data centers, which will increasingly displace the old-style mainframe and server computing of the 1980's and 90's. <hr></blockquote>



    In case you weren't already aware that is basically a description of IBM's "cell" architecture. Not quite but close enough. My guess is Google has already signed to IBM. No biggy.



    [ 09-29-2002: Message edited by: Telomar ]
  • Reply 3 of 22
    Yes, I suppose there were a couple of agendas that made me post this. One is that it seems to describe the general idea behind the POWER4 architecture; the other is that Apple's implementation of QE on the GPU got me thinking some time ago about this idea -- of having more specific processors perform more specific operations. Would the idea of the CPU go the way of the dodo? Would some processors (if they had feelings) be happier, for example, if floating point operations were moved somewhere else, leaving neat and clean integer operations to do their job? I suppose you could twist this into a "Raycer chipset" argument too. (Though I'm convinced that won't ever flesh out, as if anyone still thinks otherwise.)



    What surprised me the most wasn't the more specialized chipset chatter, it was the power consumption one. But unless there's another oil-embargo type of emergency, I can't see that agenda moving into the consumer's consciousness, maybe ever.
  • Reply 4 of 22
    xype Posts: 672 member
    [quote]Originally posted by Telomar:

    <strong>That said, I also don't believe Intel is all that vulnerable. I believe the Itanium has taken off a little slower than Intel might have liked, but I'm also fairly certain they were prepared for it. I also don't believe they will allow it to fail.</strong><hr></blockquote>



    Uhm, yeah, but Intel can want it to succeed all they want; if the customers aren't happy, it's not gonna sell. In my opinion it's one thing to sell CPUs to consumers who only care about MHz, but if you're trying to sell to big organizations and the like, you have to be prepared for the fact that the people deciding what to buy do have a clue, and if they think, say, an AMD Hammer CPU would be a better investment, Intel is out of the loop.



    There are many people talking down the Itanium, and I will wait and see whether they'll put their money where their mouth is. _If_ they do, Intel should have a working Plan B ready (which most likely they do).
  • Reply 5 of 22
    belle Posts: 1,574 member
    [quote]Originally posted by BuonRotto:

    <strong>I suppose you could twist this into a "Raycer chipset" argument too. (Though I'm convinced that won't ever flesh out, as if anyone still thinks otherwise.)</strong><hr></blockquote>

    Don't even go there, bucko.



  • Reply 6 of 22
    stoo Posts: 1,490 member
    [quote]Would some processors (if they had feelings) be happier, for example, if floating point operations were moved somewhere else, leaving neat and clean integer operations to do their job?<hr></blockquote>



    Only if integer and FP operations were totally separated, which is unlikely.
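    A toy illustration of why a total split is unlikely (Python here just to show the interleaving; the point is hardware-independent): even an obviously floating-point kernel is shot through with integer work for counters and addressing, so moving FP somewhere else would mean shipping every array index across the gap.

    [code]
    # Even this "floating-point" kernel is mostly integer plumbing:
    # the loop counter, the bounds check, and the index-to-address
    # arithmetic are integer operations interleaved with float math.
    def dot(a: list[float], b: list[float]) -> float:
        total = 0.0
        for i in range(len(a)):   # integer ops: counter and bounds
            total += a[i] * b[i]  # integer op: indexing; float ops: multiply, add
        return total
    [/code]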
  • Reply 7 of 22
    [quote]Originally posted by Belle:

    <strong>Isn't this also the approach that NVIDIA is taking? The idea that lots of little application-specific processors would be much faster and more efficient than one fat multi-purpose CPU?</strong><hr></blockquote>



    It sounded to me like Schmidt was referring to blade servers, and that he thinks they will eventually replace current big-iron servers and mainframes in the data center.



    [quote]Originally posted by Telomar:

    <strong>In case you weren't already aware that is basically a description of IBM's "cell" architecture. Not quite but close enough. My guess is Google has already signed to IBM. No biggy.</strong><hr></blockquote>



    No, the Cell architecture is made up of multiple processor cores within a single chip. Besides, it's targeted at consumer electronics, so it most likely won't appear in server products any time soon, if ever.



    [quote]Originally posted by BuonRotto:

    <strong>What surprised me the most wasn't the more specialized chipset chatter, it was the power consumption one. But unless there's another oil-embargo type of emergency, I can't see that agenda moving into the consumer's consciousness, maybe ever.</strong><hr></blockquote>



    Power consumption matters if you're a business that runs large server farms, like Google does. Large data centers consume ungodly amounts of electricity already, and as processors get faster and more power-hungry, this is only going to become more of a problem. BTW, the article had nothing to do with specialized chipsets or the consumer market.
  • Reply 8 of 22
    matsu Posts: 6,558 member
    One part that doesn't gel with the big, hot, multi-purpose versus small, single-purpose story is the whole bit about the threat from AMD. Surely the Hammer will not be the epitome of cool, application-specific processors.



    I think the real reason Intel is hedging its bets, and this is the second time it's been mentioned, is the momentum of x86. If AMD can deliver x86-64 both sooner and cheaper than IA-64, then Intel faces a problem. While it may be positively ancient, x86 is huge: so many developers, so much legacy code. You think the 68K-to-PPC switch was tough? Maybe x86 should get left behind for a number of reasons, and maybe programmers would love to do just that, but do customers?



    So, Intel also has an x86-64 product in the labs, just in case. It may come to pass that x86 is so well entrenched in the Wintelon world that it will carry forward into a new generation.



    M$ can afford to support both IA-64 and x86-64, so they'll have something in the lab for both scenarios as well. But should they support one over the other, then the other guy's CPU is basically dead...



    Intel has really been jumping through a lot of Palladium-specific hoops. They might be trying to win M$'s favor.
  • Reply 9 of 22
    tht Posts: 5,421 member
    Intel is not vulnerable. They are the best manufacturer of microprocessors in the world. No one comes close to producing the quantity and complexity of processors they do. They may not have the best microprocessor architectures, but when it comes to churning them out, Intel is not going to be beat. Having state-of-the-art fabs also means that even mediocre designs come out as high-performance, lower-power processors.



    In considering future markets, it's a cardinal sin not to consider Intel's fab power.
  • Reply 10 of 22
    Doesn't the building-block approach also sound suspiciously like Moto's 8500 series -- the Book E spec, I guess, if we want to be PPC-generic about it -- which has to date been lots of sound and fury and very little product?



    THT is correct about Intel. They are winning on fab capacity and capability.



    [quote]What surprised me the most wasn't the more specialized chipset chatter, it was the power consumption one. But unless there's another oil-embargo type of emergency, I can't see that agenda moving into the consumer's consciousness, maybe ever.<hr></blockquote>

    DARPA has a low-power research initiative going; IBM's Austin facility is involved.
  • Reply 11 of 22
    mr. me Posts: 3,221 member
    [quote]Originally posted by THT:

    <strong>Intel is not vulnerable. They are the best manufacturer of microprocessors in the world. No one comes close to producing the quantity and complexity of processors they do. They may not have the best microprocessor architectures, but when it comes to churning them out, Intel is not going to be beat. Having state-of-the-art fabs also means that even mediocre designs come out as high-performance, lower-power processors.



    In considering future markets, it's a cardinal sin not to consider Intel's fab power.</strong><hr></blockquote>

    Intel's fabrication prowess is not to be underestimated. However, it is brute force, and while history has shown that brute force can take you a long way, products better adapted to the marketplace eventually take over, or new markets are created that render the old products irrelevant. So far, Intel has dumped $billions into IA-64 in its attempt to leverage its current market dominance into future market dominance. In the meantime, others have made $billions selling into the very future markets Intel is trying to crack. This is a situation that cannot go on forever.



    Underestimate Intel? No. Can Intel be defeated? Most certainly, even if it is Intel that does the deed.
  • Reply 12 of 22
    Belle:



    [quote]Originally posted by Analogue bubblebath:

    <strong>BTW, the article had nothing to do with specialized chipsets or the consumer market.</strong><hr></blockquote>



    I know. I'm extrapolating.
  • Reply 13 of 22
    123 Posts: 278 member
    [quote]Originally posted by Telomar:

    <strong>In case you weren't already aware that is basically a description of IBM's "cell" architecture. Not quite but close enough. My guess is Google has already signed to IBM. No biggy.

    </strong><hr></blockquote>



    Google doesn't quite work that way. They basically run their stuff on the cheapest x86 hardware, using the cheapest IDE drives. Their network is highly redundant. Every few minutes, a hard drive or another part dies, so what? Just replace it. It's cheap. That's their philosophy, always has been.
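    To make that concrete, here's a toy sketch of the philosophy, not anything resembling Google's actual code (the hostnames, port, and wire format are all invented for illustration): fan the request out over interchangeable cheap boxes and take the first answer that comes back; a dead box just gets skipped and swapped out later.

    [code]
    # Toy sketch of "redundancy over reliability": try interchangeable
    # commodity boxes in turn and take the first answer. The hostnames,
    # port, and protocol here are invented for illustration.
    import socket

    REPLICAS = ["index1", "index2", "index3"]  # cheap, identical boxes

    def query_first_alive(replicas, request: bytes, timeout: float = 0.5) -> bytes:
        """Ask each replica in turn; a dead machine just means 'next'."""
        for host in replicas:
            try:
                with socket.create_connection((host, 8080), timeout=timeout) as sock:
                    sock.sendall(request)
                    return sock.recv(4096)  # first healthy box wins
            except OSError:
                continue  # box is dead or unreachable -- it'll get swapped out
        raise RuntimeError("all replicas down (unlikely with enough cheap boxes)")
    [/code]

    No single machine has to be reliable; the failure handling lives in software, so the hardware can be as cheap as it gets.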
  • Reply 14 of 22
    telomar Posts: 1,804 member
    [quote]Originally posted by 123:

    <strong>



    Google doesn't quite work that way. They basically run their stuff on the cheapest x86 hardware, using the cheapest IDE drives. Their network is highly redundant. Every few minutes, a hard drive or another part dies, so what? Just replace it. It's cheap. That's their philosophy, always has been.</strong><hr></blockquote>



    To be perfectly honest, I've never been overly familiar with the operation of search engines.



    Still, redundancy isn't something that IBM is opposed to, and there would definitely be room for improvement in that system design.



    Either way, a move away from x86 to lower-power chips wouldn't hurt. When you are using probably around 3 MW (a very crude estimate) of power to run your server farm, you'd want to look at how you can cut those numbers.
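    For what it's worth, the arithmetic behind a guess like that is simple enough to write down. The server count and per-box wattage below are my own assumptions, not anything Google has published:

    [code]
    # Back-of-envelope check on the ~3 MW guess above.
    # Both inputs are illustrative assumptions.
    servers = 15_000       # guessed number of commodity boxes
    watts_per_box = 200    # guessed average draw per box, drives included

    total_megawatts = servers * watts_per_box / 1_000_000
    kwh_per_year = servers * watts_per_box * 24 * 365 / 1_000

    print(f"{total_megawatts:.1f} MW")          # -> 3.0 MW
    print(f"{kwh_per_year:,.0f} kWh per year")  # -> 26,280,000 kWh per year
    [/code]

    Shave even 50 W off each box and the bill drops by a quarter, which is exactly why Schmidt cares more about power than raw speed.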
  • Reply 15 of 22
    Google runs on a cluster of PCs running Linux. If Google needs more power, they will just add some more PCs running Linux.



    The speed of Google can be attributed to the quality of their code, not their processors.
  • Reply 16 of 22
  • Reply 17 of 22
    jrc Posts: 817 member
    3 GHz sounds to me like Intel is vulnerable. How long were they stuck at 2+ GHz before the (near-future) move to 3 GHz? I mean, COME ON, move already!



    My Oracle-to-mainframe IDMS, using a Java middle-tier infrastructure to the web front end, is just soooo slow at 2+ GHz.



    I'm glad Apple is at 4+ GHz!



  • Reply 18 of 22
    [quote]Originally posted by Ptrash:

    <strong>And from today's Times (P1 of biz section):

    <a href="http://www.nytimes.com/2002/09/30/technology/30SPEE.html" target="_blank">http://www.nytimes.com/2002/09/30/technology/30SPEE.html</a></strong><hr></blockquote>



    The link wants you to sign up for the NY Times so you can read the article. Thanks anyways, but I don't want to sign up for the Times.
  • Reply 19 of 22
    ast3r3x Posts: 5,012 member
    Intel is vulnerable... but to AMD.
  • Reply 20 of 22
    MacJedai: I've never tried it, but I believe spamfree / spamfree is a valid NYTimes login.



    (hey, the reg is free anyway, so why not just make your login, set the cookie, and forget about it forever?)



    neye