Intel shows new chips, outlines platform directions


Comments

  • Reply 61 of 177
    melgross Posts: 32,977 member
    Quote:

    Originally posted by Lemon Bon Bon

    Over the next ten years, I think Intel will do a far better job of achieving those goals than PPC.



    Intel/Sony have a Pentium M in a Vaio. Now. And .65 is nearly upon us.



    Apple/IBM don't have a G5 in a laptop now. And do we know when IBM is going .65?



    I don't think 'performance per watt' is marketing 'FUD' at all. Very few things match the promise when it comes to 'actual' performance (re: broadband throughput scam...), but with 10 of their CPU projects looking towards quad-core while keeping power requirements low... we have a good chance of nearing their stated goals.



    I welcome that Intel have finally got us all out of the 'megahertz' race. It means that, at last, we can focus on performance: instructions per clock, performance per watt of power, the architecture of the chip, multitasking, and an emphasis on software coders becoming more clever and efficient with their program designs to take advantage of multicore CPUs.



    It seems coders haven't caught up with the idea of the GPU as co-processor, or with optimising for dual-core/processor systems. SLI isn't optimised yet. PCI-Express doesn't seem any faster.



    It's early days. Chicken and egg. But as things are no longer merely getting 'faster', a new paradigm will emerge.



    I think it may make software more efficient and push the hardware more, as we can't rely on Intel 'just making things faster' through pure MHz.



    So, maybe apps like Photoshop are going to have to change to take advantage of dual core and APIs like Core Image. Software developers will have a big part to play in terms of bringing performance advantages. So, Cell may be hard to program for? But it's a new way of doing things. And learning something new takes time. But judging from what I've seen of the PS3 games... a multi-cpu future coupled with a gpu as co-processor is going to be very exciting when it takes off.



    Lemon Bon Bon




    IBM will be going 65nm about the same time as Intel.



    PCI Express is faster. The GPUs have to get to the point (which they may be reaching with the 7800 and 520) where their power can't be delivered through AGP.



    PS has taken advantage of two cores going back to pre-OS X days. The support will continue to get better over time.



    The Cell may very well be a backwater and a dead end. Development doesn't seem to be going that way.
  • Reply 62 of 177
    snoopy Posts: 1,901 member
    Quote:

    Originally posted by melgross





    . . . We don't know how the OS will handle 32 bit 64 bit switching, for example.









    I'm trying to understand this new issue that has come up. Is there something different about the way the PPC and x86 handle 32 and 64 bit applications? I understand it is no problem running 32 bit code on a 64 bit PPC Mac with 64 bit Tiger. A 32 bit application doesn't take up more memory nor run slower on the 64 bit system. It even runs a little faster than a 64 bit application I'm told.



    Will this no longer be true on Intel Macs? Does the PPC have some inherent advantage running 32 and 64 bit code?
  • Reply 63 of 177
    rickag Posts: 1,626 member
    Quote:

    Originally posted by snoopy

    I'm trying to understand this new issue that has come up. Is there something different about the way the PPC and x86 handle 32 and 64 bit applications? I understand it is no problem running 32 bit code on a 64 bit PPC Mac with 64 bit Tiger. A 32 bit application doesn't take up more memory nor run slower on the 64 bit system. It even runs a little faster than a 64 bit application I'm told.



    Will this no longer be true on Intel Macs? Does the PPC have some inherent advantage running 32 and 64 bit code?




    Not a technical answer, but the PPC ISA was designed from the beginning to be both 64 bit and 32 bit compatible. x86 wasn't.
  • Reply 64 of 177
    imiloa Posts: 187 member
    Quote:

    Originally posted by mdriftmeyer

    The trend of laptops outselling desktops is not sustainable. It will reach a saturation point, much faster than desktops did.



    The remote office still hasn't taken off like they had hoped, and the notion that the enterprise is going to go solely laptop is not a wise choice from a physical-security perspective.




    I see the ratio of laptops to desktops increasing a fair bit from here, especially with the promise of chips like the ones intel just announced.



    Logic:



    1) If Intel does achieve the power/watt they claim, then laptop batteries can become smaller/lighter, or stay the same and have much longer battery life (5W vs double-digit W). This will make laptops more attractive to more demographics.



    2) Everyone travels, even if it's just a weekend with family across town. A light laptop is attractive to anyone who relies on email for communication, or the web for day-to-day info (maps, weather, etc...).



    3) Modern performance is already sufficient for most folks' needs (email, web, doc editing). These new chips will be faster still. So the only real needs for a desktop will be top-notch performance for pros and hardcore vid cards for gamers. Both of these markets are a scant minority compared to the business and casual home markets combined.



    And these days, laptops don't even cost that much more than desktops.
  • Reply 65 of 177
    Quote:

    IBM will be going 65nm about the same time as Intel.



    PCI Express is faster. The GPUs have to get to the point (which they may be reaching with the 7800 and 520) where their power can't be delivered through AGP.



    PS has taken advantage of two cores going back to pre-OS X days. The support will continue to get better over time.



    The Cell may very well be a backwater and a dead end. Development doesn't seem to be going that way.



    I'll believe IBM at .65 when I actually see shipping product in a PowerMac.



    I take your point about GPUs and PCI-E.



    PS3 and Cell will do alright. The games are nothing short of stunning. Merely making cpus faster is out. More cores to do more jobs is in. Software will have to multithread much better as time goes on to see speed gains. Currently, it's early days..?



    As for Photoshop? Well, I don't see Adobe supporting Core Image? Aren't they still on CodeWarrior? Although, seeing as it has been yanked (going onwards...), they'll have to move to Xcode? Photoshop seems to be getting more bloated and slower over time. Are they REALLY using dual processors in the PowerMac?



    It will be interesting to see how apps respond to dual and quad-core cpus in the next few years. Maybe it will be games and not 'serious' apps that show the way?



    Lemon Bon Bon
  • Reply 66 of 177
    melgross Posts: 32,977 member
    Quote:

    Originally posted by Lemon Bon Bon

    I'll believe IBM at .65 when I actually see shipping product in a PowerMac.



    I take your point about GPUs and PCI-E.



    PS3 and Cell will do alright. The games are nothing short of stunning. Merely making cpus faster is out. More cores to do more jobs is in. Software will have to multithread much better as time goes on to see speed gains. Currently, it's early days..?



    As for Photoshop? Well, I don't see Adobe supporting Core Image? Aren't they still on CodeWarrior? Although, seeing as it has been yanked (going onwards...), they'll have to move to Xcode? Photoshop seems to be getting more bloated and slower over time. Are they REALLY using dual processors in the PowerMac?



    It will be interesting to see how apps respond to dual and quad-core cpus in the next few years. Maybe it will be games and not 'serious' apps that show the way?



    Lemon Bon Bon




    I don't mean the Cell will just up and disappear. I mean that its development model might not translate to other mainstream personal computer CPUs. Multiple cores seem to be the way things are going. Intel said that they will be coming out with four-core chips in 2007. AMD will no doubt do the same.



    Yes, PS has had, as I said, multi-chip ability for at least seven years now. It's not that it's bloated and slower. As more much-wanted features get added, and CPUs get faster, Adobe modifies its filters to use better, but more complex, algorithms. On older machines the new versions run more slowly, but the results are better. I imagine that they will move to Xcode, as will everyone else; except that Intel made an announcement that I expected and mentioned on another thread:



    http://www.eweek.com/article2/0,1895,1851752,00.asp
  • Reply 67 of 177
    sjk Posts: 603 member
    Quote:

    Originally posted by melgross

    I wish things were clear, but they're really not.



    Clearer.



    Obviously there are still plenty of unknowns, like you've mentioned.
  • Reply 68 of 177
    tht Posts: 3,952 member
    Quote:

    Originally posted by Lemon Bon Bon

    I'll believe IBM at .65 when I actually see shipping product in a PowerMac.



    There isn't going to be a 65 nm IBM PPC CPU in an Apple machine. It's virtually guaranteed. The 90 nm 970MP, if in fact Apple uses it, will be the last PPC CPU Apple uses.



    My bet for IBM shipping a volume CPU at 65 nm is the Xenon processor in December 2006. They'll try really hard for that, but I wouldn't be surprised by May 2007. December 2006 is hedging; May 2007 I'm confident they could do if they choose to. The 65 nm Cell in summer of '07 if Sony needs help.



    For big iron enterprise servers, IBM will live off its 90 nm Power5+ for another year, if not two. For cheap servers, IBM may actually sell an Intel 65 nm Xeon well before any 65 nm product from Fishkill, if there is any IBM 65 nm CPU for cheap servers at all. Maybe they'll sell some Cell- or PPE-based servers, but those will likely be a niche within the cheap server market.
  • Reply 69 of 177
    booga Posts: 1,081 member
    In other news, this doesn't seem to have gotten much press in the Mac community... I'm not sure if many realize how significant it is:

    http://www.eweek.com/article2/0,1895,1851752,00.asp



    Intel's compilers have been known to produce code that is up to several times faster than gcc's at many tasks, especially highly optimized, tight code. Although their tools won't support Objective-C, just compiling the back-end libraries and then linking them may provide a dramatic speedup for some Macintel apps. (Carbon will benefit dramatically, and Cocoa apps may realize some back-end speedup as well. While this will further increase Carbon's performance lead over Cocoa, the really intensive tasks will probably be able to use it either way.)



    When you see discrepancies between the x86 SPEC numbers published "officially" and the ones Apple used to compare against the PowerPC (note that Apple's x86 numbers are significantly lower), it's because Apple used gcc while the official numbers used Intel's compilers.



    They also mention providing transition tools for developers moving from PowerPC, although they obviously won't be supplying a compiler that can generate PowerPC code, so "fat" binaries will need a little work to integrate the Intel compiler-generated binary code.



    Look for the same program recompiled from linux to MacOS X to see a nifty little performance gain...
  • Reply 70 of 177
    jouster Posts: 460 member
    Quote:

    Originally posted by Splinemodel

    You forget that IBM has the Cell, which at least for the time being, seems to be the big dog on the block....



    Since the Cell is, by all accounts, utterly unsuited to general computing tasks, how will it compete with whatever processors Intel produces for Apple?
  • Reply 71 of 177
    wmf Posts: 1,164 member
    Quote:

    Originally posted by snoopy

    I'm trying to understand this new issue that has come up. Is there something different about the way the PPC and x86 handle 32 and 64 bit applications? I understand it is no problem running 32 bit code on a 64 bit PPC Mac with 64 bit Tiger. A 32 bit application doesn't take up more memory nor run slower on the 64 bit system. It even runs a little faster than a 64 bit application I'm told.



    x86 is the same as PowerPC in this case.
  • Reply 72 of 177
    splinemodel Posts: 7,311 member
    Quote:

    Originally posted by jouster

    Since the Cell is, by all accounts, utterly unsuited to general computing tasks, how will it compete with whatever processors Intel produces for Apple?



    1. How do you know this? It seems logical, but as far as I know it's still nothing more than an assumption. Programming 8 independent arithmetic or FPU threads is a lot more common than using a vector unit, which requires more attention to detail while programming. A compiler and a microkernel can do a lot to dish out these processes without any changes to the code.



    2. It doesn't matter anyway, since the target market will run software that's written for it, be it video games, super-computing, or nichey enterprise stuff.



    3. It's the shape of things to come, and software will change. In the last few years there has been a lot of emphasis on multi-threading and multiprocessing in software architecting. The trend is not going back the other way.
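    Splinemodel's claim that a runtime layer can dish out independent work without touching the per-task code can be sketched in a few lines. This is a hypothetical illustration, not how Cell's SPEs are actually programmed; the function names, pool size, and chunking scheme are all made up for the example. The point is that `scale` knows nothing about threads, while the dispatch wrapper decides how many workers to use.

```python
from concurrent.futures import ThreadPoolExecutor

def scale(chunk, factor):
    """Per-chunk arithmetic -- written with no knowledge of threading."""
    return [x * factor for x in chunk]

def scale_parallel(data, factor, workers=8):
    """Split the input into independent slices and hand them to a pool
    of worker threads. Only this dispatch layer changes when the core
    count changes; the per-chunk code above does not."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda c: scale(c, factor), chunks)
    # Reassemble the independent results in their original order.
    return [x for chunk in results for x in chunk]

print(scale_parallel(list(range(6)), 2, workers=3))  # [0, 2, 4, 6, 8, 10]
```

    In an interpreted language this is only a shape sketch, of course; the argument in the thread is about whether a compiler and kernel scheduler can do the equivalent split for native code.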
  • Reply 73 of 177
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by Splinemodel

    1. How do you know this? It seems logical, but as far as I know it's still nothing more than an assumption. Programming 8 independent arithmetic or FPU threads is a lot more common than using a vector unit, which requires more attention to detail while programming. A compiler and a microkernel can do a lot to dish out these processes without any changes to the code.



    A good friend of mine was involved with the design of Cell. He concurs that it is *NOT* suited for general purpose computing. You're welcome to challenge that, but I know whose opinion I'll listen to.
  • Reply 74 of 177
    hiro Posts: 2,663 member
    Quote:

    Originally posted by Splinemodel

    1. How do you know this? It seems logical, but as far as I know it's still nothing more than an assumption. Programming 8 independent arithmetic or FPU threads is a lot more common than using a vector unit, which requires more attention to detail while programming. A compiler and a microkernel can do a lot to dish out these processes without any changes to the code.



    2. It doesn't matter anyway, since the target market will run software that's written for it, be it video games, super-computing, or nichey enterprise stuff.



    3. It's the shape of things to come, and software will change. In the last few years there has been a lot of emphasis on multi-threading and multiprocessing in software architecting. The trend is not going back the other way.




    1. Basic computer science. The lack of out-of-order execution and very limited branch prediction is highly non-optimal for general-purpose code, which is inherently branch heavy. It is also poorly suited to the vast quantity of code generated by programmers who only know high-level concepts and therefore are unable to write code that is friendly to in-order hardware. I am a CS professor; this is not a small problem for the VAST majority of existing code.



    2. This assumes purpose-written code, which pushes CELL into niche, non-general-purpose markets. It will be great for the kind of code it can handle well, when written by programmers who understand the hardware.



    3. More branchy threads will only make the problem worse for CELL, not better.
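    Hiro's point about branch-heavy versus branch-free code can be made concrete with a toy clamp-to-zero function. This is a sketch in Python for readability only; on real in-order hardware the effect shows up in the compiled machine code, and the function names here are invented for the example. The first version takes a conditional branch per element; the second computes the same result with pure arithmetic (and `abs` typically compiles to branch-free bit operations on real ISAs), which is the style that in-order, prediction-poor cores reward.

```python
def clamp_branchy(xs):
    """One conditional branch per element -- cheap with good branch
    prediction, costly on an in-order core that has little of it."""
    return [x if x > 0 else 0 for x in xs]

def clamp_branchless(xs):
    """Same result via arithmetic alone: (x + |x|) // 2 is x for
    positive x and 0 otherwise, so there is nothing to mispredict."""
    return [(x + abs(x)) // 2 for x in xs]

data = [3, -1, 0, 7, -9]
print(clamp_branchy(data))     # [3, 0, 0, 7, 0]
print(clamp_branchless(data))  # [3, 0, 0, 7, 0]
```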
  • Reply 75 of 177
    melgross Posts: 32,977 member
    Quote:

    Originally posted by wmf

    Quote:

    Originally posted by snoopy

    I'm trying to understand this new issue that has come up. Is there something different about the way the PPC and x86 handle 32 and 64 bit applications? I understand it is no problem running 32 bit code on a 64 bit PPC Mac with 64 bit Tiger. A 32 bit application doesn't take up more memory nor run slower on the 64 bit system. It even runs a little faster than a 64 bit application I'm told.



    x86 is the same as PowerPC in this case.



    I'll repeat this quote:



    I'll quote from an article in Tom's Hardware. This is the consensus on this.





    "However, the memory advantage can turn into a disadvantage if you don't have enough of it. As each data chunk is 64 bits long, 32 bit chunks of a 32 bit legacy application can consume double the memory compared to running under a 32 bit OS. From this point of view, it does not make much sense to run Windows XP x64 with only a small amount of memory. If you go for this latest version, we recommend installing at least a gigabyte of RAM."
  • Reply 76 of 177
    kickaha Posts: 8,760 member
    mel, you're also describing how WinXP/64 manages 32 bit apps... it is entirely possible that Tiger/x86/64 (what a mouthful) will do it differently. The OS matters. To what extent... dunno.
  • Reply 77 of 177
    melgross Posts: 32,977 member
    Quote:

    Originally posted by Kickaha

    mel, you're describing also how WinXP/64 manages 32 bit apps... it is entirely possible that Tiger/x86/64 (what a mouthful) will do it differently. The OS matters. To what extent... dunno.



    Both treat 32 bit apps the same way in memory. The point is that the memory must be processed in double longs. You know: a word is 16 bits and a long is 32. 64 bits is a double long.



    The first 32 bits get filled with the program's data and the next 32 bits are just carried along. Empty bits, if you like. But they must be filled, so it takes twice as much memory to do an operation.



    It's like Roman numerals. There isn't a representation for zero. That screwed up their whole mathematical system. How can zero count?



    Well, we know better. 10 is a one and a zero. Zero is nothing, but it's still there taking up space in the calculations. It's the same thing here. The computer must fill up the line. It's not calculated upon, but it's needed. Like a compositor in the old days of type: fill the end of the line with lead blanks.
  • Reply 78 of 177
    snoopy Posts: 1,901 member
    Quote:

    Originally posted by melgross



    . . . The first 32 bits gets filled with the programs junk and the next 32 bits is just carried along. Empty bits, if you like. But they must be filled, so it takes twice as much memory to do an operation. . .









    Bummer! So the MacTel will need twice as much RAM as the PPC Mac to do the same job, right? I wonder how many more little surprises we are in for? Takes away a bit of enthusiasm for the Mac's new processors.



    So how can Intel overcome such disadvantages? Or are we stuck with them forever as old baggage from the original x86 design?
  • Reply 79 of 177
    kickaha Posts: 8,760 member
    Except that Mac OS X on the PPC gets around this, melgross. Do we know for a fact that whatever chip Apple ends up using will certainly cause problems? New architecture coming down the pipe, and all that...



    32 bits -> 64 bits is only going to be absolutely necessary for memory addresses (pointers). Ints, floats, etc. will still be packed as before, would be my guess, so memory usage will not double, just increase a bit.



    Yes, more memory will be needed, but I don't believe it's a simple doubling.
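    Kickaha's point, that only pointers need to widen, can be checked directly from the C type sizes a 64-bit system reports. A quick sketch using Python's ctypes; the specific sizes shown assume an LP64 platform (64-bit Mac OS X, Linux, etc.), which is an assumption about where you run it:

```python
import ctypes

# Scalar data types keep their 32-bit-era sizes under the LP64 model...
print(ctypes.sizeof(ctypes.c_int))     # 4 bytes
print(ctypes.sizeof(ctypes.c_float))   # 4 bytes
print(ctypes.sizeof(ctypes.c_double))  # 8 bytes

# ...while pointers (memory addresses) double to 8 bytes, so a 64-bit
# build grows mainly in its pointer-heavy structures, not across the board.
print(ctypes.sizeof(ctypes.c_void_p))  # 8 bytes on an LP64 system
```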
  • Reply 80 of 177
    sunilraman Posts: 8,133 member
    boy this whole 32/64 bit thing sure is confusing.



    after all the trouble with installing 64bit winXPro the greatest benefit i found was that AMD/Nvidia's 64-bit "blobby dancer" demo runs 3-5 fps faster



    then i thought to myself, okay, well this is a ***bleeped out by myself***



    mainly because dlink doesn't care to put out any linux or win64 drivers.



    thumbs up to nvidia and amd though for 64-bit support. the "blobby dancer" demo is pretty cool too, but other than these few select companies, and running Internet Explorer 64bit (why would you use Internet Explorer anyway), i'm lost on this 64bit thing.



    tried suse 64-bit, it definitely feels very responsive, i might even go so far as to pull a number out of my ass like "20% more snappy than the 32-bit version" ... but again, dlink wireless pci adapter is the deadweight again, ndiswrapper does not work because it has to wrap a 64-bit driver. these ACX100/111 texas instruments wifi cards have been a massive pain in the ass for the linux community.