Will Apple use another PowerPC processor?


Comments

  • Reply 81 of 126
    shadow Posts: 373 member
    What does it take to support more processor architectures?

    (a follow-up for my previous posts on this thread)



    Many members posting here think that changing processor architectures means the majority of developers rewriting a lot of code in order to "optimize" it for the new processor. This is not the case. Here is why.



    From a programmer's point of view, processors differ mainly in the following:
    • instruction set

    • byte order (big endian vs. little endian)

    • data register size (which may affect data alignment) and addressable memory (processor "bitness")

    • vector processing units or other processor-specific features

    The instruction set is handled by the compiler, unless you are writing in assembly (very, very few programmers do). Every processor vendor will make sure a compiler is available for the most widely used languages for its processor. One of the most widely used compilers, used by Apple's Xcode as well, is GCC - the GNU Compiler Collection - which provides compilers for a number of programming languages across a wide range of processors. Other compilers, e.g. from Intel or IBM, may be used, but they are not free for developers, are bound to specific processors, and are not commonly used. So, in general, the programmer needs to do NOTHING to compile for the processor architecture on which he/she is compiling. Compilers are also capable of cross-compiling - compiling for an architecture different from the one they are running on. For example, you can compile for PPC on an Intel computer and vice versa.



    In many cases the developer has to take care of byte order, in particular when reading data streams from disk or the network. To keep the code portable, there are some simple rules which should be followed here. Apple provides several ways of byte-swapping, including macros and functions. Byte-swapping errors may introduce bugs, but Apple has already fixed most of them (it has to support both endiannesses for now). It is not a big deal to keep code endian-savvy; moreover, it is considered bad programming practice not to do so.



    Many old Mac fans tend to believe that Mac developers worked a lot to support AltiVec. This is not the case. Apple did, and the majority of the results of that work is embedded in a very small number of frameworks and libraries (compared to the entire code base). The only thing developers need to do is use them. The AltiVec-optimized code is a small part of Mac OS, and a large part of it is "concentrated" in known places, so there is no need to go through all the code to make a switch if needed. Some developers do use processor vector units explicitly, but they are a very small percentage of the developer community. Most developers rely on auto-vectorization by the compiler or, more importantly, on other compiler optimizations. You may be surprised to know it, but most of the executable code out there, including the OS itself, is OPTIMIZED FOR SIZE, NOT FOR SPEED.



    There is much more trouble in supporting both 32-bit and 64-bit than in the other differences. This is more difficult to explain, but I will give it a try. When you have a function call in your code, you don't care about the processor's instruction set, endianness, or vector units; this is handled by the compiler. But the function always has a definite return type, for example integer or float. So far so good. But what should you do if you want to take advantage of the processor's larger data register size? You need a new function which returns the corresponding long data type. In your code you have to check which processor the code compiles for, and select the function you want to use. And you have to do this for every call, for a potentially large number of functions. This is a HUGE problem. To make it worse, most code relies on various libraries. You need libraries which are specifically made to support 64-bit (as said earlier, just recompiling for a different processor is much simpler), so you may have to wait until the author of the said library makes it 64-bit savvy.

    Leopard will have full 64-bit support (that is, all of Leopard's frameworks and libraries). Some applications (from Apple and third parties) will not. But Apple will provide tools to make supporting both 32-bit and 64-bit much easier than the scenario above.



    To summarize, Apple's code portability is a HUGE advantage over Windows (Linux is very portable, however). Now that Apple has improved and "tested" its portability, it would be ultra-stupid not to keep it that way, even when the current PPC installed base fades away. Most of the code in Mac OS will survive more than one processor architecture. And Apple will want to "have options", both for future processor architectures and for licensing Mac OS to third parties (remember, Windows has big trouble supporting multiple architectures).



    Sorry for the long post. Hope it makes some points more clear.
  • Reply 82 of 126
    shadow Posts: 373 member
    Quote:
    Originally Posted by shadow View Post


    But what should you do if you want to take advantage of the processor's larger data register size? You need a new function which returns the corresponding long data type.



    This is a simplification of the real situation. In reality, when a 64-bit processor processes 32-bit data, its registers are "half empty". Sometimes you can improve data processing by using all 64 bits - that is where you will want to consider 64-bit support.



    You can use 64-bit data on a 32-bit processor, but the compiler will need to replace a single operation with a larger number of instructions, which is slower, so you will not want to do this on a 32-bit processor if you can avoid it.



    Also, there are address/pointer-related issues. Some data types and structures will need to change to support the much larger address space.



    For computers, 64-bit is the main trend. But the processors in the iPhone and other small devices are unlikely to go 64-bit anytime soon. They simply don't need 64 bits in a meaningful way.
  • Reply 83 of 126
    user tron Posts: 89 member
    Quote:
    Originally Posted by melgross View Post


    Sony OWNS the IP for the Cell, along with IBM and Toshiba. They ALL developed it.They were producing it themselves as well.



    Have you any evidence that Nintendo is paying IBM for the R&D for their chip?



    You're just making things up here.



    It was part of the $1 billion deal between Nintendo and IBM.



    Quote:
    Originally Posted by melgross View Post


    As usual, you're wrong.



    I'll give you one just because you're lazy.



    http://www.extremetech.com/article2/...2135201,00.asp



    Again, a meaningless link. Wow, Intel continues to develop new CPUs; well, that's a surprise! Oh, and they are really faster than a two-year-old G5.



    Quote:
    Originally Posted by melgross View Post


    It shows how little you know about these companies.



    No, it shows that Google didn't provide you with a real example.



    But here's one for you:



    http://arstechnica.com/articles/colu...c-20050710.ars
  • Reply 84 of 126
    onlooker Posts: 5,252 member
    Quote:
    Originally Posted by shadow View Post


    What does it take to support more processor architectures?

    (a follow-up for my previous posts on this thread)



    ...





    I think you're taking this away from the real question, which is: will Apple use a PPC again? I'd say the overwhelming majority agree that the answer is no.
  • Reply 85 of 126
    snoopy Posts: 1,901 member
    Quote:
    Originally Posted by onlooker View Post




    I think you're taking this away from the real question, which is: will Apple use a PPC again? I'd say the overwhelming majority agree that the answer is no.






    I wouldn't call it an "overwhelming majority," but no matter what most people believe, it's Apple who will make this decision. Some here think Apple may use another PPC desktop or laptop chip someday. Others of us think it's more likely a consumer gadget that'll get a PPC, if anything does.



    All in all, the subject has created far more discussion than it's worth. Wouldn't you agree?



  • Reply 86 of 126
    hypoluxa Posts: 694 member
    Quote:
    Originally Posted by rickag View Post


    PPC is used everywhere from the embedded market to the mainframe market. It has only recently lost the desktop market, when Apple left.



    IBM manufactures PPC products for themselves and now for the Microsoft and Sony game consoles. Who knows where this will lead. It definitely will keep IBM interested in advancing their CPUs and the instruction set architecture.



    If I were Apple I would definitely maintain builds of Mac OSs on PPC.



    I concur. The "just in case" scenario could apply again; as Steve said when he first officially announced the Intel switch from PPC, they could go back if need be in the future.
  • Reply 87 of 126
    slewis Posts: 2,081 member
    Quote:
    Originally Posted by snoopy View Post


    Read the history of this reply. I asked whether Apple would use a PPC chip in the iPhone if IBM made a superior cell phone processor. Onlooker's reply was no, because Apple got into bed with Intel. Naturally I asked why Apple should be more restricted than Dell and HP, or any other company for that matter? These companies use chips from other vendors, and it doesn't appear to hurt the relationship with Intel. Since I referred specifically to the iPhone, not Macintosh computers, a requirement to use only Intel chips makes even less sense.



    Your post here doesn't relate to the discussion between onlooker and me.







    Apple will not keep PPC code around forever, that doesn't restrict Apple to using only Intel either. Chances are they will use only Intel for the next 5-10 years or longer, if for no other reason than to keep it simple.



    Quote:
    Originally Posted by jamm View Post


    The latest retail version of OS X Server (10.4.7) is universal. I thought Leopard was going to be too.



    It is.



    Quote:
    Originally Posted by snoopy View Post


    No argument here. Apple will use X86 type chips in Macintosh computers from now on. However, this does not mean Apple should use only X86 chips in other products. In fact, the iPhone and iPod do not use the X86 as you know.



    Is it more difficult to keep OS X working on PPC than ARM? I don't think so. If IBM is developing a really great PPC chip for the cell phone, why not keep the PPC code and drop ARM? I believe this could happen. What are the odds? Not very good, but possible.



    We are all speculating here, and no one has yet provided an inside track to say which scenario will emerge. I agree that the PPC is not very likely, but I object to it being dismissed as essentially impossible. That's all.











    No, but if IBM wants to succeed greatly, it will learn from anyone and anything it can.







    Since we both agree that the chances of PPC knocking ARM out are slim, why suggest it? If Apple thinks they need to keep PPC around for this, they will, but they likely won't continue supporting PPC at all. The iPod uses it, yes, but that's not even OS X; it's Pixo.



    The iPhone also probably uses ARM; even if it's an XScale chip, it's still ARM, so there's little chance they will keep PPC around just for that.



    Apple originally used PPC because it was the only CPU that could emulate the Motorola 68000 series for the portions of System 7 written in assembly.



    Sebastian
  • Reply 88 of 126
    slewis Posts: 2,081 member
    Quote:
    Originally Posted by rickag View Post


    Mainframes and servers, so no, it's not dead for computers, just desktops.



    Some people already consider the xbox a media center.



    both of which use the PPC instruction set architecture.



    Try Media Center Extender (TM).



    Microsoft really does make no sense sometimes. Windows Media Player and Windows Media Center share very little; whenever I used Media Center I always felt like I was navigating Explorer for specific file types in an MCE interface. And the idea of having both a Media Center and a Media Center Extender (both are made for connecting to the TV) is an oxymoron.



    Quote:
    Originally Posted by shadow View Post


    Sorry for the long post. Hope it makes some points more clear.



    No need to apologize, I enjoyed the read.



    Sebastian
  • Reply 89 of 126
    mr. me Posts: 3,221 member
    Quote:
    Originally Posted by Slewis View Post


    ...



    Apple originally used PPC because it was the only CPU that could emulate the Motorola 68000 series for the portions of System 7 written in Assembly.



    ...



    The emulator used in PPC-based Macs emulated 680x0 code irrespective of the language from which it was compiled. The PPC's ISA was well-suited for emulation, but was by no means necessary for emulation. There is no better proof of this than Rosetta, x86-based software which emulates the PPC.
  • Reply 90 of 126
    slewis Posts: 2,081 member
    Quote:
    Originally Posted by Mr. Me View Post


    The emulator used in PPC-based Macs emulated 680x0 code irrespective of the language from which it was compiled. The PPC's ISA was well-suited for emulation, but was by no means necessary for emulation. There is no better proof of this than Rosetta, x86-based software which emulates the PPC.



    It's what was, not what is. The Intel chips can emulate PPC chips now, but they had a very hard time doing so with Motorola chips in 1994.



    Sebastian
  • Reply 91 of 126
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by snoopy View Post


    Read the history of this reply. I asked whether Apple would use a PPC chip in the iPhone if IBM made a superior cell phone processor. Onlooker's reply was no, because Apple got into bed with Intel. Naturally I asked why Apple should be more restricted than Dell and HP, or any other company for that matter? These companies use chips from other vendors, and it doesn't appear to hurt the relationship with Intel. Since I referred specifically to the iPhone, not Macintosh computers, a requirement to use only Intel chips makes even less sense.



    Your post here doesn't relate to the discussion between onlooker and me.







    You might remember that you did ask me the question first.



    But, the other point to your post was that Dell and others use AMD chips. You specifically referred to AMD. As AMD doesn't manufacture a chip for a phone, you moved your question back to computer chips.



    I properly replied to that with the correct rebuttal.



    You can roll your eyes all you want, but you must understand your own post first, before you criticize a reply to it.
  • Reply 92 of 126
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by snoopy View Post


    No argument here. Apple will use X86 type chips in Macintosh computers from now on. However, this does not mean Apple should use only X86 chips in other products. In fact, the iPhone and iPod do not use the X86 as you know.



    Is it more difficult to keep OS X working on PPC than ARM? I don't think so. If IBM is developing a really great PPC chip for the cell phone, why not keep the PPC code and drop ARM? I believe this could happen. What are the odds? Not very good, but possible.



    We've gone over this so many times already that it's becoming difficult to try to say it in another way.



    You won't let go of the idea that IBM has nothing to offer Apple. They aren't moving into the area of phone chips.



    Apple is through with using major chips in their products that don't have a proven lineage, or that aren't used by enough others to ensure they won't disappear or fail to be systematically updated.



    Every time Apple moves to a new chip type, they have to develop new IP. That means developing new APIs, which is a lot of work.



    Apple is interested in using products from companies that are experienced in producing products for the industries they are moving into: another Intel chip for the ATV, and possibly the well-established ARM family for the phone.



    Why would Apple even consider an unproven chip from a company with no history of producing product in that field?



    IBM has now proven itself to be fickle toward a customer that it was itself courting.



    Don't forget that IBM knew quite well the number of chips that Apple was going to buy over the next few years. They also knew that Apple desperately wanted a high performance low power chip for their laptops.



    There was no excuse to back off on the R&D required for Apple's needs when they campaigned for Apple to use them, and knew of their needs.



    If Apple were my company, I would never trust IBM's chip making unit again. There are plenty of other companies who are thrilled at the possibility of having Apple as a customer, and who are bending over backwards to please them. Even Intel is doing that. We have plenty of evidence of it.



    Quote:

    We are all speculating here, and no one has yet provided an inside track to say which scenario will emerge. I agree that the PPC is not very likely, but I object to it being dismissed as essentially impossible. That's all.



    If that's all, will you accept a possibility of 0.00001%?



    Quote:

    No, but if IBM wants to succeed greatly, it will learn from anyone and anything it can.







    IBM is doing quite well. They are not in the business that Apple is. They, like almost every other company, have to divulge their future moves to their customers in advance.
  • Reply 93 of 126
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by shadow View Post


    3. Keeping many architectures for computers only complicates support, so Apple will avoid it. The reason for this is NOT OPTIMIZATION, as many here seem to believe; design and manufacturing costs of motherboards, testing, and compatibility are more important here - see my next post.



    Actually, it's both. The hardware problem is a good mention. But, if there is no new hardware to develop to, there will be no new boards to design and manufacture. If IBM stops making the 970, as now seems quite possible, then there will be NO new chips from them to work with. The Power line is simply not in Apple's sights.



    Quote:

    4. Apple is widening the base of hardware running Mac OS X. It may make sense to have different architecture for some range of devices (iPods, iPhones or whatever Apple decides to introduce next), if no suitable option from Intel is available. NOTE: iPod currently does not run OS X, but it is likely that the future versions will use it.



    Yes, we basically agree with that.
  • Reply 94 of 126
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by shadow View Post


    What does it take to support more processor architectures?

    (a follow-up for my previous posts on this thread)



    ...



    That's nice, except that you've oversimplified dramatically.



    In a perfect world, what you said would be true. But the world is far from perfect.



    Developers have complained mightily about Xcode's deficiencies. This is not new.



    Despite what you say, using AltiVec is NOT something that Apple can take care of except in the simplest of ways. That's why Adobe had to come out with a code package for PS to enable it to work with AltiVec. And even that only worked for a few of the most popular filters.



    There are optimising compilers, and non-optimising compilers. There are compilers designed to make things look good on SPEC tests, but perform miserably on real-world assignments, etc. Are the libraries complete or not?



    There are so many areas to talk about that we can't do it here.



    I will remind people that when Jobs announced the move to x86, we had a demo of Mathematica. "It just took a few hours", they thrillingly proclaimed.



    But it took quite a few months before an x86 product was released.
  • Reply 95 of 126
    snoopy Posts: 1,901 member
    Quote:
    Originally Posted by melgross View Post




    You won't let go of the idea that IBM has nothing to offer Apple. They aren't moving into the area of phone chips.






    I agree with your first sentence, IBM has nothing for Apple today. We don't know what IBM will be doing tomorrow however. The reason I picked on phone chips is the great number of such chips sold. If IBM believed they could build a much better chip than ARM for a phone, IBM just might take it on. I guess I don't like to write off a company like IBM just because of bad past experiences.



    It's likely not worthwhile for Apple to keep OS X up to date for the PPC for decades, but Apple could be prepared to go back to the PPC if the opportunity presents itself. That basically is all I'm saying. We disagree on the odds of something like this happening, but otherwise I haven't disagreed with very much of what you normally post.



    The biggest reason I keep replying is that I feel you misinterpret what I am trying to say, and I try to defend myself.



  • Reply 96 of 126
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by snoopy View Post


    I agree with your first sentence, IBM has nothing for Apple today. We don't know what IBM will be doing tomorrow however. The reason I picked on phone chips is the great number of such chips sold. If IBM believed they could build a much better chip than ARM for a phone, IBM just might take it on. I guess I don't like to write off a company like IBM just because of bad past experiences.



    It's likely not worthwhile for Apple to keep OS X up to date for the PPC for decades, but Apple could be prepared to go back to the PPC if the opportunity presents itself. That basically is all I'm saying. We disagree on the odds of something like this happening, but otherwise I haven't disagreed with very much of what you normally post.



    The biggest reason I keep replying is that I feel you misinterpret what I am trying to say, and I try to defend myself.







    I'm not misinterpreting it. I reply to exactly what you are saying. Perhaps you are not saying what you mean to.



    But, you also ignore the reasons that I, and others, give for what we're saying, and then just keep repeating yourself.



    It makes for a difficult conversation.



    I gave you very good reasons why Apple shouldn't consider IBM again, but you don't respond to them at all.
  • Reply 97 of 126
    snoopysnoopy Posts: 1,901member
    Quote:
    Originally Posted by melgross View Post




    You might remember that you did ask me the question first.






    I did remember, and that's why I said "read the history of this reply." In the reply to onlooker I said, "I'll ask you the same question I asked melgross."





    Quote:



    But, the other point to your post was that Dell and others use AMD chips. You specifically referred to AMD. As AMD doesn't manufacture a chip for a phone, you moved your question back to computer chips.






    No, no, that wasn't my point at all. I was referring to companies that use Intel chips a great deal, but are free to use other vendors. So I was stating that Apple should be free to use other vendors than Intel, for whatever product Apple makes: computers, iPods, iPhones and so on.



    In my reply to onlooker, where this all came from, I could have misinterpreted what onlooker said; I'm not sure. Onlooker disagreed with me, but never criticized my reply. (Unlike others in this discussion. Just joking around.)



  • Reply 98 of 126
    melgrossmelgross Posts: 33,510member
    Quote:
    Originally Posted by snoopy View Post


    I did remember, and that's why I said "read the history of this reply." In the reply to onlooker I said, "I'll ask you the same question I asked melgross."









    No, no, that wasn't my point at all. I was referring to companies that use Intel chips a great deal, but are free to use other vendors. So I was stating that Apple should be free to use other vendors than Intel, for whatever product Apple makes: computers, iPods, iPhones and so on.



    In my reply to onlooker, where this all came from, I could have misinterpreted what onlooker said; I'm not sure. Onlooker disagreed with me, but never criticized my reply. (Unlike others in this discussion. Just joking around.)







    So then let's get the history correct.



    We are talking about Apple's possibilities of going back to, or adding another PPC (or more likely POWER, as the PPC seems to be condemned to the trash shortly) to their lineup sometime in the future.



    You think that's possible, while we don't.



    So far, so good (I think).



    But, then you mentioned the phones, and asked if we thought that Apple would use an IBM chip if they came out with one, and if it was better than what was out there. We said that we didn't think they would.



    Then you said, and I quote, so that you can see your own words:



    Quote:

    Why should Apple be more restricted than Dell and HP in what it does? Dell and HP use Intel chips, but do these companies not also use AMD at times?



    I replied that AMD and Intel both produce x86 chips, so it doesn't matter which one is used, and that the PPC is quite different, so it's not the same thing at all.
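    One concrete way to see the point (my own illustration, not from this thread): AMD and Intel share the x86 instruction set and byte order, so the same binaries run on either, while the PPC is big-endian with a different instruction set entirely. This tiny C sketch shows how the very same 32-bit value is laid out differently in memory depending on the architecture's endianness:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* The same 32-bit value occupies memory in opposite byte orders on
           little-endian x86 and big-endian PPC, which is one reason their
           binaries (and on-disk data formats) are not interchangeable. */
        uint32_t v = 0x01020304;
        const uint8_t *b = (const uint8_t *)&v;

        if (b[0] == 0x04)
            printf("little-endian (x86-style) layout: %02x %02x %02x %02x\n",
                   b[0], b[1], b[2], b[3]);
        else
            printf("big-endian (PPC-style) layout: %02x %02x %02x %02x\n",
                   b[0], b[1], b[2], b[3]);
        return 0;
    }
    ```

    On an Intel or AMD machine this reports the little-endian layout; on a PPC it would report the big-endian one, which is exactly the byte-swapping issue Apple's universal-binary transition had to handle.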



    No one is denying that Apple uses other chips when required. But you have to actually read the entire posts we write so carefully. You seem to be skipping most of what we say in your replies. I've said that before. I reply fully to your posts. I give reasons for what I think to be true. I write out a logical argument, give information. I take you seriously.



    You ignore all of it.



    You must reply to our points. You refuse to do that.
  • Reply 99 of 126
    rickagrickag Posts: 1,626member
    http://arstechnica.com/news.ars/post...n-to-32nm.html



    Kind of relevant in an indirect sort of way.



    "IBM alliance will take the fight with Intel down to 32nm"

    By Jon Stokes | Published: May 24, 2007 - 02:43PM CT



    Quote:

    IBM, Chartered, Samsung, Infineon, and Freescale will all be working together on the next-generation 32nm process, with IBM partner AMD also reaping the benefits through an existing deal. (Recall that 32nm is when IBM will introduce its "airgap" technology.)



    In the immortal words of Roseanne Roseannadanna, "You just never know, Mr. Feder."
  • Reply 100 of 126
    mr. memr. me Posts: 3,221member
    Quote:
    Originally Posted by Slewis View Post


    It's what was, not what is. The Intel chips can emulate PPC chips now, but they had a very hard time doing so with Motorola chips in 1994.



    ...



    In your previous post, you claimed that the PPC was chosen to emulate 680x0 assembly routines. You were wrong. The PPC was chosen to emulate 680x0 binaries. Learn the difference. In your previous post, you said that PPC processor was the only processor that could do the job. You were wrong. The PPC may have been the best, but there was a huge difference between being the best and being the only. That was true in 1994. It is true now.