Originally Posted by melgross
Except that they aren't exactly the same, as they use different technologies in a number of areas. Developers often code for one chip or the other. In fact, almost all code for Intel, making AMD performance inferior in a number of areas. The only place where that's not true is HPC, and Apple doesn't compete there, unless, very unlikely as it may be, Apple is considering it now.
Well, what do you mean by developers "coding for Intel"? Both vendors implement the same ISA, and the compiler generates the machine code that actually runs. Do you mean that these "developers" hand-write x86 assembly?
AMD has no leading technologies anymore, and a poor record of presenting product on time.
I assume you mean process technology, as that would be the standard terminology in ECE. AMD has always been 9-12 months behind in transitioning to smaller nodes at its fabs (when it still owned them), so I'm not sure what you mean by "anymore".
[quote]Just look at their problems in the past two years. Now, without their own fab, they will have even more problems with optimization.[/quote]
So are you suggesting that fabless design houses such as ARM and Nvidia all put out sub-standard products? What is your justification for that?
Unfortunately, I have to leave for the day, so I won't be back 'till late. Too bad, this is the most important discussion here in years, if true.
Well, I have been lurking on this forum for a long time, but this is the first time I've felt compelled to post in a thread: it's an interesting topic, and this particular post's assertions seem to warrant some definite answers.
I am a computer engineer, and I have studied computer microarchitectures for a long time; I post regularly on arstechnica and anandtech about these issues. I assume that, since you are able to make these claims, you are a computer engineer/scientist as well, and that you will have good answers to these questions.
Looking forward to your reply when you get back.