Originally Posted by HardBall
Well, what do you mean by "developers coding for Intel"? They both use the same ISA, and compilers generate the binary machine code for run time. Do you mean that these "developers" actually code in x86 assembly?
As a computer engineer, you must surely know that compilers can *and do* optimize for a particular target, even within the same ISA. Also note that while both AMD and Intel support x86-64, they each also support their own extensions. Their implementations of virtualization and vector extensions have diverged.
You must also know that the highest performing x86 compiler is generally acknowledged to be 'icc', and is provided by Intel. It is reasonable to assume that it is well tuned for Intel's products, and less well tuned for AMD's products.
I assume you mean process technology, as that would be the standard terminology in ECE. Well, AMD was always 9-12 months behind in transitioning to smaller nodes at its fabs (when it still owned them). So I'm not sure what you mean by "anymore".
As a computer engineer, I'm sure you understand that 'technology' is a generic term. It can include micro-architecture, process, or even packaging technologies. Yes, Intel has long led in *process technology*. However, for several years AMD had an indisputable lead in processor performance. I assume the OP was referring to that period.
Just look at their problems in the past two years. Now, without their own fab, they will have even more problems with optimization.
So what are you suggesting, that fabless design houses such as ARM and Nvidia all turn out sub-standard products? What is your justification for that?
As a computer engineer, if you keep up with the trade, you are surely aware of the process issues that Nvidia has had. You are probably also aware of the process problems AMD experienced when outsourcing production to foundries. Tightly coupling design and lithography provides tremendous advantages, and AMD will start to lose them as GF becomes more independent.