Originally Posted by ConradJoe
Ok, so you're saying that in general, big orders are better than small orders. I can't argue with that as a general rule.
But you are also assuming that Apple pays enough to make their big orders a better alternative to Samsung than small orders from others. I don't think you have any evidence for that at all, other than "in general, big orders are better than small orders".
But it all depends on how little Apple pays. And how much others are willing to pay.
You are still missing the point, probably because you have no idea how semiconductor production works. Take it from me: I have a much better understanding of it, having worked in EDA before, and working today for the world's largest supplier of litho gear.
If you have a whole fab full of rows of multi-million-dollar machines for producing chips, you want to have them fully booked 24/7, with minimal downtime and maximum efficiency. On every wafer you want to minimize defects and maximize yield, with as few surprises as possible, because every hour of downtime means lost revenue. A single wafer can easily have 200 to 300 chips on it, and a wafer scanner that is perfectly tuned for the reticle and process in production, running without any anomalies, can churn out around 200 to 250 wafers an hour. That should give you an impression of how costly it is to shut down a wafer scanner for whatever reason: scheduled or unscheduled maintenance, re-calibration, diagnostics, or the most expensive of all, switching production jobs.
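To put a rough number on that, here is a back-of-envelope sketch using the throughput figures above. The per-chip revenue is purely an assumption I'm plugging in for illustration; real numbers vary wildly by product.

```python
# Back-of-envelope cost of scanner downtime.
# Throughput figures are from the post; chip price and yield are
# illustrative assumptions, not real numbers.
WAFERS_PER_HOUR = 250   # well-tuned scanner at full tilt
CHIPS_PER_WAFER = 250   # mid-range of the 200-300 figure
CHIP_PRICE = 10.0       # assumed revenue per good chip (USD)

def lost_revenue(downtime_hours: float, yield_fraction: float = 0.9) -> float:
    """Revenue foregone while the scanner sits idle."""
    chips = downtime_hours * WAFERS_PER_HOUR * CHIPS_PER_WAFER * yield_fraction
    return chips * CHIP_PRICE

print(lost_revenue(1))  # → 562500.0, i.e. over half a million USD per hour
```

Even with made-up prices, the point stands: at this throughput, every idle hour forfeits six-figure revenue, which is why job switches and recalibrations are planned so carefully.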
You see: when you are producing chips, all kinds of weird stuff happens to your wafer scanner. For example, lenses warm up over time and create optical aberrations that are specific to the pattern on the reticle you are imaging. Then, when you move to the next wafer or the next production lot, the lens cools down a little, again changing imaging performance. All of this affects defect rate and yield. Modern wafer scanners are full of sensors and actuators that can compensate for such effects, but programming the machine to use them to optimal effect is a learning process, and it is different for each reticle, and even between machines of the same type, as imaging tolerances are close to nanometer scale these days. This means that the longer you produce the same reticle on the same process technology, on the same machine, using the same wafer/resist stack, the more you learn how to control production, increasing yield and decreasing downtime. This process is continuous. Lens heating is just one example of the kind of stuff you can expect when making chips, by the way; there are all kinds of other scanner drift that may occur.
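That "learning process" can be sketched as a toy run-to-run controller: after each lot, the measured residual error feeds back into the correction applied to the next lot. This is an exponentially-weighted (EWMA) update, a standard run-to-run control scheme in semiconductor manufacturing; all numbers here are made up for illustration.

```python
# Toy run-to-run controller: each lot's measured error is blended
# into the running correction, so residual error shrinks lot by lot.
# The drift magnitude and weight are illustrative, not real values.
def ewma_update(correction: float, measured_error: float,
                weight: float = 0.3) -> float:
    """Blend the latest measurement into the running correction."""
    return correction + weight * measured_error

correction = 0.0
true_drift = 5.0  # e.g. nm of lens-heating-induced offset (made up)
for lot in range(10):
    measured_error = true_drift - correction  # residual error this lot
    correction = ewma_update(correction, measured_error)

print(round(true_drift - correction, 3))  # → 0.141, residual after 10 lots
```

The residual error decays geometrically (by a factor of 0.7 per lot here), which is the sense in which a long production run on the same reticle keeps getting better: the machine settings converge on the process.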
Now, say Apple wants 10 million A5 SoCs from Samsung. They first send engineers to Samsung and work with them to do trial runs, tweak the chip layout, the reticle, the process, and so on. After that they negotiate price tiers, for example $40 for the first million, $30 for the next, down to $10 for every chip after 5 million units. Samsung commits to delivering the chips at the negotiated price, and to delivering at least x million of them each month, with volume ramping up during the first production runs. After that, it's up to Samsung to deliver. If they run into yield problems, it's on them. If they have to shut down a whole production line that could have been churning out easy stuff such as DRAM in order to fix the issue, it's on them. If they fail to meet supply targets and have to compensate Apple in damages, it's on them.
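A quick sketch of how such tiered pricing adds up. The $40, $30 and $10 break points are the example figures above; the intermediate tier between 2 and 5 million units is my own fill-in, since the example leaves it open.

```python
# Tiered (volume-discount) pricing sketch. The $40/$30/$10 tiers come
# from the example; the $20 tier for units 2M-5M is an assumption.
TIERS = [
    (1_000_000, 40.0),    # first million at $40
    (1_000_000, 30.0),    # next million at $30
    (3_000_000, 20.0),    # assumed: units 2M-5M at $20
    (float("inf"), 10.0), # every chip past 5 million at $10
]

def total_cost(units: int) -> float:
    """Sum the cost of `units` chips across the price tiers."""
    cost, remaining = 0.0, units
    for tier_size, price in TIERS:
        in_tier = min(remaining, tier_size)
        cost += in_tier * price
        remaining -= in_tier
        if remaining <= 0:
            break
    return cost

print(total_cost(10_000_000))  # → 180000000.0, i.e. $18 average per chip
```

Note how the marginal price falls to $10 while the blended average stays higher: the early, expensive tiers effectively pay for the setup and learning costs of the run.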
In other words: taking on new business is very risky, which is why any foundry will prefer large customers that place predictable orders over small customers with different chips and unpredictable demand. In that regard, Apple is the perfect client: they need lots of the same ICs for a prolonged period, and the ICs they need are complex enough that you need a foundry such as Samsung, TSMC or GlobalFoundries to produce them in the first place; you cannot just make them yourself, as many memory manufacturers do.
World-wide there are hardly any other customers that offer foundries the same opportunities as Apple. Intel has its own fabs, AMD spun off its own foundry (GlobalFoundries), most memory manufacturers run their own fabs (DRAM is 'easy' to produce, as it's the same repetitive patterns all the time), and most other logic ICs don't get 2-year production runs; they come with erratic demand and lower complexity (hence lower margins).
Maybe this helps you understand why no foundry will voluntarily turn down Apple's business. There's much more to it than just trading 1 big client for 5 small clients that add up to the same revenue.