I think that multi-core ARM chips will one day make it to the Mac. It is more work on the software side to make applications use multiple cores effectively, but Apple has made progress in this and I'm sure there is a lot more coming. I'm very interested to see if there will be more multi-core programming techniques coming out with Lion. They already have Grand Central Dispatch and OpenCL.

When you look at the execution unit density of ARM vs Intel or AMD, the ARM chip is capable of a lot more operations per second in the same size die. It just can't perform a single stream of operations as fast, so things need to operate more in parallel on the software side. I think that Apple sees this as the future. Intel is already going multi-core because they hit the ceiling. If you are going multi-core, it makes more sense in the long run (when you don't have legacy software) to have chips with denser execution units like the ARM.

The other way things can go in the future is more execution units customized for a particular task. The ARM design still makes sense as part of the stack in that case.
I think the main reason this may just happen is I believe your phone will be your only computer in the future. Set it down next to a keyboard and monitor and you have a full desktop machine. I think we'll see this from Apple in the next five years: an iPhone capable of running iOS and desktop OS X. Microsoft is talking about porting Win 7 to ARM as well (probably for tablets, but it would give them similar capabilities).
You do realize that you are arguing that what is available right now is inferior compared to something that is imaginary and exists in the future?
No, I said the current iPad is slow and that future machines will make it look significantly worse. This becomes readily apparent when the iPad attempts to do anything non-trivial or, for that matter, render JavaScript-heavy web sites.
Quote:
I don't see a general consensus out in the market that the iPad is slow. Yes indeed it will be slower than the newer technology that is not here yet. But that is true for any technology that exists right now.
Well maybe that's because, for many users, processor performance is not the primary indicator of good performance. Look at it this way: the iPad's processor is best seen as delivering 486-class performance.
Again this is the state of evolution of all technology.
That is, Apple has to significantly increase performance just to realize its goals. iPad 1 can barely deliver on current needs, so it should be obvious that a much faster SoC is coming.
I can't remember where (my mind is long past overflowing) but I read that Apple was planning on using some technique or construct that allowed direct execution of Objective C code (without compilation or linking?).
TIA Dick
BTW, how goes the battle?
You have to dig through the various patents, but for some reason Apple has been patenting techniques for enhancing microprocessors in recent years. I don't have any links handy, but I believe the patents came around 2008 or 2009.
What I find interesting here is that Apple has been doing any research at all into this level of CPU architecture.
BIGGEST DISAPPOINTMENT: Tetris...talk about not taking advantage of the whole screen, and washed-out colors! Certainly not using the Retina Display to its advantage...very disappointed!
You're darn right about Tetris. I bought it when the price went down but the game design is seriously lacking to the point where I don't feel I got my money's worth.
A natural move. Apple is in control of their own destiny these days. They can go as proprietary and as in-house as they like, with the assurance that consumers will buy.
Further control = further strengthening of the elements that make for a superior User Experience.
Quote:
A natural move. Apple is in control of their own destiny these days. They can go as proprietary and as in-house as they like, with the assurance that consumers will buy.
Mmm. No.
They can only go as proprietary and in-house as their developers will allow. If no one makes applications for OS X, no consumers will buy Macs.
Though, I think it would be interesting to see if Apple could just go all out and make a first-party replacement for absolutely everything available.
It looks like their intentions are to supplement the ARM instruction set with instructions that accelerate the execution of Objective C.
Quote:
Originally Posted by Dick Applebaum
Dave
Can you amplify on that?
I can't remember where (my mind is long past overflowing) but I read that Apple was planning on using some technique or construct that allowed direct execution of Objective C code (without compilation or linking?).
TIA Dick
BTW, how goes the battle?
Quote:
Originally Posted by wizard69
You have to dig through the various patents but for some reason Apple has been patenting techniques for enhancing microprocessors over the recent years. I don't have any links handy but I believe the patents came around 2008 or 2009.
What I find interesting here is that Apple has been doing any research at all into this level of CPU architecture.
I just woke up -- missed my first Rose Bowl Parade in 60 years.
The best source for Apple patent information is Patently Apple: http://www.patentlyapple.com/
The site has a search capability. I did a couple of quick searches for "ARM" and "Objective C" and nothing (in the hit summaries) jumped out at me.
If you can think of additional search terms or want to peruse some of Apple's Patents, this is the site.
Jack Purcher owns the site and his wife is a patent attorney. They do a great job of finding Apple patents, and translating legalese/patentese into a language mortals can understand.
Later, I'll do a more exhaustive search -- or email Jack to see if he recalls anything along those lines.
Sigh! Coffee finished! For now, I have to go unclog the dishwasher -- our traditional New Years dinner of Nawlins' Red Beans and Rice depends on it!
The dishwasher AirGap on top of the sink was clogged -- easy fix.
I got back to the Patently Apple site and found this:
Quote:
Apple is Working on Multi-Core Processor Snoop Filtering for Macs
One common cache coherence technique involves bus snooping, in which processors broadcast memory references to each other on a dedicated bus so that data can be transferred between caches rather than accessing main memory. While bus snooping may enable cache coherence, bus snooping may also consume resources, such as power and time, and thus may reduce processor efficiency. Moreover, as the number of processors in a multi-core or multiprocessor system increases, the amount of snooping and broadcasting may increase exponentially, reducing the efficiency of such systems accordingly.
http://www.patentlyapple.com/patentl...33f454bf0b970b

Of special interest is a comment (and link) by mdriftmeyer suggesting that the presentation of this patent as an Intel-based solution may be a ruse -- he maintains that the patent could easily apply to an ARM Cortex A9.
Quote:
We'll agree to disagree on Sandy Bridge, as its design doesn't remotely resemble the ARM schematic that Apple has lifted in this patent for presentational intentions, not to mention Apple won't be designing custom controllers and patenting them to use within the Intel IP vicinity, nor with AMD for that matter.
This is a mirror of the ARM Cortex-A9 Multicore 4CPU design.
Apple has access to this Hardware IP, because they paid heavily to license it [Intel did as well which probably explains why their Atom is trying to match the ARM] and it's reasonable this patent is w/ regards to Darwin leveraging this design for the iOS Platform and their future numbered revisions of the A# Processor that includes their own GPU integrated via IMGTech.
I like how they've generalized the patent for future desktop, laptop or other general purpose systems, but it seems rather clear its immediate impact is for the A# [A5, A6 or whatever they call it] Apple Processor of the future with 4 Cores; hence the ARM A-9 Cortex as baseline to draw upon.
Posted by: Marc J. Driftmeyer | September 17, 2010 at 07:06 PM
Quote:
It looks like their intentions are to supplement the ARM instruction set with instructions that accelerate the execution of Objective C.
I would turn this around a little. Changing and adding instructions can be a huge undertaking on the order of having to redesign an entire core because of the domino effect the new transistor placements would drive.
A very limited version of changing instructions could be a combination of a compiler and decode stage change. There you can break up a new "native" instruction into its core native components for issue. That's still a hard task, but at least the domino effect is limited to a small portion of the CPU footprint. And in the end I don't know if the extra speed you would get from the decoder change would be fast enough to justify the resources spent compared to just a compiler-driven change. It would take some serious low-level analysis and simulation to answer a question like that with any certainty.
So I think the turnaround is that Apple has the ability on an instruction-by-instruction basis to determine if it is necessary to do a decoder+compiler change or just a compiler change, but even more profitably Apple can make adjustments in the SoC (low risk compared to messing with the core) to optimize I/O and graphics based on the same kind of instruction-by-instruction analysis. It is an advantage none of the other producers have or in the short to medium term could hope to match. The other guys are all producing and selling silicon to many clients, so a laser-focused optimization just doesn't make sense for them.
Quote:
I would turn this around a little. Changing and adding instructions can be a huge undertaking, on the order of having to redesign an entire core because of the domino effect the new transistor placements would drive.
A very limited version of changing instructions could be a combination of a compiler and decode stage change. There you can break up a new "native" instruction into its core native components for issue. That's still a hard task, but at least the domino effect is limited to a small portion of the CPU footprint. And in the end I don't know if the extra speed you would get from the decoder change would be fast enough to justify the resources spent compared to just a compiler-driven change. It would take some serious low-level analysis and simulation to answer a question like that with any certainty.
So I think the turnaround is that Apple has the ability on an instruction-by-instruction basis to determine if it is necessary to do a decoder+compiler change or just a compiler change, but even more profitably Apple can make adjustments in the SoC (low risk compared to messing with the core) to optimize I/O and graphics based on the same kind of instruction-by-instruction analysis. It is an advantage none of the other producers have or in the short to medium term could hope to match. The other guys are all producing and selling silicon to many clients, so a laser-focused optimization just doesn't make sense for them.
You seem to be very knowledgeable and up to speed on this!
I have a question that's been bothering me.
AFAICT, both Apple and Samsung are licensed to design and manufacture (two separate licenses) ARM CPUs.
If Apple has Samsung manufacture the A4 (and follow-on chips) -- does that mean that Samsung could use the same design in their own competitive CPUs?
The smart way to run a company is to have the ability to create something from raw materials to the finished product and have the production capabilities to do it. By doing that there would be nothing that could stop it from creating products it wants.
Some business models have all of their design and manufacturing done by others. The people in the office just do the marketing and billing. Apple is half way between the two. They design things but don't manufacture anything.
Apple needs to take the next step with its chips and create their own. They're starting out with the A4 and perhaps with their new team will get good enough at it to design chips from scratch. They've got to start somewhere.
I wish Apple would actually manufacture things. They could start off by letting Americans assemble iPods and iPhones. Then they could have Americans manufacture the cases for their computers and iDevices in the USA. In time Apple could manufacture their own circuit boards and chips in the USA. With the premium prices of Apple products, the company could afford to do these things within the USA. Just the good public relations alone would get them more sales from Americans who want to support US manufacturers.
If Foxconn employs one million people in China then Apple could eventually move that production to the USA and create at least that many jobs.
Until the cost to manufacture and assemble in the US falls below that of China this will never happen. Getting rid of the minimum wage and giving US manufacturers a tax holiday could be one way to increase US-based production, but in the age of taxation without representation I don't see this happening.
Quote:
The smart way to run a company is to have the ability to create something from raw materials to the finished product and have the production capabilities to do it. By doing that there would be nothing that could stop it from creating products it wants.
Some business models have all of their design and manufacturing done by others. The people in the office just do the marketing and billing. Apple is half way between the two. They design things but don't manufacture anything.
Apple needs to take the next step with its chips and create their own. They're starting out with the A4 and perhaps with their new team will get good enough at it to design chips from scratch. They've got to start somewhere.
I wish Apple would actually manufacture things. They could start off by letting Americans assemble iPods and iPhones. Then they could have Americans manufacture the cases to their computers and idevices in the USA. In time Apple could manufacture their own circuit boards and chips in the USA. With the premium prices of Apple products the company could afford to do these things within the USA. Just the good public relations alone would get them more sales from Americans who want to support US manufacturers.
If Foxcon employs one million people in China then Apple could eventually move that production to the USA and create at least that many jobs.
Quote:
Originally Posted by SpamSandwich
Until the cost to manufacture and assemble in the US falls below that of China this will never happen. Getting rid of the minimum wage and giving US manufacturers a tax holiday could be one way to increase US-based production, but in the age of taxation without representation I don't see this happening.
It isn't just wages and taxes -- there are a whole slew of regulations that are unfriendly to manufacturing.
While semiconductor manufacturing is relatively "clean" it still involves hazardous chemicals usage and disposal.
Sadly, because of Federal and State wage, tax, and regulatory burdens -- there is very little "silicon" manufactured in Silicon Valley.
Designing a CPU architecture from scratch is pointless and futile. Licensing the IP to improve upon the officially designed and released architecture specs from ARM is a wise investment.
OS X may not be available for installation on non-Apple computers, but it's possible.
Windows may not be made for installation on Macs, but it's possible.
If Apple makes its own architecture, it will be physically impossible within the fundamental laws of the universe for it to work.
First of all, who cares? Second of all, have you ever heard of Boot Camp? Sure you have, you just pretend it won't exist in the future.
Again, who cares. I'd go for a much better Mac, rather than a good Mac that also "does windoze". Who cares. If you need it that bad, buy a $99 windows box and call it a day.
Quote:
Designing a CPU architecture from scratch is pointless and futile. Licensing the IP to improve upon the officially designed and released architecture specs from ARM is a wise investment.
I wonder if that applies to servers as well as mobile devices.
I don't get the Objective-C optimizations in hardware. Objective-C is preprocessed down to C, so any improvements would work for all. The place to do this is in the compiler.
The API is far too big to make any application specific tweaks.
Quote:
A natural move. Apple is in control of their own destiny these days. They can go as proprietary and as in-house as they like, with the assurance that consumers will buy.
Further control = further strengthening of the elements that make for a superior User Experience.
When it comes to Macs they need, very badly, x86 compatibility. Do not underestimate this. There is just too much software out there that has to run under Windows or another environment. When you have to run such software, a VM and the right OS makes the Mac a very versatile machine.
Quote:
I would turn this around a little. Changing and adding instructions can be a huge undertaking on the order of having to redesign an entire core because of the domino effect the new transistor placements would drive.
There are all sorts of possibilities, including the possibility of using reserved coprocessor instructions. For the most part ARM has already used up most of the possible instructions, so I'm not sure what approach Apple would take.
Quote:
A very limited version of changing instructions could be a combination of a compiler and decode stage change. There you can break up a new "native" instruction into its core native components for issue. That's still a hard task, but at least the domino effect is limited to a small portion of the CPU footprint. And in the end I don't know if the extra speed you would get from the decoder change would be fast enough to justify the resources spent compared to just a compiler-driven change. It would take some serious low-level analysis and simulation to answer a question like that with any certainty.
I'm not sure this is a problem at all for Apple. Apparently they had a significant hand in AltiVec back in the PPC days.
Quote:
So I think the turnaround is that Apple has the ability on an instruction-by-instruction basis to determine if it is necessary to do a decoder+compiler change or just a compiler change, but even more profitably Apple can make adjustments in the SoC (low risk compared to messing with the core) to optimize I/O and graphics based on the same kind of instruction-by-instruction analysis.
Apple actually has a lot of IP related to GPU and/or graphics processing. They bought a whole company a few years back. Beyond that, Apple has a collection of patents related to flash memory, so yeah, they can do much outside of the CPU also.
Quote:
It is an advantage none of the other producers have or in the short to medium term could hope to match. The other guys are all producing and selling silicon to many clients, so a laser focused optimization just doesn't make sense for them.
I just see Apple patenting a lot of technology that frankly isn't of much use unless you are doing a lot of heavy design in and around the CPU core. Unfortunately I can't remember where I saw the patents. As to Patently Apple, I've been to the site a couple of times and frankly the search mechanism sucks.
Quote:
I just see Apple patenting a lot of technology that frankly isn't of much use unless you are doing a lot of heavy design in and around the CPU core. Unfortunately I can't remember where I saw the patents. As to Patently Apple, I've been to the site a couple of times and frankly the search mechanism sucks.
I agree with the Patently Apple search -- I'll email Jack and see what he can do to add an advanced search -- simple && || ! and "" exact match would add a lot of utility.
I downloaded Ponon! (free version @ http://itunes.apple.com/us/app/ponon...409576993?mt=8, Deluxe version @ http://itunes.apple.com/us/app/ponon...402642577?mt=8) and this is way, way, way better than Tetris in gameplay and design. My wife can't stop playing this.
Further control = further strengthening of the elements that make for a superior User Experience.
A natural move. Apple is in control of their own destiny these days. They can go as proprietary and as in-house as they like, with the assurance that consumers will buy.
Mmm. No.
They can only go as proprietary and in-house as their developers will allow. If no one makes applications for OS X, no consumers will buy Macs.
Though, I think it would be interesting to see if Apple could just go all out and make a first-party replacement for absolutely everything available.
It looks like their intentions are to supplement the ARM instruction set with instructions that accelerate the execution of Objective C.
Dave
Can you amplify on that?
I can't remember where (my mind is long past overflowing) but I read that Apple was planning on using some technique or construct that allowed direct execution of Objective C code (without compilation or linking?).
TIA Dick
BTW, how goes the battle?
You have to dig through the various patents but for some reason Apple has been patenting techniques for enhancing microprocessors over the recent years. I don't have any links handy but I believe the patents came around 2008 or 2009.
What I find interesting here is that Apple has been doing any research at all into this level of CPU architecture.
I just woke up -- missed my first Rose Bowl Parade in 60 years.
The best source fore Apple Patent information is Patently Apple:
http://www.patentlyapple.com/
The site has a search capability. I did a couple of quick searches for "ARM" and "Objective C" and nothing (in the hit summaries) jumped out at me.
If you can think of additional search terms or want to peruse some of Apple's Patents, this is the site.
Jack Purcher owns the site and his wife is a patent attorney. They do a great job of finding Apple patents, and translating legalese/patentese into a language mortals can understand.
Later, I'll do a more exhaustive search -- or email Jack to see if he recalls anything along those lines.
Sigh! Coffee finished! For now, I have to go unclog the dishwasher -- our traditional New Years dinner of Nawlins' Red Beans and Rice depends on it!
The dishwasher AirGap on top of the sink was clogged -- easy fix.
I got back to the Patently Apple site and found this:
Apple is Working on Multi-Core Processor Snoop Filtering for Macs
One common cache coherence technique involves bus snooping, in which processors broadcast memory references to each other on a dedicated bus so that data can be transferred between caches rather than accessing main memory. While bus snooping may enable cache coherence, bus snooping may also consume resources, such as power and time, and thus may reduce processor efficiency. Moreover, as the number of processors in a multi-core or multiprocessor system increases, the amount of snooping and broadcasting may increase exponentially, reducing the efficiency of such systems accordingly.
http://www.patentlyapple.com/patentl...33f454bf0b970b
Of special interest is a comment (and link) by mdriftmeyer that the presentation of this patent as Intel-based solution may be a ruse -- he maintains that the patent could easily apply to an ARM Cortex A9.
We'll agree to disagree on Sandy-Bridge as it's design doesn't remotely resemble the ARM schematic that Apple has lifted in this patent for presentational intentions, not to mention Apple won't be designing custom controllers and patenting them to use within the Intel IP vicinity, nor with AMD for that matter.
This is a mirror of the ARM Cortex-A9 Multicore 4CPU design.
Apple has access to this Hardware IP, because they paid heavily to license it [Intel did as well which probably explains why their Atom is trying to match the ARM] and it's reasonable this patent is w/ regards to Darwin leveraging this design for the iOS Platform and their future numbered revisions of the A# Processor that includes their own GPU integrated via IMGTech.
I like how they've generalized the patent for future desktop, laptop or other general purpose systems, but it seems rather clear it's immediate impact is for the A# [A5, A6 or whatever they call it] Apple Processor of the future with 4 Cores; hence the ARM A-9 Cortex as baseline to draw upon.
Posted by: Marc J. Driftmeyer | September 17, 2010 at 07:06 PM[/IMG]
It looks like their intentions are to supplement the ARM instruction set with instructions that accelerate the execution of Objective C.
I would turn this around a little. Changing and adding instructions can be a huge undertaking on the order of having to redesign an entire core because of the domino effect the new transistor placements would drive.
A very limited version of changing instructions could be a combination of a compiler and decode stage change. There you can break up a new "native" instruction into it's core native components for issue. That's still a hard task, but at least the domino effect is limited to a small portion of the CPU footprint. And in the end I don't know if the extra speed you would get from the decoder change would be fast enough to justify the resources spent compared to just a compiler driven change. It would take some serious low level analysis and simulation to answer a question like that with any certainty.
So I think the turnaround is that Apple has the ability on an instruction-by-instruction basis to determine if it is necessary to do a decoder+compiler change or just a compiler change, but even more profitably Apple can make adjustments in the SoC (low risk compared to messing with the core) to optimize I/O and graphics based on the same kind of iinstruction-by-instruction analysis. It is an advantage none of the other producers have or in the short to medium term could hope to match. The other guys are all producing and selling silicon to many clients, so a laser focused optimization just doesn't make sense for them.
I would turn this around a little. Changing and adding instructions can be a huge undertaking on the order of having to redesign an entire core because of the domino effect the new transistor placements would drive.
A very limited version of changing instructions could be a combination of a compiler and decode stage change. There you can break up a new "native" instruction into it's core native components for issue. That's still a hard task, but at least the domino effect is limited to a small portion of the CPU footprint. And in the end I don't know if the extra speed you would get from the decoder change would be fast enough to justify the resources spent compared to just a compiler driven change. It would take some serious low level analysis and simulation to answer a question like that with any certainty.
So I think the turnaround is that Apple has the ability on an instruction-by-instruction basis to determine if it is necessary to do a decoder+compiler change or just a compiler change, but even more profitably Apple can make adjustments in the SoC (low risk compared to messing with the core) to optimize I/O and graphics based on the same kind of iinstruction-by-instruction analysis. It is an advantage none of the other producers have or in the short to medium term could hope to match. The other guys are all producing and selling silicon to many clients, so a laser focused optimization just doesn't make sense for them.
You seem to be very knowledgeable and up to speed on this!
I have a question that's been bothering me.
AFAICT, Both Apple and Samsung are licensed to Design and Manufacture (two separate licenses) ARM CPUs.
If Apple has Samsung manufacture the A4 (and follow-on) chips) -- does that mean that Samsung could use the same design in their competitive CPUs?
The smart way to run a company is to have the ability to create something from raw materials to the finished product and have the production capabilities to do it. By doing that there would be nothing that could stop it from creating products it wants.
Some business models have all of their design and manufacturing done by others. The people in the office just do the marketing and billing. Apple is half way between the two. They design things but don't manufacture anything.
Apple needs to take the next step with its chips and create their own. They're starting out with the A4 and perhaps with their new team will get good enough at it to design chips from scratch. They've got to start somewhere.
I wish Apple would actually manufacture things. They could start off by having Americans assemble iPods and iPhones, then have Americans manufacture the cases for their computers and iDevices in the USA. In time, Apple could manufacture their own circuit boards and chips in the USA. With the premium prices of Apple products, the company could afford to do these things within the USA. The good public relations alone would get them more sales from Americans who want to support US manufacturers.
If Foxconn employs one million people in China, then Apple could eventually move that production to the USA and create at least that many jobs.
Until the cost to manufacture and assemble in the US falls below that of China this will never happen. Getting rid of the minimum wage and giving US manufacturers a tax holiday could be one way to increase US-based production, but in the age of taxation without representation I don't see this happening.
It isn't just wages and taxes -- there are a whole slew of regulations that are unfriendly to manufacturing.
While semiconductor manufacturing is relatively "clean," it still involves hazardous chemical usage and disposal.
Sadly, because of Federal and State wage, tax, and regulatory burdens, there is very little "silicon" manufactured in Silicon Valley.
You're blindly missing the point.
OS X may not be available for installation on non-Apple computers, but it's possible.
Windows may not be made for installation on Macs, but it's possible.
If Apple makes its own architecture, neither will be possible at all.
First of all, who cares, second of all, have you ever heard of boot camp? Sure you have, you just pretend it won't exist in the future.
Again, who cares. I'd go for a much better Mac, rather than a good Mac that also "does windoze". Who cares. If you need it that bad, buy a $99 windows box and call it a day.
Your posts are just whack.
Designing a CPU architecture from scratch is pointless and futile. Licensing the IP and improving upon the officially designed and released architecture specs from ARM is a wise investment.
I wonder if that applies to servers as well as mobile devices.
The API is far too big to make any application-specific tweaks.
A natural move. Apple is in control of their own destiny these days. They can go as proprietary and as in-house as they like, with the assurance that consumers will buy.
Further control = further strengthening of the elements that make for a superior User Experience.
When it comes to Macs, they very badly need x86 compatibility. Do not underestimate this. There is just too much software out there that has to run under Windows or another environment. When you have to run such software, a VM and the right OS make the Mac a very versatile machine.
I would turn this around a little. Changing and adding instructions can be a huge undertaking, on the order of having to redesign an entire core, because of the domino effect the new transistor placements would drive.
There are all sorts of possibilities, including using reserved coprocessor instructions. ARM has already used up most of the available encoding space, though, so I'm not sure what approach Apple would take.
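For context on that coprocessor space: in the classic 32-bit (A32) ARM encoding, the CDP/MCR/MRC coprocessor instructions live where bits [27:24] of the instruction word are 0b1110, with the coprocessor number in bits [11:8]. A rough sketch of spotting that space, simplified to ignore the unconditional (cond = 0b1111) forms and the LDC/STC load/store coprocessor instructions:

```python
# Simplified check for the classic A32 coprocessor instruction space
# (CDP/MCR/MRC). Ignores unconditional encodings and LDC/STC.

def in_coproc_space(word):
    """True if bits [27:24] of a 32-bit A32 word are 0b1110."""
    return (word >> 24) & 0xF == 0b1110

def coproc_num(word):
    """Coprocessor number field, bits [11:8]."""
    return (word >> 8) & 0xF

# Build a word by hand: cond=AL (0b1110), coprocessor space, cp10.
word = (0xE << 28) | (0xE << 24) | (10 << 8)
print(hex(word), in_coproc_space(word), coproc_num(word))
```

ARM reserved some coprocessor numbers for its own extensions (VFP famously sits on cp10/cp11), which is why "free" slots for a licensee's custom instructions are scarcer than the raw field width suggests.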
Quote:
A very limited version of changing instructions could be a combination of a compiler and a decode-stage change. There you can break a new "native" instruction into its core native components for issue. That's still a hard task, but at least the domino effect is limited to a small portion of the CPU footprint. And in the end I don't know if the extra speed from the decoder change would justify the resources spent compared to a compiler-driven change alone. It would take some serious low-level analysis and simulation to answer a question like that with any certainty.
I'm not sure this is a problem at all for Apple. Apparently they had a significant hand in AltiVec back in the PPC days.
Quote:
So I think the turnaround is that Apple can determine, on an instruction-by-instruction basis, whether a change needs both decoder and compiler or just the compiler; but even more profitably, Apple can make adjustments in the SoC (low risk compared to messing with the core) to optimize I/O and graphics based on the same kind of instruction-by-instruction analysis.
Apple actually has a lot of IP related to GPUs and graphics processing; they bought a whole company a few years back. Beyond that, Apple has a collection of patents related to flash memory, so yes, they can do a lot outside the CPU as well.
Quote:
It is an advantage none of the other producers have, or in the short to medium term could hope to match. The other guys are all producing and selling silicon to many clients, so laser-focused optimization just doesn't make sense for them.
I just see Apple patenting a lot of technology that frankly isn't of much use unless you are doing a lot of heavy design in and around the CPU core. Unfortunately I can't remember where I saw the patents. As for Patently Apple, I've been to the site a couple of times and frankly the search mechanism sucks.
I agree about the Patently Apple search -- I'll email Jack and see what he can do to add an advanced search -- simple && || ! operators and "" exact match would add a lot of utility.
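The && / || / ! / "" semantics being asked for would behave roughly like this tiny matcher. This is just a sketch of the operator behavior, not anything the Patently Apple site actually runs, and the precedence choice (|| splits groups, && joins terms within a group) is an assumption.

```python
# Tiny sketch of &&, ||, ! and "" exact-match query semantics.
# || splits the query into alternative groups; && joins terms inside
# a group; ! negates a term. For simplicity, both quoted and unquoted
# terms are case-insensitive substring matches against the text.

def matches(text, query):
    hay = text.lower()
    for group in query.split("||"):        # any one group may match
        ok = True
        for raw in group.split("&&"):      # every term in it must match
            term = raw.strip()
            negated = term.startswith("!")
            if negated:
                term = term[1:].strip()
            term = term.strip('"').lower() # "" marks an exact phrase
            if (term in hay) == negated:
                ok = False
                break
        if ok:
            return True
    return False

print(matches("Apple A4 SoC design", '"A4" && soc'))   # a hit
print(matches("Intel roadmap", 'arm || !intel'))       # not a hit
```

Even this much would cover the common case of narrowing patent searches to, say, documents mentioning one term while excluding another.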