Yeah, honestly I'm going to have to disagree with wizard. If $599 got you an i3, then yes, I'd agree. I do think, though, that perhaps you should at least get an i7 dual-core instead of an i5, but ah well.
That makes less of a difference than you'd think, as all the dual-core chips have Hyper-Threading enabled, which was one of the big i5 vs. i7 selling points. It's only the quad-core i5s where it is disabled, and those are desktop chips only (along with their E3 Xeon equivalents).
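The split being described can be sketched as a small table. This is a minimal sketch with illustrative entries for that era's lineup, not an exhaustive list; check Intel's ARK database for any specific part:

```python
# Rough sketch of the Hyper-Threading split described above; entries are
# illustrative examples of the era's lineup, not an exhaustive list.
CHIPS = {
    "mobile i5 (dual-core)":  {"cores": 2, "hyperthreading": True},
    "mobile i7 (dual-core)":  {"cores": 2, "hyperthreading": True},
    "desktop i5 (quad-core)": {"cores": 4, "hyperthreading": False},
    "desktop i7 (quad-core)": {"cores": 4, "hyperthreading": True},
}

def logical_threads(spec):
    # Hyper-Threading exposes two logical threads per physical core
    return spec["cores"] * (2 if spec["hyperthreading"] else 1)

for name, spec in sorted(CHIPS.items()):
    print(name, "->", logical_threads(spec), "threads")
```

So a dual-core i5 and a dual-core i7 both present four threads to the OS, which is why the HT distinction carries less weight in this segment.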
Quote:
Originally Posted by Winter
Yeah, honestly I'm going to have to disagree with wizard. If $599 got you an i3, then yes, I'd agree. I do think, though, that perhaps you should at least get an i7 dual-core instead of an i5, but ah well.
A stiff price tag? Are you kidding? Compared to the other crap out there, it is surely worth the money. $599.00 for a PC that is versatile is a no-brainer.
Guys, in the business world they simply don't care about the specifics of the processor. Mac failure in business has more to do with unsupportable hardware and a lack of configurability for the task at hand. It is the IT department's job to keep basic hardware common and upgrade it to a job's specifics.
Would it be possible (down the line) to have a base model quad-core and a mid-level hex-core?
Of course. Intel tends to filter things down, but much of their effort has gone into IGPs. Perhaps with Broadwell or later, the 35W configurations will be predominantly quad-core models.
Right now, for most consumer-type applications and even many business applications, four cores work out pretty well. Intel's weak spot right now is the GPU, and as such I expect them to focus on significant improvements there for the next couple of years.
Quote:
Originally Posted by Winter
Would it be possible (down the line) to have a base model quad-core and a mid-level hex-core?
To look at this another way: if Apple and Intel ever get OpenCL working on Intel's GPUs under Mac OS, then you will immediately have many more cores to use for apps that can exploit them. GPU computing is a big thing that some apps can really leverage, and as such it needs to be supported on Intel-only hardware. The endgame is full heterogeneous computing and the ability to run a broader array of code effectively on a GPU.
At this point in time I'd prefer four cores and OpenCL support on Intel's GPUs, preferably a more advanced Intel GPU. It is the best way to address a broad array of performance needs on SoC-like systems.
I'm thinking sometime after 14nm. Intel can use the transition to 14nm to implement the type of GPU they really need, and that frankly Apple really needs for the Retina machines. After 14nm they can look at more x86 cores. This could be years out, though; it could be 2018 or later.
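To make "many more cores for apps that can exploit them" concrete, here is the shape of the data-parallel model OpenCL exposes, mocked up in plain Python. This is a sketch of the semantics only; real OpenCL kernels are written in OpenCL C and dispatched through the OpenCL runtime, and the kernel below is a made-up example:

```python
# GPU computing in a nutshell: one small "kernel" runs independently per
# element (a "work-item" in OpenCL terms), so extra GPU cores translate
# directly into throughput for code written this way.
def kernel(x):
    # toy per-pixel computation: a 1.5x brightness boost with saturation
    return min(255, (x * 3) // 2)

def run_kernel(data):
    # a GPU would execute these work-items in parallel, one per lane;
    # here we just model the semantics serially
    return [kernel(x) for x in data]

print(run_kernel([100, 200, 250]))  # -> [150, 255, 255]
```

Code written in this element-independent style is exactly what heterogeneous computing can move onto whichever processor has the most idle lanes.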
Wattage will become very interesting in the future. Right now leakage is a huge problem; if they can control that, we could see some rather interesting 35-watt-class processors. Six cores may even be doable at 12 to 17 watts in the not-too-distant future. It really comes down to just how well the processes can control that leakage. Even so, I still see Intel putting a lot of effort into the GPU for the next revision or two after Haswell.
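As a back-of-the-envelope illustration of why leakage is the limiter, chip power is roughly dynamic switching power plus static leakage. The formula is the standard first-order CMOS model; the parameter values plugged in below are invented for illustration, not Intel data:

```python
# First-order CMOS power model: P = C * V^2 * f + V * I_leak
# All parameter values below are made up for illustration.
def chip_power_watts(switched_cap_farads, volts, freq_hz, leakage_amps):
    dynamic = switched_cap_farads * volts ** 2 * freq_hz  # switching power
    static = volts * leakage_amps                         # leakage power
    return dynamic + static

# hypothetical core: 1 nF effective switched capacitance at 1.0 V and
# 2 GHz, with 0.5 A of leakage current
print(chip_power_watts(1e-9, 1.0, 2e9, 0.5))
```

The static term burns power even when cores sit idle, so a process that cuts leakage is precisely what would make six cores plausible inside a 12 to 17 watt envelope.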
The 13" rMBP is going to become what the 13" uMBP was for so many years and be their biggest seller; I can almost feel it. Intel's graphics will just keep getting better and better, and that includes machines such as the mini. I still stand by the position that the 15" rMBP and the iMac should always have discrete graphics, and I hope Apple agrees.
Quote:
Originally Posted by wizard69
I'm thinking sometime after 14nm. Intel can use the transition to 14nm to implement the type of GPU they really need, and that frankly Apple really needs for the Retina machines. After 14nm they can look at more x86 cores. This could be years out, though; it could be 2018 or later.
Wattage will become very interesting in the future. Right now leakage is a huge problem; if they can control that, we could see some rather interesting 35-watt-class processors. Six cores may even be doable at 12 to 17 watts in the not-too-distant future. It really comes down to just how well the processes can control that leakage. Even so, I still see Intel putting a lot of effort into the GPU for the next revision or two after Haswell.
Well, they did release one 35W quad with Sandy, and the trend continued with Ivy. The earlier Ivys had a 35W version with a different SKU, but so far Intel hasn't released many 35W quad CPUs. They could be the 45W versions simply clocked lower.
Quads are certainly possible in the lower-power variants, but I still see a strong demand for much better graphics in machines like the Mini or Airs. Thus I suspect big vendors like Apple are still pushing Intel to drive GPU improvements at the expense of the x86 complex. Let's face it, Intel's GPUs still effectively suck and don't even compete with what AMD offers on year-old chips. If Intel does address the GPU, it will have a dramatic impact on low-end hardware. This is good stuff.
On the flip side, running an x86 core in an SoC doesn't really take a lot of power these days. The support circuitry and GPU are sucking up a great deal of power, especially the cache memory and the interface to main memory. So who knows, maybe the transition to 14nm will leave Intel with a massive surplus of transistors.
Quote:
Originally Posted by wizard69
Quads are certainly possible in the lower-power variants, but I still see a strong demand for much better graphics in machines like the Mini or Airs. Thus I suspect big vendors like Apple are still pushing Intel to drive GPU improvements at the expense of the x86 complex. Let's face it, Intel's GPUs still effectively suck and don't even compete with what AMD offers on year-old chips. If Intel does address the GPU, it will have a dramatic impact on low-end hardware. This is good stuff.
Well, it has been that way for some time now. Expanding upon the capability of an iPad or MacBook Air can address a lot of people. Intel talks things up quite a bit, but they seem to be putting more effort into power management and integrated graphics. Their E/EP variants have gone the other way: those are going further and further on core counts, with Ivy Bridge EP supposedly going as high as 12 per chip, up from 8 with Sandy. I had a leaked slide link before, but they all basically predict the same thing. A 50% increase in max core count is significant, although I could probably never afford a machine based on one without building it myself. I personally hope that more new tools start to leverage GPGPU functionality, as it's well suited to highly parallel workloads.
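The "highly parallel workloads" caveat is the whole game: extra cores only speed up the fraction of a job that actually parallelizes, and Amdahl's law gives the ceiling. A quick sketch (the 90% figure is just an example workload, not a measurement):

```python
# Amdahl's law: best-case speedup on n cores when a fraction p of the
# work parallelizes (the remaining 1 - p stays serial).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a job that is 90% parallel falls well short of linear scaling:
for cores in (8, 12):
    print(cores, "cores ->", round(amdahl_speedup(0.90, cores), 2), "x")
```

Going from 8 to 12 cores (the rumored 50% bump) buys only about 20% more speed on that 90%-parallel job, which is why GPGPU, where the parallel fraction is close to 1, is where big core counts really pay off.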
Quote:
On the flip side, running an x86 core in an SoC doesn't really take a lot of power these days. The support circuitry and GPU are sucking up a great deal of power, especially the cache memory and the interface to main memory. So who knows, maybe the transition to 14nm will leave Intel with a massive surplus of transistors.
I wish I knew enough about engineering to write a decent response to that.
Quote:
Well, it has been that way for some time now. Expanding upon the capability of an iPad or MacBook Air can address a lot of people.
This is why it is a widely known "secret" that Apple has been pushing Intel hard with respect to GPU performance. Relative to the rest of the industry, Intel's GPUs have been just terrible, both hardware- and driver-wise. Industry pressure from basically all sides has driven Intel to rectify these issues. At this point Intel actually has pretty good drivers for just about all hardware except, oddly, Apple hardware. Even in open-source land they have basically gone from trailing the pack to taking the lead position.
Quote:
Intel talks things up quite a bit, but they seem to be putting more effort into power management and integrated graphics.
They do talk a good game, but that has hurt them a lot. People still have a negative attitude with respect to Intel drivers and GPU hardware. Some of that is well earned over the years.
Quote:
Their E/EP variants have gone the other way. Those are going further and further on core counts, with Ivy Bridge EP supposedly going as high as 12 per chip, up from 8 with Sandy. I had a leaked slide link before, but they all basically predict the same thing. A 50% increase in max core count is significant, although I could probably never afford a machine based on one without building it myself. I personally hope that more new tools start to leverage GPGPU functionality, as it's well suited to highly parallel workloads.
Right. If you don't have to worry about supporting a GPU, you are basically freeing up space to add another complete set of processors. Just look at the space taken up by GPUs on Intel or AMD APUs.
Quote:
I wish I knew enough about engineering to write a decent response to that.
Well, if you still have that link, look at the chips with 10 to 12 cores on them. They aren't hugely larger than today's APUs in some cases. Ultimately the size of the cache and supporting circuitry does impact die size. Those EP chips also start out at about the same wattage range as top-end desktop chips. You are paying extra to get cores that all run at a much higher top clock rate without the heavy throttling seen in some of Intel's desktop chips.
As to a product one would like to have: sure, I'd go for one if I could afford it. There is light at the end of the tunnel, though, as ARM-based servers will soon be putting Intel under a lot of pressure. The cost of hardware will drop along with a significant reduction in power usage in the data center. Expect ARM servers to be a big hit if they deliver on the promise of power savings. The fact is, the data center for the most part doesn't concern itself with the name on the box nor the Intel Inside sticker. So maybe a 12-core Mac on your desk won't be out of reach in two years or so.
I am hoping that Apple would never consider putting integrated graphics in anything more than the base model 21.5" iMac, and even then discrete graphics should be a BTO option. GT3e or not, not at $1,299. At $999, yes.
Mac failure? Try Asus or Lenovo and then you will see poor quality control. I owned both, and they both crapped out on me within a short period of time.
http://www.hybridmemorycube.org/
http://electronicdesign.com/memory/hybrid-memory-cube-shows-new-direction-high-performance-storage
The last link states that the spec has been finalized. Initially this product seems to be destined for servers, but I see it as having great benefit for the Mac Pro at introduction. Once mass production kicks in, technology like this would be awesome in Mini-class machines and maybe even more importantly in Apple laptops. The technology offers two huge benefits for Apple hardware: reduced power usage and much faster transfer speeds. This could greatly reduce the need for GT3-like buffer chips in the processor package.
The transfer speeds are important because both Intel and AMD have issues with the bottleneck to main memory for their APUs. This isn't the only attempt at faster memory systems, but it does have very broad interest in the engineering community. There are some big players publicly involved, including ARM; the real question is whether Apple and Intel are involved. Interesting concepts pop up with this technology, including the thought of iPhones with 8GB of main memory (RAM).
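The scale of that bottleneck is easy to put numbers on. Peak bandwidth for a conventional DDR interface is transfers per second times bus width times channel count; the HMC figure used below is the kind of headline aggregate number the consortium has floated, quoted here only for rough comparison:

```python
# Peak theoretical bandwidth of a conventional DDR3 memory interface.
def ddr_bandwidth_gbs(megatransfers_per_s, channels, bus_bytes=8):
    # MT/s * 8 bytes per 64-bit transfer * channel count -> GB/s
    return megatransfers_per_s * bus_bytes * channels / 1000.0

dual_channel_ddr3_1600 = ddr_bandwidth_gbs(1600, 2)
print(dual_channel_ddr3_1600)  # 25.6 GB/s, shared by the CPU and the IGP

# Headline aggregate bandwidth floated for Hybrid Memory Cube
# (illustrative figure; see the consortium link above for real numbers):
hmc_headline_gbs = 160.0
print(hmc_headline_gbs / dual_channel_ddr3_1600)  # several times the headroom
```

That 25.6 GB/s pool is what today's APU graphics starve on, which is why a memory technology an order of magnitude faster changes the calculus for integrated GPUs.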
Maybe the cost is too high to make it. Still, you can always dream.
Apple wants to save money by doing this to their graphics. They care more about themselves than their customers.
Quote:
Originally Posted by marvfox
Apple wants to save money by doing this to their graphics. They care more about themselves than their customers.
That conclusion cannot be drawn from that premise, and the premise is faulty.
Then they are undercutting sales of those models, I feel.