
Posts by nht

The 2.6GHz mini benches really well for the price compared to the rest of the lineup. Stick even a crappy GPU in the mini that'll run Resolve and other GPU-dependent apps and it's a significant iMac challenger. http://barefeats.com/imac12p2.html
  http://daringfireball.net/linked/2013/01/22/mac-mini     I have no idea why Gruber picked this out today.  Did Kasper pee in his Cheerios, or did something happen to the mini? 
  I'd rather have the 630M over the HD4000...
  Hopefully not too ideal, or we won't get it. :)   The only reason the mini lost discrete graphics is that it's become a CPU powerhouse.  The only thing the iMac does significantly better is anything GPU-accelerated.   If the $799 2.3GHz Core i7 mini had the GT 630M, I think a lot of folks wouldn't be buying the iMac, especially given the lack of supply.  Heck, even as an extra $100 BTO option for the top-end $899 2.6GHz Core i7 model to push it to $999 and a lot...
    Ah right, I must have transposed those sizes in my memory.  My bad.       The Medfield has an SGX 540 at 400MHz, which is slow.   I would expect that Apple would insist on being able to spec the GPU in any agreements...especially given that Intel is currently using PowerVR SGX in its own designs.     An Apple design win puts Intel on top, and Intel knows this.  That, combined with the volume, means that Apple has a lot of leverage in negotiations.   Again,...
  My recollection is that the original FCP was a bit buggy and incomplete as well.  And FCP X was pretty much rebuilt from the ground up and is up to 10.0.7.  The features added (or re-added, as the case may be) aren't consumer features either.   Who puts in that kind of effort just to pull out?   Plus, Boris released their plugin pack not long ago.  At $995 I guess they're bullish that there will be enough pro users of FCP X to have made that effort worthwhile.
  core die size       So I'm guessing you like the odds?   Of course Intel is playing catch-up on power consumption; hence the 2016 date rather than a 2013 date.  To me it looks a little like 2003, with Intel looking like it has a great future roadmap even though AMD has been kicking the crap out of them.
  Intel seems to have a non-EUV path to 10nm with quad-patterning immersion lithography.     Everything is iffy at these sizes, and the end of Moore's Law could be on the horizon.  Then again, that's been said before too and yet, here we are.  I guess when we max out photolithography then technically we're at the end of Moore's Law, but the exponential increase in computing capability could continue anyway.   Of the companies listed here as the major players...
  The Cortex A15 is more power efficient than the A9 for the same work, but at load it's a lot more power hungry at the top end.  When both the CPU and GPU are ramped, the Exynos 5 Dual's peak TDP is closer to 8W.     http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown/14   Want to bet the A50 won't continue this trend of higher peak TDP? Apps, especially games, are more than willing to use all the CPU and GPU power available.   As Anand points out, Intel showed...
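
  A rough way to see the distinction drawn above between "more efficient for the same work" and "more power hungry at peak": energy per task is average power multiplied by time, so a core with a higher instantaneous draw can still come out ahead if it finishes the job sooner. The short sketch below uses made-up wattages and runtimes purely for illustration; they are not measured Exynos or Anandtech figures.

    # Illustrative only: hypothetical numbers, not measured data.
    def energy_joules(watts: float, seconds: float) -> float:
        """Energy consumed for a task = average power * time."""
        return watts * seconds

    # Same hypothetical workload on a slower, lower-draw core and a
    # faster, higher-draw core.
    a9_energy = energy_joules(watts=1.5, seconds=10.0)   # slower, lower peak draw
    a15_energy = energy_joules(watts=3.0, seconds=4.0)   # faster, higher peak draw

    print(f"A9-class:  {a9_energy:.1f} J at 1.5 W peak")
    print(f"A15-class: {a15_energy:.1f} J at 3.0 W peak")
    # The faster core uses less total energy (12 J vs 15 J) even though
    # its peak draw is twice as high.

  Peak TDP is the separate constraint the post is pointing at: even if total energy per task drops, the device still has to supply and dissipate the higher instantaneous draw when both CPU and GPU are ramped.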
  You mean like in 2005?   The wall is that they don't want to do business with Samsung, and TSMC/GF/UMC/etc. have problems getting to 16nm/14nm FinFET in volume production, while Intel already has 3 fabs doing 14nm in volume for a year and is knocking on the door of 10nm, with 450mm wafers only a couple more years out. 