Intel at CES to show off next-gen of Apple-bound Sandy Bridge processors


Comments

  • Reply 101 of 136
    backtomacbacktomac Posts: 4,579member
    Quote:
    Originally Posted by macdad View Post


    Yes, excellent advice! Indeed the CPU is not really the only bottleneck (although some audio tasks could definitely benefit from a quad-core CPU). The reason I hadn't maxed the RAM before was that I was advised a while ago that 2x1GB was better in some ways than 2GB+1GB, true or false?



    The Seagate Momentus hybrid drive seems like an awesome idea; I was not aware of its existence. After a quick search I found a review by someone with the same-generation black MacBook as mine reporting day-and-night performance improvements with it, although he said installation was "far from plug&play" and I'm not really sure what he means by that. Also, could you please elaborate on the "issues" you have heard about them?



    I am really leaning towards these upgrade options (3GB RAM + Seagate Momentus hybrid), as SSDs are still really expensive in my country and I don't want to invest too much since I am still planning on upgrading to the next 13" MBP refresh, which hopefully will come with an SSD as standard.



    For possible Seagate issues see here.
  • Reply 102 of 136
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by Tailpipe View Post


    @Wiz, nht, bactomac, marv etc.



    Thank you!







    I'm not putting my money on a case redesign for the next MBP refresh.



    I actually think the potential for a major overhaul is rather significant. On the other hand there isn't a lot that one can do with a laptop case. That is why I suggest focusing on what could potentially be in the case.

    Quote:

    I think it will be no more than a speedbump.



    This I disagree with; from a marketing standpoint, Core 2 Duos are getting a little old to market in anything called a Pro.

    Quote:

    I truly do not think that Apple will remove the onboard DVD drive from the 13" MBP. In many uninformed people's eyes this would reduce the distinction between the MBP and MBA.



    I'm really not sure where Apple will go with respect to the optical drive in the 13" MBP. Whatever they do, though, it will have nothing to do with trying to distinguish it from the Air. The number one point is the much faster processor in the MBP; even today it is much faster, despite a bunch of misleading benchmarking. It wouldn't be impossible for an upgraded 13" MBP to have 3 to 4 times the CPU performance of an Air. It all depends upon what they put in there.



    The question of course is how important better performance is to you.

    Quote:

    I still don't understand the difference between the Sandy Bridge chips that will arrive in January and the additional ones that will arrive in April. Given the length of time that has passed since the last refresh, I suspect that January will be new model time.



    I've only followed Sandy Bridge in general terms but this is a major upgrade of the internals. It is possible that the units coming later will be even more optimized for power.



    As to January updates, in the Apple world nothing is a given! There are all sorts of reasons why they might hold off new models. Everything from LightPeak to AMD chips could be in the cards. In any event, it is the middle of December and we are only hearing rumors about iPad updates. I think that says a lot right there.
  • Reply 103 of 136
    MarvinMarvin Posts: 15,445moderator
    Quote:
    Originally Posted by nht View Post


    I don't think the 320M can do PhysX.



    It should do, the requirements are 32 cores + 256MB VRAM; the 320M has 48 cores. PhysX used to work on the 9400M before NVidia bumped up the requirements - I was able to run it with Unreal Tournament 3. It allows destructible environments in certain games:



    http://www.youtube.com/watch?v=QD8tA1Vo-XU



    That video is more or less how it plays on the 320M regarding the environment destruction.



    Quote:
    Originally Posted by nht View Post


    I also don't quite understand the contention that you can't do advanced shader techniques in OpenGL via the OpenGL Shading Language. Graphics is not my area of specialty so perhaps you could explain what I'm missing. OpenCL is more open to general implementation but I would expect GLSL to be able to do any shader effect that a GPU is physically capable of doing.



    GLSL seems to be similar to RSL (RenderMan) in that shader code gets executed at certain stages of a graphics pipeline, e.g. per vertex, per pixel, etc. That normally means you store intermediate data in predefined caches such as image buffers, and you have to use shadeops (which is roughly what OpenCL is) to extend the capabilities. Some of the main advantages of OpenCL over GLSL given in the following SIGGRAPH paper are:



    Scattered writes

    Local memory

    Thread synchronization

    Atomic memory operations



    http://sa09.idav.ucdavis.edu/docs/SA09_GL_interop.pdf
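


    To give a rough idea of what those four items look like in practice, here is a minimal OpenCL C sketch - a hypothetical 256-bin histogram kernel, not taken from the paper, written against the OpenCL 1.1 built-ins - that uses local memory, thread synchronization, atomics and scattered writes:

        __kernel void histogram256(__global const uchar *pixels,
                                   __global uint *bins)          /* scattered writes to any bin */
        {
            __local uint local_bins[256];                        /* on-chip local memory */
            uint lid = get_local_id(0);

            for (uint i = lid; i < 256; i += get_local_size(0))  /* zero the work-group histogram */
                local_bins[i] = 0;
            barrier(CLK_LOCAL_MEM_FENCE);                        /* thread synchronization */

            atomic_inc(&local_bins[pixels[get_global_id(0)]]);   /* atomic memory operation */
            barrier(CLK_LOCAL_MEM_FENCE);

            for (uint i = lid; i < 256; i += get_local_size(0))  /* scatter the partial result out */
                atomic_add(&bins[i], local_bins[i]);
        }

    A fragment shader can only write to its own pixel in its render targets, which is why this sort of kernel needs OpenCL (or CUDA) rather than GLSL.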



    From NVidia:



    "What are the advantages of CUDA vs. graphics-based GPGPU?



    CUDA is designed from the ground-up for efficient general purpose computation on GPUs. Developers can compile C for CUDA to avoid the tedious work of remapping their algorithms to graphics concepts.



    CUDA exposes several hardware features that are not available via graphics APIs. The most significant of these is shared memory, which is a small (currently 16KB per multiprocessor) area of on-chip memory which can be accessed in parallel by blocks of threads. This allows caching of frequently used data and can provide large speedups over using textures to access data. Combined with a thread synchronization primitive, this allows cooperative parallel processing of on-chip data, greatly reducing the expensive off-chip bandwidth requirements of many parallel algorithms. This benefits a number of common applications such as linear algebra, Fast Fourier Transforms, and image processing filters.



    Whereas fragment programs in graphics APIs are limited to outputting 32 floats (RGBA * 8 render targets) at a pre-specified location, CUDA supports scattered writes - i.e. an unlimited number of stores to any address. This enables many new algorithms that were not possible using graphics APIs to perform efficiently using CUDA.



    Graphics APIs force developers to store data in textures, which requires packing long arrays into 2D textures. This is cumbersome and imposes extra addressing math. CUDA can perform loads from any address.



    CUDA also offers highly optimized data transfers to and from the GPU."



    http://forums.nvidia.com/index.php?s...0&#entry478583



    It does note that you don't write directly to the framebuffer, so you have to make sure your graphics operations don't introduce overhead vs GLSL.
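


    In practice that usually means letting the kernel write into a shared GL buffer instead of the framebuffer. A rough sketch in C of that hand-off (the helper name and arguments are just illustrative; it assumes the CL context was created with GL sharing enabled and that the queue, kernel and VBO already exist):

        #ifdef __APPLE__
        #include <OpenCL/opencl.h>
        #include <OpenGL/gl.h>
        #else
        #include <CL/cl.h>
        #include <CL/cl_gl.h>
        #include <GL/gl.h>
        #endif

        /* Hypothetical helper: let an OpenCL kernel fill a shared GL vertex buffer,
           then hand it back to GL for drawing. */
        static void fill_vbo_with_opencl(cl_context ctx, cl_command_queue queue,
                                         cl_kernel kernel, GLuint vbo, size_t num_vertices)
        {
            cl_int err;
            cl_mem cl_vbo = clCreateFromGLBuffer(ctx, CL_MEM_WRITE_ONLY, vbo, &err);

            glFinish();                                    /* GL must be done with the buffer */
            clEnqueueAcquireGLObjects(queue, 1, &cl_vbo, 0, NULL, NULL);

            clSetKernelArg(kernel, 0, sizeof(cl_mem), &cl_vbo);
            clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &num_vertices, NULL, 0, NULL, NULL);

            clEnqueueReleaseGLObjects(queue, 1, &cl_vbo, 0, NULL, NULL);
            clFinish(queue);                               /* CL must finish before GL draws */

            clReleaseMemObject(cl_vbo);
            /* the caller then draws the VBO with glDrawArrays() as usual */
        }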



    OpenCL is also device agnostic, so the same kernels can run on whatever hardware is available and the app can pick the best device at runtime.
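


    The "agnostic" part is on the host side: you ask the runtime which devices exist and run the same kernels on whichever one you pick. A minimal sketch in C (error handling stripped; the fallback policy is just an example):

        #include <stdio.h>
        #ifdef __APPLE__
        #include <OpenCL/opencl.h>
        #else
        #include <CL/cl.h>
        #endif

        /* Prefer a GPU, fall back to the CPU if none is available. */
        int main(void)
        {
            cl_platform_id platform;
            cl_device_id device;
            clGetPlatformIDs(1, &platform, NULL);

            if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
                clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

            char name[128];
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("Running OpenCL kernels on: %s\n", name);

            cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
            /* ...build program, create command queue, enqueue kernels... */
            clReleaseContext(ctx);
            return 0;
        }

    On a Mac with a 320M this picks the GeForce; on a hypothetical IGP-only Sandy Bridge machine the same binary would just land on the CPU device, since Snow Leopard already exposes one.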



    You're right though, GLSL is very powerful and can do complex effects; OpenCL is just a different design that allows you to go beyond its capabilities. The following site has some experiments:



    http://machinesdontcare.wordpress.co.../10/07/opencl/



    He recently reached an interesting conclusion:



    http://machinesdontcare.wordpress.co...asariley-glsl/



    "Well, after all that messing around with OpenCL kernels, it turns out it?s much easier, and more importantly, much faster, to do it in GLSL."



    The following demo also shows GLSL being used for physics computation and it seems to run faster than OpenCL:



    http://www.youtube.com/watch?v=anNClcux4JQ



    In the end though, being able to work without the limits of the GPU APIs is better as it lets you do things like the GPU video encoding you see with Badaboom. This can't be done nearly as fast on the CPU and can't be done at all with GLSL.



    Quote:
    Originally Posted by nht View Post


    Yah, it's basically Intel grabbing low hanging fruit to say "See...that GPGPU stuff isn't worth the complexity". I think for some market segments that might be true.



    Users of Motion, FCE, etc probably aren't going to be happy with any IGP. Users of iMovie on a MacBook probably aren't going to miss the OpenCL performance delta if the new MacBook is faster than the old MacBook.



    Given that GLSL can handle the bulk of graphics computation, and fixed-function encoding hardware can encode popular codecs much faster than even GPGPU can currently, why we need OpenCL in the low-end is certainly a valid question. But still, Intel's GPUs have always shown a significant lack of OpenGL support and performance, so even GLSL etc. runs poorly vs NVidia/AMD.



    Preview benchmarks of the Sandy Bridge GPU came out slower than the NVidia GPU we have now so gamers take a step back while people using the CPU take a step up. Kind of a zero-sum game because you just end up pissing off a different group of people who wonder why their upgrade is slower and less compatible with 3D apps and games.



    I think Apple are going to have no choice but to go with i-series + dedicated. They can go with an i3 and it would be an improvement over C2D in terms of power consumption. It won't be a huge jump like an i5 but it differentiates it from the 15". I originally thought they'd do this just for the marketing:



    i3 = 13", i5=15", i7=17"



    I think it has to be cost, more than anything, that rules out the i5 + 330M. Sony put this combo in a 13" along with a Blu-Ray drive and quad RAID-0 SSD but it was expensive (~$2000). Apple starts the MBP line much lower than that.



    If they started $100 higher at $1299 and pushed out the 13" MBA while keeping a similar design, that might work out OK. I think it would be the most sensible option going forward, giving consumers the best value while maintaining competitive hardware.
  • Reply 104 of 136
    nhtnht Posts: 4,522member
    Quote:
    Originally Posted by macdad View Post


    The reason I hadn't maxed the RAM before was that I was advised a while ago that 2x1GB was better in some ways than 2GB+1GB, true or false?



    It's been a while but I think someone benchmarked this and 2GB + 1GB ended up being better anyway even if the dual channel aspect was lost.
  • Reply 105 of 136
    solipsismsolipsism Posts: 25,726member
    Ask your Sandy Bridge questions now at AnandTech: http://www.anandtech.com/show/4056/a...questions-here





    Quote:
    Originally Posted by nht View Post


    It's been a while but I think someone benchmarked this and 2GB + 1GB ended up being better anyway even if the dual channel aspect was lost.



    I think macsales.com used to test the RAM speed between different pairings of RAM modules, but, as you state, it’s pretty much a moot point if you need more RAM as virtual RAM will be a huge step backwards.



  • Reply 106 of 136
    john.bjohn.b Posts: 2,742member
    So the Sandy Bridge CPUs are out (engadget.com), but no mention of OpenCL support via the IGP...
  • Reply 107 of 136
    solipsismsolipsism Posts: 25,726member
    Quote:
    Originally Posted by John.B View Post


    So the Sandy Bridge CPUs are out (engadget.com), but no mention of OpenCL support via the IGP...



    They are looking good, too, in every way except OpenCL support. Would Apple not use OpenCL for their next line of 11” to 13” notebooks after using it in every Mac for a couple years and likely in the next iOS-based iDevices? I’m not so sure they will.
  • Reply 108 of 136
    emacs72emacs72 Posts: 356member
    Quote:
    Originally Posted by solipsism View Post


    They are looking good, too, in every way except OpenCL support.



    OpenCL 1.x support from Intel is still in an alpha state: http://software.intel.com/en-us/arti...el-opencl-sdk/



    By the time Intel's OpenCL support matures we'll be seeing Ivy Bridge (a die shrink of the Sandy Bridge architecture), slated for release in 2012, which introduces quad-core CPUs to the entry-level market segment.
  • Reply 109 of 136
    MarvinMarvin Posts: 15,445moderator
    Quote:
    Originally Posted by solipsism View Post


    They are looking good, too, in every way except OpenCL support. Would Apple not use OpenCL for their next line of 11” to 13” notebooks after using it in every Mac for a couple years and likely in the next iOS-based iDevices? I’m not so sure they will.



    I wouldn't expect them to abandon OpenCL support but there's no harm really in having it only supported on the CPU in the low-end. That's what Intel are planning to do.



    Right now, the GPU in the low-end is about 10x faster than the CPU for OpenCL compute. Sandy Bridge will very likely bring that down to under 5x, given that we have a two-generation-old C2D now. So while it is a downgrade, entry users won't notice it all that much, and probably not at all for most GPU apps, as they will use GLSL.



    More game demos show that the Sandy Bridge IGP comes out close to what we have now:



    http://www.youtube.com/watch?v=tRJ34LC-PUQ

    http://blog.laptopmag.com/intel-sand...#axzz19ySTw3ws



    There is an AES benchmark which was crazy fast. They have optimised that process massively so things like making encrypted disk images or whatever should be much quicker.
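


    Assuming that result comes from the AES-NI instructions (which Sandy Bridge carries over from Westmere on most parts), the speedup is easy to see: each AESENC instruction performs a complete AES round. A hypothetical sketch in C with the compiler intrinsics, key expansion omitted:

        #include <emmintrin.h>
        #include <wmmintrin.h>   /* AES-NI intrinsics; build with -maes on gcc/clang */

        /* Encrypt one 16-byte block with AES-128. rk[] holds the 11 expanded
           round keys; the key schedule itself is left out for brevity. */
        static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
        {
            block = _mm_xor_si128(block, rk[0]);           /* initial AddRoundKey */
            for (int i = 1; i < 10; i++)
                block = _mm_aesenc_si128(block, rk[i]);    /* one full AES round per instruction */
            return _mm_aesenclast_si128(block, rk[10]);    /* final round, no MixColumns */
        }

    Ten hardware rounds per block instead of a pile of table lookups is where the crazy numbers come from, so encrypted disk images and similar workloads should see a comparable jump.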



    We'll be able to see how it compares using a better OpenCL benchmarking tool like the Lux GPU rendering engine:



    http://www.macupdate.com/app/mac/33632/smallluxgpu



    Apple can still go the dedicated GPU route at the low-end. AMD have some very low-powered dedicated GPUs. A 4-thread i5 Mini with one of the mobile Radeon 6-series chips and 512MB VRAM would be a very nice Christmas present. Add in an SSD alongside a supplementary HDD, take out the optical drive, shrink it down a bit, and you have one awesome little machine.



    The update to expect for now is just the Macbook Pro and possibly the iMac. They use dedicated chips for the most part. It will be interesting to see what they do with the 13" MBP.
  • Reply 110 of 136
    MarvinMarvin Posts: 15,445moderator
    Here are my results from the OpenCL benchmark comparing the 320M to a 2.4GHz Core 2 Duo that's used throughout the low-end. Numbers show rays/sec on CPU+GPU, then GPU-only and finally % speedup GPU gives vs CPU:



    luxball - 1.5M, 800K, +15%
    luxball chrome - 1.5M, 1.1M, +275%
    sponza - 590K, 380K, +80%
    BMW - 1.5M, 670K, -20%
    cat - 2.7M, 2.2M, +440%
    instances - 220K, 120K, +20%
    glasstable - 1.2M, 650K, +18%



    In most cases, the 320M matches the C2D CPU in terms of performance. What this would imply is that if the SB processor is 2x the C2D, which it pretty much is, then running OpenCL on the SB CPU alone would come out round about the same in most cases as a C2D + 320M.



    This is not an upgrade of course, which is what you'd hope for, but not being a significant downgrade might persuade Apple to switch over to it in the low-end.



    This only affects MB, MBA and Mini users who aren't going to be getting stellar performance in the area of computation anyway and those people would benefit much more from having as much as 4x faster AVC encoding.



    Any issues we have now will be alleviated in 12 months with Ivy Bridge anyway.
  • Reply 111 of 136
    solipsismsolipsism Posts: 25,726member
    Quote:
    Originally Posted by Marvin View Post


    I wouldn't expect them to abandon OpenCL support but there's no harm really in having it only supported on the CPU in the low-end. That's what Intel are planning to do.



    If they drop the GPU from the small Mac notebooks you look to save quite a bit of power from the 320M, not to mention space, heat considerations. For instance, the 11” MBA has a 10W TDP C2D and 320M. I can’t find solid info on the TDP of the 320M but I assume it’s more than 10W and Sandy Bridge’s Turbo Mode only uses 17W max. Everything seems to indicate very real power savings, which have proven to be very important to Apple.
  • Reply 112 of 136
    john.bjohn.b Posts: 2,742member
    Quote:
    Originally Posted by Marvin View Post


    I wouldn't expect them to abandon OpenCL support but there's no harm really in having it only supported on the CPU in the low-end. That's what Intel are planning to do.



    I see what you're saying, but wasn't the whole point of OpenCL to offload certain processes from the CPU to the GPU? Would there really be a gain if the CPU is throttled back somewhat in order that the same CPU can handle OpenCL processing calls (which were supposed to be done on the GPU in the first place)? Why not just skip OpenCL in that case and do all the processing the "normal" way, on the CPU itself?
  • Reply 113 of 136
    Can we stop complaining about Intel's integrated graphics now? The reviews are out. Here's Anandtech's comparison of gaming performance, which includes a Geforce 320M Macbook.



    And OpenCL? **** OpenCL. Get Apple/Intel to provide a QuickSync driver and demand software makers support it.
  • Reply 114 of 136
    MarvinMarvin Posts: 15,445moderator
    Quote:
    Originally Posted by solipsism View Post


    If they drop the GPU from the small Mac notebooks you look to save quite a bit of power from the 320M, not to mention space, heat considerations. For instance, the 11” MBA has a 10W TDP C2D and 320M. I can’t find solid info on the TDP of the 320M but I assume it’s more than 10W and Sandy Bridge’s Turbo Mode only uses 17W max. Everything seems to indicate very real power savings, which have proven to be very important to Apple.



    There's a stat somewhere that puts the 320M at 11W and the old Intel IGP at 7W. SB has a few other power saving features like killing idle peripherals. Overall I'd expect it to be pretty good in this regard.



    Quote:
    Originally Posted by John.B


    I see what you're saying, but wasn't the whole point of OpenCL to offload certain processes from the CPU to the GPU?



    No, it was just about performance. People saw how many processing units went into GPUs and that they were largely unused. The idea really is to more than double the processing power of your machine for free. In the case of the Lux GPU app above, it does that.



    It really doesn't matter where the processing happens as long as it's fast.



    Quote:
    Originally Posted by FuturePastNow


    Can we stop complaining about Intel's integrated graphics now? The reviews are out. Here's Anandtech's comparison of gaming performance, which includes a Geforce 320M Macbook.



    It doesn't look like it. Although the low-quality modes are performing well, the next page with the mid-level quality settings shows it performing slower than the 320M as well as showing visual artifacts due to compatibility issues - that's not good for GPU-supported pro apps, nor is it a fair test of performance if the games don't look the same. Also:



    "both Mafia II and Metro 2033 fail to get above 30FPS, regardless of setting"



    Mafia 2 plays around 30FPS at 1024 x 768 and medium quality on the 320M. It's perfectly playable:



    http://www.youtube.com/watch?v=xvOW3GzWAeo



    Perhaps they are being overly harsh with their review but if it is in fact unplayable then that's not a good start. I'd actually not buy a machine solely based on the playability of that game as it's one of the best games ever made.



    Still, like I say, all this won't matter when Ivy Bridge hits, and going SB-alone will only seem like a side-step GPU-wise for most. If Apple used dedicated chips throughout the whole lineup, there would be no complaints at all.
  • Reply 115 of 136
    macdadmacdad Posts: 16member
    Not to mention those AnandTech SB tests were made using the top-of-the-line quad-core SV 3.4GHz (i7-2820QM) chip! I think we can expect significantly lower performance from the actual SB chips that will be used in the next MBP refresh. I really think/hope Apple has no choice but to incorporate dedicated graphics if it doesn't want to alienate gamers!
  • Reply 116 of 136
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by stevetim View Post


    Correct me if I'm wrong but this appears to be a very nominal upgrade.



    There is not a massive increase in general purpose performance but the processor should provide strong performance in many specific cases. GPU performance is up significantly.



    For some Macs these processors represent a significant update so I wouldn't sell them short.
  • Reply 117 of 136
    solipsismsolipsism Posts: 25,726member
    Quote:
    Originally Posted by wizard69 View Post


    There is not a massive increase in general purpose performance but the processor should provide strong performance in many specific cases. GPU performance is up significantly.



    For some Macs these processors represent a significant update so I wouldn't sell them short.



    Looking at the AnandTech review alone this looks like a very impressive update.
  • Reply 118 of 136
    backtomacbacktomac Posts: 4,579member
    Quote:
    Originally Posted by FuturePastNow View Post


    Can we stop complaining about Intel's integrated graphics now? The reviews are out. Here's Anandtech's comparison of gaming performance, which includes a Geforce 320M Macbook.



    And OpenCL? **** OpenCL. Get Apple/Intel to provide a QuickSync driver and demand software makers support it.



    Gotta admit that I didn't expect the new Intel IGP to perform that well. I was expecting more like 9400M-level performance.



    If only they were OCL capable, that would alleviate all my concerns about using them. Still, they will probably work out just fine in the Macs that end up using them.
  • Reply 119 of 136
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by solipsism View Post


    Looking at the AnandTech review alone this looks like a very impressive update.



    Yeah great; test the GPU at the lowest settings and then declare it awesome. There is so much fudge in AnandTech's articles you have to wonder how much Intel is paying him.



    That isn't to dismiss some of SB's good points, but many of those good points require rewritten software to take advantage of the new features. Running your old software, the results are much more of a mixed bag. A 10% boost might not be worth it for some people, especially if the GPU tanks at higher quality levels.



    By the way, SB ought to really shine in a Mac Mini used as an HTPC. I'm just not convinced that overall performance will satisfy everybody. The addition of a discrete GPU will make SB a more interesting processor, but then you do away with its low-cost nature. In the end I just don't want to jump to conclusions based on obviously cooked benchmarks.
  • Reply 120 of 136
    solipsismsolipsism Posts: 25,726member
    Quote:
    Originally Posted by wizard69 View Post


    Yeah great; test the GPU at the lowest settings and then declare it awesome. There is so much fudge in AnandTech's articles you have to wonder how much Intel is paying him.



    That isn't to dismiss some of SB's good points, but many of those good points require rewritten software to take advantage of the new features. Running your old software, the results are much more of a mixed bag. A 10% boost might not be worth it for some people, especially if the GPU tanks at higher quality levels.



    I’ve never read an AnandTech article that made me think they were getting paid to speak favourably. What previous reviews have they done that were fudged?



    Quote:

    By the way, SB ought to really shine in a Mac Mini used as an HTPC. I'm just not convinced that overall performance will satisfy everybody. The addition of a discrete GPU will make SB a more interesting processor, but then you do away with its low-cost nature. In the end I just don't want to jump to conclusions based on obviously cooked benchmarks.



    I don’t get this, on two points. 1) An HTPC doesn’t have to be a supercomputer, and the Mac Mini with the C2D+320M is more than sufficient for streaming or decoding 1080p MKV files while pushing to an HDTV. 2) The Mac Mini as an HTPC is an expensive but poor option unless you’re using a UI designed for it, which the Mac Mini does not come with. Front Row is not an HTPC UI, it’s a 10-foot UI, but it barely has the parts needed and still requires a keyboard/mouse to operate half the time.