Sandy Bridge CPU preview at Anandtech

13 Comments

  • Reply 41 of 63
    backtomac Posts: 4,579 member
    Quote:
    Originally Posted by FuturePastNow View Post


    Maybe, but even with OpenCL, the percentage of code that can run on a GPU is very small.



    Maybe. I don't know, as I'm not a programmer.



    Ars had an interesting article a while back on GPUs. Apparently Wall Street is one industry with big demand for higher-end GPUs. They do their financial modeling on the GPU.



    From the outside looking in, it just doesn't seem like SW developers are taking advantage of OCL and its potential. Here is an interesting view of the CPU/GPU battle.
  • Reply 42 of 63
    Quote:
    Originally Posted by backtomac View Post


    Maybe. I don't know, as I'm not a programmer.



    Ars had an interesting article a while back on GPUs. Apparently Wall Street is one industry with big demand for higher-end GPUs. They do their financial modeling on the GPU.



    From the outside looking in, it just doesn't seem like SW developers are taking advantage of OCL and its potential. Here is an interesting view of the CPU/GPU battle.



    I'm not a programmer, either, but I know that GPUs are essentially floating point engines. They run that FP code very well, and it's parallelizable, so doubling the number of units in the GPU doubles the speed. But they don't run integer code, which makes up most of the programs we use.



    Stuff like video and music encoding can benefit from that (the work of video decoding is already offloaded to the GPU). Some modeling and scientific computing tasks can, and some can't. It just depends. Typical home computer stuff like word processing and web browsing doesn't benefit at all, except insofar as the web contains a lot of video.



    GPGPU is great for what it's great for, and not much else. I'm sure that explanation is flawed and someone will correct me, but for now that's my story and I'm sticking to it.



    Oh, and I've read that AMD's Fusion strategy is part of a long-term plan to significantly change the processor. The first gen Fusion is a CPU and GPU on the same die but otherwise separate entities. The second gen will give the CPU scheduler the option of sending FP code to the GPU instead of to the CPU's FP unit. The third gen will eliminate the FP hardware entirely, leaving an integer core attached to all those GPU stream processors. That's essentially what the Ars article you linked to is talking about, though they didn't speculate quite so far as others have.
  • Reply 43 of 63
    Quote:
    Originally Posted by backtomac View Post


    No. I think tablets will diminish the demand for machines with IGPs.



    Which is a bit odd since tablets are machines with IGPs.
  • Reply 44 of 63
    Quote:
    Originally Posted by Programmer View Post


    Which is a bit odd since tablets are machines with IGPs.



    Ok. You got me there.
  • Reply 45 of 63
    Quote:
    Originally Posted by FuturePastNow View Post


    I'm not a programmer, either, but I know that GPUs are essentially floating point engines.



    I suppose this is the place for me to comment, then.



    Calling them floating point engines isn't all that useful or informative. Modern CPUs are floating point engines too. The difference is that GPUs are highly parallel throughput engines. Large volume data parallel computation is usually in floating point these days, so usually people talk about FLOPS and that's what they are optimized for... but you could do integer calculations as well. What they aren't so good at is code where the flow of operations is highly variable for each element of data. As long as the same operations are being applied to each element of data, and these operations are more or less independent of each other, then a GPU is probably going to do well.



    The GPU is highly optimized to increase the number of data elements computed per unit time, instead of minimizing the time to compute each unit element.



    Think of the CPU as a sports car. It can take 2-4 people somewhere really fast. If you want to take 8 people, however, you need to make two trips or you need 2 cars.



    The GPU, on the other hand, is more like a bus. Or a train. Or an ocean liner. It can take dozens, if not hundreds or thousands, of people to the same destination by the same route all at once. Each person takes longer to get there, but in the end far more people are moved in the same amount of time. If you only want to take one person, it's far more efficient to use the CPU.
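
    To make the "same operations applied to each element" idea concrete, here's a rough sketch in plain C (my own illustration, nothing official). The loop applies one independent operation to every element; in OpenCL the loop body would become a kernel and each iteration its own work-item, which is exactly the shape of work a GPU is built for.

        #include <stddef.h>

        /* Apply the same gain to every pixel. No iteration depends on any other,
         * so the work can be spread across as many GPU units as you have.
         * In OpenCL C this body would be a kernel and i would come from
         * get_global_id(0). */
        void brighten(float *pixels, size_t n, float gain)
        {
            for (size_t i = 0; i < n; i++)
                pixels[i] *= gain;
        }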



    Quote:

    They run that FP code very well, and it's parallelizable, so doubling the number of units in the GPU doubles the speed. But they don't run integer code, which makes up most of the programs we use.



    It depends on what you mean by "integer code". If you literally mean "doing calculations on integers" then GPUs can do it too, and their parallelism helps. Usually, however, "integer code" is used to mean complex logic that does things in serial fashion with lots of conditions, branches, function calls, dispatch tables, etc. GPUs fall over badly on this sort of thing... largely because they can't do lots of it in parallel.



    You're right that most of the lines of code are of this nature. Fortunately, most of the lines of code executed aren't. Since data-parallel computations (media-oriented stuff... video, audio, 3D) apply the same lines of code to each element of the data, and there is lots and lots of data, certain lines of code are executed very, very frequently. These lines therefore take most of the time, and these are the ones you want to make go faster. Doing many of them at the same time means you get a lot done in the time it takes to do just one. And therein lies the power of parallel processing.
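
    A rough worked example of why those hot lines dominate (numbers invented purely for illustration): if 90% of a program's run time is spent in a data-parallel loop and the GPU makes that loop 20x faster, the overall speedup is 1 / (0.10 + 0.90/20) ≈ 6.9x. The untouched serial 10% quickly becomes the ceiling, which is just Amdahl's law at work.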



    A comment on "doubling the number of units in the GPU doubles the speed": sorry, wish that were true. Sadly, very often these days the bottleneck is not how fast the hardware can compute an operation... it is instead how quickly it can retrieve a piece of data from memory in order to operate on it. The two biggest factors in performance of most algorithms are (1) how long it takes a piece of data in memory to arrive at the core after you ask for it, and (2) how many pieces of data can be sent from memory to core (or the other way) per second. These are called memory latency and memory bandwidth. And they are very often the gating factor of performance. They are why CPUs have enormous caches and have brought memory controllers on-die. They are why GPUs have faster and faster GDDR memories, wider busses and more VRAM. The GPU builders could double the number of units in their device, but if they didn't improve the memory subsystem they would usually see a zero increase in performance of most algorithms.



    Quote:

    Stuff like video and music encoding can benefit from that (the work of video decoding is already offloaded to the GPU).



    The decoding of certain codecs is offloaded to fixed function hardware. That hardware is hard-wired to handle specific codecs, however, so if somebody comes up with a new codec then decoding that should be able to leverage GPGPU.



    Quote:

    Some modeling and scientific computing tasks can, and some can't. It just depends.



    The majority of the time it can; it's "just" a matter of figuring out how to do it and doing it. I don't want to minimize the effort involved, but I do want to point out that OpenCL can make it a whole lot more achievable. Most long-running tasks run long because there is lots of data, as opposed to lots of code. Occasionally tasks are such that you can't calculate the next value until you've calculated the previous one, and in those cases you're usually stuck working serially and the GPU doesn't do you any good. Most of the time, though, you can find something to do in parallel.
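
    A small sketch of that serial-dependency case next to the parallel one (again my own illustration, plain C):

        #include <stddef.h>

        /* Serial recurrence: each value needs the one just computed, so the
         * iterations cannot be handed out to thousands of GPU units at once. */
        void running_decay(float *x, size_t n, float a)
        {
            for (size_t i = 1; i < n; i++)
                x[i] += a * x[i - 1];
        }

        /* Independent per-element work: same shape as the earlier sketch,
         * and it parallelizes trivially. */
        void scale_all(float *x, size_t n, float a)
        {
            for (size_t i = 0; i < n; i++)
                x[i] *= a;
        }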



    Quote:

    Typical home computer stuff like word processing and web browsing doesn't benefit at all, except insofar as the web contains a lot of video.



    This is usually because the software is bottlenecked on other things. Bringing data from the network or disk (which takes milliseconds to seconds), for example. Most GUI apps actually spend most of their time waiting for the user to do something.





    Quote:

    GPGPU is great for what it's great for, and not much else. I'm sure that explanation is flawed and someone will correct me, but for now that's my story and I'm sticking to it.



    You're mostly correct. What you're missing though is that we often don't do calculations because in the past it was impractical. Sliding windows and icons smoothly around on the screen, morphing them in and out using a genie effect, and so on... it slowed the machine down. Now the machine can do these things "in its spare time" and they can be used to enhance (or derail) the user experience. Historically iPhoto's faces feature would have been hopelessly impractical because the user would have had to wait... now it happens in the background so fast they generally don't notice. Word processors now check your spelling and grammar on-the-fly because they have nothing else to do between your keystrokes. So plenty of opportunity exists to improve the experience by putting computing power to work. OpenCL makes this computing more accessible than it has been in the past, and that means that more developers will have the time and ability to put it to innovative uses.



    Quote:

    Oh, and I've read that AMD's Fusion strategy is part of a long-term plan to significantly change the processor. The first gen Fusion is a CPU and GPU on the same die but otherwise separate entities. The second gen will give the CPU scheduler the option of sending FP code to the GPU instead of to the CPU's FP unit. The third gen will eliminate the FP hardware entirely, leaving an integer core attached to all those GPU stream processors. That's essentially what the Ars article you linked to is talking about, though they didn't speculate quite so far as others have.



    I remain skeptical about how that will actually work. I agree that the CPU and GPU will continue to merge, but I don't think it'll happen quite as described. The blending process will be even more interesting.
  • Reply 46 of 63
    See, I knew Programmer would reply to that post and do a much better job than I could.
  • Reply 47 of 63
    wizard69 Posts: 13,377 member
    I do need to point out a few things.



    For one, modern web browsers can benefit from GPU acceleration. We could argue about the wisdom of doing 3D and some of the other new browser technologies, but they do benefit from GPU acceleration. People who use the WebKit nightlies have seen how these new technologies benefit from acceleration.



    There are instances where SIMD processing can benefit integer operations. The problem is the same as with FP, in that you need to be able to organize data in such a way that the massively parallel capabilities of a GPU can be used.



    Programmer hit upon one of the biggest problems of the day: the movement of data on a chip. There is a speed and, more importantly, a power (as in heat) penalty for moving data around. A single FP operation these days is very cheap heat-wise; pushing data a few mm across a chip uses a lot of energy by comparison.



    The big issue in my mind isn't the limited usefulness of the GPU for random FP calculation. That is a reality that is fairly well understood. The issue is that the hardware is already there and won't go away in the future. In other words, it is silly not to use the hardware where there is an advantage.



    The other thing, looking forward, is that the manufacturers don't have to maintain the limited capabilities of the GPU ALUs. In fact I'd have to say that engineers are freer to adjust the GPU to future needs than the x86 segment of a chip. AMD seems to have a real long-term plan, so I'm giving them a lot of credit. There is always the question of the success of their approach, but at least they are trying to innovate. If it works well we will be seeing a lot of performance out of smaller chips. By smaller I mean lower transistor count, as overlapping functionality is limited to the parts of the chip where it is done best.



    Well, we can hope. As a side note, process shrinks mean more transistors for free, which AMD can turn into other functional hardware.





    Dave
  • Reply 48 of 63
    Quote:
    Originally Posted by wizard69 View Post


    For one, modern web browsers can benefit from GPU acceleration.



    Certainly 3D, image manipulation, video, and audio "in the web browser" benefit enormously from GPU acceleration. The other parts of web browsing (parsing HTML, network activity, running javascript) not so much. Perhaps one day we will see a way that web apps running in the browser can use GPGPU, but until then...



    Quote:

    There are instances where SIMD processing can benefit integer operations. The problem is the same as with FP, in that you need to be able to organize data in such a way that the massively parallel capabilities of a GPU can be used.



    Yes, absolutely... in fact I've seen cases of much bigger gains in integer computations from using SIMD than is typical for floating point computations. It is highly dependent on the particular SIMD instruction set, but PowerPC AltiVec and the Cell processor have some great integer capabilities.
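
    Since the thread is about x86 parts, here's a rough illustration of the same integer-SIMD idea using SSE2 intrinsics rather than AltiVec or Cell (my own example; the principle is the same). One instruction adds 16 bytes at a time with saturation, which is where the big integer gains come from when the data is laid out for it.

        #include <emmintrin.h>  /* SSE2 intrinsics */
        #include <stddef.h>

        /* Saturating add of two byte arrays, 16 elements per instruction. */
        void add_saturate_u8(unsigned char *dst, const unsigned char *a,
                             const unsigned char *b, size_t n)
        {
            size_t i = 0;
            for (; i + 16 <= n; i += 16) {
                __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
                __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
                _mm_storeu_si128((__m128i *)(dst + i), _mm_adds_epu8(va, vb));
            }
            for (; i < n; i++) {  /* scalar tail for leftover elements */
                unsigned int s = (unsigned int)a[i] + b[i];
                dst[i] = (unsigned char)(s > 255 ? 255 : s);
            }
        }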



    Quote:

    The big issue in my mind isn't the limited usefulness of the GPU for random FP calculation. That is a reality that is fairly well understood. The issue is that the hardware is already there and won't go away in the future. In other words, it is silly not to use the hardware where there is an advantage.



    There is one consideration that's worth mentioning... if the hardware isn't in use, it doesn't burn nearly as much power or generate as much heat. That's why Apple's MacBook Pros have both an integrated and a discrete GPU and use the discrete one selectively depending on workload. If you don't need the horsepower, don't use it and save the battery or the environment.



    On the other hand, if you have a computation that can be GPU accelerated then it is often more power efficient to use the GPU. This is because the performance/watt of a GPU is quite high for problems that the GPU is good at.



    Quote:

    The other thing, looking forward, is that the manufacturers don't have to maintain the limited capabilities of the GPU ALUs. In fact I'd have to say that engineers are freer to adjust the GPU to future needs than the x86 segment of a chip.



    Very true. The x86 ISA is complex and there is a great deal of code written that depends on all of its details and would break if Intel or AMD were to make radical changes. The changes they do make are adopted slowly. With GPUs there are very few pieces of code written directly to the device's ISA -- the driver and the shader (& OpenCL C) compilers. All of which are under control of the company building the hardware. Thus they have a great deal of agility. What's more they aren't stuck with the traditional programming models... the shader and OpenCL programming models are brand new and still evolving quickly. They also have carefully chosen constraints that leave the hardware guys more flexibility.
  • Reply 49 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by Programmer View Post


    ......... What's more they aren't stuck with the traditional programming models... the shader and OpenCL programming models are brand new and still evolving quickly. They also have carefully chosen constraints that leave the hardware guys more flexibility.



    This last little point of yours is why I would like to believe that AMD has a fighting chance to build Fusion processors that are significantly better than what Intel can offer up. For one, they have more skilled GPU engineers, plus they have demonstrated in the past a willingness to take x86 in new directions.



    Fusion is pretty ambitious, and even if they don't reach their final goal, the steps along the path to that end each have benefits. It will be very interesting to see how the first run of Fusion chips do with the integrated GPU. Will these implementations actually be faster than the equivalent GPU hardware off-chip? It isn't an easy question to pursue, as the memory structure and other things are significantly different.



    It sounds like I'm pulling for the underdog here, but I really believe the potential is there for AMD to be in the driver's seat. If nothing else, they will give Intel a few headaches.
  • Reply 50 of 63
    AMD had the momentum during the transition to x86-64. Once Intel was allowed to copy AMD, Intel did better in the end. I don't think Intel will be able to do it this time with APU/Fusion technology. Not sure how far Intel is behind, but AMD should thrive in the near future unless Intel has something we have not seen. The APU/Fusion technology seems to fit well with future needs for consumer appliances and computing.



    But... Intel can flex its muscles out of desperation to steer the market elsewhere. Nothing we haven't seen before. The best products aren't the most successful, and you know this already if you're one of the Apple faithful.



    Would AMD's APU/Fusion make its way into the Apple TV/iPad/iBook/Mac mini? Or would the A4 evolve further?
  • Reply 51 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by bitemymac View Post


    AMD had the momentum during the transition to x86-64. Once Intel was allowed to copy AMD, Intel did better in the end.



    There is this perception in the marketplace that Intel is so much better than AMD. I don't think that is the case; they are marginally better for some workloads, but not all. Certainly not enough to justify the price differential.



    What I do believe is that Intel could bury AMD by simply turning up the clock rate on many of its chips. They don't do this for fear of another clock rate race.

    Quote:

    I don't think Intel will be able to do it this time with APU/Fusion technology. Not sure how far Intel is behind, but AMD should thrive in the near future unless Intel has something we have not seen.



    I suspect that this is very true for the low end and very low power markets. At the desktop end of things it isn't so clear because many people still look for CPU performance even if in the end going for GPU performance is a better deal.

    Quote:

    The APU/Fusion technology seems to fit well with future needs for consumer appliances and computing.



    Remember there are multiple devices under the Fusion umbrella that serve different markets. However, the Bobcat-based Ontario could go into a wide range of products, especially if it comes in quad-core. It should be suitable for everything from netbooks to Mini-class machines. Plus a limitless number of embedded solutions.

    Quote:

    But... Intel can flex its muscles out of desperation to steer the market elsewhere. Nothing we haven't seen before. The best products aren't the most successful, and you know this already if you're one of the Apple faithful.



    Intel is not likely to be able to twist many more arms after pissing so many manufacturers off. Let's face it, having crappy GPUs forced upon you does not generate friends. This, combined with other Intel manipulation, is likely to cause customers to flock to AMD if AMD has anything at all suitable for the low-power market.



    Ontario doesn't have to be the best offering; it just has to be an alternative. Hopefully one free of the restrictions and dicking around that come from Intel.

    Quote:



    Would AMD's APU/Fusion make its way into the Apple TV/iPad/iBook/Mac mini? Or would the A4 evolve further?



    This question is silly. Apple has marked out ARM for iOS devices. Ontario could easily go into a Mini or notebook, but ultimately it depends upon performance. It isn't exactly clear where Ontario tops out clock-rate-wise, but I suspect it could go into the Air and deliver better performance. Again, clock rate is part of the equation, but cores count too; put an Ontario with four cores into a Mini and it might look pretty good.



    The problem is that all we have right now is unreliable information about all of Fusion's various implementations. Over time the Fusion family will be suitable for all of Apple's desktop and laptop lines. The question really revolves around what will be delivered in the short term. Without solid performance figures we just don't know how high up the ladder Ontario will be suitable. Obviously some of us are hoping for something that can displace Intel in many of Apple's low-end products.



    Maybe we are wishing for too much, but it would be really nice to see a very lightweight laptop with excellent battery lifetimes. Maybe an Air with a 12-hour battery is too much to wish for; on the other hand, accepting less of an improvement wouldn't be rejected at all. Obviously we are not focused on high-performance machines here. Rather, we are looking for incremental performance increases that enable long battery life or very low power usage.
  • Reply 52 of 63
    Quote:
    Originally Posted by wizard69 View Post


    Ontario could easily go into a Mini or notebook, but ultimately it depends upon performance. It isn't exactly clear where Ontario tops out clock-rate-wise, but I suspect it could go into the Air and deliver better performance. Again, clock rate is part of the equation, but cores count too; put an Ontario with four cores into a Mini and it might look pretty good.



    The problem is that all we have right now is unreliable information about all of Fusion's various implementations. Over time the Fusion family will be suitable for all of Apple's desktop and laptop lines. The question really revolves around what will be delivered in the short term. Without solid performance figures we just don't know how high up the ladder Ontario will be suitable. Obviously some of us are hoping for something that can displace Intel in many of Apple's low-end products.



    Zacate is the higher-clocked version of Ontario intended for full-size (but budget-priced) notebooks. Ontario is 9W and Zacate is 18W; both use the Bobcat core, which is a performance unknown, but some leaked figures show it as equivalent to a Core 2 Duo clock-for-clock, with a test sample running at 1.6GHz. That's not bad for the power range, with a Radeon 5400-class GPU attached.



    Then there's Llano, which should have versions from midrange notebooks through midrange desktops. Llano will have 3-4 Phenom II cores with AMD's version of turbo boost and a Radeon 5600 class GPU integrated. That's all we know about Llano.
  • Reply 53 of 63
    Quote:
    Originally Posted by wizard69 View Post


    This last little point of yours is why I would like to believe that AMD has a fighting chance to build Fusion processors that are significantly better than what Intel can offer up. For one, they have more skilled GPU engineers, plus they have demonstrated in the past a willingness to take x86 in new directions.



    I disagree that AMD is more willing to take x86/x64 in new directions. x86-16, x86-32, MMX, SSE1..4.2, AVX, Larrabee, the tick-tock methodology, Atom, Arrandale & SandyBridge on-die GPUs are my evidence. Intel also had a strong history in non-x86 ISAs, including i860, IA-64, and their purchasing and flirtations with Alpha and StrongARM. The Terascale project. They also have a whole range of chipsets and other devices. No, I actually have more faith in Intel in that regard.



    I will grant that AMD has ATI's expertise now, and that is obviously standing them in good stead. That is an eroding advantage, however.
  • Reply 54 of 63
    wizard69 Posts: 13,377 member
    If it is only a higher-clocked version, I wonder why the different name? So far I haven't dug up any info, nor have I actually looked.

    Quote:
    Originally Posted by FuturePastNow View Post


    Zacate is the higher-clocked version of Ontario intended for full-size (but budget-priced) notebooks. Ontario is 9W and Zacate is 18W; both use the Bobcat core, which is a performance unknown,



    The unknown performance is an issue, but those power figures have to be very attractive to people building Air-class notebooks. After all, you wouldn't buy one if you were interested in performance.

    Quote:

    but some leaked figures show it as equivalent to a Core 2 Duo clock-for-clock with a test sample running at 1.6GHz.



    That is better than I've heard. Even AMD was saying 90 percent of a laptop CPU. Of course the actual workload will be a big factor, maybe bigger than we are used to. However, for practical reasons Zacate will have to hit at least 2.5 GHz at the low end to make it into the likes of a MacBook or Mini. Unless of course it has four cores; then people might be willing to give up a little.

    Quote:

    That's not bad for the power range, with a Radeon 5400 class GPU attached.



    It is very good indeed and might provide for greatly extended battery life in the Air or the MacBooks. Battery life has become an extremely important issue for people on the go. Also, it is common for AMD to indicate maximum power whereas Intel indicates thermal design power, so those numbers are indeed very good.

    Quote:

    Then there's Llano, which should have versions from midrange notebooks through midrange desktops. Llano will have 3-4 Phenom II cores with AMD's version of turbo boost and a Radeon 5600 class GPU integrated. That's all we know about Llano.



    My current Mac is an early 2008 MBP; it isn't a bad machine, but the battery life sucks. If I could get similar performance with an 8-hour battery lifetime I'd be happy. Of course, better performance would make me even happier.



    Since I'm not in the market this year I can wait. However I'm a little bit concerned about GPU performance due to the slow memory pathways. That being said it is hoped that Llano provides a better all around experience.



    If Apple did jump ship, it is interesting to speculate where they would implement AMD's Fusion line and in which products. For example, would a Bobcat-based solution go into the Mac mini? Some may not like that, but it depends upon your reasons for wanting the Mini. For a server or media-center PC, the low-power nature of the chip would be a big win.



    Let's just hope that Apple can see the light here.
  • Reply 55 of 63
    Quote:
    Originally Posted by wizard69 View Post


    That is better than I've heard. Even AMD was saying 90 percent of a laptop CPU. Of course the actual workload will be a big factor, maybe bigger than we are used to. However, for practical reasons Zacate will have to hit at least 2.5 GHz at the low end to make it into the likes of a MacBook or Mini. Unless of course it has four cores; then people might be willing to give up a little.



    Well we just won't know for sure until the processors are out in the wild. I think the clock speeds will probably be low, given the power range, but AMD now has its own version of dynamic frequency scaling aka turbo.



    Here's a recent article on Ontario/Zacate that summarizes everything known.



    Quote:

    If the information that Hans received about the performance of the Bobcat cores, in the 1.6 GHz to 1.8 GHz range, holds, it should be just slightly slower than the older Core 2 Duo processors at 1.6 GHz.



    So, 90% it may be. That's very impressive given the die size, power consumption, and GPU.
  • Reply 56 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by Programmer View Post


    I disagree that AMD is more willing to take x86/x64 in new directions. x86-16, x86-32,



    With the world + dog moving to 64-bit hardware and operating systems, do these matter? I'm not too sure; AMD is actually deleting legacy functionality from their chips.

    Quote:

    MMX, SSE1..4.2, AVX,



    These are not exactly well-thought-out evolutions and frankly appear to be a reaction to Apple's success with AltiVec all those years ago. Even today, use of the feature set is sporadic and driven more by Intel's poor FP unit implementation.

    Quote:

    Larrabee,



    Larrabee? A bad idea right from the start.

    Quote:

    the tick-tock methodology,



    Isn't this more a case of leveraging Intel's massive production capacity?

    Quote:

    Atom,



    I actually find Atom to be very interesting. However, it isn't like Intel has managed the concept well at all. First, for a chip that is meant to be low power you should really be targeting the lowest-power process you have. Second, the market Intel wants to play in demands custom chips or highly integrated pre-baked solutions. Intel instead implemented a chipset system that is much like its laptop solutions.

    Quote:

    Arrandale & SandyBridge on-die GPUs are my evidence.



    Crappy GPUs are evidence? Besides, on Arrandale one could say that the chip was designed to damage Nvidia. As to Sandy Bridge, Intel knows what AMD is up to here and thus is being extremely aggressive with the part. Even then, it still looks like it has a crappy GPU.

    Quote:

    Intel also had a strong history in non-x86 ISAs, including i860, IA-64, and their purchasing and flirtations with Alpha and StrongARM.



    Yes, no doubt there. In any event, I'm not saying that Intel doesn't have strong engineering capabilities. What they seem to be lacking is the ability to produce the hardware manufacturers want. One point would be Arrandale, a chip that Apple apparently thinks has a useless GPU. A second issue is StrongARM, dumped right at the point where the market was starting to see lots of interest in mobile devices running alternative OSes. Many of these issues are a failure of management, not engineering.

    Quote:

    The Terascale project. They also have a whole range of chipsets and other devices. No, I actually have more faith in Intel in that regard.



    There is no doubt that Intel is a mammoth corporation, but they suffer from poor management the same as any other. Intel's customers have to put up with some of Intel's failings until alternatives arrive. This is where I believe AMD has an opening, mostly because Intel's GPUs don't fit in with the direction the industry is going: acceleration of the OS via the GPU wherever it makes sense. Second is support of SoC technology, to lower power and decrease system size.

    Quote:

    I will grant that AMD has ATI's expertise now, and that is obviously standing them in good stead. That is an eroding advantage, however.



    Eventually Intel will make a GPU that people will want to use, but right now it is hardly suitable for the current version of Windows. Meanwhile, AMD is working on tying the GPU and CPU together more tightly and has apparently gone after power usage aggressively. I'm less clear on peripheral support from AMD, but it looks like the new chips will support USB 3 either directly or via a chipset.



    Beyond that, AMD has been very innovative in the past. Let's face it, going to a 64-bit x86 platform was pretty aggressive. HyperTransport was nothing to sneeze at and took Intel years to match, as did integrated memory controllers.



    AMD hasn't been all that smart either and frankly got fat and lazy when they should have been innovating with their 64-bit architecture and low-power systems. Will they be able to recover? That, in a nutshell, is what this discussion is about. Right now I'd have to say these new products are their best chance.



    Look at it this way: Apple only has to implement a couple of laptops with Bobcat cores in them to put AMD in a good position. If Apple ships a couple of million a quarter out of their total laptop shipments, it gives Fusion a little cred in the wider marketplace. At the same time, it allows Apple to send a silent message to Intel.



    Now, maybe a couple of million chips a quarter alone won't save AMD. But a MacBook with excellent performance, especially battery life, will not go unnoticed and is awesome advertising. If Apple gets an early release with exclusivity for a few months, all the better.



    It isn't that Intel can't do better; rather, it is the fact that they haven't put in the effort and apparently have made many decisions that are contrary to customer wishes. It isn't much, but it does give AMD an opportunity.
  • Reply 57 of 63
    wizard69 Posts: 13,377 member
    I just did a little surfing and apparently Sept 13 is the big day when they spill the beans. Lots of postings over the last 24 hours.



    Quote:
    Originally Posted by FuturePastNow View Post


    Well we just won't know for sure until the processors are out in the wild. I think the clock speeds will probably be low, given the power range, but AMD now has its own version of dynamic frequency scaling aka turbo.



    What I've been reading indicates relatively low clocks, which is a concern, but a 1080p-capable video engine is in each chip. That will almost immediately make this a great HTPC product.



    Hopefully AMD will surprise us with performance beyond what is being leaked. Even so, the numbers are hard to believe for the given wattages.

    Quote:

    Here's a recent article on Ontario/Zacate that summarizes everything known.



    Thanks for the link. It gave me incentive to look for more info, which is how I found out that the 13th is the big day.

    Quote:





    So, 90% it may be. That's very impressive given the die size, power consumption, and GPU.



    Yeah, the more I read the more I wonder how they crammed all of this performance into one die running at 9 to 18 watts. I suspect that there will be weaknesses that are glossed over. Still, this is going to be a game changer if you ask me: performance close to double Atom's, and a good GPU with video acceleration. AMD is being very coy about Zacate's clock rate. I'm hoping it can break the 2.5GHz barrier; if so, AMD will be able to cover a wide range of uses with this hardware.



    Apparently AMD has a lot of info to release next week. Not all of it Fusion related either. Interesting!!!



    Dave
  • Reply 58 of 63
    Quote:
    Originally Posted by wizard69 View Post


    ...Crappy GPUs are evidence?...



    My point with the entire list is that Intel has been experimenting with change aggressively for decades, not that any of the particular efforts was a failure or a smashing success. Even the failures are important because they are learning opportunities. And don't be too quick to label things as failures. AMD tends to be more conservative, with x64 being the major noteworthy exception.



    Note that AltiVec came along after MMX and various other architectures' SIMD extensions... it benefited from the earlier experience and a strong joint development process by AIM's members.
  • Reply 59 of 63
    wizard69 Posts: 13,377 member
    Unfortunately with carefully picked demos and specs catering to its strengths. Actually, Intel has IDF going on too, but it seems that most of what there is to learn there is already known.



    Even after this slow rollout of info, we still have little on clock rate or actual CPU performance. It does lead one to underestimate the overall performance of this "APU". Whatever happened to releasing the whole ball of wax at once?



    Even with the sparse info it still looks like the Bobcats will be excellent processors for Air-like notebooks and possibly Minis and the plastic MacBooks. Hopefully more info will develop as the week progresses. As to the Mini, even if Zacate only went into the low-end Mini it would make for an excellent and COOL Mini.



    The 18 watts here is very impressive even if we don't know where the clock rates live. The other thing here is just how tiny the processor is. With a process shrink the chip should be very economical and lower power yet again. Or maybe more cores in a similarly sized die. Of course, my adoration might fly out the window when we find out what the clock rates are for the CPU half.



    This will be one interesting week. Hopefully real hardware will be with us in a month's time.





    Dave
  • Reply 60 of 63
    The Bobcat die size is about 70-80% of a brand new Atom's (at 9W vs. 8.5W for a dual-core Atom), with out-of-order scheduling, 1MB of total L2 cache, and a Radeon GPU. This thing is going to print money for AMD.


