Modular designs + High-level frameworks

Posted in Future Apple Hardware · edited January 2014
This thread might look a little off-topic in these times of MacTel speculation, but well, here's an idea I had...



First, the facts:

* Apple has lately been producing several high-level frameworks aimed at specific areas (CoreImage for image processing, CoreVideo for video processing, CoreAudio for audio processing, the various v-frameworks like vDSP and vImage for vector-based operations, and so on), so that programmers can write their apps without caring about the underlying technology.

* The GPU is becoming the most powerful part of the computer for the workloads it targets, far beyond the CPU. That's because GPUs perform very specific tasks and are therefore extremely optimized to do them quickly. And Apple already tries to harness the power of the GPU as much as possible with CoreImage.

* Intel is moving from a chip-centric strategy to a platform/system-centric approach. And Apple is transitioning to Intel, as you all know...

* Embedded designs have been using SoCs (systems-on-a-chip) for quite some time to accelerate specific tasks.



Now, the idea:

What if Apple moved from CPU-centric designs to designs where the CPU(s) would communicate with several dedicated coprocessors? Put 2 or 3 GPUs on the mobo if you need high-performance graphics. Put Cells if you need good vector performance (SSE3 being quite lame in this area). Put a few dedicated DSPs if you need power for audio. And so on. That's what I call modular designs, and here's a typical lineup I'm imagining for these modular Macs (pure fantasy here; I don't believe we'll ever see a Cell in a Mac, for instance):

* Mac Mini: Intel Yonah + integrated GPU

* iBook: Intel Yonah + GPU

* PowerBook: Intel Merom + 1-2 GPUs + 1-2 Cells

* iMac: Intel Sossaman/Merom + 1-2 GPUs + 1-2 Cells

* PowerMac: Intel Conroe + 2-3 GPUs + 4 Cells + Audio DSPs...

Furthermore, Apple could build "on-demand" computers. Imagine a PowerMac with 8 or 16 GPUs... that could give a serious speedup to 3D apps!



But there's an issue here:

You can put everything you want on the mobo, but if the software doesn't use it, it's no use. That's where Apple's habit of writing high-level frameworks becomes important.

Take CoreImage... When your GPU is powerful enough, you get a nice ripple every time you create a new widget in Dashboard. Whoa! Now, imagine a new version of the CoreImage framework where new enhancements would kick in when you have more than 2/4/8/16 GPUs in the computer. From a developer's point of view, since CoreImage is designed not to care about the underlying technology, implementing a feature would always come down to the same function call from the CoreImage framework, whatever the number of GPUs in the Mac. But the function would behave differently depending on how many GPUs you have. Simple.
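To make that concrete, here's roughly what the app side looks like (a minimal sketch in Swift against the real CoreImage API; I'm assuming the ripple filter's other parameters, like the shading image, can be left at their defaults just to keep it short). The point is that the call doesn't change whether the work lands on one GPU, several, or a CPU fallback:

```swift
import CoreImage

// Minimal sketch: the app asks for a ripple-style transition and lets CoreImage
// decide where the work runs (one GPU, several, or the CPU fallback).
// Assumption: the filter's remaining parameters (shading image, center, extent)
// are left at their defaults here just to keep the example short.
func rippleFrame(from source: CIImage, to target: CIImage, time: Double) -> CIImage? {
    guard let ripple = CIFilter(name: "CIRippleTransition") else { return nil }
    ripple.setValue(source, forKey: kCIInputImageKey)
    ripple.setValue(target, forKey: kCIInputTargetImageKey)
    ripple.setValue(time, forKey: kCIInputTimeKey)
    return ripple.outputImage   // same call no matter how many GPUs are present
}
```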

The same scheme could work for audio or vector-based apps.
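On the vector/audio side it's the same story. Here's a minimal sketch using Apple's Accelerate framework (via today's Swift overlays, which obviously didn't exist back then; the gain-and-mix workload is just something I made up for illustration). The app says *what* it wants, and the framework picks whatever vector hardware is actually there:

```swift
import Accelerate

// Minimal sketch: mix two audio buffers with a gain applied to the second one.
// The app only says "multiply and add"; Accelerate decides whether that runs
// on AltiVec, SSE, or whatever vector hardware the machine actually has.
func mix(_ dry: [Float], _ wet: [Float], wetGain: Float) -> [Float] {
    // out = dry + wetGain * wet, element by element
    let scaledWet = vDSP.multiply(wetGain, wet)
    return vDSP.add(dry, scaledWet)
}

let mixed = mix([0.1, 0.2, 0.3], [0.5, 0.5, 0.5], wetGain: 0.5)
// mixed == [0.35, 0.45, 0.55]
```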



Frameworks that let developers ignore the underlying tech + modular mobo designs... That's just about connecting the dots, as Jobs says!



I really don't know if Apple plans to do such a thing but I'm quite interested in what you think of the idea, so please post replies!

Comments

  • Reply 1 of 4
    programmer Posts: 3,458 member
    It is certainly part of Apple's goal in providing sophisticated system libraries -- it makes it easier for developers to do these things, and insulates them from the hardware. As long as the developer wants to do what Apple provides functionality for, then this works great. Unfortunately Apple doesn't provide all possible functionality (and cannot do so) so this will not help in all cases. Nonetheless it is a very useful trend, and something Apple has always been pretty good at (although they do have a history of introducing technologies that are not adopted widely and thus abandoned later: QDGX, QD3D, etc).
  • Reply 2 of 4
    Quote:

    Originally posted by Programmer

    Unfortunately Apple doesn't provide all possible functionality (and cannot do so) so this will not help in all cases.



    Sure thing...

    Anyway, in most cases you don't need 16 GPUs to work properly, so you don't need solutions for *all* cases. For instance, most apps don't care about AltiVec because they just don't need AltiVec to work well. And when a developer feels they need AltiVec for their app, they can either use Apple's v-frameworks or write AltiVec code themselves.

    Basically, the same scheme could be used here: most apps would use the optimized frameworks written by Apple, while very specific apps that need features not implemented in Apple's frameworks could include hardware-specific optimization code if they need it.
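    Roughly this kind of thing (a minimal sketch in Swift; `handTunedEnergy` is a hypothetical stand-in for whatever AltiVec/SSE-specific code a developer might write, while the other path just calls Apple's Accelerate framework):

    ```swift
    import Accelerate

    // Minimal sketch of the "framework first, hand-tuned only if you must" idea.
    // Most apps take the framework path; a very specific app could swap in its
    // own hardware-specific code behind the same function.
    func signalEnergy(_ samples: [Float], useHandTunedPath: Bool = false) -> Float {
        if useHandTunedPath {
            return handTunedEnergy(samples)
        }
        // Framework path: sum of squares, implementation chosen by Accelerate.
        return vDSP.sumOfSquares(samples)
    }

    // Hypothetical stand-in for the developer's own vector code, shown here
    // as a plain scalar loop.
    func handTunedEnergy(_ samples: [Float]) -> Float {
        return samples.reduce(0) { $0 + $1 * $1 }
    }

    // Both paths give the same answer; only the implementation differs.
    // signalEnergy([1, 2, 3]) == 14
    ```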

    Of course, that sure would be a pain in the a$$ for developers...
  • Reply 3 of 4
    macchine Posts: 295 member
    What you are talking about with the core libraries is called hardware abstraction and Macs have been the best at that since the early 90s.



    As far as multiple GPUs go, I am sure everyone would like to have scalable technology there, like there is now with multiple CPUs, but it is up to the CPU to schedule the work; it controls who does what and when.



    So it is up to Intel to support multiple GPUs with their CPUs, which you would think they would do any time they could, because that would sell more processors.



    I notice that threads are now handled at the CPU level, which is a big surprise to me because that just seems too high-level, but anyway, I guess it makes the code execution much cleaner.



    Well, you could certainly break multiple graphics portals into multiple threads and then farm those out to multiple GPUs, couldn't you? But it would be best to handle that in the CPU, NOT in the OS -- less work for Apple.
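    Something like this, roughly (a minimal sketch in Swift; it only shows the thread-splitting half -- actually steering each band to a particular GPU is hand-waved away, and `renderBand` is just a hypothetical stand-in for the per-band work):

    ```swift
    import Foundation

    // Minimal sketch: cut a frame into horizontal bands and process each band
    // on its own worker thread. Routing each band to a particular GPU is not
    // shown; `renderBand` is a hypothetical stand-in for that per-band work.
    func renderFrame(height: Int, bands: Int, renderBand: (Range<Int>) -> Void) {
        let bandHeight = (height + bands - 1) / bands
        DispatchQueue.concurrentPerform(iterations: bands) { i in
            let start = i * bandHeight
            let end = min(start + bandHeight, height)
            if start < end {
                renderBand(start..<end)   // each band runs on its own worker
            }
        }
    }

    // Example: 1080 rows split across 4 workers.
    renderFrame(height: 1080, bands: 4) { rows in
        print("rendering rows \(rows)")   // per-band rendering would go here
    }
    ```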



    I notice many people talk down multiple processors like there is some kind of inherent problem with them. I think it is just a matter of time until two processors are as efficient as a single processor with twice as many transistors.



    Then, when you add efficient cooling, or processors that don't produce any heat, you can use as many processors as you want to get as much computing power as needed. It's MUCH cheaper because you don't have to constantly push the chip manufacturing envelope, which is VERY EXPENSIVE!!!



    THIS IS PRECISELY WHY, AS SJ SAID, THE PERFORMANCE-TO-HEAT-GENERATED RATIO (PERFORMANCE PER WATT) IS THE MOST IMPORTANT METRIC WHEN DESIGNING HIGH-PERFORMANCE, LOW-COST SYSTEMS.
  • Reply 4 of 4
    xool Posts: 2,460 member
    Quote:

    Originally posted by MACchine

    I notice many people talk down multiple processors like there is some kind of inherent problem with them. I think it is just a matter of time until two processors are as efficient as a single processor with twice as many transistors.



    Personally I love multiple processors. Multiple cores should also be a viable solution, depending on the implementation. Hyperthreading is overrated though, especially if you're used to a machine with two real processors/cores. However, this might be a case where better is good enough for most users.