I think Apple may have a trick up its sleeve -- "Q5"

Posted in Future Apple Hardware, edited January 2014
Now, I'm not claiming any insider information here. I'm just a guy who follows Apple news and tries to read the tea leaves -- it's a hobby, I guess. However, some of those tea leaves recently percolated into a theory that seems to me to account for several pieces of the Apple puzzle quite well.



Exhibit A: The frontside bus is the major G4 bottleneck, but Motorola is not eager to change it. Hence the Xserve's fire-hydrant (DDR RAM) into garden-hose (FSB) situation.



Exhibit B: With Quartz Extreme, Apple seems to have gotten the message and is beginning to offload bandwidth-intensive processes (in this case, window compositing) onto other components.



Exhibit C: Apple's recent acquisitions of high-end video compositing (and titling/motion graphics) solutions, which seem to point to an Apple-published "Final Effects."



Exhibit D: Apple has Hollywood in its sights, but likely knows that Final Effects (to be competitive with an Avid system or After Effects on x86) needs something better than what the current PowerMac architecture can provide.



Exhibit E (this one's a bit shakier): Rumors of Apple and nVidia collaborating on some powerful new hardware.



Given the above, I conclude that the following is likely within the year: Apple and nVidia will announce a new, "faster-than-a-G4 (for video)" display chip aimed squarely at video professionals -- simultaneously making both the G4's clock rate and its frontside bus moot points for that market. The chip will have direct access to DDR memory, bypassing the FSB, and will tap into Quartz Extreme technology.
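
To put rough numbers on that fire-hydrant/garden-hose mismatch and on the appeal of a direct DDR connection (these are just the commonly quoted peak rates, a back-of-the-envelope sketch and nothing more):

[code]
#include <stdio.h>

/* Back-of-the-envelope only: commonly quoted peak rates, not benchmarks. */
int main(void)
{
    double ddr_gbs = 2.1;               /* PC2100 DDR266, as used in the Xserve  */
    double mpx_gbs = 133e6 * 8 / 1e9;   /* 64-bit MPX bus at 133 MHz, ~1.06 GB/s */

    printf("DDR can supply       ~%.1f GB/s\n", ddr_gbs);
    printf("the G4's FSB passes  ~%.2f GB/s\n", mpx_gbs);
    printf("=> roughly half the memory bandwidth never reaches the G4;\n");
    printf("   a chip with its own DDR channel wouldn't hit that ceiling.\n");
    return 0;
}
[/code]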



I'm calling this Quartz-Extreme-friendly chip "Q5." It would not replace but would complement the existing G4, and would work seamlessly with Final Cut and Final Effects.



The Q5 would, of course, represent a tacit admission that "the G4 was, in hindsight, not a great idea." Nonetheless, if Apple can swallow its pride, this seems like an effective way to make an end-run around the G4's limitations, go after a target market in a very focused way, and introduce a new differentiating element to the Mac product.



(The Raycer acquisition might also turn out to be a part of the Q5 story, though I don't know enough about that enterprise to say.)



So, I'm looking for a Q5 announcement come Macworld. What do you think? Technically feasible? Economically feasible? Would non-video markets regard the Q5 as an affront ("no better chip for you")? Etc.

Comments

  • Reply 1 of 33
    macronin Posts: 1,174
    So what you are speculating is an ICE-type hardware/software combination from Apple/nVidia...?!?



    That would be interesting, would allow for future advances in performance from both the hardware & software, and would allow the CPU to focus on running the OS and the basic frameworks of the app, with the bulk of the filter processing/prerendering being done by the GPU... Throw in a healthy framebuffer cache that dumps to a file on the HDD in the background, for handy reference later...



    Seems to match with the rumors of an Apple/nVidia card with dual GPUs and 256MB DDR SDRAM (@1GHz!), which others have 'rumor-inflated' to a card with dual GPUs, 512MB DDR SDRAM AND Component Video Out...!



    And the talk (I cannot remember where I saw it...) of a different interface for the video card than AGP... HyperTransport or 3GIO or something... Something FAT & FAST... I think (???) the 3GIO was supposed to do up to 12GB/s of throughput...!!!



    Sounds good to me...!



    Now if Apple/nVidia would get some stellar OpenGL performance into this ÜberCard...



    For running Maya, duh...!



    Dual ADCs feeding dual Apple 23" Cinema HD Displays for the workspace, AND a 24" Sony HD Broadcast monitor running fchecks... Sweet, can we get 1920x1080/24p from that component out please...?!?



    Hello, full-featured production studio-in-a-box...



    [Chilling] Maya for Mac OS X [Chilling]



    [ 06-24-2002: Message edited by: MacRonin ]
  • Reply 2 of 33
    barto Posts: 2,246
    So basically, give the graphics chip some general units (AltiVec maybe).



    There is a problem. The AGP bus is 1GB/s. The MPX bus is, guess what, 1GB/s (only probably more efficient). You would have an even bigger bandwidth problem if traditional CPU tasks and graphics work were sharing the same bus.
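
    Quick back-of-the-envelope (illustrative numbers only, not benchmarks) on how fast 1GB/s disappears once you start pushing uncompressed video around:

    [code]
    #include <stdio.h>

    /* Illustrative only: how quickly uncompressed video eats a 1 GB/s bus. */
    int main(void)
    {
        double frame = 1920.0 * 1080.0 * 4.0;   /* one HD frame, 32-bit RGBA, in bytes */
        double layer = frame * 30.0 / 1e9;      /* ~0.25 GB/s per layer at 30 fps      */

        printf("one uncompressed HD layer:  ~%.2f GB/s\n", layer);
        printf("four layers in a composite: ~%.2f GB/s\n", 4.0 * layer);
        printf("...the entire bus budget, before the CPU has done any real work.\n");
        return 0;
    }
    [/code]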



    Barto
  • Reply 3 of 33
    macronin Posts: 1,174
    Barto, see my edit...
  • Reply 4 of 33
    barto Posts: 2,246
    Guess what I think will happen at Macworld NY?



    That's right, a desktop version of the Xserve mobo.



    Guess what I think will happen at Macworld SF?



    That's right, a mobo with FireWire 2, USB 2, maybe a pipelined memory bus.



    Guess what I think will happen at Macworld NY 03?



    That's right, a Power4 derivative or an Apple-specific G5.



    It's so much more plausible.



    Let's call the GPU and the CPU equally important.



    Therefore, they should have separate buses.



    When the CPU becomes much less important than the GPU, then we can think about integrating them.



    Nvidia has loads of cash and has conquered the world. More plausible is them developing a PowerPC, using the experience gained from graphics cards.



    It's not a dumb idea, but probably a few years off.



    Barto
  • Reply 5 of 33
    barto Posts: 2,246
    When it's released, PCI-Express will be nowhere near 12GB/s



    When PCI-Express is up around there, then CPU FSBs will probably be too.



    Barto
  • Reply 6 of 33
    barto Posts: 2,246
    And as far as a video workstation goes, clustered Xserves would be far cheaper for Apple (no need to contract nVidia to make a new GPU)



    Barto
  • Reply 7 of 33
    kecksy Posts: 1,002
    This all sounds very implausible to me.



    Yes, NVIDIA did do something like this when they teamed up with Microsoft to develop the Xbox, but NVIDIA is now paying the price for that partnership.



    As John Carmack said, NVIDIA spent too many of its resources on the Xbox and has now fallen behind in the 3D chip space.



    Upcoming chipsets from Matrox, 3DLabs, and ATI simply kill NVIDIA's next generation. The NV30 is rumored to be little more than a GeForce4 Ultra, which is why the R300 was running DOOM III at E3 and not NVIDIA's chip.



    Would NVIDIA sacrifice more of its precious resources to develop a product for a niche market? No way in hell. They have other things to worry about, like ATI.



    If Apple was working on hardware similar to what you mentioned above, it would likely use Raycer technology as opposed to NVIDIA. NVIDIA is simply too busy and doesn't care.



    Besides, wouldn't Raycer make more sense, since Apple did buy them a while back? Not that I buy into your friend's rumor, but Apple could finally put those wasted Raycer assets to use.



    That said, the chances of a G5 this year are greater.
  • Reply 8 of 33
    [quote]Originally posted by Barto:

    So basically, give the graphics chip some general units (AltiVec maybe).[/quote]



    Rather, the Q5 would represent an additional graphics chip, one highly tuned for real-time compositing of video/titles/motion graphics. This is quite different from the typical graphics cards today, which are designed and evaluated almost solely on the basis of pushing polygons.



    [quote]There is a problem. The AGP bus is 1GB/s. The MPX bus is, guess what, 1GB/s (only probably more efficient).[/quote]



    That's good to know about the AGP bus, but I didn't actually suggest the Q5 would use AGP. Rather, it could be connected to memory in its own proprietary way. Since users won't be swapping Q5s in and out, I don't see much need for the Q5 to use an industry-standard bus.



    [quote]You would have an even bigger bandwidth problem if traditional CPU tasks and graphics work were sharing the same bus.[/quote]



    In my theory, one of the major advantages of the Q5 strategy is that the G4 and Q5 would not be using the same bus. The G4 would connect to memory via MPX; the Q5 would connect to memory via its own channel.



    Of course the two processors would need to share info at some points; the trick would be to minimize this "chatter" -- since inter-proc discussion would still bottleneck at the MPX. We'd want, I think, the G4 to keep its fingers out of the Q5's business as much as possible. Again, Quartz Extreme suggests that this can be accomplished.
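
    For a concrete (and very simplified) picture of that division of labor, here is roughly the idiom Quartz Extreme is reported to build on, sketched in generic OpenGL -- none of this is Apple's code, just an illustration: each window (or, for the Q5, each video layer) already lives in a texture on the card, and the CPU's only contribution is a handful of draw calls.

    [code]
    #include <OpenGL/gl.h>

    /* Generic sketch, not Apple's code: composite one layer whose pixels
     * already live in video memory as a texture. The CPU never touches the
     * pixels -- it just issues a few calls and lets the graphics chip blend. */
    static void composite_layer(GLuint tex, float x, float y,
                                float w, float h, float alpha)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glColor4f(1.0f, 1.0f, 1.0f, alpha);             /* per-layer transparency        */
        glBegin(GL_QUADS);                              /* one textured quad = one layer */
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
        glEnd();
    }
    [/code]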



    Another point to consider is that, if Apple is really about selling hardware, it behooves Cupertino to commoditize software. As folks pick up more software more cheaply, they have more need of and can better afford what Apple wants to sell: Macs. (Hence: free iApps, cheap licensing on OS X Server, rumors of "Glove" licensing.)



    But in this strategy, what Apple doesn't want to do is commoditize its hardware by reducing it to common-denominator PC parts in stylish cases. Apple should be looking at ways to introduce unique and differentiating elements which Wintel PCs will not incorporate. A proprietary compositing chip, connected to memory through a proprietary channel, fits the bill nicely in my opinion.
  • Reply 9 of 33
    sizzle chest Posts: 1,133
    [quote]Originally posted by MacRonin:

    Now if Apple/nVidia would get some stellar OpenGL performance into this ÜberCard...[/quote]



    Yikes, MacRonin, don't say "ÜberCard!" Don't you remember the trouble that ensued from the mere mention of the ÜberMac?!!?!! There are hypersensitive scholars of German language usage present!
  • Reply 10 of 33
    steves Posts: 108
    [quote]Originally posted by Kecksy:

    As John Carmack said, NVIDIA spent too many of its resources on the Xbox and has now fallen behind in the 3D chip space.[/quote]



    I recall Carmack commenting on nVidia getting sidetracked with the Xbox, but I don't recall him suggesting nVidia has fallen behind anyone in 3D performance. Do you have the link where this comment was made? I believe you are misinterpreting what he said.



    [quote]Upcoming chipsets from Matrox, 3DLabs, and ATI simply kill NVIDIA's next generation. The NV30 is rumored to be little more than a GeForce4 Ultra, which is why the R300 was running DOOM III at E3 and not NVIDIA's chip.[/quote]



    Which rumor of the NV30 are you referring to? nVidia's own comments suggest otherwise. Matrox is not competitive in the gaming market; the Parhelia is a good chip, but it's not up to par with nVidia's. Here's Carmack's take on it:



    http://www.3dchipset.com/news.shtml



    As for ATI, they always look good on paper and should theoretically be faster. However, their drivers are always problematic and never reach their performance potential.



    As for your logic behind Carmack's use of ATI (mystery chip) for E3, why do you assume it was because it provided the best performance? Doom3 was first viewed on a Mac. Was that because it provided the best performance? Not likely.



    nVidia took the lead with the original GeForce card. Since then, everyone else has been a generation behind. I'd like to see ATI challenge them; I just doubt that will actually happen.



    If Apple is working with nVidia beyond what we already know, I'd expect something like this:



    http://www.nvidia.com/view.asp?IO=IO_20020605_4756



    Steve
  • Reply 11 of 33
    lp Posts: 1
    Barto may be right about Nvidia making CPUs in the future. This month's Wired magazine has a cover story about Nvidia and their plans to take on Intel. Working with Apple on special projects sounds entirely plausible.
  • Reply 12 of 33
    b8rtm8nn Posts: 55
    HyperTransport



    and nVidia is going to use it anyway for the new AMDs come early 2003, so it's not an Xbox scenario
  • Reply 13 of 33
    From my experience - which includes working for a large microprocessor company - it would be extremely hard for nVidia to enter the CPU market. For all their successes, they do NOT have the resources to do what the likes of Intel and AMD do... that, and their lack of fabs would be a huge disadvantage. It's not that it's impossible, but even the biggest guys have a difficult time with it, and that's all they do, and have ever done... nVidia would have a very, very hard time.
  • Reply 14 of 33
    jasonpp Posts: 308
    I think Nvidia is not going to try to beat Intel at its game.



    According to a WIRED article I read recently, Nvidia has recognized that the demand for performance in future computers will move away from the processor and onto the GPU. As new display technologies mature (e-ink, OLED, etc.) we will see high-resolution screens EVERYWHERE, and they will all need a GPU.



    If this works, they'll be bigger than Intel by the end of the decade. Bonus: we all get wall-sized 200dpi displays, and...



    this...



    http://wearables.unisa.edu.au/projects.php



  • Reply 15 of 33
    [quote]From my experience - which includes working for a large microprocessor company - it would be extremely hard for nVidia to enter the CPU market.[/quote]



    [quote]I think Nvidia is not going to try to beat Intel at its game.[/quote]



    I agree that nVidia would not be wise to enter the desktop CPU arena. But that is not the same as building on proven GPU expertise to produce a Quartz-Extreme-friendly GPU targeted at media professionals. The latter seems like a natural next step for Apple's and nVidia's relationship, with some obvious benefits for both parties.



    [quote]According to a WIRED article I read recently, Nvidia has recognized that the demand for performance in future computers will move away from the processor and onto the GPU.[/quote]



    I would submit that Apple sees the same possibility.



    So far, I have only been considering a GPU tuned for QE compositing. However, it's possible I'm being too conservative. We know that, while Quartz Extreme is a feat, it's not the whole Quartz story. There's also Quartz2D, which handles the drawing of vector primitives. It's generally agreed that current GPUs are not capable of accelerating Quartz2D.



    Could it be that nVidia has developed the technology to accelerate 2D vectors in hardware and Q5 will offer this boost as well? Consider the attraction of real-time scale/rotate/transform operations in Illustrator or Freehand for users of those programs.
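
    Just to make "real-time scale/rotate" concrete, this is the per-anchor-point arithmetic a vector app grinds through on the CPU today (a plain-C sketch of the math only, not tied to any real product or API); the speculation is that a Quartz2D-aware chip could batch loops like this in hardware across thousands of points.

    [code]
    #include <math.h>

    /* Plain-C sketch of the math only: rotate and uniformly scale a path's
     * anchor points about the origin. Nothing here is tied to any real API. */
    typedef struct { float x, y; } Point2D;

    static void scale_rotate(Point2D *pts, int n, float scale, float radians)
    {
        float a = scale * cosf(radians);    /* 2x2 matrix:  [ a  -b ]  */
        float b = scale * sinf(radians);    /*              [ b   a ]  */

        for (int i = 0; i < n; i++) {
            float x = pts[i].x, y = pts[i].y;
            pts[i].x = a * x - b * y;
            pts[i].y = b * x + a * y;
        }
    }
    [/code]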



    If the Q5 really could deliver all this, Apple would have its visual-media bases covered. There might be little of real processor burden left for the G4 to bear in most situations. (Audio and scientific applications come to mind as processor-intensive niches not helped by the Q5.)



    [ 06-27-2002: Message edited by: iconmaster ]
  • Reply 16 of 33
    programmer Posts: 3,458
    Doom3 was demo'd on the ATI R300 because they had hardware first. ATI really cranked out the R300 project and they beat nVidia to first silicon on this generation, but that doesn't necessarily mean anything in terms of when either chip will actually hit the market in quantity. It also says nothing of the performance of either of these chips -- and both of them are going to be monsters, especially once DX9 (&OGL2?) software arrives to take advantage of the real power of the next gen GPU.



    nVidia isn't going to get into the CPU business, and it remains to be seen how serious they will really be with the chipset business. AMD, for example, is bringing the memory-controller onto the CPU which takes it out of the chipset. It will be interesting to see what nVidia does in an environment where their chipset is sidelined just to do I/O. We may see them building the system I/O into their GPU and communicating with the CPU via HyperTransport. Imagine an entire system in 2 chips. I'm sure nVidia can imagine this, and I'm pretty sure they're happy to "own" one of them... leaving the harder one to somebody else.



    Some of Moki's comments, plus the Raycer & Apple patents mentioned in another thread, point at an Apple-designed set of processing units in the chipset on the motherboard (not AltiVec, which is a PowerPC extension, but instead more similar in nature to a GPU's shader units). This would work in parallel to the nVidia graphics engine, not instead of it. To me this is much more likely than Apple trying to compete with nVidia (and ATI, Matrox, etc.). Alternatively, they could be doing a cooperative design with nVidia... but I kind of doubt this as well. My own pet hope is that this somehow ties into the "Wolf" rumour from yet another thread -- if it doesn't, I hope some guy at Apple reads this and realizes that I am, in fact, a genius and this is the greatest idea since jelly donuts. Or not.

    [No]
  • Reply 17 of 33
    programmer Posts: 3,458
    [quote]Originally posted by iconmaster:

    (Audio and scientific applications come to mind as processor-intensive niches not helped by the Q5.)[/quote]



    As sort-of implied by my previous post, I think you are 100% wrong with this particular statement. Computationally expensive scientific calculations and heavy audio processing are exactly the kind of thing that a fast SIMD unit is good at. That's why AltiVec has made such inroads in these areas. I wouldn't be the least bit surprised if Apple, faced with questionable CPU futures but happy with the success of AltiVec, turned around and equipped all of their machines with additional vector units. The key would be to make them easily accessible to developers, and in a way that also transparently uses AltiVec and potentially other kinds of hardware (perhaps even distributed).
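
    To make that concrete, here's the sort of loop I mean -- a trivial AltiVec-style gain pass over an audio buffer, four samples per operation. This is only a rough sketch (it assumes a 16-byte-aligned buffer and a sample count that's a multiple of four); any extra vector units Apple bolted on would want to expose exactly this kind of work.

    [code]
    #include <altivec.h>

    /* Rough sketch only: scale an audio buffer four floats at a time.
     * Assumes buf is 16-byte aligned and n is a multiple of 4. */
    static void apply_gain(float *buf, int n, float gain)
    {
        float g4[4] __attribute__((aligned(16))) = { gain, gain, gain, gain };
        float z4[4] __attribute__((aligned(16))) = { 0.0f, 0.0f, 0.0f, 0.0f };
        vector float vgain = vec_ld(0, g4);
        vector float vzero = vec_ld(0, z4);

        for (int i = 0; i < n; i += 4) {
            vector float v = vec_ld(i * 4, buf);   /* load 4 samples (offset in bytes) */
            v = vec_madd(v, vgain, vzero);         /* v = v * gain + 0                 */
            vec_st(v, i * 4, buf);                 /* store them back                  */
        }
    }
    [/code]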



    Or maybe not. [Skeptical]
  • Reply 18 of 33
    mr.e Posts: 20
    I'm not saying this will be introduced at MW '02, but it does relate to this topic, which I have found very interesting.

    I envision a one-pound tablet, that's right, a tablet that primarily contains an LCD, a GPU, second-generation AirPort (54 Mb per sec; name your own protocol), one FireWire port (in case you want to charge it), and a battery - all in a 3/4" or 1/2" thickness, final weight unknown. Instant on. Perhaps an optional charging dock. And I know others have thought of this as well. Basic VNC (I think that's the term) for remotely accessing/controlling your computer; you could run the thing with a stylus or your finger.
  • Reply 19 of 33
    mr.e Posts: 20
    Forgot to mention that one could have an on-screen keyboard when using Ink with the stylus. This is perhaps one of the reasons Stevo pre-introduced Ink? Also, I have no idea whether this would be cost-effective or not, but it would be fucting cool. Another thing: if a laptop came into the proximate area, a dialogue would appear and ask if you would like to connect to the laptop - type in yes or no - bam, bing badda bing. Another thing Stevo introduced was Rendezvous <http://www.apple.com/macosx/newversion/>. Multiple tablets could be used in the same area, and through Apple Remote Desktop could communicate with one another via iChat. You would still have to have it near a Mac to handle the processing (or would there be hacks to connect to a PeeCee?)

    So, what are the ramifications of this VNC? If it could control a Wintel box, might Apple lose market share through lost hardware sales? I'm not sure what price point this would come in at right now, but it would surely be cheaper to produce than a full-blown "Tablet," which MS will probably lose on.

    Insanely great!
  • Reply 20 of 33
    userone Posts: 55
    mr.e - I like it!

    But what kind of casing do you think? Robust or sleek and lightweight?



    I thought it was an interesting point made earlier in the thread that the feedback of the writing could be really close to what you would see if you were using a real pencil - any marks made by the user are anti-aliased, and characters could be written 5mm high!!! I would hope for this.



    I would think this would be a wonderful device for learning Chinese/Japanese/Korean "Kanji" script, where you could trace a faint original of the letter form over and over again, aiding your learning.



    I still want to see how such a product would benefit me professionally. I am keen to have a digital notepad where all inputs could be tracked by a user-friendly relational database (like an automatic Finder). Pictures/Photos/Tests/emails would all be at the end of the stylus, aiding me in the development and simple presentation of ideas.



    Back to Hardware - My feeling is that many things are now in place, i.e. OS X, TFT prices, and a strong desire by many of us for this type of product. Two things that make Apple hesitate to release this product are:



    A. Even though I might say I want it... on release I *may* (as a typical Visual Artist user) still opt for a PowerBook, and such a Pad would be a supplementary purchase in support of the first. This makes me think that any Pad is a second computer - which is one of the reasons Apple may feel they need to double their market share before letting such a thing out the door. Once lots of people have the first unit, here is a very cool supporting device.



    B. Weight. I don't know what the optimum weight would be. It has to have enough weight to feel substantial but be light enough to easily hold/grip/rest while writing or controlling with the other hand. How heavy is that?


