AMD and Apple have been talking...


Comments

  • Reply 61 of 98
programmer Posts: 3,458 member
    Quote:

    Originally posted by Blackcat

    Speed.



It seems easier to write slow apps in Cocoa; look at the first iCal and iPhoto, and now iMovie.



    On the other hand Safari whizzes along.



    I think it's just a matter of getting used to a new thing.




It really doesn't matter whether you're coding in Cocoa or C++; it's possible to make slow applications and fast applications in both. Cocoa, IMO, lets you build a more sophisticated application much more rapidly, and more sophisticated applications need more work to make them run fast. Cocoa programmers can therefore quickly build an app, ship a functional version 1, and then start speeding it up for later versions. This is common practice and isn't a bad thing, because user feedback is usually more important than ensuring that it is lightning fast. It doesn't matter how fast it is if nobody is going to use it, or if you spend a lot of time making the wrong things fast.
  • Reply 62 of 98
    Quote:

    Originally posted by johnsonwax

    It wouldn't require Apple to sell the transition - it happens naturally.



I assume you mean a transition from a 970-based 64-bit OS & apps to POWER? Yes, that transition would be seamless. But if you meant a transition from the 970 to AMD, then it would be far from natural.

    Quote:

I agree that if IBM is willing to stay in the game with low-end 64-bit PPC, Apple would be wise to stay with it. I'm really only doubting IBM's commitment to that product - not based on anything other than the probable size of the 970 hardware market, which will seem very small to someone the size of IBM.





    I refer you to a post I made in another thread:

    http://forums.appleinsider.com/showt...488#post337488

I think comments about IBM's potential lack of commitment to producing PPC processors suitable for Apple ignore an important distinction between IBM and Motorola: Motorola only produces processors for embedded and desktop machines, and since the latter market is small in comparison to the former, their commitment to the desktop suffers. But IBM makes processors for the embedded, desktop, workstation, server, mainframe and supercomputer markets. This is a huge range from the low to high end of machine types (with processors often crossing over multiple categories), with most of them being compatible with the PPC ISA. This means their commitment to PPC is huge, covering all their business. They are not going to drop PPC anytime soon.



The 970 is only the latest IBM processor in the desktop and workstation categories, with the 604, POWER3 and POWER4 being others, and it also covers an emerging market, namely the blade server. Thanks to pressure/money/whatever from Apple, IBM have also added an AltiVec unit to make it suitable for them. The only question with regard to IBM's commitment to making processors for Apple is "will they always include AltiVec in their desktop-level processors?" If Apple keep doing what they did, "Yes". If IBM make use of it in their blades, "Yes". If they add it to POWER5, "Yes". (A small possibility I'm going to start a thread on soon unless it's been discussed before. Links, anyone?)



    IBM will be a reliable supplier for Apple in the future.





    MM
  • Reply 63 of 98
    amorphamorph Posts: 7,112member
    Quote:

    Originally posted by grad student

    "make it work - then make it work fast" is good for most software... unless your definition of working software is software that works fast.



Well, sure, you can moot any statement by arbitrarily changing the terms, but it's a pointless exercise.



    "Works" means "bug free." All behavior is as expected in terms of cause and effect (i.e., it might take a few blinks to open a window, but a window opens; the application doesn't crash). It's important to establish this so that you don't spend a lot of time and energy making broken code go really fast. Naive code is much easier to bugfix than highly optimized code, all else being equal.



    Quote:

As for Cocoa apps being slow... it isn't the framework that makes OS X apps slow [indeed, Cocoa allows easy creation of very efficient/fast apps]. It's the lack of exploiting hardware acceleration, combined with the heavy use of bitmapped widgets and procedural textures (e.g., brushed metal).



    No, it's the fact that both the framework and the language most commonly used with the framework (Objective-C) can be used to write clean, short, pretty code that happens to be slow. It's easy to make it go faster, but why not release it while it's easy to maintain, so that you can get bugfixes in and then optimize bugfixed code? It's also very easy to use heavy objects for light duty, and when you do you get OmniWeb's display layer.



    Note that very little of this has anything to do with the UI, and everything to do with the fact that dynamic messaging and OO can get pretty sluggish. (Note: can get, not are. The variable here is the programmer.)
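
To make "can get sluggish" concrete: when a message send sits in a hot loop, the standard Cocoa-era remedy is to look the method up once and call it through a plain C function pointer (IMP caching). A minimal sketch; the Widget class and its update method are hypothetical, but methodForSelector: is the real NSObject call:

```objc
#import <Foundation/Foundation.h>

// Hypothetical class standing in for any object messaged in a hot loop.
@interface Widget : NSObject
- (void)update;
@end

@implementation Widget
- (void)update { /* per-iteration work */ }
@end

void runNaive(Widget *w, int n)
{
    // Every iteration goes through objc_msgSend's dynamic dispatch.
    for (int i = 0; i < n; i++)
        [w update];
}

void runCached(Widget *w, int n)
{
    // Look the implementation up once, then call it directly.
    SEL sel = @selector(update);
    void (*updateIMP)(id, SEL) = (void (*)(id, SEL))[w methodForSelector:sel];
    for (int i = 0; i < n; i++)
        updateIMP(w, sel);
}
```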



    Quote:

If Apple wanted to make the OS X UI fast... they could have ditched:



1. bitmapped widgets

    2. textured windows

    3. mandatory double buffering of every window (which also consumes tons of memory)

    4. mandatory anti-aliasing of text

    5. all rendering done by the CPU





    If 1. has anything to do with speed, then why was CopyBits() the first optimization introduced to QuickDraw? Very few display methods are faster than blindly splatting bits into a window buffer. Ask any game developer.



    3 is false. You can specify buffering in Interface Builder on a per-window basis, and Cocoa allows you to change it on the fly. 99.999% of all Cocoa apps use the double buffering because it saves the developer from having to do all that lovely manual invalidating and repainting that OS 9 required, and it keeps the interface from looking like utter crap.
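
For what it's worth, the per-window setting is also exposed programmatically, not just in Interface Builder. A minimal sketch (the window geometry is arbitrary):

```objc
#import <Cocoa/Cocoa.h>

// Create a window with an explicit backing store type.
// NSBackingStoreBuffered is the double-buffered mode nearly every
// Cocoa app uses; NSBackingStoreRetained and NSBackingStoreNonretained
// are the alternatives discussed in this thread.
NSWindow *makeWindow(void)
{
    NSWindow *window =
        [[NSWindow alloc] initWithContentRect:NSMakeRect(100, 100, 400, 300)
                                    styleMask:NSTitledWindowMask
                                      backing:NSBackingStoreBuffered
                                        defer:NO];

    // ...and the buffering can be changed on the fly:
    [window setBackingType:NSBackingStoreNonretained];
    return window;
}
```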



    4 is a minor speed hit at best. To the extent that it is, it contradicts 1: Fonts were "faster" in OS 9 because (and to the extent that) screen fonts were bitmapped in OS 9, not because they were rendered with aliasing.



    5. All rendering done by the CPU. As opposed to...? As additional hardware comes on line that can relieve the CPU, you can expect Apple to use it. cf. Quartz Extreme, and the hardware-accelerated QuickDraw in 1986.



    Quote:

and there are a few more lesser things as well. Incidentally, if they had ditched these things... OS X would have run a LOT better on a LOT older hardware (makes you wonder).



    Messaging and events need a bit of a goose, but what gets lost in your rant is the fact that OS X has been running faster, even on older hardware. First, you make it work. Then, you make it fast.
  • Reply 64 of 98
    Quote:

    "Works" means "bug free." All behavior is as expected in terms of cause and effect (i.e., it might take a few blinks to open a window, but a window opens; the application doesn't crash).



Really... does a 3D first-person shooter work at 1 frame per second? Your statement is correct when you look at a program on a von Neumann or Turing-completeness level. However, when user interaction is taken into the equation as an input, the user behaves differently as a function of time - it's not just numerical equivalence. A first-person shooter doesn't work at 1 FPS, and a GUI doesn't work properly if interacting with it unnecessarily inhibits productivity (due to it being unnecessarily slow).



    Quote:

    it's the fact that both the framework and the language most commonly used with the framework (Objective-C) can be used to write clean, short, pretty code that happens to be slow.



Objective-C is no slower than C or C++ for procedural code. For OO code there is a slight overhead per method invocation on un-cached methods; however, this again is not what makes the OS X UI slow for Cocoa OR Carbon apps. [see below]



    Quote:

    If 1. has anything to do with speed, then why was CopyBits() the first optimization introduced to QuickDraw? Very few display methods are faster than blindly splatting bits into a window buffer. Ask any game developer.



My programming experience with the Mac began shortly after the introduction of Color QuickDraw, so I can't speak with authority on what got accelerated first. However, blindly splatting bits is not fast at all.



As for asking a game developer - I am one - so I'll just tell you, and give you some homework to try on your own (it won't even require coding, homie). Drawing a lit, shaded, z-buffered, 3D-projected triangle is MUCH faster on 3D hardware than calculating the resulting image on the CPU and blitting it to graphics memory over the system bus. Why? Because the graphics subsystem was designed for it: it has the algorithms in hardware, and direct access to MUCH higher-bandwidth memory than the CPU. What do you think the fill rate is on modern 3D cards? [not including all the necessary computations before memory gets filled] The Radeon 9700 puts 883 Mtexels on screen per second at 1024x768x32 bits (a Rage 128 does 540 Mtexels). That's 883,000,000 texels * 4 bytes / (1024^2 bytes/MB), or about 3.3 GB/s of pure fill rate [not including all of the complex 3D calculations, and memory bandwidth is actually MUCH higher]. The 8x AGP bus has a max bandwidth of 2.1 GB/s - and we don't even have that on a Mac - we have 4x tops. If the CPU was doing NOTHING but pushing pixels across the system bus with NO calculations on them, it STILL couldn't blit fast enough to keep up with the graphics hardware.
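
Spelling out the arithmetic (the figures are from the post above; note that it mixes binary megabytes with the decimal gigabytes AGP bandwidth is usually quoted in, so the totals wobble slightly):

```objc
#include <stdio.h>

int main(void)
{
    // Radeon 9700 fill rate, per the figures above: 883 Mtexels/s
    // at 4 bytes (32 bits) per texel.
    double fillBytesPerSec = 883e6 * 4.0;

    // ~3.29 binary GB/s: the "3.3 GB/s of pure fill rate" above.
    printf("fill rate: %.2f GB/s\n",
           fillBytesPerSec / (1024.0 * 1024.0 * 1024.0));

    // AGP moves 266 MB/s per "x", usually quoted in decimal GB.
    printf("AGP 8x: %.2f GB/s\n", 8 * 266e6 / 1e9);  // ~2.13
    printf("AGP 4x: %.2f GB/s\n", 4 * 266e6 / 1e9);  // ~1.06
    return 0;
}
```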



You say "3 is false". Wrong. Here is some homework for you:



Open the Interface Builder you like to talk about so much. Create a window. Drag some widgets into it; a tabbed widget would be good. Now check the non-retained radio button and test the UI [resize the window]. Doesn't work right, does it? Now check the retained radio button - doesn't work right either. The only option that works reliably on all Mac hardware is double buffered. Now, a better example. Set the window to non-retained and run it. Now check the textured window checkbox and run it again. Slow as f*ck, right? It ain't because of Obj-C method overhead, buddy. It's because bitmaps are being blitted over the system bus (which is pathetically slow) by the CPU; and in the case of textured window mode, the CPU is calculating a procedural texture AND drawing it over the system bus - slowness city. An EraseRect/FillRect is MUCH faster than a CopyBits with graphics acceleration. Filling a rectangle with a solid color on the graphics card would be MUCH faster than having the CPU do any of it. THIS IS WHY THE OS X UI IS SLOW (in addition to all the anti-aliased text) - it is the design of the drawing system.



As for 4: bitmapped text works great IF:

1. the font bitmap is cached on the graphics card

2. the destination buffer is cached on the graphics card

3. the graphics card is doing all the drawing



This was the scenario on OS 9: the VRAM contained the screen image, and the graphics card had a cache of the font bitmap. Not so on OS X: the window's contents are drawn to a buffer in RAM. The CPU draws the widgets into the buffer, over the system bus. This final image is then blitted over the system bus to the graphics card (and possibly cached there à la Quartz Extreme). However, whenever this buffer changes... it's CPU time to redraw all of it. That's why moving a window is quick and smooth as glass (its contents are static and cacheable) while resizing it is slow (its contents are constantly getting redrawn by the CPU).
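
To see where that CPU time goes, here is the shape of the code path in question: everything a Cocoa view draws in drawRect: is rasterized by the CPU into the window's backing buffer in main memory before any of it reaches the card. A minimal sketch with a hypothetical view class:

```objc
#import <Cocoa/Cocoa.h>

@interface PlainView : NSView
@end

@implementation PlainView
// Quartz calls this to fill the window's backing buffer in main RAM;
// the finished buffer is then pushed across the bus to the card (and,
// under Quartz Extreme, cached as a texture for compositing). Every
// resize runs it again.
- (void)drawRect:(NSRect)rect
{
    [[NSColor whiteColor] set];
    NSRectFill(rect);   // cheap: a solid fill
    // Compositing a pattern or procedural texture here (brushed
    // metal, pinstripes) costs far more CPU time per redraw.
}
@end
```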



    Quote:

    5. All rendering done by the CPU. As opposed to...?



As opposed to the graphics hardware - as it should be, and as it has been since graphics subsystems were invented for the purpose of relieving the CPU of the rendering.



    Quote:

    Messaging and events need a bit of a goose, but what gets lost in your rant is the fact that OS X has been running faster, even on older hardware. First, you make it work. Then, you make it fast.



Messaging and events need NO attention. That is not where the speed hit on OS X is. What needs attention are the graphics layers of the OS, and the approach to drawing the UI. I don't rant, nor do I overlook how OS X has been running faster with every new release. However - the FUNDAMENTAL cause of the slowness of the UI is the process of rendering it, not Obj-C, method invocation overhead, nor Cocoa. Perhaps you will be less uninformed after you do your homework. To optimize an app, you speed up the portion of code using the most time [after it is functionally correct - as you point out] - and this is not Obj-C dynamic messaging. IF the slow portion of your code is due to a poor DESIGN, then optimization requires a change in the design of your code - tweaking message passing is a waste of time.



The overhead of dynamic messaging occupies a very tiny fraction of the CPU time needed to draw the UI. If all you're doing is inserting and removing objects in an NSMutableArray, it will be slower than a C++ STL container - fine; but that is NOT the hold-up with the OS X UI. OmniWeb's display layer was a victim of multiple flaws and inefficiencies. Messaging overhead may have been ONE of them; however, OmniWeb is a poor example of a typical GUI app (it does LOTS and LOTS of dynamic creation of widgets; most apps do little or none), and the speed problems with OW were compounded by the OS X drawing system.
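
For anyone who would rather measure than argue, a crude timing harness is only a few lines. A sketch; the loop count is arbitrary and absolute numbers will vary wildly by machine:

```objc
#import <Foundation/Foundation.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSMutableArray *array = [NSMutableArray array];
    NSNumber *item = [NSNumber numberWithInt:42];

    NSDate *start = [NSDate date];
    for (int i = 0; i < 100000; i++)
        [array addObject:item];   // each call is a dynamic message send

    NSLog(@"100k inserts: %f s", -[start timeIntervalSinceNow]);
    [pool release];
    return 0;
}
```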



[PS - no offense meant... I just don't like being told I'm ranting by someone who doesn't have a complete grasp of the topic]...
  • Reply 65 of 98
amorph Posts: 7,112 member
    Quote:

    Originally posted by grad student





Really... does a 3D first-person shooter work at 1 frame per second?




    Is iPhoto a first person shooter? What's your point? Your argument seems to be purely opportunistic. If you have done any development at all you know that optimization is done on an as-needed basis, and needs vary.



    In the case of applications, it's often impossible to know which bugs will crop up until you do a general release, and speed is not as important as it is in a game (where bugs are more widely tolerated, because of the need for speed).



    Quote:

Objective-C is no slower than C or C++ for procedural code. For OO code there is a slight overhead per method invocation on un-cached methods; however, this again is not what makes the OS X UI slow for Cocoa OR Carbon apps. [see below]



    I explicitly say that they can be that way, not that they are that way, and you respond that they aren't that way. Clever.



    Have you written any Framework applications in Objective-C? Have you seen any of Apple's WWDC sessions on the subject? If so, you know full well what I'm talking about. If not, bone up.



    Quote:

My programming experience with the Mac began shortly after the introduction of Color QuickDraw, so I can't speak with authority on what got accelerated first. However, blindly splatting bits is not fast at all.



As for asking a game developer - I am one - so I'll just tell you, and give you some homework to try on your own (it won't even require coding, homie). Drawing a lit, shaded, z-buffered, 3D-projected triangle is MUCH faster on 3D hardware than calculating the resulting image on the CPU and blitting it to graphics memory over the system bus.




    In other words, OpenGL is hardware accelerated.



    What on earth does this have to do with bitmaps, which are precalculated and cached on the video card (hint: textures)? Painting a bitmap involves a memcpy() into a display buffer, more or less. Formac's cards in particular cached font bitmaps, so that display was incredibly fast; the other cards aren't as aggressive, but the basic idea has been in place since QuickDraw was first hardware accelerated. In fact, that's what most of the hardware acceleration consisted of in the first place.
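
"Splatting bits" really is about that simple: more or less one memcpy() per row. A naive software blit for 32-bit pixels; the buffer layout and names are illustrative:

```objc
#include <stdint.h>
#include <string.h>

// Copy a w-by-h block of 32-bit pixels from a cached bitmap into a
// destination buffer. Strides are in pixels. This is roughly what
// "painting a bitmap" costs on the CPU: one memcpy per row.
void blit(const uint32_t *src, size_t srcStride,
          uint32_t *dst, size_t dstStride,
          size_t w, size_t h)
{
    for (size_t y = 0; y < h; y++)
        memcpy(dst + y * dstStride,
               src + y * srcStride,
               w * sizeof(uint32_t));
}
```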



    Or did you actually think that OS X drew a bunch of vectors and calculated lighting and Gaussian blur every time it wanted to draw a button?



    [snip the metal UI, which I didn't talk about because his original description was accurate]



    Quote:

THIS IS WHY THE OS X UI IS SLOW (in addition to all the anti-aliased text) - it is the design of the drawing system.



    So the UI is metal? Or did I miss something?



    It might explain why some apps are a bit slow, although I haven't heard too many complaints about FCP or Safari. And I'll bet that iPhoto's sluggishness persists even if you repaint it in Aqua, or your interface of choice.



    As for double-buffering, well of course it works most reliably. That's the point.



    Quote:

As for 4: bitmapped text works great IF:

1. the font bitmap is cached on the graphics card

2. the destination buffer is cached on the graphics card

3. the graphics card is doing all the drawing



This was the scenario on OS 9: the VRAM contained the screen image, and the graphics card had a cache of the font bitmap. Not so on OS X: the window's contents are drawn to a buffer in RAM. The CPU draws the widgets into the buffer, over the system bus. This final image is then blitted over the system bus to the graphics card (and possibly cached there à la Quartz Extreme). However, whenever this buffer changes... it's CPU time to redraw all of it. That's why moving a window is quick and smooth as glass (its contents are static and cacheable) while resizing it is slow (its contents are constantly getting redrawn by the CPU).




    Exactly, except that only Formac's graphics cards were aggressive about font caching.



So, why is this the case? Because graphics hardware hasn't caught up to what the CPU is doing yet (at least, not graphics hardware that Apple can ship across the majority of their lines). So:



    Quote:

As opposed to the graphics hardware - as it should be, and as it has been since graphics subsystems were invented for the purpose of relieving the CPU of the rendering.



    So you concede my point: First, the CPU does the rendering. It works. Then graphics subsystems are developed to make it fast. This is how it was with QuickDraw, again with 3D GLs, and again with Aqua. Every time there's been a paradigm shift in display technology, the hardware's played catch up, because you have to figure out how to make something work before you can make it work fast.



    Using your logic, there would be no 3D apps at all, because all the graphics acceleration available at the time accelerated bitmaps.



    Quote:

Messaging and events need NO attention. That is not where the speed hit on OS X is. What needs attention are the graphics layers of the OS, and the approach to drawing the UI. I don't rant, nor do I overlook how OS X has been running faster with every new release. However - the FUNDAMENTAL cause of the slowness of the UI is the process of rendering it, not Obj-C, method invocation overhead, nor Cocoa. Perhaps you will be less uninformed after you do your homework.



    Or maybe once you stop leaping around from "OS X" to "the OS X UI" I'll have a better idea of what you're talking about in the first place. Almost everything you've stated in this post is as trivially obvious as it is irrelevant to my point.



    I'm talking about application design in general. Event and messaging speed is crucial to that, especially in Cocoa, and especially with Objective-C.



    Quote:

To optimize an app, you speed up the portion of code using the most time [after it is functionally correct - as you point out] - and this is not Obj-C dynamic messaging. IF the slow portion of your code is due to a poor DESIGN, then optimization requires a change in the design of your code - tweaking message passing is a waste of time.



    Nevertheless, 1) you have to wait until it's functionally correct, which is my point, and 2) making something work fast often involves leaning less heavily on Obj-C's messaging and Cocoa's objects by Apple's own advice. You are putting words in my mouth, I suppose because I have not taken the time to spell out the obvious. Well, I'll spell out the obvious: 90% of optimization is algorithm design. OK? Are we on the same page? Now, given that, can we talk about something interesting, please?
  • Reply 66 of 98
TOPIC: OS X UI slow



    amorph - honestly - did you even try what I told you to do in Interface Builder?



Look - you design a program first, optimize later. Fine. But optimization != redesign. For the OS X UI to be fast would require a redesign of the way it is drawn.



iPhoto isn't an FPS. No kidding. My point is: the graphics hardware can draw very complicated images faster than the CPU can push simple bitmaps over the system bus - which is how OS X is drawing its UI.



Yes - I have written applications in Obj-C, using the Cocoa frameworks. Your wording sounds weird. Have you written applications in Obj-C using Cocoa? How about C++ with PowerPlant? What about C++ using Qt? C++ using MFC? C++ using MacApp [back in the day]? I have, and am familiar with all of them. I recognize that there is something truthful in what you are saying: message passing is slower than a function call. However, my point is that this is not a bottleneck of the OS X UI, nor of any performance-critical code in OS X apps.



    Quote:

    What on earth does this have to do with bitmaps, which are precalculated and cached on the video card (hint: textures)?



Simply this: the textures that comprise a window on OS X are NOT cached on the video card during performance-critical code, i.e. rendering the window. [Quartz Extreme does use cached window buffers to hardware-accelerate window moving... which is fast as a result. Window resizing, and general window rendering, is slow.] The CPU renders the window in system memory, after which it is copied over the AGP bus to the graphics card. This is slower than simply having the graphics subsystem render it to VRAM all by itself. It's not that the graphics cards have to "catch up" to the CPU; it's that the OS is using TONS of bitmaps that have to be pushed over the system bus, and is not using the painting algorithms implemented in hardware on the graphics subsystem. There are no algorithms that need to be put into hardware here... the OS simply needs to use what's there. MAYBE when we all have 128 MB+ graphics cards Apple can use its current approach, cache EVERYTHING on the graphics card, and then we'll get a fast UI thanks to the graphics subsystem. However, this is totally unnecessary.



The important part is that the graphics hardware is not being properly exploited, due to bad design of the drawing layer - which is procedural, and independent of the framework/API used to design an app. Optimization won't fix it; a redesign of the way the graphics subsystem does things, and the elimination of procedurally rendered textures and unnecessary bitmap usage, would.



iPhoto is slow because of heavy graphics drawing, but also - and unnecessarily - because of textured window mode. A widgetless textured window is extremely slow in comparison to a normal window. And a normal window is extremely slow compared to what is possible if all drawing is done by the graphics subsystem into VRAM, not by the CPU into main memory.



Safari is quick for general use because Apple has incorporated direct-to-screen rendering and selective updating of the graphics buffer to speed screen redraws. However, this does little for window resize performance, which is still very slow.



Without trying to be condescending [as I was before] - really, try what I told you to do in Interface Builder. You'll notice a huge difference, and also notice that retained and non-retained are non-functional. I said double buffering is what works reliably only because I couldn't verify that retained and non-retained fail on all systems, even though everything indicates they do. It's not that double buffering works reliably; it's that the alternatives were broken by Apple [possibly purposely, because they worked at varying times in developer previews...].



    Your point was that: first the CPU does the rendering, it works, then graphics subsystems are developed to make it fast.



The thing is: these subsystems have already been developed; Apple isn't exploiting the hardware already in your Mac. Even with all of the bitmap usage in the OS X UI, the hardware could be better exploited. Remove textured/bitmapped windows, use hardware-accelerated painting on the graphics card, and things would fly.



As for application design in general: events and messaging are crucial to that? Well, method invocation as used in OS X accounts for a VERY VERY VERY small share of an OS X app's CPU time. If iMovie is slow, it ain't method invocation. NONE of the performance-critical code on OS X is held up by method invocation.



Open IB, make a window, set it to non-retained, test the UI, resize the window. What you will see is the window painted white (very quickly), followed by a visibly slow painting of the window with the background: stripes for normal windows, brushed metal for textured windows. If this slow process of painting windows with huge bitmaps were eliminated, and a solid color or a gradient used instead [or even a clever use of lines with varying colors], the UI would be MUCH MUCH faster. It is not slow because of method invocation. Please try it - I think you'll see what I'm saying.
  • Reply 67 of 98
programmer Posts: 3,458 member
Wow, this topic wandered from the original point. Both Cocoa and Carbon are fully double-buffered, both use Quartz, which rasterizes using the CPU & MPX bus, and both get the benefits of Quartz Extreme's use of the GPU to do the compositing.



    QE's approach is to cache bitmaps and unchanging window buffers in both AGP-accessible RAM and VRAM. Buffers which change must still be drawn to using the CPU, but everything else the GPU can do by itself and that leads to tremendous performance improvements in the general operation of the GUI because much of the time the elements are static.



The Quartz rasterizer itself cannot yet be hardware accelerated because it is based on spline rasterization, which is something that current GPU hardware doesn't really do (the new GeForce FX might have some potential there). Programmable GPUs might be contorted into doing some of this, but the benefits are probably marginal. I don't really expect Quartz to change, but the arrival of the 970 will obviously hugely increase drawing performance, in a manner similar to what Quartz Extreme did for the compositing. The rumoured VSPs in the chipset might be useful for this kind of thing as well, but we'll have to wait and see about that one.





    Edit: iPhoto isn't necessarily slow due to the Quartz engine, however. It was probably slinging its data around more than it needed to. You would have to actually profile iPhoto to tell what the root cause of its speed problem is/was.
  • Reply 68 of 98
    Quote:

    Originally posted by Programmer

Wow, this topic wandered from the original point. Both Cocoa and Carbon are fully double-buffered, both use Quartz, which rasterizes using the CPU & MPX bus, and both get the benefits of Quartz Extreme's use of the GPU to do the compositing.



    QE's approach is to cache bitmaps and unchanging window buffers in both AGP-accessible RAM and VRAM. Buffers which change must still be drawn to using the CPU, but everything else the GPU can do by itself and that leads to tremendous performance improvements in the general operation of the GUI because much of the time the elements are static.



The Quartz rasterizer itself cannot yet be hardware accelerated because it is based on spline rasterization, which is something that current GPU hardware doesn't really do (the new GeForce FX might have some potential there). Programmable GPUs might be contorted into doing some of this, but the benefits are probably marginal. I don't really expect Quartz to change, but the arrival of the 970 will obviously hugely increase drawing performance, in a manner similar to what Quartz Extreme did for the compositing. The rumoured VSPs in the chipset might be useful for this kind of thing as well, but we'll have to wait and see about that one.




Exactly. My point stands about the AGP bus being slower than the memory subsystem on a graphics card, resulting in slower performance than if the card were doing the drawing from local memory - or, better still, algorithmically.



The 970 will help things due to the sheer performance improvement in the chip, and (potentially more importantly) due to the improvement in bus speed.



However... for the last time... the speed of the OS X UI is NOT held up by method invocation in Obj-C.



Perhaps steering back onto something future-hardware- and AMD-related would be in order.
  • Reply 69 of 98
airsluf Posts: 1,861 member
  • Reply 70 of 98
    Quote:

    Originally posted by AirSluf

    Maybe AMD can manufacture that Raycer chip you so dearly love?



    [ducks!]



    Actually, I agree with almost all of what you are saying here. The one difference is that I agree with Programmer and think this next generation of graphics chip is finally getting to the point of being able to do the general purpose content creation rendering that is required for real Quartz acceleration. Further development of programming tools like Cg will help as well. QE was an excellent idea and helps markedly, but as you point out it doesn't help in the rendering at all.




Raycer - I'll get you yet, AirSlurf...
  • Reply 71 of 98
    Quote:

    Originally posted by grad student

Exactly. My point stands about the AGP bus being slower than the memory subsystem on a graphics card, resulting in slower performance than if the card were doing the drawing from local memory - or, better still, algorithmically.



The 970 will help things due to the sheer performance improvement in the chip, and (potentially more importantly) due to the improvement in bus speed.



However... for the last time... the speed of the OS X UI is NOT held up by method invocation in Obj-C.



Perhaps steering back onto something future-hardware- and AMD-related would be in order.




    My understanding has always been:



    A - If you already know the problem (as in, you've coded for it before, at least twice) and objects are what makes sense for the solution, then Cocoa or Carbon, it doesn't really make a ton of difference.



    B - If you're starting a whole new project, don't exactly know what you're doing, but you know you need objects for the solution, and are probably going to need to iterate thru several architectures before you finally settle on one that works the best ... if you don't go with Cocoa, you're crazy.



Cocoa eventually gets much faster than Carbon if you're getting into uncharted OO territory, because you can fix architectural problems much quicker. If you already know pretty much exactly what you want (read: iterating through architectures isn't going to be an issue), then Carbon's perfectly fine, so run with it.



In short, a bad architecture running ever so slightly faster in Carbon than it otherwise would in Cocoa will get its pants whipped every time by a better architecture in Cocoa that might have run ever so slightly faster had it been written in Carbon.



    whew.
  • Reply 72 of 98
strobe Posts: 369 member
My primary beefs with Cocoa apps (other than Cocoa authors not understanding Mac HI conventions) are file I/O and text views. If you compare Carbon SimpleText to TextEdit.app, most of the problems become evident. In terms of file I/O: if you move or rename an open file or its host directory, the application will actually lose track of the file and start blindly writing to the same POSIX file path. Insane. As for the text view, the selection behavior is so insane I can't tolerate it. For example, in Carbon apps, when selecting up or down on the left or right side you can choose whether to select trailing linefeeds; in Cocoa you can't differentiate. There are other glaring problems with Cocoa's text views, but this post is long enough as it is.
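
For reference, the Carbon behavior strobe prefers comes from tracking the file itself (via an FSRef or alias) rather than its POSIX path; on HFS+ an FSRef follows the file across renames and moves. A minimal sketch; the path is illustrative:

```objc
#include <CoreServices/CoreServices.h>
#include <stdio.h>

// Track a file by FSRef rather than by path. The FSRef identifies the
// file's node on the volume, so it survives a rename or a move; a
// stored POSIX path does not.
void trackFile(void)
{
    FSRef ref;
    if (FSPathMakeRef((const UInt8 *)"/tmp/example.txt", &ref, NULL) != noErr)
        return;

    /* ...the user renames or moves the file here... */

    // Resolve the FSRef back to the file's *current* location. An app
    // that remembered "/tmp/example.txt" would now write to a stale path.
    UInt8 path[1024];
    if (FSRefMakePath(&ref, path, sizeof(path)) == noErr)
        printf("file is now at: %s\n", (char *)path);
}
```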
  • Reply 73 of 98
programmer Posts: 3,458 member
With regard to the original topic, I see 5 levels of interaction being possible. I'm not saying anything about the likelihood of these, I'm just making a list. In no particular order:



1) Apple will ship Opteron machines in addition to their PowerPC machines. Probably just running Mac OS X Server and intended for the server market. Open-source software and Apple's server support apps are about all that is needed.



2) Apple will license Mac OS X Server to somebody else for use on their Opteron machines.



    3) AMD will fab PowerPC chips as a 2nd source for IBM.



    4) Apple, AMD (and possibly nVidia?) are talking about HyperTransport and common chipsets. For the 970 this could be a memory controller with HT interface to the rest of the system, or perhaps they are sharing components at the chip level.



    5) Apple and AMD aren't talking and this is all just a pipedream of somebody smokin' too much weed.



    Did I miss any? Notice that I omitted the very unlikely (i.e. AMD designing and building a PowerPC) on purpose. Personally I think #2 is possible, #1 slightly less, and #5 the safest bet.
  • Reply 74 of 98
    Quote:

    Originally posted by Programmer

With regard to the original topic, I see 5 levels of interaction being possible. I'm not saying anything about the likelihood of these, I'm just making a list. In no particular order:



1) Apple will ship Opteron machines in addition to their PowerPC machines. Probably just running Mac OS X Server and intended for the server market. Open-source software and Apple's server support apps are about all that is needed.



2) Apple will license Mac OS X Server to somebody else for use on their Opteron machines.



    3) AMD will fab PowerPC chips as a 2nd source for IBM.



    4) Apple, AMD (and possibly nVidia?) are talking about HyperTransport and common chipsets. For the 970 this could be a memory controller with HT interface to the rest of the system, or perhaps they are sharing components at the chip level.



    5) Apple and AMD aren't talking and this is all just a pipedream of somebody smokin' too much weed.



    Did I miss any? Notice that I omitted the very unlikely (i.e. AMD designing and building a PowerPC) on purpose. Personally I think #2 is possible, #1 slightly less, and #5 the safest bet.




Yeah, I think that's the list. I just want to point out that moki was one of the first to suggest AMD as a 2nd OS X platform way, way, way, way back when - right around the time he first pointed us toward this GP-UL thing. I'm inclined to take his accuracy on that as a pretty strong suggestion that #1 might be the most likely, with #3 next, then #2. #5 will always be the safe bet, though.



I think that Apple is less likely to license than to work out a common 64-bit hardware platform that can (as much as possible) readily accept both Opteron and 970 chips.
  • Reply 75 of 98
    Quote:

    Originally posted by Programmer

With regard to the original topic, I see 5 levels of interaction being possible. I'm not saying anything about the likelihood of these, I'm just making a list. In no particular order:



1) Apple will ship Opteron machines in addition to their PowerPC machines. Probably just running Mac OS X Server and intended for the server market. Open-source software and Apple's server support apps are about all that is needed.



2) Apple will license Mac OS X Server to somebody else for use on their Opteron machines.



    3) AMD will fab PowerPC chips as a 2nd source for IBM.



    4) Apple, AMD (and possibly nVidia?) are talking about HyperTransport and common chipsets. For the 970 this could be a memory controller with HT interface to the rest of the system, or perhaps they are sharing components at the chip level.



    5) Apple and AMD aren't talking and this is all just a pipedream of somebody smokin' too much weed.



    Did I miss any? Notice that I omitted the very unlikely (i.e. AMD designing and building a PowerPC) on purpose. Personally I think #2 is possible, #1 slightly less, and #5 the safest bet.




I think #3 & #4 in equal weighting, then #2, and finally #5. #1 is not likely at all. If they have comparable performance and a good roadmap, then it will be a PPC-only future.



    MM
  • Reply 76 of 98
    My take, in order:



    Most likely:



    #1 -- only because Moki more or less said as much. I don't think he mentioned AMD specifically though, if memory serves.



    Then:



    #5 -- though it contradicts the former.



    #4 -- Why not? If the 970 and Opteron have broadly comparable bandwidth needs, then it would be in both their interests to amortise research & production costs as far as possible for their chipsets.



    #2 -- not so likely for an Opteron-only deal. But I wouldn't rule out IBM licensing OSX (all flavours) for sale through IBM's usual channels (i.e. not department stores, etc.).



    Least likely:



    #3 -- I can't see this happening. IBM's the one with the spare capacity, not AMD, the last time I heard.
  • Reply 77 of 98
    jwdawsojwdawso Posts: 389member
#1 has zero chance. The PPC is not an expensive chip, so there would be no advantage to using AMD's chip. I have no doubt that Mac OS X is running on all sorts of hardware platforms within Apple, but name one reason why it would make business sense to start selling Mac OS X on any other hardware now. It will happen only when there is no more PPC.



And riddle me this - why would Apple embrace AMD chips when AMD is on shaky financial ground? Do you think Apple would throw in the towel on PPC and then go to another chip maker where Apple would again be small potatoes? If the PPC is gone, hello Intel.



    Why is there such a romance with AMD on this board?
  • Reply 78 of 98
blackcat Posts: 697 member
    Quote:

    Originally posted by jwdawso

    Why is there such a romance with AMD on this board?



Because of a belief that AMD is not Intel, and so not evil. I find it odd, as at the end of the day AMD still makes x86 chips that indirectly eat into Apple's market share.



    The enemy of my enemy is my friend? I think people forget AMD would like PPC to go away as much as Intel.
  • Reply 79 of 98
    Quote:

    Originally posted by Blackcat

Because of a belief that AMD is not Intel, and so not evil. I find it odd, as at the end of the day AMD still makes x86 chips that indirectly eat into Apple's market share.



Well, now they make x86-64 processors, which are a 64-bit extension of a 32-bit extension of a 16-bit micro with full compatibility with an 8-bit successor of a 4-bit micro.



Just to drive home the amount of technological inelegance going on here.



    BTW, does anybody know what the registers in the x86-64 are called? EEAX?
  • Reply 80 of 98
eskimo Posts: 474 member
    Quote:

    Originally posted by jwdawso

Why is there such a romance with AMD on this board?



    Maybe because AMDers are damn sexy and smarter than the average bear?



    Quote:

    5) Apple and AMD aren't talking and this is all just a pipedream of somebody smokin' too much weed.



#5 isn't true; I met a nice Apple employee in the bathroom once.



    What an interesting topic