Carbon vs Cocoa: I'm curious

Posted in macOS · edited January 2014
Which will run faster:



- Cocoa app written in Obj-C (assume the coding is perfect)

- Carbon app written in C (assume the coding is perfect)



???



I am tempted to believe that the Carbon app will be faster, but I haven't factored in any dealings with Mach, Darwin, etc. I was hoping that I could write the Carbon app and basically have it run right on top of the hardware. (to hell with protected memory: these blocks of code are small enough to comb for errors & make bug-free, plus the extra speed will help much)

Comments

  • Reply 1 of 19
    kickaha · Posts: 8,760 · member
    Er... not a viable comparison.



    The event models are sufficiently different that you have two completely different sets of computations occurring under the hood.



    Besides, show me a 'perfectly coded' app and I'll sell you a bridge.



    You're going to get all sorts of responses to this, but I can sum up the majority of them:



    Carbon is faster because C is faster than Obj-C, d00d!



    Cocoa is faster because it's more efficient in its delegation model, you cretin.
  • Reply 2 of 19
    kickaha · Posts: 8,760 · member
    I just saw this...



    [quote]Originally posted by Splinemodel:

    I was hoping that I could write the Carbon app and basically have it run right on top of the hardware. (to hell with protected memory: these blocks of code are small enough to comb for errors & make bug-free, plus the extra speed will help much)[/quote]



    1) You CANNOT run 'right on top of the hardware'. There's this little thing called an OS in between.



    2) You CANNOT run without protected memory, since it's handled by the OS.



    3) NO block of code is small enough to be made bug-free.



    Perhaps you should start with a lower-layer treatise first? <file:///Developer/Documentation/Essentials/SystemOverview/index.html> should get you started.
  • Reply 3 of 19
    My understanding is that Carbon would be faster because of the overhead Cocoa inherently has. Since Cocoa objects are essentially links to more detailed chunks of code, calling them up takes a bit longer. Carbon has more direct access to its code, I think.



    [edit1] On the other hand, Cocoa apps are usually more succinct with their commands, so they might pull even with or past Carbon at that point.



    Of course, mixing and matching is fairly common too now, isn't it? I thought that CoreFoundation basically makes their code interoperable.



    [edit2] In response to Kickaha's comment, I don't think either Carbon or Cocoa can sneak underneath the kernel, which handles memory management (?), scheduling, access to hardware, etc. Do games even do this now? In OS 9, OpenGL and Quickdraw could get direct access to hardware. Maybe if your code only accesses OpenGL, you can bypass the kernel and the CPU and run on the GPU?? Maybe not the kernel.



    (disclaimer: I'm not an engineer in the slightest.)



    [ 09-28-2002: Message edited by: BuonRotto ]
  • Reply 4 of 19
    Hmm, Darwin and Mach are open sourced, I believe. 'Tis time to write myself a backdoor to the hardware layer.



    Reality check: not going to happen



    Essentially, I would be writing such a program core in PPC assembly, but compiled C isn't much slower than human-written assembly -- it is often faster.



    Anyway, I'll have to investigate further. I also need to know if there are performance penalties for switching between Carbon and Cocoa in a single app. Thanks for the links.
  • Reply 5 of 19
    amorph · Posts: 7,112 · member
    For the most part, Carbon and Cocoa both end up calling CoreFoundation to get their work done. So if you wanted to avoid overhead, you could always try burrowing down and calling CoreFoundation routines directly (to the extent that they're documented...).



    If there is a measurable overhead to commingling Cocoa and Carbon, I've never heard of it.



    Short of some really dedicated profiling, I can't see any way to say that one will be faster than the other. It depends on what you're trying to get each part of the OS to do. Of course, if it addresses your problem, you can beat any compiler silly by (efficiently!) handcoding AltiVec routines, but that doesn't engage either Carbon or Cocoa.



    [ 09-28-2002: Message edited by: Amorph ]
  • Reply 6 of 19
    kickaha · Posts: 8,760 · member
    BuonRotto: NOTHING gets past the kernel. Period. If you want to bypass the kernel, you need to write your own OS to boot from. It's as simple as that. You could, theoretically, hack the Darwin source to provide that backdoor Splinemodel mentioned, but that's just horrendous. You'd still have to write support for your own OS functions since you'd be bypassing the OS to access hardware with your code... i.e., you're back to writing your own OS.



    Another thing to remember... premature optimization is not only wasted time, it generally leads to other problems down the line with system design.



    What task are you trying to perform that you think you need to toss everything that the OS gives you?



    My advice? Use Cocoa and Obj-C. This will give you a *HUGE* amount of pre-done work and elegance that you can benefit from. Remember, other than the method invocations, Obj-C *IS* C. The functions run just as fast, internally, as raw C... because they are. Then, profile your app, and *then* optimize the crud out of it.



    Writing mega-ultra-super-keen-fast hand tuned code that accesses the hardware directly is a pipe dream under OS X.
  • Reply 7 of 19
    [quote]Of course, if it addresses your problem, you can beat any compiler silly by (efficiently!) handcoding AltiVec routines, but that doesn't engage either Carbon or Cocoa.[/quote]



    I will have to look into this, if you're indeed serious.



    For some background info: I'm not doing anything that's going to change the world. I have a mostly open ended project for a class, and doing some programming work that uses computer hardware directly (or close to directly) is the gist of it.
  • Reply 8 of 19
    kickaha · Posts: 8,760 · member
    He's serious in that, for SIMD style computations, you can currently beat the compiler for AltiVec implementations because the gcc AltiVec generation, um, sucks. If you have a good thorough knowledge of AltiVec at the assembler level, you can hand craft code that beats the compiler's emitted code, easily.



    But this *ONLY* works for SIMD computations... *NOT* general computation or programming. AltiVec isn't a magic box.



    Code FIRST, optimize LAST. Otherwise you're just wasting your time.



    [ 09-28-2002: Message edited by: Kickaha ]
  • Reply 9 of 19
    amorph · Posts: 7,112 · member
    Or, as the NeXTies said, make it work, then make it fast.



    The trick to really getting a PPC smoking - and this applies to AltiVec code also - is to be aware of how best to use the onboard caches and the cache hinting instructions. It's a black art that you won't learn quickly or easily, and which is of limited use unless you spend a lot of time banging on bare metal (which, in OS X, you won't be).
  • Reply 10 of 19
    Thanks for all the advice.



    "Code first, optimize last"

    This is an EE project. The point of the project is to write optimized code running close to the hardware. So the program can work right and still fail to meet the objective.



    I'm guessing that DSP stuff can be made to work well as SIMD processes.



    Another note: I'm writing for OS X because it is, in my opinion, the best OS for the PPC platform. The PPC is the reason I'm using OS X. I like banging on metal, and understanding the machine architecture is fine with me. Reminder: I'm an EE student, not a CS student.



    [ 09-28-2002: Message edited by: Splinemodel ]
  • Reply 11 of 19
    [quote]Originally posted by Splinemodel:

    Thanks for all the advice.

    "Code first, optimize last"

    This is an EE project. The point of the project is to write optimized code running close to the hardware. So the program can work right and still fail to meet the objective.

    [ 09-28-2002: Message edited by: Splinemodel ][/quote]





    Assuming there's no perfect code, only attempts at perfection:

    Generally, code will end up better and faster in a framework that makes iterating on versions easy than in a competing framework that promises more raw speed but, by assuming perfection, makes iteration difficult.



    In short, just go Cocoa, it's cooler.





    [Laughing]
  • Reply 12 of 19
    [quote]Cocoa is faster because it's more efficient in its delegation model, you cretin.[/quote]



    I tend to be of this school. If you have to load shared libraries, sure, that's overhead. But once those libs have been loaded, they stay in RAM, so the next OO app that launches starts faster. And so on. It's also 'faster' to develop for, and faster for the purposes of upgrading stuff system-wide (e.g. glibc or gtk+).



    I use bash scripts that launch at startup to preload various things into ram, so that all my common apps/tasks are fast. This, arguably, also makes my computer faster (though you have to wait longer at startup).
  • Reply 13 of 19
    kickaha · Posts: 8,760 · member
    [quote]Originally posted by Splinemodel:

    Thanks for all the advice.

    "Code first, optimize last"

    This is an EE project. The point of the project is to write optimized code running close to the hardware. So the program can work right and still fail to meet the objective.[/quote]



    I hear ya, but think about this... when do you know when it's optimized?



    Get it running first. Benchmark it. Then start optimizing. Continue doing so until you can't squeeze out any more. At that point reconsider your design, see if there isn't a more efficient approach. If so, repeat. If not, stop, you're done.



    90% optimized functioning code will serve you better, both in the real world and in class, than 100% optimized *non*-functioning code.



    If you try to do a fully optimized version first off, you're going to make the task of making it work much harder. Trust me, I've been doing this for quite a few years... and even raw assembler (of which I've written more than my share, most without an OS) is best developed this way.
  • Reply 14 of 19
    123 · Posts: 278 · member
    Splinemodel, as I understand it:



    -Your code should be small, low-level, run fast, consist of only a few functions and should show that you understand basic optimization techniques and computer hardware.



    -Your code doesn't have to be expandable, doesn't need to integrate with other components, and doesn't need a beautiful, fast GUI. As for your code, design is completely unimportant; OO programming is basically a waste of time here.





    Use plain C and gcc. If you need a GUI or graphical output, use Carbon (C, C++) or Cocoa (Obj-C); it doesn't matter at all because you'll do the important stuff in C/assembler anyway.



    I don't know your task, but you should study:



    - computer architecture in general (processor pipeline stages, stalls/hazards, cache architectures etc.)



    - optimization techniques (software pipelining etc.)



    (I assume, you've already done that)



    - basic PPC assembler



    - your processor (pipeline, register sets, types of ALUs, instruction timing, load folding etc.), caches, memory etc.



    Then:

    - Write a C program that works + include benchmarking code

    - Compile without compiler optimization

    - Try to understand the assembler code

    - Compile with compiler optimizations turned on

    - Benchmark both versions

    - Look at the optimized assembler code, try to understand why it is faster

    - Try to improve speed by optimizing your C source (loop unrolling etc.)



    Repeat these steps until you can't see a way to further improve your C code.



    - Try to optimize the generated assembler code (never start from scratch!) based on what you know about the architecture. Identify hazards and find a workaround. There's always a way to beat the compiler.



    If you think that your task might benefit from SIMD instructions, write a vectorized version of your function in plain C (or pseudo-code) first. It's likely you'll find out that you can't parallelize your code after all. If you succeed, learn how to program AltiVec.



    (not recommended: switch to OS 9 if raw speed is more important than the optimization exercise; you won't gain more than a few percent, though)



    123
  • Reply 15 of 19
    Thanks 123, I think now I understand what kickaha meant by "code first, optimize last."



    Thanks for all the directional guidance. I didn't know how technical AppleInsider was, but I'm pleasantly surprised.



    Thanks once again. If you have anything that you might like to say, send email.



    By the way: send it to jnorair(at)princeton.edu

    (hoping to avoid address finding spambots)
  • Reply 16 of 19
    kickaha · Posts: 8,760 · member
    Uh yeah, what he said.



    Get it running, get your baseline for performance, then start tweaking the crap out of it... but get that baseline first. Measurements are useless unless you know what you're measuring *against*.



    Oh heck, just listen to 123.
  • Reply 17 of 19
    amorph · Posts: 7,112 · member
    Definitely.



    If nothing else, think about what would happen if you optimized the dickens out of a routine... and then found a bug in the logic.



    Work the logic out first.



    If I'm not mistaken, Mot has a wealth of documentation on optimizing for the PPC. You might have to dig around, but it's there.
  • Reply 18 of 19
    kickaha · Posts: 8,760 · member
    [quote]Originally posted by Amorph:

    If nothing else, think about what would happen if you optimized the dickens out of a routine... and then found a bug in the logic.[/quote]



    Been there, done that, bought the t-shirt that said "D'OH!"



    Listen to your elders, Spline...
  • Reply 19 of 19
    dfiler · Posts: 3,420 · member
    [quote]Originally posted by Kickaha:

    Been there, done that, bought the t-shirt that said "D'OH!"

    Listen to your elders, Spline...[/quote]



    I third the sentiment. Achieve functionality prior to optimization. With properly written test cases, you'll be able to ensure that future optimization doesn't produce incorrect output.



    Also, note that the most effective optimization comes from iterating on efficient high-level programmatic flow structures. Modern compilers do an excellent job of generating optimized binaries for most things except special instruction sets like AltiVec. I'm not sure this matters for an EE project dealing with lower-level optimization, but it's good to keep in mind. While hardware-level optimization is still called for in some instances, in most cases you'll get a better return on development resources by working on high-level optimization. I took an algorithms course about 7 years ago and it was the most valuable education I've ever received.