John Siracusa worries OS X may become like Copland

In his new blog "Fat Bits" over at Ars Technica, Siracusa discusses how OS X's native development language, Objective-C, lacks the automatic memory management Microsoft offers with C#, and how this could impact the OS X platform down the road.



Here's the link

http://arstechnica.com/staff/fatbits.ars/2005/9/27/1372



Any thoughts?

Comments

  • Reply 1 of 47
    pb Posts: 4,255 member
    Automatic memory management is a big help in avoiding memory leaks, and I think Apple should take this path at some point. But I am not sure it is as critical as the article makes it out to be.
  • Reply 2 of 47
    placebo Posts: 5,767 member
    Copland never came out. OS X has been out with millions of satisfied users for almost five years.
  • Reply 3 of 47
    chucker Posts: 5,089 member
    Quote:

    Originally posted by Placebo

    Copland never came out. OS X has been out with millions of satisfied users for almost five years.



    Have you read the two articles? The point is, Siracusa believes that OS X lacks decent support for a crucial feature, just as OS 7 lacked any support for two then-crucial features, preemptive multitasking (PMT) and memory protection (MP). Copland was created to implement those features, but it took too long to become marketable, which is why it was scrapped. Similarly, a version (or replacement) of OS X that supports AMM could ultimately take too long to develop compared to competitors that are already ahead of Apple.



    Whether I agree with Siracusa that AMM is even a crucial feature is another matter. I do feel, though, that OS X could use a beginner-level programming language akin to Visual Basic .NET.
  • Reply 4 of 47
    mr. me Posts: 3,221 member
    Quote:

    Originally posted by MPMoriarty

    In his new blog "Fat Bits" over at Ars Technica, Siracusa discusses how OS X's native development language, Objective-C, lacks the automatic memory management Microsoft offers with C#, and how this could impact the OS X platform down the road.



    Here's the link

    http://arstechnica.com/staff/fatbits.ars/2005/9/27/1372



    Any thoughts?




    Truth is that which works. Mac OS X is a relatively bug-free shipping OS. The shipping version of Windows is a bloated, bug-ridden mess that Microsoft now acknowledges is a mistake. So much for the weaknesses of Objective-C and the strengths of C#.
  • Reply 5 of 47
    Well, the question really becomes...



    How is NOT having AMM going to hinder Mac OS X in the future? I can see how MP and PMT were important, because those were features that really affected users. AMM affects users somewhat, but it's more of a feature that developers would see benefits from, not the average user.
  • Reply 6 of 47
    chucker Posts: 5,089 member
    Quote:

    Originally posted by MPMoriarty

    AMM affects users somewhat, but it's more of a feature that developers would see benefits from, not the average user.



    Affected developers in the near future are affected users in the further future.
  • Reply 7 of 47
    hiro Posts: 2,663 member
    Siracusa is an ignorant blowhard on this particular topic. OS X is already adding an equivalent memory management model to C#. The very same thing he criticized in OS X/Obj-C: "And no, don't talk to me about adding garbage collection to Objective-C. That is exactly the sort of "half-way" mindset that led to Copland." is EXACTLY the thing used in C#/.NET and Java everywhere for automatic memory management!



    He also fails to mention that just about any high-performance, mission-critical, or realtime system avoids garbage collection like the plague, because it makes the temporal execution of a process non-deterministic.



    Hmmm, doesn't that make a language like Obj-C, which gives you both manual AND auto memory management, a more flexible choice???
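
    For anyone who hasn't written Cocoa code, here's a minimal sketch (mine, not from either article) of the manual side of that flexibility: the stock NSObject retain/release/autorelease conventions. Nothing below is hypothetical; it's the ordinary Cocoa ownership protocol.

        // Manual reference counting in Cocoa: you say when you take and
        // give up ownership, and deallocation happens at a predictable point.
        #import <Foundation/Foundation.h>

        int main(void)
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            NSObject *obj = [[NSObject alloc] init];  // retain count 1: we own it
            [obj retain];                             // bump the count to 2
            [obj release];                            // back down to 1
            [obj autorelease];                        // deferred release, owed to the pool

            [pool release];                           // pool drains; obj is deallocated here
            return 0;
        }

    Nothing here gets collected for you, but nothing is non-deterministic either, which is exactly the tradeoff I'm talking about.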



    Disclaimer: I let my Ars subscription lapse because the site is sliding down a slippery slope. It got too popular and is now becoming too polarized by more and more opinionated reporting, and the Battlefront attitude is everywhere, not just in the Battlefront forum. It's almost impossible to avoid a flame war there anymore.
  • Reply 8 of 47
    chucker Posts: 5,089 member
    Quote:

    Originally posted by Hiro

    OS X is already adding an equivalent memory management model to C#.



    I presume you mean Obj-C.



    (edit) Never mind, my grammar parser didn't work. (Not that Hiro's grammar was entirely correct :P )
  • Reply 9 of 47
    relic Posts: 4,735 member
    Quote:

    Originally posted by Mr. Me

    Truth is that which works. Mac OS X is a relatively bug-free shipping OS.



    True, but the Finder needs a whole lot more work. I find myself killing Aqua and using Xwin-Nextstep half the time because I'm getting tired of all the app crashes and spinning f#cking balls.
  • Reply 10 of 47
    kim kap sol Posts: 2,987 member
    Quote:

    Originally posted by Chucker

    I presume you mean Obj-C.



    He means that OS X (Obj-C) is going to get a model similar to the one found in C#.
  • Reply 11 of 47
    kaiwai Posts: 246 member
    Quote:

    Originally posted by MPMoriarty

    In his new blog "Fat Bits" over at Ars Technica, Siracusa discusses how OS X's native development language, Objective-C, lacks the automatic memory management Microsoft offers with C#, and how this could impact the OS X platform down the road.



    Here's the link

    http://arstechnica.com/staff/fatbits.ars/2005/9/27/1372



    Any thoughts?




    Darling, we have Java - and maybe we'll see Objective-Java at a later date, as Sun works towards making the JVM friendlier to non-Java languages - but until then, I think that Objective-C still offers a lot to developers, and it's just a matter of Apple properly marketing their EXISTING technology, which can already solve a LOT of problems that programmers are still handling manually rather than using the tools and language features that Apple has made available.



    Oh, and with regard to garbage collection, Objective-C/Cocoa already includes a limited amount of garbage collection, so let's not go nuts on the garbage collection front.



    What is required is not necessarily a new framework and language - Microsoft NEEDS one because Win32 sucks so royally, and C/C++ with Win32 is the equivalent of being kicked in the nuts a number of times, then having a cactus rammed up one's ass - not a pleasant experience by any stretch of the imagination.



    Cocoa + Objective-C is a match made in heaven; and as long as Apple keeps working towards providing better development tools for all the coders out there, the issue isn't so much features and garbage collection. It's about developing tools that make tracking development easier, especially on large projects, and about providing as much ready-to-use code as possible so that developers don't have to write their own from scratch each time they want a certain function. It shouldn't be about removing control; the move should be towards providing more feature-complete classes, better documentation, and superior tools.
  • Reply 12 of 47
    kickaha Posts: 8,760 member
    I'm all for enforcing policy through language constructs over idioms, but memory management is such an app-specific thing that I can't see a strong benefit to a one-size-fits-all approach.



    GC is *GREAT*... when you can get away with it. There are times when you simply can't, and situations where GC barfs and leaves things hanging out there in the wind anyway. The best GC algorithms for completeness (boxcar, mark/sweep, etc.) have hellacious run-times on any but rather trivial systems, while the efficient ones (ref-counting, etc.) have a bad propensity for leaving dead objects lying about in cross-referring cycles. You know... memory leaks. You *STILL* have to come up with schemes for dealing with those cycles in ref-counted GC systems; it doesn't make them just go away.
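
    To make that concrete, here's a minimal sketch (a made-up Node class, nothing from Cocoa itself) of the cross-referring cycle problem under plain reference counting: each object holds the other's count above zero, so neither dealloc ever runs.

        // Two objects that retain each other leak under reference counting.
        #import <Foundation/Foundation.h>

        @interface Node : NSObject
        {
            Node *other;    // retained reference to another node
        }
        - (void)setOther:(Node *)aNode;
        @end

        @implementation Node
        - (void)setOther:(Node *)aNode
        {
            [aNode retain];     // take ownership of the new value
            [other release];    // drop ownership of the old one
            other = aNode;
        }
        - (void)dealloc
        {
            NSLog(@"dealloc");  // never printed in the cyclic case below
            [other release];
            [super dealloc];
        }
        @end

        int main(void)
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            Node *a = [[Node alloc] init];
            Node *b = [[Node alloc] init];
            [a setOther:b];     // a retains b
            [b setOther:a];     // b retains a: a cross-referring cycle
            [a release];        // our claims are gone...
            [b release];        // ...but each node still holds the other at count 1

            // Both nodes are now unreachable yet never deallocated: a leak.
            // The usual workaround is a non-retaining ("weak") back-pointer on
            // one side, i.e. a plain assignment with no retain.

            [pool release];
            return 0;
        }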



    The people I know that use GC in serious production code use the more time-expensive GC engines to *debug*... it lets them know where there's a cycle, they take care of it, then they turn the damned thing off for deployment, or drop back to a ref-counted system at most.



    Because of that, GC becomes a debugging *tool* instead of an expected panacea. It helps the developer pinpoint problems, but it does *NOT* make them just go away. There's a need in software engineering for all sorts of memory and resource management. GC, in its various forms, is *one* way of dealing with it. Personally, I like ref-counting with some manual selection of what gets automatically handled, and what doesn't, with a mark/sweep tool to detect those pesky cycles for me. (C++ is even moving in this direction with a combination of auto_ptr, shared_ptr, and unique_ptr.)
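
    And for contrast, a toy mark/sweep pass (hand-rolled for illustration, not any real collector) over the same kind of graph, showing why a tracing tool catches exactly those orphaned cycles: anything unreachable from the roots is garbage, cross-references or not.

        // A toy mark/sweep over a tiny object graph; plain C structs keep it short.
        #import <Foundation/Foundation.h>

        typedef struct Obj {
            struct Obj *refs[2];   // outgoing references (at most two, for brevity)
            BOOL        marked;
        } Obj;

        static void mark(Obj *o)
        {
            if (o == NULL || o->marked) return;
            o->marked = YES;       // mark phase: flag everything reachable
            mark(o->refs[0]);
            mark(o->refs[1]);
        }

        int main(void)
        {
            Obj root = { { NULL, NULL }, NO };
            Obj a    = { { NULL, NULL }, NO };
            Obj b    = { { NULL, NULL }, NO };

            root.refs[0] = &a;     // root -> a
            a.refs[0]    = &b;     // a -> b
            b.refs[0]    = &a;     // b -> a: a and b refer to each other

            root.refs[0] = NULL;   // the root drops a; the a<->b cycle is orphaned

            mark(&root);           // trace everything reachable from the roots

            // Sweep phase: whatever is still unmarked is garbage, cycles included.
            Obj *heap[] = { &a, &b };
            for (unsigned int i = 0; i < 2; i++)
                NSLog(@"object %u is %s", i, heap[i]->marked ? "live" : "garbage");

            return 0;
        }

    Tracing the whole live set like that is the expensive part, of course, which is exactly why I'd rather run it as a diagnostic pass than leave it switched on in a deployed app.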



    I think Siracusa has a point, but I think it's more a matter of *perception* than developer need. The same developers that think "GC cures all" are generally those that don't really understand the issues, and think VB is just groovy. Those folks are more likely to say "C# has GC, and the MAC doesn't? l@m3!", but the lowest common denominator shouldn't be your target audience for developers, IMO.



    I agree that conventions and idioms are a poor way to enforce memory management policy, and I'd like to see better support for same at the language or even library level. I disagree, however, that GC is The One Thing that will kill Apple... unless folks like John (sorry man) keep spreading FUD about it. GC is an important *tool*, and one that we definitely need in the toolbox, but you don't force someone to use a screwdriver to hammer in a nail. GC needs to be a dev-selectable tool, one that is available, but not mandatory.
  • Reply 13 of 47
    john Posts: 99 member
    Quote:

    OS X is already adding an equivalent memory management model to C#. The very same thing he criticized in OS X/Obj-C: "And no, don't talk to me about adding garbage collection to Objective-C. That is exactly the sort of "half-way" mindset that led to Copland." is EXACTLY the thing used in C#/.NET and Java everywhere for automatic memory management!



    You seem to be assuming that I don't know that C# and Java use forms of garbage collection. I'm not sure where you got that idea. When I called Obj-C with GC a "half-solution," I meant that it is an attempt to have your cake and eat it too, so to speak, much like Copland was.



    IOW, it's an attempt to add a feature to an existing technology where it may not fit in well and where it doesn't really solve the problem completely (e.g., Copland's plan to add MP/PMT to Mac OS while still allowing some processes to run "unprotected"), instead of cutting the Gordian knot by pushing the old ways off to the side in favor of a (relatively) clean slate (e.g., Mac OS X's design with its BSD base, Cocoa, and the Classic environment for backward compatibility, which allowed Carbon to abandon the worst of the cruft and break compatibility where necessary).



    In part 2 you can read more about why I think Obj-C with GC is not an adequate solution in the long run.



    Quote:

    He also fails to mention that just about any high-performance, mission-critical, or realtime system avoids garbage collection like the plague, because it makes the temporal execution of a process non-deterministic.



    I also talk more about performance issues in part 2.



    Quote:

    Disclaimer: I let my Ars subscription lapse because the site is sliding down a slippery slope. It got too popular and is now becoming too polarized by more and more opinionated reporting, and the Battlefront attitude is everywhere, not just in the Battlefront forum. It's almost impossible to avoid a flame war there anymore.



    That's an interesting statement from someone who began his post with "Siracusa is an ignorant blowhard."
  • Reply 14 of 47
    john Posts: 99 member
    Quote:

    I agree that conventions and idioms are a poor way to enforce memory management policy, and I'd like to see better support for same at the language or even library level. I disagree, however, that GC is The One Thing that will kill Apple...



    Kill Apple? That's way beyond what I was talking about. I was just thinking ahead, trying to see where Mac OS X may eventually find itself behind the competition, and even behind the "minimum standard" of the industry. Classic Mac OS found itself in this position and it didn't kill Apple--and that was before the iPod. As I wrote in part 2, I think Mac OS X is well positioned in all areas except the one I talked about (memory-managed language/API).



    Quote:

    GC is an important *tool*, and one that we definitely need in the toolbox, but you don't force someone to use a screwdriver to hammer in a nail. GC needs to be a dev-selectable tool, one that is available, but not mandatory.



    In the desktop and server markets, automatic memory management will eventually become like virtual memory is today: on by default, and not worth disabling. It took a while for that to be the case, especially in desktop OSes. But it happened, and the same thing will happen for automatic memory management. Yes, there will still be markets and applications where it doesn't make sense (e.g., embedded systems don't always use virtual memory even today) but I'm talking specifically about server/desktop OSes like Mac OS X.
  • Reply 15 of 47
    zo Posts: 3,117 member
    i love these little moments
  • Reply 16 of 47
    What moments are those?
  • Reply 17 of 47
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by John

    Kill Apple? That's way beyond what I was talking about. I was just thinking ahead, trying to see where Mac OS X may eventually find itself behind the competition, and even behind the "minimum standard" of the industry. Classic Mac OS found itself in this position and it didn't kill Apple--and that was before the iPod. As I wrote in part 2, I think Mac OS X is well positioned in all areas except the one I talked about (memory-managed language/API).



    Well, that and multi-threading in Cocoa.



    I appreciate your pointing out an area where you think it could be improved - I agree that it needs improvement, I just think we disagree on the degree to which it needs tinkering. My 'kill Apple' was sheer hyperbole on my part, and nothing to be construed as a speculation of yours. Oops.



    Quote:

    In the desktop and server markets, automatic memory management will eventually become like virtual memory is today: on by default, and not worth disabling. It took a while for that to be the case, especially in desktop OSes. But it happened, and the same thing will happen for automatic memory management. Yes, there will still be markets and applications where it doesn't make sense (e.g., embedded systems don't always use virtual memory even today) but I'm talking specifically about server/desktop OSes like Mac OS X.



    I understand where you're coming from, but until the *algorithm* exists that can efficiently handle cyclic groups, it's pretty much a losing game. With higher levels of abstraction come denser object groupings. With denser object groupings come worse performance with fully auto GC. Hardware can make the existing abstractions faster, but since you're hoping to increase the abstraction level simultaneously, and relying on an inefficient GC to handle any problems that may crop up...



    This is one place where speeding up the hardware alone isn't getting the gains you might think it is. Now, it may be that tomorrow someone will produce The Golden GC Algorithm, and the whole point is moot, but I'm not convinced that that is going to happen anytime soon.



    Now, if the level of abstraction were *fixed*, then yes, the hardware gains would almost certainly, eventually, make fully-auto GC viable, if not predictable. But you state that it is your belief that fully-auto GC will only help push the level of abstraction up.



    I like your focus on levels of abstraction, it's actually my primary research area, but I think that you're hoping to get increased abstraction without paying a cost in efficiency, and I see the two as in direct contention when GC is involved. IMO, automatic GC is great for simple (and I mean simple) systems, but once you get into situations involving potential race conditions, cyclic orphan groupings, and such, it actually becomes more beneficial to performance *AND* to abstraction to have the developer become more involved. It's kooky, but it's true, in my experience. Algorithms that try and solve every possible boundary condition find themselves bogged down in minutia at the syntactic level, when the decisions can quickly be made at the semantic level by a knowledgeable engineer. Because of that, using GC as a diagnostic tool as much as, or perhaps more than, a runtime catch-all, is a much more productive use of the technology, *and* leads to a better encapsulation of proper abstractions in the developer's mind.
  • Reply 18 of 47
    mrmister Posts: 1,095 member
    I love it when Mr. Siracusa shows up, and suddenly a lot of tails are tucked between people's legs.



    It pays to keep your rhetoric under control--because you get called on it, when you least expect it.
  • Reply 19 of 47
    john Posts: 99 member
    Quote:

    I understand where you're coming from, but until the *algorithm* exists that can efficiently handle cyclic groups, it's pretty much a losing game. With higher levels of abstraction come denser object groupings. With denser object groupings come worse performance with fully auto GC. Hardware can make the existing abstractions faster, but since you're hoping to increase the abstraction level simultaneously, and relying on an inefficient GC to handle any problems that may crop up...



    First, you assume that no new algorithms will be invented. Second, there are plenty of existing garbage-collected languages that can already "run with the big boys" in certain benchmarks. And don't forget that "most" code (as measured by lines of code) is not performance-critical. The parts that are can always drop down to "unmanaged" code as needed (while we all wait for hardware to get "fast enough" for task XYZ).



    Quote:

    This is one place where speeding up the hardware alone isn't getting the gains you might think it is. Now, it may be that tomorrow someone will produce The Golden GC Algorithm, and the whole point is moot, but I'm not convinced that that is going to happen anytime soon.



    Hardware will cure this; it's only a matter of time. How much time is open for debate, but the eventual result is not.



    To give just one simple example of how this might come to pass, imagine that RAM is "unlimited" by today's standard, and 10000x faster to boot. Suddenly, hunting down and freeing every last scrap of memory that can be freed the second it should be freed is a lot less important. Sound fantastical? Imagine what a computer scientist in 1975 would think of today's computers.



    Quote:

    I think that you're hoping to get increased abstraction without paying a cost in efficiency



    Quite the contrary. I expect to get increased abstraction at a huge efficiency cost--in terms of CPU cycles and memory usage, that is. Human efficiency will increase, however.



    Quote:

    Algorithms that try and solve every possible boundary condition find themselves bogged down in minutia at the syntactic level, when the decisions can quickly be made at the semantic level by a knowledgeable engineer.



    See my earlier example. When performance increases dramatically, the rules of the game change.



    Anyway, specifically addressing the timeline of my blog posts (2010-15ish, as clarified in my second post), I think that Java has already shown that It Can Be Done by powering very serious server applications today. Everyone initially bashed Java for being "slow," and some still do. And it still is slower at many things. But that doesn't prevent it from being widely and successfully deployed, even in performance-critical server applications. I also think that MS will make great strides in the area of performance with C# and the CLR, simply because they have the resources and the motivation to do so. By 2010-15ish, I think any performance issues will be dwarfed by the advantages of a "managed" development and runtime environment.
  • Reply 20 of 47
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by John

    First, you assume that no new algorithms will be invented.



    Actually, I assume nothing, since I stated later that while someone *could* invent a new algorithm, I'm not holding my breath. One may pop up. One may not. If one pops up, yay, we all win. If one doesn't, then pinning one's hopes on an algorithm that never arrived is *very* Copland-esque, if I may say so.



    Personally, I hope the algorithm shows up.



    Quote:

    Second, there are plenty of existing garbage-collected languages that can already "run with the big boys" in certain benchmarks. And don't forget that "most" code (as measured by lines of code) is not performance-critical. The parts that are can always drop down to "unmanaged" code as needed (while we all wait for hardware to get "fast enough" for task XYZ).



    Sooooo... what you're saying is that you want fully-auto GC for when it makes sense, and the option of dropping out of it when it doesn't.



    Which is *PRECISELY* my position. Your articles seem to press for auto-GC 24/7, because hey, the hardware will catch up. If that's not your position, then there was clearly a misunderstanding somewhere along the way.



    Just for the sake of being a right bastard though, and assuming that you do want the hardware to just take care of all the ills: First off...



    Quote:

    Hardware will cure this; it's only a matter of time. How much time is open for debate, but the eventual result is not.



    Not if the abstractions increase the object density faster than the hardware increases, it won't. Perhaps that ramp-up of abstraction efficiency is a pipe-dream, but hey, we can both dream.



    Sure, the history of programming abstractions is a punctuated equilibrium model, and it is very likely that hardware will provide speed boosts greater than the plateaus in the model, but if you look at the spikes, hardware, historically, can't keep up. It catches up, and then the next evolution occurs, and blammo.



    You're talking about a 1-2 punch though: increasing the load via computationally intensive GC algorithms (assuming that The Golden GC doesn't come to pass), and expecting a jump in abstraction because of it. Yeouch. It's going to take longer for hardware to recover from that. Will it? Probably. Will it before the next abstraction jump? I kinda hope not... the abstractions are the more important part.



    Quote:

    To give just one simple example of how this might come to pass, imagine that RAM is "unlimited" by today's standard, and 10000x faster to boot. Suddenly, hunting down and freeing every last scrap of memory that can be freed the second it should be freed is a lot less important. Sound fantastical? Imagine what a computer scientist in 1975 would think of today's computers.



    But if the abstractions provide for even denser object models... look at it this way - compare the speed of an app in 1975 with a similar relative class of app now. Not that different in many cases, such as word processors, drawing tools, etc. Why? Because we ask the app to do so much more now. Why? Because the levels of abstraction for the developer allow it.



    As we increase hardware speed, we also increase the abstraction levels, and end up asking the hardware to do more. If we ask more faster than the hardware improves, we lose in the long-run, yes?



    That's the hardware problem.



    Now for the other, and more troublesome aspect...



    Quote:

    Quite the contrary. I expect to get increased abstraction at a huge efficiency cost--in terms of CPU cycles and memory usage, that is. Human efficiency will increase, however.



    For *some* things, yes. However, if you look at most of the effective higher level abstractions in use today, such as design patterns, you find that their biggest area where developer decisions are critical is in object management... and not just 'object management' in the general sense, but management styles specific to the patterns themselves. You can't just lump all the object management issues into one bin labeled GC without destroying a lot of the utility of *higher* abstractions. Until there's a better handle on how to map at the *semantic* level the object management issues to the needs of higher abstractions such as design patterns, something like GC not only confuses the issue, it makes some abstractions only more difficult to implement and comprehend.



    And that's the level of abstraction you need to be thinking of, if you want to increase human efficiency in the long term.



    Quote:

    Anyway, specifically addressing the timeline of my blog posts (2010-15ish, as clarified in my second post), I think that Java has already shown that It Can Be Done by powering very serious server applications today. Everyone initially bashed Java for being "slow," and some still do. And it still is slower at many things. But that doesn't prevent it from being widely and successfully deployed, even in performance-critical server applications. I also think that MS will make great strides in the area of performance with C# and the CLR, simply because they have the resources and the motivation to do so. By 2010-15ish, I think any performance issues will be dwarfed by the advantages of a "managed" development and runtime environment.



    I agree with you in the direction of the vector, just not the magnitude that seems to be proposed in your articles.



    My research is in building abstractions from lower level concepts, specifically in the area of design patterns. I'm defending my dissertation in four weeks, on a tool I designed, SPQR, that finds instances of design patterns automatically from source code, regardless of the OO language. It uses a new denotational semantics and catalog of Elemental Design Patterns to describe patterns as intertwining concepts that can be incrementally taught, comprehended, and searched for in code. Design patterns turn out to be just the beginning of where this can go - the abstractions possible are, well, limitless.



    I say this not to have a pissing contest, but only to show definitively that I *do* believe that higher abstractions are the way to go, and that the benefits from them far outweigh the performance issues in the long-term. BUT... having done the detailed analysis of design patterns literature from the viewpoint of lower level concepts, I can fairly solidly say that object management is the single biggest stumbling block to producing necessary and sufficient pattern compositions. I have some preliminary work in the area, but the core intents of the higher abstractions that you and I both want to see more prevalent rely *so much* on the subtle differences of object management, that I really don't see how those higher abstractions can effectively be produced and utilized if the subtleties are swept under the cover of GC, all the time, every place.



    What I *do* want to see, someday, is a methodology that allows the developer to state specifically what object management system is desired for sections of the code at a conceptual level, and have it be enforced by the language, or at the very least, the libraries. Maybe, when that day comes, a default of auto-GC will be the best approach, as you state, but until that day comes, and until the precise semantics of the various object management schemes can be captured and enforced systematically, I will continue to have the opinion that auto-GC is an important tool, but one that cannot be pushed as a universal solution.



    Not because I fear for the performance, but because I fear for the abstractions.



    Won't you think of the abstractions, John?