Comments
Originally posted by unixguru
It was a JOKE! Ever seen the movie Billy Madison?
http://www.great-quotes.com/movie_quotes.htm
...
No, I have not seen Billy Madison. When you quote a source, it is standard practice to credit the source. This requirement is often ignored when the quote is so well-known that it is part of the language. Many lines from Shakespeare have reached this status. Billy Madison has not yet attained the status of a Shakespeare play.
I want to see links for these claims of 10%, 20%, and 30% performance improvements from XLC. The last benchmarks I saw pegged XLC and GCC as essentially neck and neck, with XLC clearly better at some things, and GCC clearly better at others. Both have received raftloads of improvements since then, but I'd be genuinely surprised to see anything as radical as a 20% speed advantage on arbitrary code. GCC was a lame PPC compiler when OS X started out, but as of (Apple's branch of) v4 it's not bad at all. There's plenty of room for improvement, but that's a given.
I remember claims that GCC 4 cut compilation time by 30%, which is an impressive accomplishment, but not something the vast majority of end users will ever know or care about.
Originally posted by jasenj1
How much performance is being wasted in all Macs due to inefficient compilation?
- Jasen.
A great deal is being lost through the use of a poor compiler. On the other hand, if you have the right release of GCC, you do get a relatively stable compiler.
Also, don't confuse all of the performance increases seen in each release of OS X with an improved compiler. Yes, that may be part of the equation, but there are a lot of other factors in it. It would probably be fair to say that the improvements made to GCC to date have only minimally impacted the improvements seen in OS X.
Dave
"Accelerated Objective-C Dispatch"
"Enable Objective-C Garbage Collection"
... sound like they fix my two biggest complaints about Cocoa.
"Auto-vectorization"
... I'm interested to see how well this works. Not expecting much, but will give it a chance.
"Feedback-Directed Optimization"
... very cool stuff. Even if this version doesn't include a lot of new optimizations, this lays the groundwork for a lot of future potential.
Originally posted by Programmer
"Accelerated Objective-C Dispatch"
"Enable Objective-C Garbage Collection"
Wow. If Cocoa gets garbage collection, a LOT more people are going to be able to use it to create bug-free apps. This could be a really big deal for application development on the Mac.
Bring on the next killer app!
Originally posted by Tom Mornini
Wow. If Cocoa gets garbage collection, a LOT more people are going to be able to use it to create bug-free apps. This could be a really big deal for application development on the Mac.
Bring on the next killer app!
Garbage collection doesn't create the next killer app. Java is clear evidence of that.
NSAutoreleasePool has always bothered people who don't want to concern themselves with releasing objects and wish it were done for them automagically.
Personally, I would use garbage collection to develop the app, and then, to fine-tune it, work with autorelease pools and tighten the application up.
Originally posted by Amorph
Wow, this got off track.
I want to see links for these claims of 10%, 20%, and 30% performance improvements from XLC. The last benchmarks I saw pegged XLC and GCC as essentially neck and neck, with XLC clearly better at some things, and GCC clearly better at others.
I was going to say the same thing. Every test I've seen or done shows recent versions of GCC and XLC to be roughly similar in performance across the board. XLC seems to have developed somewhat of a mythical reputation that has no basis in reality.
Originally posted by Tuttle
I was going to say the same thing. Every test I've seen or done shows recent versions of GCC and XLC to be roughly similar in performance across the board. XLC seems to have developed somewhat of a mythical reputation that has no basis in reality.
Apple's own engineers attest to the performance of IBM's XL compilers in the arena of high-performance computing. I think the error that XLC fans may be making is expecting to see the same performance advantage in general-purpose apps as is self-evident in numerically intensive programs.
Originally posted by mdriftmeyer
Garbage collection doesn't create the next killer app. Java is clear evidence of that.
Java doesn't have the equivalent of Interface Builder. A good GUI builder would make Java a much better platform to develop on, IMO. Even simple Java GUIs take tons of code to instantiate the objects, set their properties, lay them out in the JFrame, and create and manage event handlers. Interface Builder and Cocoa have Java beat hands down in GUI building and event handling.
Originally posted by Amorph
I remember claims that GCC 4 cut compilation time by 30%, which is an impressive accomplishment, but not something the vast majority of end users will ever know or care about.
Hehe, yeah, somehow the GCC 4 compilation-time improvement got translated into faster applications. The "snappiness" obsession is ever present. Not that that's such a bad thing.
Originally posted by mdriftmeyer
Garbage collection doesn't create the next killer app. Java is clear evidence of that.
No, but IMO this is the biggest weakness in Cocoa. I've done a fair amount of C#/.NET programming now, and have come to the conclusion that, from a programmer productivity point of view, having an automatic garbage collector is a huge advantage. As you point out, having this one advantage alone is not sufficient to make a successful environment, but Cocoa has pretty much everything else that it takes so adding automagic GC to it should result in "something wonderful...".
Off topic, but has anyone else been wowed by Quartz Composer and the Grapher application? The first is a very cool tool that demonstrates the power of Apple's graphics subsystems (a tad unstable, though -- after about 15 minutes of playing with it I did something that brought down the window server!). The second is an updated Graphing Calculator with some extremely cool features (I don't know how it compares to the new [purchased] version of Graphing Calculator, but it seems to have many of the same bells and whistles).
Originally posted by Programmer
No, but IMO this is the biggest weakness in Cocoa. I've done a fair amount of C#/.NET programming now, and have come to the conclusion that, from a programmer productivity point of view, having an automatic garbage collector is a huge advantage. As you point out, having this one advantage alone is not sufficient to make a successful environment, but Cocoa has pretty much everything else that it takes so adding automagic GC to it should result in "something wonderful...".
As long as I can turn it off when necessary to manually handle destructors, sweet.
I have come to *LOATHE* GC that insists it knows better than you do. GC is wonderful for most cases, but for those that it fails on, it does so *spectacularly*.
Originally posted by Kickaha
As long as I can turn it off when necessary to manually handle destructors, sweet.
I have come to *LOATHE* GC that insists it knows better than you do. GC is wonderful for most cases, but for those that it fails on, it does so *spectacularly*.
++
It does complement Cocoa's general paradigm of doing things automagically well enough for general (light) use and rapid prototyping, while also letting you do it yourself if you need to.
I hope this will significantly reduce the number of OS X apps with memory leaks and random crashes.
Hmm. I'm up to my eyeballs in Perl code right now, but on a tangent, it'd be nice to see how they did this. If I squint, I think I can see how they'd do this with the existing pool/reference count approach, with just a little extra overhead...
Originally posted by Amorph
++
It does complement Cocoa's general paradigm of doing things automagically well enough for general (light) use and rapid prototyping, while also letting you do it yourself if you need to.
I hope this will significantly reduce the number of OS X apps with memory leaks and random crashes.
Hmm. I'm up to my eyeballs in Perl code right now, but on a tangent, it'd be nice to see how they did this. If I squint, I think I can see how they'd do this with the existing pool/reference count approach, with just a little extra overhead...
I think there could be a problem with the reference count, though. Look at it this way: I do [someObj addObj:theObj];
This has increased theObj's retain count by 1. If I didn't know this, I might think the reference was moved into someObj, that the retain count wasn't increased and just the object doing the retaining had changed. Now if I deallocate someObj, theObj's retain count is decremented, but who handles the original retain, even if nothing is supposedly referencing theObj?
How would the runtime environment know that nothing is truly accessing theObj now?
Hopefully I explained myself well enough that you get my meaning.
One thing of interest is that the Aspyr guys said they tried compiling DOOM 3 with both their normal compiler and XLC, and saw no speed improvement with XLC.
So it sounds like XLC provides high performance for certain applications, but not all.
Originally posted by Akac
So it sounds like xlc provides high performance for certain applications, but not all.
Or the game is constrained by GPU and not CPU, which is more likely.
Originally posted by Telomar
Or the game is constrained by GPU and not CPU, which is more likely.
Nope, DOOM 3 is heavily constrained by the CPU, last I heard.
Originally posted by PBG4 Dude
I think there could be a problem with the reference count, though. Look at it this way: I do [someObj addObj:theObj];
This has increased theObj's retain count by 1. If I didn't know this, I might think the reference was moved into someObj, that the retain count wasn't increased and just the object doing the retaining had changed. Now if I deallocate someObj, theObj's retain count is decremented, but who handles the original retain, even if nothing is supposedly referencing theObj?
How would the runtime environment know that nothing is truly accessing theObj now?
Hopefully I explained myself well enough that you get my meaning.
The whole point of reference counting is to automate situations like that so you don't have to think about them. Get a new handle, increment the refCount. Delete a handle, decrement it. refCount <= 0, delete theObj. If it is done as part of the runtime, the mechanism doesn't even have to know about the code, just whether a pointer has been assigned, deassigned, or de-scoped.
The real problem can crop up if the programmer explicitly deletes something manually that still has a refCount > 0. I would like to know exactly how the GC would handle that situation before I started mixing GC and manual memory ops (if that is even a legal mix).
Throwing a runtime warning is the easiest method, IMO.
If the obj were aware of who had handles to it (this requires the runtime to track handlers as well as the handled, but it's doable), then those handles could be pointed to a guard obj in memory that threw an exception when access was attempted. That is, if the dev *KNOWS* that the now-deleted obj will never be accessed again, it works. If access is attempted through a change in the code or (*gasp*) dev error, it tells them where it happened and what was attempted.
Of course the *simplest* way is to have objs be either auto-collected *OR* manually handled: you can't manually delete a GC-handled obj. That way you make the choice during design of which approach to take for specific objects: GC by default, but declare objs with a keyword like 'manmem' and they'll be explicitly excluded from GC control.