I was reading comparisons online and it sounded like most people liked C# better than C++... I guess that was wrong. Well, I suppose I'll have to go with C++ then.
No, I certainly like C# a lot better than C++, if you're writing for .NET. But you shouldn't do that on a Mac.
Why not? C# is a good language. And Mono is a pretty darn good C# environment that works on the Mac. So why not support it? Because MS created it? Then what about AJAX? That was something Microsoft created.
The fact is that each language has its use. I'm a C++ guy myself, but if I needed to write business apps for an IT group, C# or Java would be my #1 tool to look at depending on the business needs.
Being tied to a single language is stupid. Learn multiple languages and environments and use the best one.
Compiles the function the first time it's called, then stores it when it's done, and just goes back to the same already-compiled code each subsequent time? That makes sense. Then, does the efficiency compared to static compilation depend on the compiler?
Yes.
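The compile-once, cache-thereafter idea can be sketched in miniature (illustrative Python only, not how a real JIT such as the CLR's works internally):

```python
# Minimal sketch of JIT-style caching: "compile" a function body the first
# time it's requested, store the code object, and reuse it afterwards.
compiled_cache = {}

def jit_call(name, source, arg):
    if name not in compiled_cache:
        # First call: pay the compilation cost once.
        compiled_cache[name] = compile(source, name, "eval")
    # Every later call skips straight to the already-compiled code.
    return eval(compiled_cache[name], {"x": arg})

print(jit_call("square", "x * x", 7))   # compiles, then runs -> 49
print(jit_call("square", "x * x", 9))   # cache hit, just runs -> 81
```

Everything after the first call runs at the speed of the stored code, which is why the compiler's quality (not the fact of JIT-ing itself) dominates steady-state performance.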
Quote:
Now I feel dumb. Most of the programs I've been writing in class are console apps. Duh. Please clarify, you CAN output machine code rather than bytecode? How?
The standard allows it, but nobody has publicly released a compiler that will do it. So while it should be possible, it currently isn't.
Quote:
Console apps are still platform dependent though, I believe.
Only because of the above. Other languages' console apps are not platform dependent unless you use something that's not part of the language standard.
Quote:
Point taken.
Please clarify?
Point taken.
These three go together and equate to: use the library functions, because the chance you can write something faster is slim. If you pay the development-time price and do write something faster, it usually won't be enough faster to make up for the dev time you spent on it. Sure, there are exceptions, but when you find them they will stand out like sore thumbs as prime places to put in the effort. If you don't foresee a BIG payoff before you start rolling your own replacement for a library function, it's best to just use the library function, even in higher-level languages.
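The point generalizes beyond any one language. As a toy Python illustration, a hand-rolled sort has to beat a library routine that has already had years of tuning (and, in CPython's case, a C implementation) poured into it:

```python
# A naive hand-rolled insertion sort versus the library's sort.
# Both produce the same answer; only one of them cost you dev time.
def insertion_sort(items):
    out = list(items)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]   # shift larger elements right
            j -= 1
        out[j + 1] = key
    return out

data = [5, 2, 9, 1, 5, 6]
assert insertion_sort(data) == sorted(data)  # same result, far more effort
```

The hand-rolled version is correct, but it will essentially never outrun `sorted()`, which is the thread's point: spend the effort only where profiling shows a big payoff.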
Split your codebase into a cross-platform backend, which you might as well write in C, and a platform-specific frontend, for which you'd better use a platform-specific environment.
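The split that post describes can be sketched as follows (hypothetical names, Python for brevity): the backend knows nothing about any GUI, and each platform frontend is a thin adapter over it.

```python
# Backend: pure, platform-neutral logic. No GUI imports anywhere in here.
class Library:
    def __init__(self):
        self.tracks = []

    def add_track(self, title):
        self.tracks.append(title)

    def search(self, term):
        return [t for t in self.tracks if term.lower() in t.lower()]

# Frontends: thin, platform-specific presentation layers. Real ones would
# call Cocoa or Win32; these stand-ins just format the output differently.
def mac_render(results):
    return " | ".join(results)      # imagine an NSTableView here

def windows_render(results):
    return "; ".join(results)       # imagine a ListView here

lib = Library()
lib.add_track("So What")
lib.add_track("Freddie Freeloader")
print(mac_render(lib.search("so")))  # backend shared, frontend swapped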
Of course keeping the backend well isolated from the GUI is an important design concept, but often there's a lot of work, and logic, in the GUI itself and how it functions.
Take a program like iTunes. I have no idea how Apple actually goes about writing this code, but iTunes (from what I last heard -- anyone can correct me if I'm wrong) is a Carbon app, not Cocoa on the Mac. If true, I'll bet that's because the GUI, which is so similar across platforms, has an awful lot of common code and is built around some sort of cross-platform GUI library.
What's the "backend" for iTunes anyway? Not much there but QuickTime, iPod synchronization, and some code to maintain the iTunes library database. If you have a good cross-platform GUI library, and it's flexible enough that you don't always have to go "lowest common denominator" with your interface, why not take advantage of it?
I can't imagine ruling that out in favor of trying to maintain two completely separate GUI projects, one in Objective-C/Cocoa, the other in C++ or C# written for the Windows API, both of which have to converge separately on an almost identical interface while sharing nothing more than, perhaps, some common graphics and text resources.
Of course keeping the backend well isolated from the GUI is an important design concept, but often there's a lot of work, and logic, in the GUI itself and how it functions.
Take a program like iTunes. I have no idea how Apple actually goes about writing this code, but iTunes (from what I last heard -- anyone can correct me if I'm wrong) is a Carbon app, not Cocoa on the Mac. If true, I'll bet that's because the GUI, which is so similar across platforms, has an awful lot of common code and is built around some sort of cross-platform GUI library.
Yes, iTunes is Carbon, and yes, the vast majority of iTunes's code is shared between Windows and Mac OS X. But that's okay, because a design goal of the Windows version was familiarity. It doesn't feel as much like a Windows app as it could, while at the same time accounting for Windows-specific features such as a taskbar toolbar band and a system tray icon.
Quote:
What's the "backend" for iTunes anyway? Not much there but QuickTime, iPod synchronization, and some code to maintain the iTunes library database. If you have a good cross-platform GUI library, and it's flexible enough that you don't always have to go "lowest common denominator" with your interface, why not take advantage of it?
Right, for the most part, iTunes is just an elaborate QuickTime client. But you're only proving my point: QuickTime, in turn, does indeed have a rather strict separation between front-end and back-end. Most of it is back-end, and that doesn't use Carbon code at all. Think of all the optimizations towards SSE3, etc.
Yes, I was very sloppy with my language in that post.
BUT
You'd have to show me some evidence before I buy that bytecode can approach the speed of machine code.
C# and Managed C++ compile down to MSIL code and execute in the CLR. C++ can actually run slower than C#, but depending on the application you may not see much difference at all (i.e., when you are I/O bound). There are benchmarks where C# ran faster than Managed C++ but slower than Java or C++.
For an example of speed, try NASA World Wind. It's written in C# and Managed DirectX.
The biggest advantage of C# over unmanaged C++ is that the .NET Framework is so much nicer to work in. Likewise Java vs. C++, though the STL and newer extensions have closed the gap. A little.
IMHO, as a C/C++/Java/C# developer, I think that C++ isn't very compelling anymore. C for speed, C#/Java for ease of programming. Objective-C for the Mac.
C++ is kind of halfway between low-level (pointers) and modern high-level languages.
Quote:
Bytecode MUST be translated into machine code before it can be used, thereby adding at LEAST one extra cycle for every command. How can that be anywhere close to the speed that you can get from a machine code compile? By using an efficient JIT compiler and an inefficient machine code compiler? Go get an efficient machine code compiler.
Java is faster than gcc in some benchmarks:
http://java.sys-con.com/read/45250.h...84C96D46498798
Granted, gcc is one of the slower (if not the slowest commonly used) compilers. Presumably using an Intel, Sun, or even MS compiler would have resulted in a faster executable. Definitely Intel... it's one of the nicer advantages of moving to the Intel platform for the Mac: the potential of using Intel's compilers to generate fast code on the platform.
Quote:
Additionally: Every program written in C# requires .NET framework to be present on the computer... I THINK.... verify please?
Yes and no. Mono's CLI isn't called .NET but it is a CLI. However, it's like complaining that Java must be installed. So what? The installer takes care of it for you.
Quote:
Additionally: C# is EXTREMELY high level. The higher the language, the less efficient it is, by necessity.
See above. In any case C++ is considered a high level language as well. Just an older one with modern extensions hacked on.
Efficiency is governed by many things, but typically programmer efficiency is considered more important than language efficiency these days... or we'd still be using assembly.
Quote:
Additionally: C# does stupid things like require you to store the Main function (called methods in C#) as a class object. You then must instantiate an object of that class before you can use any functions that you may have written outside of the Main. That should be implicit.
Never learned about static methods, have we? If you have a static method called Bar in class Foo, you can call Foo.Bar() without instantiating a Foo. In fact, you can't call Bar except as Foo.Bar(); calling it via FooInstance.Bar() results in a compilation error.
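The same concept exists in Python, for comparison (though Python is looser than C#: it won't reject access through an instance, whereas the C# compiler does):

```python
# A static method belongs to the class, not to any instance.
class Foo:
    @staticmethod
    def bar():
        return "called without an instance"

# No Foo() needed -- the class itself is enough, like C#'s Foo.Bar().
print(Foo.bar())
```

So nothing about the quoted complaint requires instantiating an object; marking the method static is the whole fix.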
Quote:
Additionally: it's 3 AM and I can't think of anything else right now. Go with C++, and use Visual Studio if you wish, so that you can take advantage of the .NET libraries. You will have the benefits of C# (the library) and also the ability to use the same language on multiple platforms.
Managed C++ appears to be the worst of all worlds in terms of performance (the collection classes appear to be abysmally slow in comparison to C#), syntax, and cross-platform compatibility (Managed C++ is the least likely to run under Mono).
Java is more cross-platform and C is faster. If you know both C#/Java and C, you can likely handle most idiosyncrasies of C++.
Vinea
PS. C#/.NET was one of the two hot software careers in Money. The other was quality assurance (ugh). Can't find the link, but that's what I remember seeing last year. Fortunately it seems we're on a rebound with Software Engineer (in general) topping the charts again:
http://money.cnn.com/magazines/money...p50/index.html
Obj-C 2.0 in Leopard will have full GC if you want it.
The only piece of C# that has intrigued me is the 'delegate' keyword. Nice move there, pulling the concept of a First Responder into the language itself. Other than that? Yet Another Language. Whoo.
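In Python terms, a delegate is roughly a pluggable callback slot; a hypothetical sketch of the first-responder idea the post is pointing at:

```python
# A "delegate" slot: the button doesn't know who handles the click;
# whoever is wired in at runtime receives the event (hypothetical names).
class Button:
    def __init__(self):
        self.on_click = None            # the delegate slot

    def click(self):
        if self.on_click is not None:
            return self.on_click()      # forward to the current handler
        return None

button = Button()
button.on_click = lambda: "document saved"
print(button.click())                   # -> document saved
```

C#'s `delegate` keyword makes this pattern a typed, first-class language feature rather than an ad hoc convention.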
Eh, it's all "yet another language" kind of like C until you get a little esoteric, like Haskell.
Oddly, all the "new" languages are contemporaries from the late 80s or early 90s. C# is later, but it's not really "new". Creating new languages appears to be passé.
It's the frameworks that are new and constantly evolving.
Vinea
(JavaScript, PHP and Python are from the early-mid 90s, Ruby from 1993)
IMHO, as a C/C++/Java/C# developer, I think that C++ isn't very compelling anymore. C for speed, C#/Java for ease of programming. Objective-C for the Mac.
I'm tempted to go back to C++ for a cross-platform app I have in mind, simply because I'm not convinced from what others have said here (for whom, perhaps, a "front end" is seldom more than a dialog or two and an OK button) that making my Mac and Windows interfaces two almost entirely different projects written in two different languages is such a hot idea. Given that I'd still like to use more or less the same codebase for the GUI, C++ seems to present me with the best options -- for example, using Trolltech's Qt library. Both platforms provide good C++ APIs for nearly anything else that's platform-native that I'd want to do.
If I did stick with Java, just to make life easier during a lot of the coding and debugging, I'd at least have to see how well SWT works. Swing is functional enough for some things, but it's still pretty far from what you need to get a good "shrink-wrapped" commercial quality interface.
Even with Java and SWT, however, I'd envision having to use a lot of JNI to access native platform capabilities, and that starts to get ugly. I'd like my app to be scriptable on OS X (AppleEvents/AppleScript) and Windows (COM). From what I've seen of trying to make a non-Cocoa Java app scriptable on the Mac, things get even uglier. Further, I've heard that SWT is fairly Windows-centric, so I might end up wasting a lot of my time browbeating SWT into doing what I want on a Mac.
C++ is faster than Java for most things; taking on the grief of alloc/dealloc memory management at least means never having to worry about garbage-collection hiccups; and C++ can be just as fast as C if you want it to be, where you want it to be, simply by dropping down to pure C code where necessary.
Quote:
Fortunately it seems we're on a rebound with Software Engineer (in general) topping the charts again:
I wonder if part of the reason for this might be how often outsourcing doesn't pan out, and that perhaps more businesses are beginning to realize that paying the extra for a software engineer in the US is often a better investment.
Right, for the most part, iTunes is just an elaborate QuickTime client. But you're only proving my point: QuickTime, in turn, does indeed have a rather strict separation between front-end and back-end. Most of it is back-end, and that doesn't use Carbon code at all. Think of all the optimizations towards SSE3, etc.
I don't know if that's exactly "proving your point". iTunes is a damned elaborate front end for Quicktime, and Apple didn't seem to think that maintaining two almost entirely separate codebases in two different languages, one for Mac and one for Windows, was the best way to go.
I hate to be deliberately vague, but I can't talk about the details of what I have in mind. The front end, however, can and should look very similar on Mac and Windows, as iTunes does, and it's a bit complicated as a GUI, with a fair amount of gestural interaction, drag-and-drop support, etc. The back end is actually less complex than the front end, at least for first-rev features. It's still a sh*tload of work for one guy to take on by himself as a shareware project, and I'm pretty sure maintaining two very different GUI code bases would cause me much more grief and duplicated effort than could be compensated for by any extra zing using Cocoa and Objective C might bring me.
If you're using Firefox right now, like I am, there's another example of a cross-platform app where the supposed wisdom of maintaining what would have been not just two, but three almost entirely separate GUI code bases (for Mac, Windows, and Linux) was avoided.
You should definitely try to keep the back end separate from the front end no matter what you use. In a project, I had to put a Cocoa GUI on someone's 10,000 line C++ codebase, which originally used Tcl/Tk. It wasn't all that difficult because the code was well designed.
Personally, I'd use C++ for the back end, Qt for the GUI toolkit and Python for scripting because an interpreter can be embedded easily in C code.
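The embedding itself happens through CPython's C API (Py_Initialize and friends), but the scripting surface you'd expose can be sketched from the Python side. These API names are hypothetical, purely illustrative:

```python
# Sketch: run a user script against a small, explicitly exposed app API.
# In the real app the interpreter would be embedded in the C++ backend.
class AppAPI:
    def __init__(self):
        self.opened = []

    def open_file(self, path):
        self.opened.append(path)

def run_user_script(script, app):
    # Only "app" is handed to the script; nothing else leaks in.
    exec(script, {"app": app, "__builtins__": {}})

app = AppAPI()
run_user_script('app.open_file("notes.txt")', app)
print(app.opened)   # -> ['notes.txt']
```

The design point is that the script sees only the object you choose to publish, which is what makes an embedded scripting layer tractable to support.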
Quote:
Originally Posted by Chucker
My condolences.
I feel the same. Someone at work uses Firefox all the time and he gets so many broken downloads and sites that don't work that he often has to end up using Safari anyway.
Someone at work uses Firefox all the time and he gets so many broken downloads and sites that don't work that he often has to end up using Safari anyway.
(Casually allowing the thread to veer wildly off on a tangent, he said, "...) Odd. I rarely have any problems with Firefox at all. The most notable problem I've run into now and then is an annoying issue with Firefox getting confused about keyboard focus and ignoring Command-C (or Ctrl-C when I'm on Windows) to copy text I've selected. So far I haven't seen that problem in Firefox 2.0, so maybe that's finally been fixed.
At any rate, I was on Windows at work when writing that post, so Safari wasn't an option. When on my Mac, I use Safari either to test compatibility of my own web code, or on the rare occasion a page doesn't work for me in Firefox (I honestly don't remember the last time that happened). I used to have to use Safari for accessing my bank account, because my bank for a while only officially supported IE on Windows and Safari on Mac, but that's fortunately not an issue anymore.
I like the geekier nature of Firefox. I like being able to control pop-ups and cookie filtering on a site-by-site basis. I really like being able to use Adblock. And it's not like Safari is bug free -- there have been times I've been using Safari and couldn't get a web page to work properly, and I've had to turn to Firefox to get around the problem.
I wonder if part of the reason for this might be how often outsourcing doesn't pan out, and that perhaps more businesses are beginning to realize that paying the extra for a software engineer in the US is often a better investment.
1. The top Indian programmers move here.
2. The global separation makes management a PITA.
3. For projects smaller than a few million dollars, the extra management required often isn't worth it.
My company recently had a bust with offshore development. We actually outsourced the project in general, and the US company that was doing that outsourced part of the job to India. The results were not pleasant. A few people in the US company lost their jobs due to failure to deliver.
But with that said, software engineers are still an overpaid lot. Finding a good one can be tough, and the average ones cost more than they're worth. How a software engineer can demand a higher wage than a hardware engineer has been, in my opinion, the greatest magic trick of the '90s. The increasing dominance of the embedded arena in the high-tech job market, however, seems to be correcting this oddity. Incidentally, this is why I suggested Java: C++ has been eschewed in the embedded world in favor of plain C and AOT-compiled Java. A lot of this has to do with the ubiquity and compactness of uClibc in embedded Linux distributions. Then there's eCos, and the odd truth that a standard Java KVM has a smaller footprint than a C++ library.
Perhaps developing software is harder than writing small programs might imply?
Naw... it's because software geeks pulled a fast one... of course, that doesn't make hardware geeks any brighter. It has absolutely nothing to do with a reasonably free market paying relative worth...
Naw... it's because software geeks pulled a fast one... of course, that doesn't make hardware geeks any brighter. It has absolutely nothing to do with a reasonably free market paying relative worth...
There are really good software engineers, developers, coders -- whatever you want to call them. There are also a great many mediocre and downright bad software developers. With hardware, there's instant quality control: cost, obviousness, and a lower tolerance for error (fire). It's much easier to detect a bad hardware design job, so in the United States, Taiwan, and Japan there aren't too many bad or even mediocre hardware design teams in business. Anywhere else, though, YMMV. But with software, it's much easier for a crappy product to slip through the cracks.
There are really good software engineers, developers, coders -- whatever you want to call them. There are also a great many mediocre and downright bad software developers. With hardware, there's instant quality control: cost, obviousness, and a lower tolerance for error (fire). It's much easier to detect a bad hardware design job, so in the United States, Taiwan, and Japan there aren't too many bad or even mediocre hardware design teams in business. Anywhere else, though, YMMV. But with software, it's much easier for a crappy product to slip through the cracks.
Which has zip to do with compensation and relative worth (from a business perspective).
While it does cost more to spin a hardware prototype, I've seen my share of bad board designs requiring the respin of a set of boards. There's nothing "instant" about quality control in either the hardware or software domain.
Besides, perhaps the lower error rates are due to the lower complexity of the task? There are two ways to spin any given perspective.
Besides, perhaps the lower error rates are due to the lower complexity of the task? There are two ways to spin any given perspective.
The general consensus is that software development paradigms are flawed, and that the ridiculous syntax of C-based languages causes a lot of errors. At least this is the understanding at embedded level, since most of the industry magazines have run series of articles pertaining to these very issues.
The reason why hardware jobs often pay less is because that industry has already reached its world-market equilibrium. Software is getting there. The open source movement is making this happen more quickly and smoothly than it otherwise would. But having done both, I can tell you that hardware design takes a lot more thought and concentration.
Comments
I was reading comparisons online and it sounded like most people liked C# better than C++..... I guess that was wrong. Well I spose I will have to go with C++ then.
Thanks everybody...
Why not? C# is a good language. And mono is a pretty darn good C# environment that works on the Mac.
What? Mono? On a Mac?
Next you'll be suggesting we write GUIs in Tk.
So why not support it? Because MS created it?
No, nothing to do with that.
Being tied to a single language is stupid. Learn multiple languages and environments and use the best one.
Agreed.
These three go together and equate to: use the library functions, because the chance you can write something faster is slim. If you pay the development-time price and do write something faster, it usually won't be enough faster to make up for the dev time you spent on it. Sure, there are exceptions, but when you find them they will stand out like sore thumbs as prime places to put in the effort. If you don't foresee a BIG payoff before you start rolling your own replacement for a library function, it's best to just use the library function, even in higher-level languages.
The answer to that is: don't do it. Ever.
Split your codebase into a cross-platform backend, which you might as well write in C, and a platform-specific frontend for which you better use a platform-specific environment.
Obj-C 2.0 in Leopard will have full GC if you want it.
The only piece of C# that has intrigued me is the 'delegate' keyword. Nice move there, pulling the concept of a First Responder into the language itself. Other than that? Yet Another Language. Whoo.
Eh, its all yet another language kinda like C until you get a little esoteric like Haskell.
Oddly, all the "new" languages are all contemporaries from the late 80s or early 90s. C# is later but it's not really "new". Creating new languages appears to be passe.
Its the frameworks that are new and constantly evolving.
Vinea
(JavaScript, PHP and Python are from the early-mid 90s, Ruby from 1993)
IMHO as a C/C++/Java/C# developer I think that C++ isn't very compelling anymore. C for speed and C#/Java for ease of programming. ObjectiveC for Mac.
I'm tempted to go back to C++ for a cross-platform app I have in mind simply because I'm not convinced from what others have said here (for whom, perhaps, a "front end" is seldom more than a dialog or two and an OK button
If I did stick with Java, just to make life easier during a lot of the coding and debugging, I'd at least have to see how well SWT works. Swing is functional enough for some things, but it's still pretty far from what you need to get a good "shrink-wrapped" commercial quality interface.
Even with Java and SWT, however, I'd envision having to use a lot of JNI to access native platform capabilities, and that starts to get ugly. I'd like my app to be scriptable on OS X (AppleEvents/AppleScript) and Windows (COM). From what I've seen of trying to make a non-Cocoa Java app scriptable on the Mac, things get even uglier. Further, I've heard that SWT is fairly Windows-centric, so I might end up having to waste a lot of my time browbeat SWT into doing what I want on a Mac.
C++ is faster than Java for most things, taking on the grief of alloc/dealloc memory management at least means never having to worry about garbage collection hiccups, and C++ can be just as fast as C if you want it to be, where you want it to be, simply by dropping down to pure C code where necessary.
Fortunately it seems we're on a rebound with Software Engineer (in general) topping the charts again:
I wonder if part of the reason for this might be how often outsourcing doesn't pan out, and that perhaps more businesses are beginning to realize that paying the extra for a software engineer in the US is often a better investment.
Right, for the most part, iTunes is just an elaborate QuickTime client. But you're only proving my point: QuickTime, in turn, does indeed have a rather strict separation between front-end and back-end. Most of it is back-end, and that doesn't use Carbon code at all. Think of all the optimizations towards SSE3, etc.
I don't know if that's exactly "proving your point".
I hate to be deliberately vague, but I can't talk about the details of what I have in mind. The front end, however, can and should look very similar on Mac and Windows, as iTunes does, and it's a bit complicated as a GUI, with a fair amount of gestural interaction, drag-and-drop support, etc. The back end is actually less complex than the front end, at least for first-rev features. It's still a sh*tload of work for one guy to take on by himself as a shareware project, and I'm pretty sure maintaining two very different GUI code bases would cause me much more grief and duplicated effort than could be compensated for by any extra zing using Cocoa and Objective C might bring me.
If you're using Firefox right now, like I am, there's another example of a cross-platform app where the supposed wisdom of maintaining what would have been not just two, but three almost entirely separate GUI code bases (for Mac, Windows, and Linux) was avoided.
Quote:
If you're using Firefox right now, like I am[..].
My condolences.
Personally, I'd use C++ for the back end, Qt for the GUI toolkit and Python for scripting because an interpreter can be embedded easily in C code.
Quote:
My condolences.
Someone at work uses Firefox all the time and he gets so many broken downloads and sites that don't work that he often has to end up using Safari anyway.
(Casually allowing the thread to veer wildly off on a tangent, he said, "...) Odd. I rarely have any problems with Firefox at all. The most notable problem I've run into now and then is an annoying issue with Firefox getting confused about keyboard focus and ignoring Command-C (or Ctrl-C when I'm on Windows) to copy text I've selected. So far I haven't seen that problem in Firefox 2.0, so maybe that's finally been fixed.
At any rate, I was on Windows at work when writing that post, so Safari wasn't an option. When on my Mac, I use Safari either to test compatibility of my own web code, or on the rare occasion a page doesn't work for me in Firefox (I honestly don't remember the last time that happened). I used to have to use Safari for accessing my bank account, because my bank for a while only officially supported IE on Windows and Safari on Mac, but that's fortunately not an issue anymore.
I like the geekier nature of Firefox. I like being able to control pop-ups and cookie filtering on a site-by-site basis. I really like being able to use Adblock. And it's not like Safari is bug free -- there have been times I've been using Safari and couldn't get a web page to work properly, and I've had to turn to Firefox to get around the problem.
Quote:
I wonder if part of the reason for this might be how often outsourcing doesn't pan out, and that perhaps more businesses are beginning to realize that paying the extra for a software engineer in the US is often a better investment.
1. The top Indian programmers move here.
2. The global separation makes management a PITA.
3. For projects smaller than a few million dollars, the extra management required often isn't worth it.
My company recently had a bust with offshore development. We actually outsourced the project in general, and the US company that was doing that outsourced part of the job to India. The results were not pleasant. A few people in the US company lost their jobs due to failure to deliver.
But with that said, software engineers are still an overpaid lot. Finding a good one can be tough, and the average ones cost more than they're worth. How a software engineer can demand a higher wage than a hardware engineer has been, in my opinion, the greatest magic trick of the '90s. The increasing dominance of the embedded arena in the high-tech job market, however, seems to be correcting this oddity. Incidentally, this is why I suggested Java. C++ has been eschewed in the embedded world in favor of plain C and AOT-compiled Java. A lot of this has to do with the ubiquity and compactness of uClibc in embedded Linux distributions. Then there's eCos, and the odd truth that a standard Java KVM has a smaller footprint than a C++ standard library.
Naw...it's because software geeks pulled a fast one...of course that doesn't make hardware geeks any brighter. It has absolutely nothing to do with a reasonably free market paying relative worth...
Vinea
Quote:
Naw...it's because software geeks pulled a fast one...of course that doesn't make hardware geeks any brighter. It has absolutely nothing to do with a reasonably free market paying relative worth...
There are really good software engineers, developers, coders -- whatever you want to call them. There are also a great many mediocre and downright bad software developers. With hardware, there's instant quality control: cost, obviousness, and a lower tolerance for error (fire). It's much easier to detect a bad hardware design job, so in the United States, Taiwan, and Japan there aren't too many bad or even mediocre hardware design teams in business. Anywhere else, though, YMMV. But with software, it's much easier for a crappy product to slip through the cracks.
What? Mono? On a Mac?
Next you'll be suggesting we write GUIs in Tk..
Yes, Mono on the Mac. For commercial apps? No. For hobby, learning, or even doing ASP.NET work on someone else's server - definitely.
Quote:
There are really good software engineers, developers, coders -- whatever you want to call them. There are also a great many mediocre and downright bad software developers. With hardware, there's instant quality control: cost, obviousness, and a lower tolerance for error (fire). It's much easier to detect a bad hardware design job, so in the United States, Taiwan, and Japan there aren't too many bad or even mediocre hardware design teams in business. Anywhere else, though, YMMV. But with software, it's much easier for a crappy product to slip through the cracks.
Which has zip to do with compensation and relative worth (from a business perspective).
While it does cost more to spin a hardware prototype I've seen my share of bad board designs requiring the respin of a set of boards. There's nothing "instant" about quality control in either the hardware or software domain.
Besides, perhaps the lower error rates are due to the lower complexity of the task? There are two ways to spin any given perspective.
Vinea
Quote:
Besides, perhaps the lower error rates are due to the lower complexity of the task? There are two ways to spin any given perspective.
The general consensus is that software development paradigms are flawed, and that the ridiculous syntax of C-based languages causes a lot of errors. At least this is the understanding at embedded level, since most of the industry magazines have run series of articles pertaining to these very issues.
The reason why hardware jobs often pay less is because that industry has already reached its world-market equilibrium. Software is getting there. The open source movement is making this happen more quickly and smoothly than it otherwise would. But having done both, I can tell you that hardware design takes a lot more thought and concentration.