
Apple's other open secret: the LLVM Compiler

post #1 of 52
Thread Starter 
SproutCore, profiled earlier this week, isn't the only big news to spill out of the top secret WWDC conference thanks to Apple's embrace of open source sharing. Another future technology featured by the Mac maker last week was LLVM, the Low Level Virtual Machine compiler infrastructure project.

Like SproutCore, LLVM is neither new nor secret, but both have been hidden from attention by a thick layer of complexity that has obscured their future potential.

Looking for LLVM at WWDC

Again, the trail of breadcrumbs for LLVM starts in the public WWDC schedule. On Tuesday, the session "New Compiler Technology and Future Directions" was listed with the following synopsis:

"Xcode 3.1 introduces two new compilers for Mac OS X: GCC 4.2 and LLVM-GCC. Learn how the new security and performance improvements in GCC 4.2 can help you produce better applications. Understand the innovations in LLVM-GCC, and find out how you can use it in your own testing and development. Finally, get a preview of future compiler developments."

There's a lot of unpronounceable words in all capital letters in that paragraph, LOLBBQ. Let's pull a few out and define them until the synopsis starts making sense.

Introducing GCC

The first acronym in our alphabet soup is GCC, originally the GNU C Compiler. The project was begun in the mid 80s by Richard Stallman of the Free Software Foundation. Stallman's radical idea was to develop software that would be shared rather than sold, with the intent of delivering code that anyone could use provided that anything they contributed to it would be passed along in a form others could also use.

Stallman was working to develop a free version of AT&T's Unix, which had already become the standard operating system in academia. He started at the core: in order to develop anything in the C language, one would need a C compiler to convert that high level, portable C source code into machine language object code suited to run on a particular processor architecture.

GCC has progressed through a series of advancements over the years to become the standard compiler for GNU Linux, BSD Unix, Mac OS X, and a variety of embedded operating systems. GCC supports a wide variety of processor architecture targets and high level language sources.

Apple uses specialized versions of GCC 4.0 and 4.2 in Leopard's Xcode 3.1 to compile Objective-C/C/C++ code for both PowerPC and Intel targets on the desktop, and uses GCC 4.0 to target ARM development on the iPhone.
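For a concrete (if contrived) illustration of what a compiler "target" means, the short C program below prints a different message depending on the architecture it was built for. The __ppc__, __i386__, __x86_64__, and __arm__ macros are predefined by GCC for the corresponding targets; the example itself is ours rather than Apple's.

#include <stdio.h>

int main(void)
{
    /* GCC defines one of these macros based on the architecture it is
       generating code for, so a single source file can serve every target. */
#if defined(__ppc__) || defined(__ppc64__)
    printf("Compiled for PowerPC\n");
#elif defined(__i386__) || defined(__x86_64__)
    printf("Compiled for Intel\n");
#elif defined(__arm__)
    printf("Compiled for ARM (iPhone)\n");
#else
    printf("Compiled for an unrecognized architecture\n");
#endif
    return 0;
}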

The Compiler

The compiler is the portion of the development toolchain that sits between writing source code and debugging or deploying the finished program. The first phase of compiling is the Front End Parser, which performs initial language-specific syntax and semantic analysis on source code to create an internal representation of the program.

Code is then passed through an Optimizer phase which improves it by doing things like deleting any code redundancies or dead code that doesn't need to exist in the final version.
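As a minimal, made-up example of what the Optimizer can remove, consider this function: the unused variable and the unreachable branch contribute nothing to the result, so neither survives into the final program.

int area(int width, int height)
{
    int scratch = width * 42;   /* dead code: never read, so it is deleted */
    int result = width * height;

    if (0) {                    /* unreachable branch: also deleted */
        result = -1;
    }
    return result;              /* all that remains is the multiplication */
}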

The Code Generator phase then takes the optimized code and maps it to the output processor, resulting in assembly language code which is no longer human readable.

The Assembler phase converts assembly language code into object code that can be interpreted by a hardware processor or a software virtual machine.

The final phase is the Linker, which combines object code with any necessary library code to create the final executable.
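The whole pipeline can be traced with a toy two-file program (a hand-written sketch, not the output of any particular compiler): the Front End parses each file, the Optimizer and Code Generator turn it into assembly, the Assembler emits greet.o and main.o, and the Linker joins those object files with the C library's printf to produce the finished executable.

/* greet.c -- compiled on its own into the object file greet.o */
#include <stdio.h>

void greet(const char *name)
{
    printf("Hello, %s\n", name);
}

/* main.c -- compiled separately into main.o; the call to greet() is only a
   symbolic reference at this stage. The Linker resolves it against greet.o
   and pulls in the library code for printf to build the final executable. */
void greet(const char *name);

int main(void)
{
    greet("WWDC");
    return 0;
}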

Introducing LLVM

GCC currently handles all those phases for compiling code within Xcode, Apple's Mac OS X IDE (Integrated Development Environment). However, there are some drawbacks to using GCC.

One is that it is delivered under the GPL, which means Apple can't integrate it directly into Xcode without making its IDE GPL as well. Apple prefers BSD/MIT style open source licenses, which place no limitation upon extending open projects as part of larger proprietary products.

Another is that portions of GCC are getting long in the tooth. LLVM is a modern project that has aspired to rethink how compiler parts should work, with emphasis on Just In Time compilation, cross-file optimization (which can link together code from different languages and optimize across file boundaries), and a modular compiler architecture for creating components that have few dependencies on each other while integrating well with existing compiler tools.
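A small illustration of the cross-file idea, using invented file names: compiled one file at a time, scale() is opaque when total_of_scaled() is built, so every iteration pays for a real function call. An optimizer that sees both files at once, as LLVM's cross-file optimization can, is free to inline the multiplication straight into the loop.

/* scale.c */
int scale(int x)
{
    return x * 3;
}

/* sum.c -- with cross-file (whole program) optimization, the call below can
   be inlined so the loop body becomes total += i * 3, even though scale()
   lives in a different source file. */
int scale(int x);

int total_of_scaled(int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += scale(i);
    return total;
}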

LLVM only just got started at the University of Illinois in 2000 as a research project of Chris Lattner. It was released as version 1.0 in 2003. Lattner caught the attention of Apple after posting questions about Objective-C to the company's objc-language mailing list. Apple in turn began contributing to the LLVM project in 2005 and later hired Lattner to fund his work.

Clang and LLVM-GCC

Last year the project released Clang as an Apple-led, standalone implementation of the LLVM compiler tools aimed at providing fast compiling with low memory use, expressive diagnostics, a modular library-based architecture, and tight integration within an IDE such as Xcode, all offered under the BSD open source license.

In addition to the pure LLVM Clang project, which uses an early, developmental front end code parser for Objective-C/C/C++, Apple also started work on integrating components of LLVM into the existing GCC, based on Lattner's LLVM/GCC Integration Proposal. That has resulted in a hybrid system that leverages the mature components of GCC, such as its front end parser, while adding the most valuable components of LLVM, including its modern code optimizers.

That project, known as LLVM-GCC, inserts the optimizer and code generator from LLVM into GCC, providing modern methods for "aggressive loop, standard scalar, and interprocedural optimizations and interprocedural analyses" missing in the standard GCC components.
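To give a flavor of what an "aggressive loop and standard scalar" optimization does, here is an invented before-and-after pair: the division inside the loop produces the same value on every pass, so the optimizer is free to hoist it out and compute it once.

/* As written by the programmer: */
void scale_samples(float *samples, int n, float gain, float reference)
{
    for (int i = 0; i < n; i++)
        samples[i] *= gain / reference;   /* gain / reference never changes */
}

/* What the optimizer effectively produces (loop-invariant code motion): */
void scale_samples_hoisted(float *samples, int n, float gain, float reference)
{
    float factor = gain / reference;      /* computed once, outside the loop */
    for (int i = 0; i < n; i++)
        samples[i] *= factor;
}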

LLVM-GCC is designed to be highly compatible with GCC so that developers can move to the new compiler and benefit from its code optimizations without making substantial changes to their workflow. Sources report that LLVM-GCC "compiles code that consistently runs 33% faster" than code output from GCC.

Apple also uses LLVM in the OpenGL stack in Leopard, leveraging its virtual machine concept of a common intermediate representation (IR) to emulate OpenGL hardware features on Macs that lack the actual silicon to run that code. The code is instead interpreted or JIT-compiled on the CPU.
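A rough sketch of the kind of fallback involved (purely illustrative, not Apple's actual OpenGL code): when a GPU lacks a blending feature, the framework can fall back to a plain routine like the one below, and with LLVM such a routine can be kept as IR and JIT-compiled for whatever CPU the Mac actually has, PowerPC or Intel.

/* Additive blend of two RGBA float buffers, clamped to [0, 1]. On hardware
   without the feature, something along these lines runs on the CPU instead. */
static void blend_add_rgba(float *dst, const float *src, int pixel_count)
{
    for (int i = 0; i < pixel_count * 4; i++) {
        float sum = dst[i] + src[i];
        dst[i] = (sum > 1.0f) ? 1.0f : sum;
    }
}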

Apple is also using LLVM in iPhone development, as the project's modular architecture makes it easier to add support for other architectures such as ARM, now supported in LLVM 2.0 thanks to work done by Nokia's INdT.


LLVM and Apple's Multicore Future

LLVM plays into Apple's ongoing strategies for multicore and multiprocessor parallelism. CPUs are now reaching physical limits that are preventing chips from getting faster simply by driving up the gigahertz. Intel's roadmaps indicate that the company now plans to drive future performance by adding multiple cores. Apple already ships 8-core Macs on the high end, and Intel has plans to boost the number of cores per processor into the double digits.

Taking advantage of those cores is not straightforward. While the classic Mac OS' and Windows' legacy spaghetti code was made faster through a decade of CPUs that rapidly increased their raw clock speeds, future advances will come from producing highly efficient code that can take full advantage of multiple cores.

Existing methods of thread scheduling are tricky to keep in sync across multiple cores, resulting in inefficient use of modern hardware. With features like OpenCL and Grand Central Dispatch, Snow Leopard will be better equipped to manage parallelism across processors and push optimized code to the GPU's cores, as described in WWDC 2008: New in Mac OS X Snow Leopard. However, in order for the OS to efficiently schedule parallel tasks, the code needs to be explicitly optimized for parallelism by the compiler.
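As one concrete example of what source-level hints for parallelism look like today, GCC 4.2 understands OpenMP pragmas when invoked with -fopenmp; the loop below is an invented sample, and OpenMP is used here only as a familiar stand-in, not as the mechanism behind OpenCL or Grand Central Dispatch.

/* Every iteration is independent, so the annotated loop can be divided among
   however many cores are available; the compiler and runtime split the work. */
void brighten(float *pixels, long count, float amount)
{
    #pragma omp parallel for
    for (long i = 0; i < count; i++)
        pixels[i] += amount;
}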

Open for Improvement

LLVM will be a key tool in prepping code for high performance scheduling. As the largest contributor to the LLVM project, Apple is working to push compiler technology ahead along with researchers in academia and industry partners, including supercomputer maker Cray. Apple is also making contributions to GCC to improve its performance and add features.

Because both projects are open source, it's easy to find hints of what the company is up to next. Enhancements to code debugging, compiler speed, the speed of output code, security features related to stopping malicious buffer overflows, and processor specific optimizations will all work together to create better quality code.

That means applications will continue to get faster and developers will have an easier time focusing on the value they can add rather than having their time consumed by outdated compiler technology.

For Apple, investing in its own advanced compiler expertise also means that it can hand tune the software that will be running while it also optimizes the specialized processors that will run it, such as the mobile SoCs Apple will be building with its acquisition of PA Semi, as noted in How Apple's PA Semi Acquisition Fits Into Its Chip History.

There's more information on The LLVM Compiler Infrastructure Project. Lattner also published a PDF of his presentation of The LLVM Compiler System at the 2007 Bossa Conference.
post #2 of 52
"The Code Generator phase then takes the optimized code and maps it to the output processor, resulting in assembly language code which is no longer human readable."

Assembly isn't human readable? I guess that makes me a machine.
post #3 of 52
Quote:
Originally Posted by exscape View Post

"The Code Generator phase then takes the optimized code and maps it to the output processor, resulting in assembly language code which is no longer human readable."

Assembly isn't human readable? I guess that makes me a machine.

And me too, I'd guess Well, at least someone finally told me...
post #4 of 52
Quote:
Originally Posted by exscape View Post

Assembly isn't human readable? I guess that makes me a machine.

Quote:
Originally Posted by eAi View Post

And me too, I'd guess Well, at least someone finally told me...

Count me in... I guess we're either machines or dinosaurs!
post #5 of 52
Even more. I have known guys that can read pages and pages of 0's and 1's and know what they are saying. But now that you mention it, the guys were sort of different. Maybe machines?
post #6 of 52
That was a nice, insightful read

Seeing how apple is charging ahead in optimizing their OS and other software and IDE to really get the maximum performance out of their hardware and the direction the CPU/GPU makers are going makes me feel very warm and fuzzy about apple's future

These developments also really highlight the rationale behind Apple's strategy to engineer their products as an integrated union of hard- and software and focus on the most optimal, efficient and powerful integration/symbiosis of both. Combined with their renewed efforts in internet connectivity using open standards and a great user experience (iPhone, sproutcore) and they really can't go anywhere but up.

Makes you wonder when companies that focus on a single aspect will see the light.
Microsoft, Dell, Samsung, Sony, Nokia, Google, I'm looking at you!
post #7 of 52
Quote:
Originally Posted by exscape View Post

"The Code Generator phase then takes the optimized code and maps it to the output processor, resulting in assembly language code which is no longer human readable."

Assembly isn't human readable? I guess that makes me a machine.

This is another example in a long line that reminds us (but obviously not them) that AI should keep away from analysis. They really come across as not knowing what they are talking about.

The fact they think that a compiler is involved in scheduling threads, that it 'optimises for parallelism' at the code level for multiple processors and that the OS is involved in scheduling parallel code makes them look silly.

The article just sounds like a bunch of gobbledegook. They're taking what is a pretty mundane improvement (for people other than compiler hackers) and trying to turn it into something special. It's got nothing to do with multiple architectures or specialised versions of ARM processors (which all run basically identical instruction sets, no matter how they are applied).

Just give it up, AI, please.
post #8 of 52
Very interesting read, thanks
post #9 of 52
Quote:
Originally Posted by dutch pear View Post

That was a nice, insightful read

Seeing how apple is charging ahead in optimizing their OS and other software and IDE to really get the maximum performance out of their hardware and the direction the CPU/GPU makers are going makes me feel very warm and fuzzy about apple's future

These developments also really highlight the rationale behind Apple's strategy to engineer their products as an integrated union of hard- and software and focus on the most optimal, efficient and powerful integration/symbiosis of both. Combined with their renewed efforts in internet connectivity using open standards and a great user experience (iPhone, sproutcore) and they really can't go anywhere but up.

Makes you wonder when companies that focus on a single aspect will see the light.
Microsoft, Dell, Samsung, Sony, Nokia, Google, I'm looking at you!

Your post is very buzzword compliant. You should work for AI.
post #10 of 52
Interesting article Dan!
post #11 of 52
The developers I spoke to at WWDC were super excited about LLVM - if they were, so should you be.

It makes me laugh to think that some supposedly well educated people in the IT field still can't see past Apple's eye-candy hardware and UIs, and don't realise that they are actually a serious computer-science company.

LLVM + GCD + Open CL + <redacted> + <redacted> = a perfect Snow Leopard storm
post #12 of 52
Quote:
Originally Posted by exscape View Post

"The Code Generator phase then takes the optimized code and maps it to the output processor, resulting in assembly language code which is no longer human readable."

Assembly isn't human readable? I guess that makes me a machine.

You obviously are a machine because you're incapable of actually understanding anything but machine code. What AI is saying is not that assembly is unreadable...it's saying that the optimized assembly code that is being generated by LLVM is very difficult to read or "no longer human readable" (which is somewhat of a hyperbole since some people can sit down in front of the code and spend some time understanding what it's doing).

But like I said...you're a machine and wouldn't understand this. Good job on luring other machines though.
post #13 of 52
Quote:
Originally Posted by merdhead View Post

This is another example in a long line that reminds us (but obviously not them) that AI should keep away from analysis. They really come across as not knowing what they are talking about.

The fact they think that a compiler is involved in scheduling threads, that it 'optimises for parallelism' at the code level for multiple processors and that the OS is involved in scheduling parallel code makes them look silly.

The article just sounds like a bunch of gobbledegook. They're taking what is a pretty mundane improvement (for people other than compiler hackers) and trying to turn it into something special. It's got nothing to do with multiple architectures or specialised versions of ARM processors (which all run basically identical instruction sets, no matter how they are applied).

Just give it up, AI, please.

I think you're being a bit harsh. LLVM's intermediate bytecode approach is pretty interesting, both in terms of linear optimization and parallel optimization. I can imagine a case where LLVM opcodes hint at parallelizable tasks that can be dynamically compiled for any number of processors. Thus the language would simply say "the elements in this loop can be executed in parallel" and the compiler and OS would take care of the rest. And when you're dynamically recompiling, you have awareness of not only the system architecture but its resource availability at compile time.
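Something like this rough pthreads sketch (error handling omitted, and the names are mine) gives the flavor: the work is divided at run time based on how many cores the machine reports, which is exactly the sort of decision a JIT that knows the host system could make on its own.

#include <pthread.h>
#include <unistd.h>

#define N 1000000
static float data[N];

struct chunk { long start, end; };

static void *square_chunk(void *arg)
{
    struct chunk *c = arg;
    for (long i = c->start; i < c->end; i++)
        data[i] *= data[i];
    return NULL;
}

void square_all(void)
{
    long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* cores available right now */
    if (cores < 1) cores = 1;
    if (cores > 64) cores = 64;

    pthread_t threads[64];
    struct chunk chunks[64];

    for (long t = 0; t < cores; t++) {
        chunks[t].start = t * N / cores;
        chunks[t].end = (t + 1) * N / cores;
        pthread_create(&threads[t], NULL, square_chunk, &chunks[t]);
    }
    for (long t = 0; t < cores; t++)
        pthread_join(threads[t], NULL);
}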

LLVM seems to be driving toward this a lot faster than Java or CLR's intermediate bytecode. They seem intent on staying pretty close to the metal with LLVM so they can generate everything from GPU to embedded native code.

I find it pretty exciting stuff. I'd love it if some of the LLVM slides leaked. Hopefully Apple will post them soon for those who weren't able to attend WWDC.
post #14 of 52
Quote:
Originally Posted by starwarrior View Post

Even more. I have known guys that can read pages and pages of 0's and 1's and know what they are saying. But now that you mention it, the guys were sort of different. Maybe machines?

We (more progressive) programmers write code in octal absolute. A programmer with any credentials would not trust these newfangled assemblers as they introduce cruft and generate un-optimized code.

Further, the practice of using macro-instructions should be avoided at all costs-- it is a sign of laziness that will only serve to further estrange the programmer from the hardware and introduce bloat into the resulting program.
"...The calm is on the water and part of us would linger by the shore, For ships are safe in harbor, but that's not what ships are for."
- Michael Lille -
Reply
"...The calm is on the water and part of us would linger by the shore, For ships are safe in harbor, but that's not what ships are for."
- Michael Lille -
Reply
post #15 of 52
Quote:
Originally Posted by exscape View Post

Assembly isn't human readable? I guess that makes me a machine.

I've always suspected that everyone on the internet is a Bot, except me... hence my alias.


Quote:
Originally Posted by merdhead View Post

Your post is very buzzword compliant. You should work for AI.


While I don't agree with the sentiment, I did find it very humorous.
post #16 of 52
Quote:
Originally Posted by Booga View Post

I think you're being a bit harsh. LLVM's intermediate bytecode approach is pretty interesting, both in terms of linear optimization and parallel optimization. I can imagine a case where LLVM opcodes hint at parallelizable tasks that can be dynamically compiled for any number of processors. Thus the language would simply say "the elements in this loop can be executed in parallel" and the compiler and OS would take care of the rest. And when you're dynamically recompiling, you have awareness of not only the system architecture but its resource availability at compile time.

LLVM seems to be driving toward this a lot faster than Java or CLR's intermediate bytecode. They seem intent on staying pretty close to the metal with LLVM so they can generate everything from GPU to embedded native code.

I find it pretty exciting stuff. I'd love it if some of the LLVM slides leaked. Hopefully Apple will post them soon for those who weren't able to attend WWDC.

I'm not being harsh and I'm not saying it's not interesting to someone.

The important thing you mention in your post is 'language': you're going to need language support to parallelise. A compiler might take C code and improve parallel execution within one core, using multiple units or whatever, but it's not going to make it run on multiple processors without library and OS support, which is outside the job description of the compiler and requires code to be written to some required spec. The scope for automatically making code parallel is very limited indeed, ie not really practical.
post #17 of 52
Quote:
Originally Posted by da2357 View Post

Count me in... I guess we're either machines or dinosaurs!

Cylons! Someone get those mother frackers before they ruin the entire fleet.
post #18 of 52
Replicants, the whole lot of ya, I believe.

And God bless you all for making my life so nice and personal computing experiences so rewarding. May you all be richly compensated for devoting your lives to something we mere mortals can't fathom.

post #19 of 52
Quote:
Originally Posted by aplnub View Post

Cylons! Someone get those mother frackers before they ruin the entire fleet.

You can always tell they're the ones that need the most interspecies lovin' too.
post #20 of 52
While I agree with most of what is in the article, I would like to remind the writer that one tiny "factoid" isn't true.

I'm getting a bit tired of writers, even some knowledgeable ones, writing that "now that individual cores aren't getting faster, cpu manufacturers are relying on multi core to move things forward".

It's true that when moving to dual cores and above, IBM, Intel, and AMD moved the speeds of those cores down to allow for the heavier use of current on the chip required by the multi-core designs.

Also, Intel particularly, needed to move back from their aggressive marketing for speed.

But, despite writers seeming to miss it, we constantly talk about how chip speeds are moving up. And overclockers seem to get speeds that are quite a bit higher than the highest achieved by Prescott back then.

Current Core 2 designs will get to at least 3.4 GHz, and possibly 3.6 GHz, before they are phased out. Nehalem will easily get to 3.6 GHz early, will possibly exceed 4.0 GHz, and we will see 5 GHz some time after that in the next two years or so.

So while the march to ever higher clock speeds has slowed, it continues.

Other major advances show that adding more cores isn't the only method to add power. Nehalem has a host of new features to do that. Taking advantage of those new advanced features will provide most of the raw power increase we will see in processing, with the same number of cores.

Sometimes, writers make it look as though the only advance in CPU design is adding more cores. Not true.

These newer compilers will be working very hard to take advantage of that. If all we had to look forward to were more cores, we would be in bad shape for the next few years.
post #21 of 52
You mean that all of Apple's previous OS's were spaghetti code? Really? Do you double as an AP journalist as well? Both this article and their reporting are seriously fact-challenged.
post #22 of 52
Is SproutCore Apple's Flash killer?
post #23 of 52
Quote:
Originally Posted by merdhead View Post

The fact they think that a compiler is involved in scheduling threads, that it 'optimises for parallelism' at the code level for multiple processors and that the OS is involved is scheduling parallel code makes them look silly.

Actually, given that they effectively control Objective-C, the compiler, the hardware and the OS, they are better positioned than everyone else to provide the language hooks and the compiler, OS and hardware support to better support parallelism for developers.

More than even Intel, MS or Sun (with the dubious exception of Java on OpenSolaris running on a Sparc).

In any case, there are compilers that optimize for parallelism (OpenUH, VFC, PGI, etc) but mostly they're just more efficient at targeting things like OpenMP or MPI.

Meh, I guess someone like MS or Intel could get by with something like pragma statements to provide the needed hints but that's just inelegant.
post #24 of 52
Quote:
Originally Posted by aplnub View Post

Cylons! Someone get those mother frackers before they ruin the entire fleet.

*shoots aplnub* Dun touch my Six...


*cough*


With that unfortunate business out of the way, I'm still pretty damn excited about the stuff I see coming out of the Snow Leopard announcement.
post #25 of 52
So what is LLVM for in plain English for mere mortals? Can it be used to automatically make iPhone applications from Mac applications? Thanks.
post #26 of 52
Quote:
Originally Posted by Aurora72 View Post

You mean that all of Apple's previous OS's were spaghetti code? Really? Do you double as an AP journalist as well? Both this article and their reporting are seriously fact-challenged.

Yeah I forgot to mention that one. The author doesn't even know what spaghetti is.
post #27 of 52
Quote:
Originally Posted by zunx View Post

So what is LLVM for in plain English for mere mortals? Can it be used to automatically make iPhone applications from Mac applications? Thanks.

No. It's a compiler back end, the bit of the compiler that does the hard work (producing an executable). The front end of a compiler reads the code written in whatever language and translates it into a form the back end can handle.
post #28 of 52
Quote:
Originally Posted by zunx View Post

So what is LLVM for in plain English for mere mortals? Can it be used to automatically make iPhone applications from Mac applications? Thanks.

hey zunx.

the short answer is, no, llvm is not about converting the UI from desktop osx to mobile osx.

llvm is a tool that system-designers (at apple) create to make it easier for them (ie the designers at apple) to create tools that developers can use to make their applications use the hardware in a more efficient way.

as an end-user, the main impact that llvm will have on you is that your apps will run faster :-)

dont be offended, but the explanation in the article is already pretty basic, so if you dont understand that then there is not much more that can help you understand.

but dont be afraid to keep asking questions!
post #29 of 52
Quote:
Originally Posted by AppleInsider View Post

LLVM only just got started at the University of Illinois in 2000 as a research project of Chris Lattner. It was released as version 1.0 in 2003. Lattner caught the attention of Apple after posting questions about Objective-C to the company's objc-language mailing list. Apple in turn began contributing to the LLVM project in 2005 and later hired Lattner to fund his work.

Actually, I became aware of LLVM because of Chris' early posts to the FSF GCC development lists. I brought LLVM to the attention of Ted Goldstein, then head of Developer Tools at Apple, who summarily ignored it because management above him was known to favor GCC despite any technical merits other projects might have.

Along the way, I evangelized LLVM to Nate Begeman and Louis Gerbarg. They ended up writing the PowerPC backend for LLVM as a side project. It wasn't until much later that LLVM's apparent maturity, coupled with the unknowns about the forthcoming FSF GPLv3 and other needs such as a replacement for the OpenGL AltiVec JIT, convinced Apple to commit to LLVM.

LLVM has been a huge win for Apple and we should all be excited about it as users because this is the path via which better, faster, and less buggy code will be written.

In a similar manner of shake-up, look into the new "gold" linker that Ian Lance Taylor and others have built to replace the normal FSF linker. Apple has a custom linker that can't benefit from this work, but it's a fascinating development nevertheless.
post #30 of 52
I've been using computers since before desk tops and never understood explanations of what a compiler was or did. Now, thanks to Merdhead and DavidfO1, I have at least a glimmer of understanding.
And, although I didn't gut level understand 99% of this thread, it was truly interesting. Thanx, guys. You're awesome.
post #31 of 52
Quote:
Originally Posted by da2357 View Post

Count me in... I guess we're either machines or dinosaurs!

Heres to us non-Humans!
post #32 of 52
Quote:
Originally Posted by dutch pear View Post

Makes you wonder when companies that focus on a single aspect will see the light.
Microsoft, Dell, Samsung, Sony, Nokia, Google, I'm looking at you!

Lattner used to work on Microsoft's Phoenix advanced compiler framework - essentially Phoenix is Microsoft's equivalent of LLVM.
post #33 of 52
Quote:
Originally Posted by Dick Applebaum View Post

We (more progressive) programmers write code in octal absolute. A programmer with any credentials would not trust these newfangled assemblers as they introduce cruft and generate un-optimized code.

Further, the practice of using macro-instructions should be avoided at all costs-- it is a sign of laziness that will only serve to further estrange the programmer from the hardware and introduce bloat into the resulting program.

"I'll tell you the problem with the scientific power that you're using here: it didn't require any discipline to attain it. You read what others had done and you took the next step. You didn't earn the knowledge for yourselves, so you don't take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could and before you even knew what you had you patented it and packaged it and slapped it on a plastic lunchbox, and now you're selling it."
- Dr Ian Malcolm, Jurassic Park

I think we need fewer developers and more programmers, people who know how the the whole system works and can optimize it and make it work well. My problem with Flash isn't [just] the technology. It's how people use/abuse it. It's people who use it because it's easy, not because it's the right tool for the job.
post #34 of 52
Quote:
Originally Posted by Wiggin View Post


I think we need fewer developers and more programmers, people who know how the the whole system works and can optimize it and make it work well. My problem with Flash isn't [just] the technology. It's how people use/abuse it. It's people who use it because it's easy, not because it's the right tool for the job.

That simply won't work anymore. The days when a single programmer could write a complex program for a modern OS are pretty much done and over with. There's too much to understand, and too much to code.

Could you see a program such as any of the major ones being done by one programmer? No, you need a number of them, possibly dozens, all specialized in their own area of expertise. None of them understand the entire program.
post #35 of 52
I actually became aware of LLVM a while back via iPhone hacking. A bunch of folks were trying to figure out how to create software for the iPhone and quickly found that no known compiler/linker combination could generate MachO-ARM binaries (that naming combination still cracks me up).

Scouring through the Apple GCC mailing list for clues, they found mention of LLVM and ARM (a discussion thread which ended abruptly) and realized that was the missing piece of the puzzle. The rest is history (what basically allowed for all of the early iPhone software development).

I'm still a bit fuzzy on some of the technical details of LLVM (the JIT aspect of it), but do I know that it's a nice modular system for more easily generating optimized code on many different platforms.
 
post #36 of 52
Quote:
Originally Posted by melgross View Post

That simply won't work anymore. The days when a single programmer could write a complex program for a modern OS are pretty much done and over with. There's too much to understand, and too much to code.

Could you see a program such as any of the major ones being done by one programmer? No, you need a number of them, possibly dozens, all specialized in their own area of expertise. None of them understand the entire program.

You're overlooking people with actual Engineering field credentials. Everyone I worked with at NeXT Engineering knew everything from the basics of kernel programming to userspace APIs and how ObjC and the mach microkernel worked together. It was a small, deeply knowledgeable set of geniuses who actually loved to help and expand other engineers' learning along the way.

You want to know how many developers wrote TIFFany 3? Two. They now both work at Apple. Many of TIFFany 2.x technologies were copied by Adobe for Photoshop.

How many developers write Create and the suite of Stone Design Tools? One main developer and some help from a few others.

The OmniGroup of Developers is extremely small.

The Lighthouse Design group, now owned by SUN, was a very small group as well.

Some are now working on iWork.

Sorry, but if companies would hire actual individuals holding Engineering degrees they would be better served.

Accredited universities in CS tend to have the following:

Course sequences allowing students to specialize in specific areas such as:
  • computer graphics,
  • computer networking,
  • computer systems software,
  • software engineering, or
  • computer engineering.

Of course, I'd recommend companies first hire individuals who've worked for Operating System companies or within Operating System projects [Linux, RiscOS, Haiku, et.al], before hiring the fresh grad who knows Flash, Javascript and Photoshop.

If they want Liberal Arts computer graphic artists that's all they'll ever get.

As someone who did Mechanical Engineering and Computer Science do you think I'd be hired to be a Mechanical Engineering in-training without my degree in said field? No.

What makes the Software Industry think it can actually get away with it and not feel the ripples along the way?

One problem is that we are driven by 6 month cycles to bring out the next best thing before figuring out if it's got a foundation worth building on.

There is a reason C is still vital and FORTRAN is critical in many Engineering applications, especially Numerical Analysis--there is decades of refinement and design behind it.

The dubious C Programming language is so deeply ingrained in Computer Graphics, Compilers, Kernels, to every aspect of programming where it shares duties with C++, ObjC, Java, Fortran, Ada, Pascal, Eiffel and more that it's clear that Python, Ruby, Javascript, the many ML functional languages and more are just one big family serving to solve our needs.

The goal isn't to learn their languages and syntax but to learn the mathematical approach to design solutions [applications, services, classes, methods, delegates, etc] to solve programmatically large data sets where regular symbolic mathematics works if we only had a million years to crunch it out.

However, the more deeply knowledgeable one is in the mathematical theories the smaller and more rapidly the solutions will surface and thus picking the right tool for the job is the road to completion.

That takes years of learned skill development that many industries stopped doing over a decade prior. They want temporary solutions that mainly drive entertainment revenue streams.

Industry jumped on the WEB bandwagon and duct-taped it together.

An example of this is Apache 1 and 2 Web Servers. Great products within their original design scopes. Unfortunately, they've had to be bloated to adapt to each new web technology that comes along and gets "popular" with the business markets.

The law of diminishing returns eventually rears its ugly head, and that's when people start to look around for those who realized the end game would require them to have knowledge to see the general scope to solve this dilemma.

One said example is the next generation of Apache 3.0.

Go read up on what Apache 3.0 will do for the Web. It's a complete rewrite and guts out about 90% of the crap thrown on there from past ideas that made no sense.

Here is the ApacheCon PDF (13 MB)
http://roy.gbiv.com/talks/200804_Apache3_ApacheCon.pdf


Don't get me wrong: most people do a bad job at faking their careers while keeping themselves distracted through the weekends knowing they return to another week of bs.

However, that keeping up with the Jones mentality doesn't work in the long run.

I agree with the grandparent poster who pointed out the abuse and misuse of Flash by people who use it to make money as glorified presentations for business leaders too ignorant to know whether it's smart to pay them or not.

The number one driver of the Net has been Entertainment. People are not excited about the hard sciences, but sure as hell hunt down solutions to their Flash player problems as if it's a matter of life or death--YouTube generation indeed.
post #37 of 52
Quote:
Originally Posted by melgross View Post

That simply won't work anymore. The days when a single programmer could write a complex program for a modern OS are pretty much done and over with. There's too much to understand, and too much to code.

Could you see a program such as any of the major ones being done by one programmer? No, you need a number of them, possibly dozens, all specialized in their own area of expertise. None of them understand the entire program.

Yeah, what mdriftmeyer said!! Plus, I never said it should be done by a single person. Every reference was in the plural.

I remember having to deal with 14.4 kbps being the fastest internet connection people had and cleaning up HTML code by hand to remove every extraneous bit so it would download as fast as possible. The early WYSIWYG HTML apps created bigger files that took twice as long to download; and when Word started exporting to HTML, the bloated crap people started generating drove me nuts.
post #38 of 52
Quote:
Originally Posted by mdriftmeyer View Post

You're overlooking people with actual Engineering field credentials. Everyone I worked with at NeXT Engineering knew everything from the basics of kernel programming to userspace APIs and how ObjC and the mach microkernel worked together. It was a small, deeply knowledgeable set of geniuses who actually loved to help and expand other engineers' learning along the way.

You want to know how many developers wrote TIFFany 3? Two. They now both work at Apple. Many of TIFFany 2.x technologies were copied by Adobe for Photoshop.

How many developers write Create and the suite of Stone Design Tools? One main developer and some help from a few others.

The OmniGroup of Developers is extremely small.

The Lighthouse Design group, now owned by SUN, was a very small group as well.

Some are now working on iWork.

Sorry, but if companies would hire actual individuals holding Engineering degrees they would be better served.

Accredited universities in CS tend to have the following:

Course sequences allowing students to specialize in specific areas such as:
  • computer graphics,
  • computer networking,
  • computer systems software,
  • software engineering, or
  • computer engineering.

Of course, I'd recommend companies first hire individuals who've worked for Operating System companies or within Operating System projects [Linux, RiscOS, Haiku, et.al], before hiring the fresh grad who knows Flash, Javascript and Photoshop.

If they want Liberal Arts computer graphic artists that's all they'll ever get.

As someone who did Mechanical Engineering and Computer Science do you think I'd be hired to be a Mechanical Engineering in-training without my degree in said field? No.

What makes the Software Industry think it can actually get away with it and not feel the ripples along the way?

One problem is that we are driven by 6 month cycles to bring out the next best thing before figuring out if it's got a foundation worth building on.

There is a reason C is still vital and FORTRAN is critical in many Engineering applications, especially Numerical Analysis--there is decades of refinement and design behind it.

The dubious C Programming language is so deeply ingrained in Computer Graphics, Compilers, Kernels, to every aspect of programming where it shares duties with C++, ObjC, Java, Fortran, Ada, Pascal, Eiffel and more that it's clear that Python, Ruby, Javascript, the many ML functional languages and more are just one big family serving to solve our needs.

The goal isn't to learn their languages and syntax but to learn the mathematical approach to design solutions [applications, services, classes, methods, delegates, etc] to solve programmatically large data sets where regular symbolic mathematics works if we only had a million years to crunch it out.

However, the more deeply knowledgeable one is in the mathematical theories the smaller and more rapidly the solutions will surface and thus picking the right tool for the job is the road to completion.

That takes years of learned skill development that many industries stopped doing over a decade prior. They want temporary solutions that mainly drive entertainment revenue streams.

Industry jumped on the WEB bandwagon and duct-taped it together.

An example of this is Apache 1 and 2 Web Servers. Great products within their original design scopes. Unfortunately, they've had to be bloated to adapt to each new web technology that comes along and gets "popular" with the business markets.

The law of diminishing returns eventually rears its ugly head, and that's when people start to look around for those who realized the end game would require them to have knowledge to see the general scope to solve this dilemma.

One said example is the next generation of Apache 3.0.

Go read up on what Apache 3.0 will do for the Web. It's a complete rewrite and guts out about 90% of the crap thrown on there from past ideas that made no sense.

Here is the ApacheCon PDF (13 MB)
http://roy.gbiv.com/talks/200804_Apache3_ApacheCon.pdf


Don't get me wrong: most people do a bad job at faking their careers while keeping themselves distracted through the weekends knowing they return to another week of bs.

However, that keeping up with the Jones mentality doesn't work in the long run.

I agree with the grandparent poster who pointed out the abuse and misuse of Flash by people who use it to make money as glorified presentations for business leaders too ignorant to know whether it's smart to pay them or not.

The number one driver of the Net has been Entertainment. People are not excited about the hard sciences, but sure as hell hunt down solutions to their Flash player problems as if it's a matter of life or death--YouTube generation indeed.

That's a long and interesting post. But I don't care how many degrees a person has, they simply can't come up with a Photoshop, or a Word, or similar program by themselves anymore. These have gotten far too complex, and the number of APIs in a modern OS is large. Add that in with parallel processing, GPU takeovers, etc., and it requires a team of specialized people.

Stone has been around for quite some time, and when they started, NEXT was far simpler than OS X is today. So were the computers.

Starting programs that get increasingly more complex as the years go on is one thing. Starting a new program that has to compete with those programs that are now in high digit versions is difficult.
post #39 of 52
*YAAAAAAAAWN*
2011 13" 2.3 MBP, 2006 15" 2.16 MBP, iPhone 4, iPod Shuffle, AEBS, AppleTV2 with XBMC.
Reply
2011 13" 2.3 MBP, 2006 15" 2.16 MBP, iPhone 4, iPod Shuffle, AEBS, AppleTV2 with XBMC.
Reply
post #40 of 52
Quote:
Originally Posted by melgross View Post

That's a long and interesting post. But I don't care how many degrees a person has, they simply can't come up with a Photoshop, or a Word, or similar program by themselves anymore. These have gotten far too complex, and the number of APIs in a modern OS is large. Add that in with parallel processing, GPU takeovers, etc., and it requires a team of specialized people.

Stone has been around for quite some time, and when they started, NEXT was far simpler than OS X is today. So were the computers.

Starting programs that get increasingly more complex as the years go on is one thing. Starting a new program that has to compete with those programs that are now in high digit versions is difficult.

You obviously haven't written any software professionally, or worked for a large development team.

A large set of APIs makes software easier to write, not harder. I can get the gist of thousands of API calls and look up what I need as I need it. It radically reduces the amount of software I have to write.

Software is harder to write the more people you add, not easier. It's fine if you have a concrete spec which has long been frozen and doesn't change very much, and you know exactly what you are writing (eg the Linux kernel). But in every other case (and most interesting cases) it means more co-ordination, more communication and less freedom to act. Large software projects (in the billions) have been known to collapse under their own weight (read "The Mythical Man Month"). Most good software was written by individuals. See Microsoft products for what happens with large teams (I might add that the Mac versions of their Office suite used to be developed by small teams).

Programs get more complex over time because of more developers and other people getting involved. Writing a program yourself, even a big one, means you have much more chance of it working and it being elegant and efficient.

Finally, computers are not any more complex today than they were 10 or 20 years ago; they just have more capacity. In most senses they are simpler, because as a programmer you can harness that capacity. You don't have to hand code in assembler to get things to run at a reasonable speed, for instance. Computer software hasn't changed appreciably since 1984. Most good computer science ideas were invented in the 60's and 70's (and maybe early 80's); it's largely been a process of refinement ever since. Here's a hint for the morons: there is no new computer science in Facebook.