LinuxJournal: OS X is doomed

in macOS edited January 2014
The experts at LinuxJournal are saying how Mac OS X is doomed because of its obsolete Mach microkernel. That's too bad, I really liked OS X...

<a href=""; target="_blank"></a>;

Apple's quaint choice of a microkernel for Mac OS X means Linux will lead in performance on Mac hardware.

Disagreements exist about whether or not microkernels are good. It's easy to get the impression they're good because they were proposed as a refinement after monolithic kernels. Microkernels are mostly discredited now, however, because they have performance problems, and the benefits originally promised are a fantasy.

The microkernel zealot believes that several cooperating system processes should take over the monolithic kernel's traditional jobs. These several system processes are isolated from each other with memory protection, and this is the supposed benefit.

Monolithic kernels circumscribe the kernel's definition and implementation as "the part of the system that would not benefit from memory protection".

When I state the monolithic design's motivation this way, it's obvious who I believe is right. I think microkernel zealots are victims of an overgeneralization: they come to UNIX from legacy systems such as Windows 3.1 and Mac OS 6, which deludes them into the impression that memory protection everywhere is an abstract, unquestionable Good. It's sort of like the common mistake of believing in rituals that supposedly deliver more security, as if security were a one-dimensional concept.

Memory protection is a tool, and it has three common motivations:

1. to help debug programs under development with less performance cost than instrumentation. (Instrumentation is what Java or Purify uses.) The memory protection hopefully makes the program crash nearer to the bug than without it, while instrumentation is supposed to make the program crash right at the bug.

2. to minimize the inconvenience of program crashes.

3. to keep security promises even when programs crash.

"Because MS-DOS doesn't have it and MS-DOS sucks'' is not a motivation for memory protection.

Motivation #1 is a somewhat legitimate argument for the additional memory protection in microkernel systems. For example, QNX developers can debug device drivers and regular programs with the same debugger, making QNX drivers easier to write. QNX programmers are neat because drivers are so easy for them to write that they don't seem to share our idea of what a driver is; they think everything that does any abstraction of hardware is a driver. I think the good debugging tools for device drivers maintain QNX as a commercially viable Canadian microkernel. Their claims about stability of the finished product become suspicious to any developer who actually starts working with QNX; the microkernel benefits are all about ease of development and debugging.

Motivation #2 is silly. A real microkernel in the field will not recover itself when the SCSI driver process or the filesystem process crashes. Granted, if there's a developer at the helm who can give it a shove with some special debugging tool, it might, but that advantage is really more like that stated in motivation #1 than #2.

Since microkernel processes cooperate to implement security promises, the promises are not necessarily kept when one of the processes crashes. Therefore motivation #3 is also silly.

These three factors together show that memory protection is not very useful inside the kernel, except perhaps for kernel developers. That's why I claim the microkernel's promised benefits are a fantasy.

Before we move on, I should point out that the two microkernel systems, Mach and QNX, have different ideas about what is micro enough to go into the microkernel. In QNX, only message passing, context switching and a few process scheduling hooks go into the microkernel. QNX drivers for the disk, the console, the network card and all the hardware devices are ordinary processes that show up next to the user's programs in sin or ps. They obey kill, so if you want, you can kill them and crash the system.

Mach, which Apple has adopted for Mac OS X, puts anything that accesses hardware into the microkernel. Under Mach's philosophy, XFree86 still shouldn't be a user process. In the single-server abuses of microkernels, like mkLinux, the Linux process made a system call (not message passing) into Mach whenever it needed to access any Apple hardware, so the filesystem's implementation lived inside the Linux process, but the disk drivers stayed inside the Mach microkernel. This arrangement is a good business argument for Apple funding mkLinux: all the drivers for their proprietary hardware, thus much of the code they funded, stay inside Mach, where they're covered by a more favorable (to them) license.

However, putting Mach device drivers inside the microkernel substantially kills QNX's motivation #1 because Mach device drivers are now as hard to debug as a monolithic kernel's device drivers. I'm not sure how Darwin's drivers work, but it's important to acknowledge this dispute about the organization of real microkernel systems.

What about the performance problem? In short, modern CPUs optimize for the monolithic kernel. The monolithic kernel maps itself into every user process's virtual memory space, but these kernel pages are marked somehow so that they're only accessible when the CPU's supervisor bit is set. When a process makes a system call, the CPU implicitly sets and unsets the supervisor bit when the call enters and returns, so the kernel pages are appropriately lit up and walled off by flipping a single bit. Since the virtual memory map doesn't change across the system call, the processor can retain all the map fragments that it has cached in its TLB.

With a microkernel, almost everything that used to be a system call now falls under the heading "passing a message to another process". In this case, flipping a supervisor bit is no longer enough to implement the memory protection, as a single user process's system calls involve separate memory maps for 1 user process + 1 microkernel + n system processes, but a single bit has enough states for only two maps. Instead of using the supervisor bit trick, the microkernel must switch the virtual memory map at least twice for every system-call-equivalent: once from the user process to the system process, and once again from the system process back to the user process. This costs more than flipping a supervisor bit: there's extra work to juggle the maps, and there are also two TLB flushes.
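The arithmetic is easy to sketch. Here is a toy model of that count; the function and its numbers are my own illustration, not measurements of any real kernel:

```python
# Toy model of address-space switches per system-call-equivalent.
# Illustrative arithmetic only, not a measurement of any real kernel.

def map_switches(kind, system_processes=1):
    """Count virtual-memory-map switches (each forcing a TLB flush)
    for a single system-call-equivalent."""
    if kind == "monolithic":
        return 0  # the supervisor bit flips, but the map never changes
    if kind == "microkernel":
        # one switch into, and one out of, each cooperating system process
        return 2 * system_processes
    raise ValueError(kind)

print(map_switches("monolithic"))      # 0
print(map_switches("microkernel"))     # 2: user -> system process -> user
print(map_switches("microkernel", 2))  # 4: e.g. a file read crossing both a
                                       #    filesystem and a disk driver process
```

The last case anticipates the QNX file-read example below: every extra cooperating process in the chain adds two more map switches.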

A practical example might involve even more overhead since two processes is only the minimum involved in a single system-call-equivalent. For example, reading from a file on QNX involves a user process, a filesystem process and a disk driver process.

What is the TLB flush overhead? The TLB stores small pieces of the virtual-to-physical map so that most memory access ends up consulting the TLB instead of the definitive map stored in physical memory. Since the TLB is inside the CPU, the CPU's designers arrange that TLB consultations shall be free.

All the information in the TLB is a derivative of the real virtual-to-physical map stored in physical memory. The TLB can represent one virtual-to-physical mapping, but the whole point of memory protection is to give each process a different virtual-to-physical mapping, thus reserving certain blocks of physical memory for each process. The virtual-to-physical map stored in physical memory can represent this multiplicity of maps, but the map-fragment represented in the high-speed hardware TLB can represent only one mapping. That's why switching processes involves TLB flushing.

Once the TLB is flushed, it becomes gradually reloaded from the definitive map in physical memory as the new process executes. The TLB's gradual reloading, amortized over the execution of each newly-awakened process, is overhead. It therefore makes sense to switch between processes as seldom as possible and make maximal use of the supervisor bit trick.
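A toy cache makes the reload cost concrete. In this sketch (the class, names and sizes are purely illustrative; real TLBs are fixed-size associative hardware), the TLB is a small dictionary in front of the definitive page table, and flushing it means every subsequent translation misses again until the entries are repopulated:

```python
# Toy TLB: a small cache of virtual->physical translations in front of the
# definitive page table.  Illustrative only.

class TLB:
    def __init__(self):
        self.entries = {}
        self.misses = 0

    def translate(self, vpage, page_table):
        if vpage not in self.entries:
            self.misses += 1                  # miss: walk the map in memory
            self.entries[vpage] = page_table[vpage]
        return self.entries[vpage]

    def flush(self):
        self.entries.clear()                  # what a map switch forces

page_table = {v: v + 0x1000 for v in range(8)}
tlb = TLB()
for v in range(8):
    tlb.translate(v, page_table)   # 8 cold misses while the TLB warms up
for v in range(8):
    tlb.translate(v, page_table)   # 0 misses: a warm TLB is free
tlb.flush()                        # process switch
for v in range(8):
    tlb.translate(v, page_table)   # 8 fresh misses: the reload overhead
print(tlb.misses)                  # 16
```

The second loop is the point: while the map stays put, translation costs nothing; every flush buys a fresh round of misses.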

Microkernels also harm performance by complicating the current trend toward zero copy design. The zero copy aesthetic suggests that systems should copy around blocks of memory as little as possible. Suppose an application wants to read a file into memory. An aesthetically perfect zero copy system might have the application mmap(..) the file rather than using read(..). The disk controller's DMA engine would write the file's contents directly into the same physical memory that is mapped into the application's virtual address space. Obviously it takes some cleverness to arrange this, but memory protection is one of the main obstacles. The kernel is littered conspicuously with comments about how something has to be copied out to userspace. Microkernels make eliminating block copies more difficult because there are more memory protection barriers to copy across and because data has to be copied in and out of the formatted messages that microkernel systems pass around.
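For illustration, the read-versus-mmap contrast can be sketched from userspace; this example uses Python's thin wrapper over the mmap system call, and the scratch file and its contents are made up:

```python
import mmap
import os
import tempfile

# Create a small scratch file to read back.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, zero copy")
os.close(fd)

with open(path, "rb") as f:
    data_read = f.read()  # read(): the kernel copies the data out of the
                          # page cache into this process's buffer
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # mmap(): the same page-cache pages are mapped directly into our
        # virtual address space; touching m reads them without the copy-out
        mapped = bytes(m)

os.unlink(path)
print(mapped == data_read)  # same bytes, reached by different routes
```

Both routes deliver identical bytes; the difference is whether the kernel had to copy them across the protection boundary first.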

Existing zero copy projects in monolithic kernels pay off. NetBSD's UVM is Chuck Cranor's rewrite of virtual memory under the zero copy aesthetic. UVM invents page loanout and page transfer functions that NetBSD's earlier VM lacked. These functions embody the zero copy aesthetic because they sometimes eliminate the kernel's need to copy out to userspace, but only when the block that would have been copied is big enough to span an entire VM page. Some of his speed improvement no doubt comes from cleaner code, but the most compelling part of his PhD thesis discusses saving processor cycles by doing fewer bulk copies.

VxWorks is among the kernels that boasted zero copy design earliest, with its TCP stack. They're probably motivated by reduced memory footprint, but their zero copy stack should also be faster than a traditional TCP stack. Applications must use the zbuf API to experience the benefit, not the usual Berkeley sockets API. For comparison, VxWorks has no memory protection, not even between the kernel and the user's application.

BeOS implements its TCP stack in a user process, microkernel-style, and so does QNX. Both have notoriously slow TCP stacks.

Zero copy is an aesthetic, not a check-box press release feature, so it's not as simple as something a system can possess or lack. I suspect the difference between VxWorks's and QNX's TCP stack is one of zero copy vs. excessive copies.

The birth and death of microkernels didn't happen overnight, and it's important to understand that these performance obstacles were probably obvious even when microkernels were first proposed. Discrediting microkernels required actually implementing them, optimizing message-passing primitives, and so on.

It's also important not to laugh too hard at QNX. It's somewhat amazing that one can write QNX drivers at all, much less do it with unusual ease, given that their entire environment is rigidly closed-source.

However, I think we've come to a point where the record speaks for itself, and the microkernel project has failed. Yet this still doesn't cleanly vindicate Linux merely because it has a monolithic kernel. Sure, Linux need no longer envy Darwin's microkernel, but the microkernel experiment serves more generally to illustrate the cost of memory protection and of certain kinds of IPC.

If excessive switching between memory-protected user and system processes is wasteful, then might not also excessive switching between two user processes be wasteful? In fact, this issue explains why proprietary UNIX systems use two-level thread architectures that schedule many user threads inside each kernel thread. Linux stubbornly retains one-level kernel-scheduled threads, like Windows NT. Linux could perform better by adopting proprietary UNIX's scheduler activations or Masuda and Inohara's unstable threads. This performance issue is intertwined with the dispute between the IBM JDK's native threads and the Blackdown JDK's optional green threads.

Given how the microkernel experiment has worked out, I'm surprised by Apple's quaint choice to use a microkernel in a new design. At the very least, it creates an opportunity for Linux to establish and maintain performance leadership on the macppc platform. However, I think the most interesting implications of the failed microkernel experiment are the observations it made about how data flows through a complete system, rather than just answering the obvious question about how big the kernel should be.

Miles Nordin is a grizzled FidoNet veteran and an activist with Boulder 2600 (the 720) currently residing in exile near the infamous Waynesboro Country Club in sprawling Eastern Pennsylvania.

[ 05-29-2002: Message edited by: lolo ]


  • Reply 1 of 34
    scott_h_phd Posts: 448member
    So .... uh .... Linux on a Mac is better than Mac OS X on a Mac. So .... uh .... people buy Macs and run Linux on them. How does Apple lose out?
  • Reply 2 of 34
    rogue27 Posts: 607member
    Well, they're taking a stance in a computer science debate that promotes linux... I think their opinion/stance would be a bit biased here.

    Nothing new. Michael Dell keeps saying that Apple is doomed as well. I suppose some people will believe this.
  • Reply 3 of 34
    This is not the reason Apple (or OS X) is doomed. Who cares, other than a few geeky programmers who feel the need to prove that Linux is superior to everything else out there?

    Of all the reasons apple might die, this is by far the silliest I've read.

    Who says, "I was gonna get a Mac, but with it's silly outmoded microkernel architecture, I'm now having second thoughts."?
  • Reply 4 of 34
    eat@me Posts: 321member
    Linux zealots are just envious of Apple, who from the moment they shipped OS X bundled on their systems became the largest seller of UNIX systems on the planet. And developers want install base among other things. Apple has Cocoa, which just rocks IMHO. It is one of the best development environments I have seen or used. That is a HUGE incentive for developers. I would rather have a solid useful app that runs a little slower with the added UI overhead than a fast-running buggy app any day.

    BTW, aren't the old UNIX wars over yet?

    That is so 80's.

    [ 05-29-2002: Message edited by: eat@me ]
  • Reply 5 of 34
    nostradamus Posts: 397member
    The author is technically wrong and is spreading FUD. He has little knowledge of Darwin and the kernel implementation in Darwin.

    The Mac OS X kernel is a hybrid of the monolithic and microkernel models.

    <a href=""; target="_blank">From Apple's Developer's site,</a>


    The core of any operating system is its kernel. The Mac OS X kernel is also known as XNU. Though Mac OS X shares much of its underlying architecture with BSD, the kernel is one area where they differ significantly. XNU is based on the Mach microkernel design, but it also incorporates BSD features. It is not technically a microkernel implementation, but still has many of the benefits of a microkernel.

    Why is it designed like this? Pure Mach allows you to run an operating system as a separate process on the system that allows for flexibility, but can also slow things down because of the translation between Mach and the layers above it. With Mac OS X, since the desired behavior of the operating system is known, BSD functionality has been incorporated in the kernel alongside Mach. The result is that the kernel combines the strengths of Mach with the strengths of BSD.


    It's obvious to me that this author decided to throw together a diatribe against microkernels and, in the process of contemplating such action, decided that Apple's new operating system would be a suitable target.

    However, I do believe MacOS X is doomed because Apple is doomed due to hardware inadequacy.

    [ 05-29-2002: Message edited by: Nostradamus ]
  • Reply 6 of 34
    eugene Posts: 8,254member
    At 3 MB, xnu hardly feels like a microkernel!
  • Reply 7 of 34
    frawgz Posts: 547member
    [quote]Originally posted by Nostradamus: However, I do believe MacOS X is doomed because Apple is doomed due to hardware inadequacy.[/quote]

    If this were true, Apple certainly has had a long time to die. What's the big holdup?

    The truth is that as long as Apple's hardware is adequate for most tasks and there is enough RDF to go around to make us believe that it's even faster at certain, important tasks, then Apple is not "doomed." Get a grip.

    As long as there are people out there who put a higher value on an enjoyable computing experience than on sheer MHz, Apple will be there to sell them products.
  • Reply 8 of 34
    clive Posts: 720member
    [quote]Originally posted by frawgz: As long as there are people out there who put a higher value on an enjoyable computing experience than on sheer MHz, Apple will be there to sell them products.[/quote]

    That's kind of true: because 9.x is a mature operating system and people are familiar with it, they will stick with the Mac, as long as the OS doesn't get in the way.

    The situation with X is different, in that generally it is perceived as not being as fast as 9.x (in user interface elements) and it "gets in the way" because it's different.

    Therefore I think Apple really does need a hardware boost to sell X to the masses, so that the perceived speed issues recede.
  • Reply 9 of 34
    aquatic Posts: 5,602member

    Great, so now he can use all that extra power to run sh faster!

    Hmm. What happened to their website? I can't access it. Supa Performa, Linux, couldn't handle the load!?

    Linux zealots who bash Apple (they're out there) are so dumb. And jealous. We're on the same side, they're just behind, that's all. When I try to tell them what they want for a *desktop* is OS X, they think I'm stupid. They of course have never used OS X. Linux is for learning comp. science and Apache/networking. Maybe for cheap x86 workstations.

    Linux people may have the same mindset as the Windoze people they abhor. They spent their whole life mastering a hard-to-use OS. So what? Now they are "powerful"? Uh, guess again. Ooh, you can make folders in an sh command line? Well I'll be dipped in $hit and rolled in breadcrumbs, I'm not worthy! This guy comes across as pedantic.

    Guys, don't give this site traffic. [No]

    We need to educate certain Linux people... the ones that aren't already. Most Linux people are really cool, but some must still have a hint of the Dark Side (FUD and the hard=powerful mindset).

    [ 05-29-2002: Message edited by: Aquatik ]
  • Reply 10 of 34
    keyboardf12 Posts: 1,379member
    i think they are just jealous cause they can not run any commercial apps on their revolutionary os
  • Reply 11 of 34
    eman Posts: 7,204member
    [quote]Originally posted by scott_h_phd: So .... uh .... Linux on a Mac is better than Mac OS X on a Mac. So .... uh .... people buy Macs and run Linux on them. How does Apple lose out?[/quote]

    Uh... yes Apple will make money on hardware, but they'll lose some money by people not buying their software.
  • Reply 12 of 34
    airsluf Posts: 1,861member
  • Reply 13 of 34
    stroszek Posts: 801member
    [quote]Originally posted by Aquatik: Ooh, you can make folders in an sh command line? Well I'll be dipped in $hit and rolled in breadcrumbs, I'm not worthy![/quote]

    <img src="graemlins/lol.gif" border="0" alt="[Laughing]" /> <img src="graemlins/lol.gif" border="0" alt="[Laughing]" /> <img src="graemlins/lol.gif" border="0" alt="[Laughing]" /> <img src="graemlins/lol.gif" border="0" alt="[Laughing]" /> <img src="graemlins/lol.gif" border="0" alt="[Laughing]" /> <img src="graemlins/lol.gif" border="0" alt="[Laughing]" />
  • Reply 14 of 34
    posterboy Posts: 147member
    [quote]Originally posted by rogue27: Nothing new. Michael Dell keeps saying that Apple is doomed as well.[/quote]

    Mikey Boy better hope Apple doesn't go out of business, because if they do then where will Dell get their product design ideas?

  • Reply 15 of 34
    bryan fury Posts: 169member
    i knew we shoulda gone with be....

    <img src="graemlins/hmmm.gif" border="0" alt="[Hmmm]" />
  • Reply 16 of 34
    frawgz Posts: 547member
    [quote]Originally posted by Clive: That's kind of true: because 9.x is a mature operating system and people are familiar with it, they will stick with the Mac, as long as the OS doesn't get in the way.

    The situation with X is different, in that generally it is perceived as not being as fast as 9.x (in user interface elements) and it "gets in the way" because it's different.

    Therefore I think Apple really does need a hardware boost to sell X to the masses, so that the perceived speed issues recede.[/quote]

    "Perceived speed issues," whether from hardware inadequacies or from unoptimized software, are a problem, and will continue to be dealt with as much as possible (given limited time, man-power, etc).

    I'd argue again, however, that these speed issues are not significant enough to put most people off buying the Mac.

    As for 9 vs. X, it's a one-dimensional (and fast becoming outdated) argument, IMO. When I switch to 9 now, I feel like the OS is holding me back. I feel like I can't switch apps while they're busy, not to mention overall OS stability! Let's leave it at that - we don't need to rehash the old 9/X argument.

    Edit: Ok, so I'm getting slightly OT. To bring it back to the original point, then: OS X is not doomed, and neither is Apple for the foreseeable future. I'm surprised - well, not so much with you guys - but with Apple doing so well now despite the economy, and in a position infinitely better than the one it was in 5 years ago, I would not have thought all these Chicken Little prophecies would have gotten as much attention as they have.

    [ 05-30-2002: Message edited by: frawgz ]
  • Reply 16 of 34
    bradbower Posts: 1,068member
    Not much to say other than .
  • Reply 18 of 34
    stimuli Posts: 564member
    First: This article is weak. Nostradamus pointed out that the author needs to research a little.

    Second, to combat the anti-FUD: Linux isn't that hard, the kernel does not resist change (it changes very rapidly, IMHO), and it isn't inherently insecure. In fact, you'd be surprised how much code Linux and BSD share - like Apache, ssh, bash, gcc, etc. By slagging one, you are kind of slagging the other.

    Yes, Linux users are jealous there are no commercial apps, namely Photoshop, running on their otherwise spiffy OS.

    Yes, linux is also designed primarily for cheap x86 workstations.

    Yes, it probably is marginally faster on Mac hardware than Mac OS X. At the moment and for the next while, at any rate. Perhaps even forever.

    No, quartz being (currently) slow and the finder being slow as well have little/nothing to do with Mach/XNU.

    Yes, this guy is a zealot. No idea what his problem is. No, most linux users aren't a bunch of platform lamers, and many use linux on Mac hardware, because e.g. PowerBooks are better hardware (style, features, battery life, Gb ethernet, AirPort, etc.) than some fugly Compaq. Go to google and type 'mac linux' and see how many results you dig up.

    I can't see this guy's rationale for publishing that article. Calling it "Linux faster than OS X" would be one thing, and accurate, but this whole "Apple is dead because..." is crap. For heck's sake, if Apple wanted to, they could switch to a monolithic kernel tomorrow. Even the Linux kernel, with a little work.
  • Reply 19 of 34
    trevorm Posts: 841member
    (I don't know if I can swear) but what a **** article, and a weak one at that! [No]
  • Reply 20 of 34
    airsluf Posts: 1,861member