
Rumor: Apple to ship Haswell-powered Retina MacBook Pros in October

post #1 of 38
Thread Starter 
New Retina MacBook Pros with Intel's new Haswell processors could arrive even later than expected, in October, if the latest rumor is to be believed.



The details were published this week by China Times, and highlighted by Macotakara on Friday. According to the report, new MacBook Pros equipped with Retina displays won't ship until October, well after the June debut of Haswell-powered MacBook Airs.

That rumored date is later than well-connected analyst Ming-Chi Kuo of KGI Securities expects the new MacBook Pros to be introduced. He said in a note provided to AppleInsider this week that he expects MacBook Pros with Haswell to debut in mid-September.

Apple is said to have experienced continued yield problems with the high-resolution Retina display in its notebooks. That's led to apparent internal delays, despite the fact that benchmarks for both the 13-inch and 15-inch models have appeared online.

Prices for high-resolution LCD panels, as well as solid-state drives, have dropped in recent months. The price of DRAM, however, is rising, which is why sources believe the new MacBook Pros will be sold at the same price point as the current models.

The new 13-inch MacBook Pro, in particular, is said to be even thinner than the current Retina display model. Kuo also expects the portable Macs to include 1080p "full HD" FaceTime cameras.
post #2 of 38

Hope they come with 32GB of RAM (though this is unlikely to happen if the price of RAM is rising); at the rate the average OS X first-party app eats memory (especially Safari), even that won't last long.  For reference: the Notes app eats up 100 times more RAM in 2013 than the entire Microsoft Office suite did in 1994, despite doing much less...

 

I'm looking forward to replacing my late-2011 15" high-end MacBook Pro with a high-end version of whatever 15" model comes next; too bad that they won't put a Retina display on the classic models, since those are the only laptops worth the Pro moniker.

post #3 of 38

So the "expected" date was "mid-September" and the new rumored date is October and that's considered "even later than expected?"  Half a month seems well within any reasonable margin for error on an expected product refresh.

post #4 of 38
Quote:
Originally Posted by Epsico View Post

Hope they come with 32GB of RAM (though this is unlikely to happen if the price of RAM is rising); at the rate the average OS X first-party app eats memory (especially Safari), even that won't last long.  For reference: the Notes app eats up 100 times more RAM in 2013 than the entire Microsoft Office suite did in 1994, despite doing much less...

I suspect you don't understand what you see in Activity Monitor. My Mac has been on for a few weeks, and I run Safari all the time with lots of tabs. Currently I have four tabs open and Safari is using 292MB of RAM. I just launched Notes and it's using 35MB.

post #5 of 38
Quote:
Originally Posted by malax View Post

Half a month seems well within any reasonable margin for error on an expected product refresh.
Being that this is all speculation and innuendo, half a month seems well within any reasonable margin of error on an unannounced product refresh.
post #6 of 38
Quote:
Originally Posted by chabig View Post

I suspect you don't understand what you see in Activity Monitor. My Mac has been on for a few weeks, and I run Safari all the time with lots of tabs. Currently I have four tabs open and Safari is using 292MB of RAM. I just launched Notes and it's using 35MB.

How does your comment enlighten those of us who "don't understand what you see in Activity Monitor"? My copy of Notes eats up 17.5MB on launch; then I create a bunch more notes and it eats up 40.5MB. BFD. Both your comment and mine add zero to the conversation. I suspect even you don't understand what you see in Activity Monitor.
post #7 of 38

Uggggggh!

 

Apple. You're killing me.

post #8 of 38
Quote:
Originally Posted by chabig View Post

I suspect you don't understand what you see in Activity Monitor. My Mac has been on for a few weeks, and I run Safari all the time with lots of tabs. Currently I have four tabs open and Safari is using 292MB of RAM. I just launched Notes and it's using 35MB.

The original poster knows perfectly well how to read Activity Monitor. My PC from 1996 has 32MB of RAM in it, which ran Office 95 comfortably. I'm looking at my Word 2011 process in Activity Monitor, and it shows 40MB. 40MB for a full-blown word processor compared to 35MB for a plain text editor is pretty worrying.

post #9 of 38
Quote:
Originally Posted by chabig View Post

I suspect you don't understand what you see in Activity Monitor. My Mac has been on for a few weeks, and I run Safari all the time with lots of tabs. Currently I have four tabs open and Safari is using 292MB of RAM. I just launched Notes and it's using 35MB.


Please don't dismiss people who have different experience than you. The fact that you don't see increasing memory usage doesn't mean they're wrong.

I've seen Safari gobbling up extra memory over time for years. As you use it, it continues to grow in usage until the machine starts to slow down. Quitting Safari and relaunching solves the problem (and the fact that it reopens the tabs you were working on is a big help).

I don't understand it - it's been constant at least since I got my current MBP (late 2006 model). Since not everyone experiences it, it may be model-specific. Or maybe it's because of the relatively low RAM in my machine (which is maxed out at 3 GB).
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
Reply
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
Reply
post #10 of 38
Quote:
Originally Posted by chabig View Post

I suspect you don't understand what you see in Activity Monitor. My Mac has been on for a few weeks, and I run Safari all the time with lots of tabs. Currently I have four tabs open and Safari is using 292MB of RAM. I just launched Notes and it's using 35MB.

When you launch them, they don't take a lot of RAM, yes.  Safari is eating 1093MB of RAM with two processes here (613MB for Safari and 480MB for WebProcess); it hasn't been running for long.  Notes is eating 231MB.

 

The field you're reading is RSIZE, which only accounts for the resident footprint, meaning backing store (swap) and deferred allocations are not accounted for.  If you want a real perspective on memory usage, you have to look at the amount of virtual memory privately allocated by each process (private allocations exclude memory from shared libraries, which are always present in the Darwin runtime environment regardless of application requirements), which is given by VPRVT.

 

PS: I don't use Activity Monitor; I use top and vmmap.

 

EDIT: It must be mentioned that even VPRVT isn't perfectly accurate, because it accounts for deferred allocations in the stack.  Since each thread has an 8MB deferred-allocated stack, VPRVT may be off by as much as 8MB per thread in a single process, meaning that for a Safari process running 18 threads (which is the case here) and a WebProcess running 9 threads (also the case here), VPRVT can theoretically be off by as much as 216MB (though in practice the error is never that high, since that assumes 0% stack usage).  This also raises the question of why Safari isn't using Grand Central Dispatch yet.
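
If you want to poke at the raw numbers yourself, here's a minimal sketch using task_info().  It only reports the resident and total virtual sizes (roughly RSIZE and VSIZE); as far as I know, top derives VPRVT separately by walking the process's VM regions, so you won't get that from this call.  Build with: cc meminfo.c -o meminfo

#include <mach/mach.h>
#include <stdio.h>

int main(void) {
    struct task_basic_info info;
    mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;

    /* Ask the kernel for this task's basic memory accounting. */
    if (task_info(mach_task_self(), TASK_BASIC_INFO,
                  (task_info_t)&info, &count) != KERN_SUCCESS)
        return 1;

    /* resident_size is roughly what Activity Monitor shows as RSIZE;
       virtual_size includes every mapped shared library, so it's huge. */
    printf("resident: %lu MB, virtual: %lu MB\n",
           (unsigned long)(info.resident_size >> 20),
           (unsigned long)(info.virtual_size >> 20));
    return 0;
}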


Edited by Epsico - 7/26/13 at 9:22am
post #11 of 38
Quote:
Originally Posted by chabig View Post

I suspect you don't understand what you see in Activity Monitor. My Mac has been on for a few weeks, and I run Safari all the time with lots of tabs. Currently I have four tabs open and Safari is using 292MB of RAM. I just launched Notes and it's using 35MB.


The poster is actually correct, but it's not limited to first-party apps, and not necessarily an Apple issue. ALL software in general is constantly increasing in size and demanding more and more resources. Not sure about the "100 times more" ratio for the Notes app, but for example, Office 95's requirements were 8-16MB of memory and 28MB-89MB of HD space. Office 2010 Standard (for comparison) requires 3GB of HD space and a recommended 512MB of RAM, not to mention a recommended 1GHz processor to run all its features. It's just the normal evolution of software.

 

I just hope somehow they allow the user to upgrade the memory like the cMBP.

post #12 of 38

Quote:

Originally Posted by zoffdino View Post

The original poster knows perfectly well how to read Activity Monitor. My PC from 1996 has 32MB of RAM in it, which ran Office 95 comfortably. I'm looking at my Word 2011 process in Activity Monitor, and it shows 40MB. 40MB for a full-blown word processor compared to 35MB for a plain text editor is pretty worrying.

Much more is necessary and expected of modern software. Modern apps have multiple languages installed; multiple sets of high-resolution graphics for every icon, button, etc. Notes was mentioned: it and many other apps are capable of iCloud sync—a feature I absolutely love for storing part numbers (car, boat, bike—oil filter, air filter, tire sizes, spark plugs and gap, etc.), for shopping lists, and for many other things for which I need quick reference. The way machines handle and prioritize memory usage, processing load, background apps, virtual memory, etc. has also changed radically since the 90s, as have security requirements. System architecture and memory allocation have changed as we've moved from 16- to 32- to 64-bit processes.

 

In short: there is a plethora of reasons modern software uses more memory than mid-90s software—most are perfectly legitimate and no cause for worry. I say most because that damned ‘Clippy’ was certainly the illegitimate child of Steve Ballmer and that infernal dog helper…

post #13 of 38
Blaming this delay on the screen seems rather odd. Unless Apple or its LCD partner(s) wanted to change the manufacturing process of the screen to make building them less costly, I don't see Apple using a different Retina screen this year from last. Apple generally likes to get at least a couple years' worth out of their intricate engineering before switching things up. Well, one notable exception being the iPad 2 vs original iPad. Last year was the first of the MBP Retina, so it just seems too soon for Apple to be changing up the LCD after just a year. So if the screen is the same, there should be no reason for it to cause a delay. It's not like the current Pro Retinas are flying off the shelves, causing a great backlog in supplying LCDs to the new models.

I know the MacBook Pro is supposed to be using a different class of Haswell chips, not the same "ultrabook" ones in the MacBook Air. Are other OEMs using this other class of chips destined for the Pro yet? If not, the delay may be everyone (including Apple) waiting for Intel to ship the processors in volume. And it may also be delayed for Mavericks.
post #14 of 38
What kind of crap is this? Text editor vs. full-blown word processor. That word processor dumps to hundreds of MBs in an instant when you actually use it for more than typing a Dear Mom letter.

The memory is being used because the shared frameworks are pre-loaded. More importantly, those Core Data frameworks are shared across OS X. Please just stop while you're all behind in understanding shared resources.
post #15 of 38
Quote:
Originally Posted by Epsico View Post

When you launch them, they don't take a lot of RAM, yes.  Safari is eating 1093MB of RAM with two processes here (613MB for Safari and 480MB for WebProcess); it hasn't been running for long.  Notes is eating 231MB.

 

The field you're reading is RSIZE, which only accounts for the resident footprint, meaning backing store (swap) and deferred allocations are not accounted for.  If you want a real perspective on memory usage, you have to look at the amount of virtual memory privately allocated by each process (private allocations exclude memory from shared libraries, which are always present in the Darwin runtime environment regardless of application requirements), which is given by VPRVT.

 

PS: I don't use Activity Monitor; I use top and vmmap.

 

EDIT: It must be mentioned that even VPRVT isn't perfectly accurate, because it accounts for deferred allocations in the stack.  Since each thread has an 8MB deferred-allocated stack, VPRVT may be off by as much as 8MB per thread in a single process, meaning that for a Safari process running 18 threads (which is the case here) and a WebProcess running 9 threads (also the case here), VPRVT can theoretically be off by as much as 216MB (though in practice the error is never that high, since that assumes 0% stack usage).  This also raises the question of why Safari isn't using Grand Central Dispatch yet.

 

WebKit 2 isn't fully implemented yet, which is the main reason Safari isn't jointly leveraging GCD and OpenCL correctly across the GPGPU and CPU. OS X 10.9 clearly points to a massive improvement on both these fronts.
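
For anyone who hasn't touched it, here's a rough sketch of what farming work out to GCD looks like in plain C (obviously not Safari's actual code; assuming cc is clang on OS X, which compiles blocks out of the box):

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    /* Fan four independent chunks of work out across the available cores. */
    for (int i = 0; i < 4; i++) {
        dispatch_group_async(group, queue, ^{
            printf("chunk %d processed on a worker thread\n", i);
        });
    }

    /* Block until every chunk has finished. */
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    dispatch_release(group);
    return 0;
}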

post #16 of 38

Modern apps have multiple languages installed

 

Those don't need to be loaded all at once.

 

multiple sets of high resolution graphics for every icon

 

Loaded to VRAM, not main memory.

 

Notes was mentioned: it and many other apps are capable of iCloud Sync

 

The Usenet already did that -- in the early 80s, on PDP-11s, over 2.4kbps modems (i.e.: it's trivial).

 

The way machines handle and prioritize memory usage, processing load, background apps, virtual memory etc. has also changed radically since the 90s, as have security requirements. System architecture and memory allocation has changed as we’ve moved from 16 to 32 to 64 bit processes.

 

Security requirements have not changed that much for the applications themselves; in fact, the change was quite seamless for them. It was mostly achieved by implementing canaries in compilers to prevent buffer overflows, randomizing memory allocations in the runtime environment, disabling execution in the stack (a hardware feature), and adding static analysis and dynamic analysis through instrumentation. Memory management has received mostly performance-enhancing optimizations, some of which (like deferred allocations and copy-on-write) actually drastically reduce memory usage.
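
If you want to see deferred allocation for yourself, here's a quick sketch; run it and watch the process in top from another terminal while it sleeps (the 1GB figure is just an arbitrary illustration):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Reserve 1GB of address space; the kernel defers the real allocation,
       so the resident footprint barely moves until pages are touched. */
    size_t len = 1UL << 30;
    char *buf = malloc(len);
    if (!buf) return 1;
    printf("reserved 1GB of virtual memory -- check RSIZE now\n");
    sleep(15);

    /* Touch 1/16th of it; resident size grows by roughly 64MB, not 1GB. */
    memset(buf, 1, len >> 4);
    printf("touched 64MB of it -- check RSIZE again\n");
    sleep(15);

    free(buf);
    return 0;
}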

 

In short: there are plethora of reasons modern software uses more memory than Mid-90s software—most are perfectly legitimate and no cause for worry. I say most because that damned ‘Clippy’ was certainly the illegitimate child of Steve Balmer and that infernal dog helper…

 

One of the main reasons this is happening is generic frameworks targeting languages that are not designed to support generic programming.  Objective-C, for example, makes the most mundane of tasks, like accessing an array element, extremely complex, but this isn't even the main culprit: the main culprits are lots of layers of bloated and poorly designed frameworks that try to abstract semantics when they should only be abstracting implementations, software engineers who don't really understand what's going on under the hood, and a general "don't reinvent the wheel" mentality that favors re-using bad code (thus perpetuating the mediocrity) over writing decent software.

 

If you want to see a field where this is not happening, check out graphics computing, where the hardware is still light years away from delivering decent performance, and thus real software engineering still happens.

 

This state of affairs is saddening, because we have vastly superior hardware to what we had 20 years ago but aren't doing a lot more than we did back then, outside the realm of the Internet, which is irrelevant in this context.

post #17 of 38
What many miss here is bugs. Bugs in Apple's SDKs and apps lead to a lot of memory leaks. Between the leaks and just poor memory management in Apple's apps, we lose a lot of RAM to waste.

If you keep track of what is happening development-wise, Apple is moving to ARC (Automatic Reference Counting) instead of garbage collection. The rumor is this has vastly improved the behavior of Xcode.

It will be interesting to see how the bundled apps work in Mavericks. I would not expect every app to transition to ARC, but a few might. However, even ARC will not solve all of the memory usage issues. Some features are simply destined to use lots of RAM until the programmers come up with better solutions.
Quote:
Originally Posted by twosee View Post


The poster is actually correct, but it's not limited to first-party apps, and not necessarily an Apple issue. ALL software in general is constantly increasing in size and demanding more and more resources. Not sure about the "100 times more" ratio for the Notes app, but for example, Office 95's requirements were 8-16MB of memory and 28MB-89MB of HD space. Office 2010 Standard (for comparison) requires 3GB of HD space and a recommended 512MB of RAM, not to mention a recommended 1GHz processor to run all its features. It's just the normal evolution of software.

I just hope somehow they allow the user to upgrade the memory like the cMBP.
post #18 of 38
Quote:
Originally Posted by DamonF View Post

Blaming this delay on the screen seems rather odd. Unless Apple or its LCD partner(s) wanted to change the manufacturing process of the screen to make building them less costly, I don't see Apple using a different Retina screen this year from last.
Actually I could see them going with a new technology screen. IGZO is the likely candidate.
Quote:
Apple generally likes to get at least a couple years' worth out of their intricate engineering before switching things up. Well, one notable exception being the iPad 2 vs original iPad.
I'm not sure where you got that idea from. Apple will drop technology like a hot potato if it isn't working out for them.
Quote:
Last year was the first of the MBP Retina, so it just seems too soon for Apple to be changing up the LCD after just a year. So if the screen is the same, there should be no reason for it to cause a delay. It's not like the current Pro Retinas are flying off the shelves, causing a great backlog in supplying LCDs to the new models.
This is the thing, sales aren't that great so why the delay for a screen even if it is new technology?
Quote:
I know the MacBook Pro is supposed to be using a different class of Haswell chips, not the same "ultrabook" ones in the MacBook Air. Are other OEMs using this other class of chips destined for the Pro yet?
This is the thing that is so effing obvious that it makes you wonder about all of these reports about other things causing the delay. The reality is this: Intel has many new chips scheduled to ship in the September time frame. Since some of these chips could be MBP-bound, nothing will be shipped before then.
Quote:
If not, the delay may be everyone (including Apple) waiting for Intel to ship the processors in volume. And it may also be delayed for Mavericks.

Actually I think they are taking the extra time they have here to shore up Mavericks. We are going through a long debug cycle and it already looks pretty good to many users. But yeah, nothing will ship from Apple until Intel releases the chips; schedules can change, but that won't be until sometime after the September launch.
post #19 of 38
Two things I am waiting on before purchasing a new MBP (which I am finding very difficult to wait for...):

1) 32 GB of RAM (at least as a BTO option!!)

2) Better GPU.

The current GPU is good, but it's been a year...

and I am one of those who NEED all the RAM I can get.

The new Mac Pro will help out quite a bit in these two areas at the office, but not on the road (and no, though it is "portable," I am NOT taking it out of the office.)
post #20 of 38
Quote:
Originally Posted by Epsico View Post

Hope they come with 32GB of RAM (though this is unlikely to happen if the price of RAM is rising); at the rate the average OS X first-party app eats memory (especially Safari), even that won't last long.  For reference: the Notes app eats up 100 times more RAM in 2013 than the entire Microsoft Office suite did in 1994, despite doing much less...

Maybe you should have kept your PowerBook 520 with its meager 4MB of RAM since you obviously are so in love with Microsoft Office from 1994.

"Apple should pull the plug on the iPhone."

John C. Dvorak, 2007
Reply

"Apple should pull the plug on the iPhone."

John C. Dvorak, 2007
Reply
post #21 of 38

Haswell is nice, but Apple really needs to get the Retina screen cost down so that a low-end Retina 13" can send the non-Retinas packing.

 

Shipping two sets of the same notebook is problematic. It's muddying the choices for MacBook buyers.

post #22 of 38
Quote:
Originally Posted by mdriftmeyer View Post

What kind of crap is this? Text editor vs. full-blown word processor. That word processor dumps to hundreds of MBs in an instant when you actually use it for more than typing a Dear Mom letter.

The memory is being used because the shared frameworks are pre-loaded. More importantly, those Core Data frameworks are shared across OS X. Please just stop while you're all behind in understanding shared resources.

In 1994, personal computers didn't have hundreds of megabytes of RAM (or even disk) to spare.

 

Also note that VPRVT does not account for shared memory, so no, the memory that I am reporting here is not being used by shared frameworks.

 

Take the time to read the thread before posting.

post #23 of 38

Is there any chance that they will bring back the 17" MacBook Pro with this refresh?

post #24 of 38
Quote:
Originally Posted by Epsico View Post

In 1994, personal computers didn't have hundreds of megabytes of RAM (or even disk) to spare.

Also note that VPRVT does not account for shared memory, so no, the memory that I am reporting here is not being used by shared frameworks.

Take the time to read the thread before posting.

VPRVT does contain parts of the shared libraries copied on write to private space. Each app thinks it can access the entire stack of frameworks it has linked against, and the shared code is copied to private memory on write. What's gained is apps that can't crash the OS.
post #25 of 38
Quote:
Originally Posted by asdasd View Post


VPRVT does contain parts of the shared libraries copied on write to private space. Each app thinks it can access the entire stack of frameworks it has linked against, and the shared code is copied to private memory on write. What's gained is apps that can't crash the OS.

It appears that you don't fully understand what copy on write is.  Copy on write and shared memory are completely different concepts, with the former only happening in private writable allocations.  Since shared libraries are neither writable nor private, copy on write does not apply to them, and it is because they are not private that they are not accounted for in VPRVT (if you want to account for them as well, check VSIZE).

 

While, yes, copy on write would show in VPRVT, there's no copy on write associated with shared libraries, so your point is moot.  The most common occurrence of copy on write on a system is in fork().

post #26 of 38
Quote:
Originally Posted by Epsico View Post

It appears that you don't fully understand what copy on write is.  Copy on write and shared memory are completely different concepts, with the former only happening in private writable allocations.  Since shared libraries are neither writable nor private, copy on write does not apply to them, and it is because they are not private that they are not accounted for in VPRVT (if you want to account for them as well, check VSIZE).

While, yes, copy on write would show in VPRVT, there's no copy on write associated with shared libraries, so your point is moot.  The most common occurrence of copy on write on a system is in fork().

Um, no. Forking will duplicate the memory into another process. Copy on write in OS X duplicates shared memory into private memory. It is specifically designed for memory protection. Every application has access to the shared libraries, but when a write is called on shared memory, the kernel copies the shared memory into the caller's space. If that didn't happen, shared memory would be corrupted.

Some more explanation in this book.

http://books.google.ie/books?id=K8vUkpOXhN4C&pg=PA862&lpg=PA862&dq=copy+on+write+OS+X&source=bl&ots=OKojX-WsXz&sig=AWqJi34ZUjOvBRnMFoZ1__ET80M&hl=en&sa=X&ei=5zzzUZfGA8WRhQetxYDQBw&redir_esc=y#v=onepage&q=copy%20on%20write%20OS%20X&f=false
post #27 of 38
New Mac Pros sound good. A 32GB RAM option would sound great too, along with the higher 1TB flash option (if the Mac Pro packs it, the MacBook Pro can). Wonder if we'll see any other goodies in it?
post #28 of 38
Quote:
Originally Posted by Curtis Hannah View Post
A 32GB RAM option would sound great too, along with the higher 1TB flash option (if the Mac Pro packs it, the MacBook Pro can). Wonder if we'll see any other goodies in it?

It would be nice if RAM and storage remained easily upgradeable with aftermarket parts. CTO options on simple stuff often carry fairly extreme embedded markups. The only rule with aftermarket RAM is to test it prior to use. You can easily install 32GB today. Memory footprints can be abhorrent at times, so there are valid use cases.

post #29 of 38

Um, no. Forking will duplicate the memory into another process. Copy on write in OS X duplicates shared memory into private memory. It is specifically designed for memory protection. Every application has access to the shared libraries, but when a write is called on shared memory, the kernel copies the shared memory into the caller's space. If that didn't happen, shared memory would be corrupted.

 

While your lack of understanding of operating system design is shameful if you consider yourself a software engineer, even worse is your unfounded belief in your correctness.  If a book has taught you that way (which I seriously doubt), it taught you wrong.

 

Copy-on-write is precisely a duplication method, which is why modern operating systems use it in fork().  The point of copy-on-write is twofold: it optimizes the duplication process and reduces actual memory usage.  The optimization is achieved by deferring the actual duplication until a change is made to a region of memory, and the reduction of memory usage is achieved by sharing the unmodified regions of memory and marking them read-only, so that when a process attempts to write to one of these regions of memory, the system catches a page fault, copies the page, creates a writable page descriptor for the new copy, and returns control to the process, which is now free to write.  This, obviously, only applies where the operating system is told that the region of memory that the process attempted to modify is supposed to be writable, which is not the case for shared libraries, so copy-on-write does not apply to them, and the system will kill your process if you attempt to write there.
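
To make the fork() case concrete, here's a minimal sketch (nothing Darwin-specific; any POSIX system behaves the same way):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* One big private, writable allocation; the parent touches every page. */
    size_t len = 64UL << 20;
    char *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 'A', len);

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: all 64MB are shared copy-on-write with the parent.  Writing
           one byte faults exactly one page, which the kernel then copies into
           the child's private memory; the rest stays shared and read-only. */
        buf[0] = 'B';
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}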

 

Darwin goes a little further than most operating systems and actually re-uses a single set of page descriptors system-wide to map all global shared libraries into memory.  This optimizes application launch time even further, since the dynamic linker does not have to map any of the global shared libraries when the application is executed, which is why even the simplest Hello World! program contains the entire set of global shared libraries mapped into its memory.  The protection is enforced at the hardware level by marking the memory referenced by those page descriptors read-only; you don't need to duplicate the contents of a memory region in order to protect it.

 

While copy-on-write is also used to optimize a specific kind of memory allocation (see MAP_PRIVATE vs. MAP_SHARED in mmap()'s manpage), that does not mean copy-on-write itself has anything to do with memory protection; it is only used in that case because you are effectively telling the system that writes are not shared, a semantic that would otherwise require duplication of the entire mapped region.  Since you NEVER have a reason to write to a shared library, and in fact all shared libraries are loaded into read-only memory, copy-on-write and private semantics do not apply to them, which is why they don't appear in VPRVT.

 

Take the time to actually understand what you read and how it is implemented.  I don't know what's in that book, but what you understood from it is completely wrong.  Not only do you demonstrate lack of understanding for basic operating system implementation concepts, but you also demonstrate lack of understanding of how memory management units (MMUs) in modern CPUs work, which means you probably don't have a clue of what virtual memory actually is.

 

EDIT: Fixed some stuff.

 

EDIT #2: Just for the kicks, I actually went on and read the definition in the book you linked, because I couldn't believe anyone could have published that kind of misinformation, and now I'm absolutely sure that you just lack reading comprehension.  You read that copy-on-write uses shared memory and for some reason automatically assumed that all uses of shared memory are related to copy-on-write, which is far from being the case.  Read my explanation above and you will understand how it actually works.


Edited by Epsico - 7/27/13 at 4:35am
post #30 of 38
Quote:
Originally Posted by Epsico View Post

Um, no. Forking will duplicate the memory into another process. Copy on write in OS X duplicates shared memory into private memory. It is specifically designed for memory protection. Every application has access to the shared libraries, but when a write is called on shared memory, the kernel copies the shared memory into the caller's space. If that didn't happen, shared memory would be corrupted.


While your lack of understanding of operating system design is shameful if you consider yourself a software engineer, even worse is your unfounded belief in your correctness.  If a book has taught you that way (which I seriously doubt), it taught you wrong.


Copy-on-write is precisely a duplication method, which is why modern operating systems use it in fork().  The point of copy-on-write is twofold: it optimizes the duplication process and reduces actual memory usage.  The optimization is achieved by deferring the actual duplication until a change is made to a region of memory, and the reduction of memory usage is achieved by sharing the unmodified regions of memory and marking them read-only, so that when a process attempts to write to one of these regions of memory, the system catches a page fault, copies the page, creates a writable page descriptor for the new copy, and returns control to the process, which is now free to write.  This, obviously, only applies where the operating system is told that the region of memory that the process attempted to modify is supposed to be writable, which is not the case for shared libraries, so copy-on-write does not apply to them, and the system will kill your process if you attempt to write there.


Darwin goes a little further than most operating systems and actually re-uses a single set of page descriptors system-wide to map all global shared libraries into memory.  This optimizes application launch time even further, since the dynamic linker does not have to map any of the global shared libraries when the application is executed, which is why even the simplest Hello World! program contains the entire set of global shared libraries mapped into its memory.  The protection is enforced at the hardware level by marking the memory referenced by those page descriptors read-only; you don't need to duplicate the contents of a memory region in order to protect it.


While copy-on-write is also used to optimize a specific kind of memory allocation (see MAP_PRIVATE vs. MAP_SHARED in mmap()'s manpage), that does not mean copy-on-write itself has anything to do with memory protection; it is only used in that case because you are effectively telling the system that writes are not shared, a semantic that would otherwise require duplication of the entire mapped region.  Since you NEVER have a reason to write to a shared library, and in fact all shared libraries are loaded into read-only memory, copy-on-write and private semantics do not apply to them, which is why they don't appear in VPRVT.


Take the time to actually understand what you read and how it is implemented.  I don't know what's in that book, but what you understood from it is completely wrong.  Not only do you demonstrate lack of understanding for basic operating system implementation concepts, but you also demonstrate lack of understanding of how memory management units (MMUs) in modern CPUs work, which means you probably don't have a clue of what virtual memory actually is.

EDIT: Fixed some stuff.

EDIT #2: Just for the kicks, I actually went on and read the definition in the book you linked, because I couldn't believe anyone could have published that kind of misinformation, and now I'm absolutely sure that you just lack reading comprehension.  You read that copy-on-write uses shared memory and for some reason automatically assumed that all uses of shared memory are related to copy-on-write, which is far from being the case.  Read my explanation above and you will understand how it actually works.

I didn't assume that all uses of shared memory are related to copy on write; I claimed that it happens on writes to shared memory which is read-write, which is hardly "all cases". The fact that you make this straw man argument is in itself telling.

The Mac OS X Internals book, on the page I linked to, clearly does say that the kernel shares read-write memory copy-on-write between processes. So we agree on that. You then answered with the straw man argument that I believe all memory is like that.

Not true; however while I make no claims about all shared memory being writable, it is your claim that ALL shared libraries are read-only and that the shared memory which is copied-on-write is not in shared libraries, which makes me wonder where you think it would be.

Just to be clear on what we are saying.

Ok, so here's another page from the utterly misleading Mac OS X Internals: A Systems Approach. There can be no claims of misreading here, it's fairly simple. Nevertheless, I am not the expert, it could be lies, and I think we should all get our money back until you decide to write your own book. But I am basing what I say on this book.

Here is what it says, on this page:

http://books.google.ie/books?id=K8vUkpOXhN4C&pg=PA924&lpg=PA924&dq=OS+X+write+to+shared+libraries&source=bl&ots=OKojYVRtVB&sig=UCaOcToRtVL3NeORsiMswMSYWaI&hl=en&sa=X&ei=8cXzUdm5LMWr7Aa31IHIDg&redir_esc=y#v=onepage&q=OS%20X%20write%20to%20shared%20libraries&f=false

which says - and I couldn't copy and paste this for obvious reasons:

Mac OS X uses these [shared memory sub maps] to support shared libraries. A shared library on Mac OS X can be compiled such that its read-only and read-write segments are split and relocated at offsets relative to the specific addresses.

and later.

Now, a split-segment library can be mapped so that its text segment is completely shared between tasks with a single physical map, whereas the data segment is shared copy on write.

In short, changes to the shared library's data segment - probably like setting something - copy the data into the calling process.

I don't claim to be an expert. I have read this book, however, and it contradicts you. You'll probably come back with some verbose answer about how of course the data segment was writable, you never said that it wasn't, and so on. But you did say that all shared libraries were not writable. And that is wrong, or the canonical book on OS X is wrong. I'd go with you being wrong.
post #31 of 38

I didn't assume that all uses of shared memory are related to copy on write; I claimed that it happens on writes to shared memory which is read-write, which is hardly "all cases". The fact that you make this straw man argument is in itself telling.

 

Nope, you claimed that it happens in "shared frameworks", which are NOT read-write.  Don't move the goal posts now.

 

Not true; however while I make no claims about all shared memory being writable, it is your claim that ALL shared libraries are read-only and that the shared memory which is copied-on-write is not in shared libraries, which makes me wonder where you think it would be.

 

I did?  Where?  Because I distinctly recall stating the existence of a difference between private and shared memory, and that the difference only applies to writable memory.

 

Ok, so here's another page from the utterly misleading Mac OS X Internals: A Systems Approach. There can be no claims of misreading here, it's fairly simple. Nevertheless, I am not the expert, it could be lies, and I think we should all get our money back until you decide to write your own book. But I am basing what I say on this book.

 

I have already established that the problem was on you, not the book; you simply did not understand what you read.

 

Mac OS X uses these [shared memory sub maps] to support shared libraries. A shared library on Mac OS X can be compiled such that its read-only and read-write segments are split and relocated at offsets relative to the specific addresses.

 

The read-write memory sections are not shared and are actually extremely small (as I will demonstrate below).  Copy-on-write is used there in order to avoid expensive static initializations resulting from having a bunch of unnecessary pre-loaded libraries mapped in every process.

 

If you write the following useless program:

 

int main() {return 0;}

 

Then compile it using:

 

cc -o nop nop.c

 

Then use lldb to set up a breakpoint such as:

 

$ lldb ./nop

Current executable set to './nop' (x86_64).
(lldb) b main
Breakpoint 1: where = nop`main, address = 0x0000000100000f60
(lldb) run
Process 591 launched: './nop' (x86_64)
Process 591 stopped
* thread #1: tid = 0x1c03, 0x0000000100000f60 nop`main, stop reason = breakpoint 1.1
    frame #0: 0x0000000100000f60 nop`main
nop`main:
-> 0x100000f60:  pushq  %rbp
   0x100000f61:  movq   %rsp, %rbp
   0x100000f64:  movl   $0, %eax
   0x100000f69:  movl   $0, -4(%rbp)
(lldb)

 

And then check its PID in top (in another terminal), and you will see that VPRVT (which includes 8MB for the main thread's deferred-allocated stack and the rest for static initializations) for that process sits under 20MB, whereas its VSIZE (which also includes all the shared libraries) is over 2GB.  You can see which libraries are loaded to which addresses using vmmap; I won't post the output here since it's huge.

 

EDIT: Removed [code] tags which don't work on this forum, added 8MB of stack to VPRVT's accounting, which I forgot originally.


Edited by Epsico - 7/28/13 at 6:59am
post #32 of 38

Wow, all this consternation over memory.

Does a program use more or less, how much is used where and when?

 

You know a few months ago when memory was cheap, I avoided this consternation.

 

For $100 total delivered price, I upgraded my laptop to 32GB of 1600 CAS 10 RAM.

 

Of course, I had the luxury of making this timing market consideration because my notebook allowed the memory to be replaced.

 

Yeah, a LENOVO W530.

 

My only concern is using all that memory for RAMDISK, VM, and desktop applications.
 

post #33 of 38
Quote:
Originally Posted by mdk777 View Post


 

Of course, I had the luxury of making this timing market consideration because my notebook allowed the memory to be replaced.

 

Yeah, a LENOVO W530.

That model also accepts 4 SO-DIMMs. MacBook Pros accept 2. With the Air and rMBP it's soldered in. I completely understand the advantage of as much RAM as possible for multiple VMs.

post #34 of 38

> Wow, all this consternation over memory.

> Does a program use more or less, how much is used where and when?

 

Just imagine what you would be able to accomplish today considering how fast and inexpensive memory is compared to what it was 20 years ago.  It's not just about the memory footprint; it's also about the memory bandwidth required by modern applications.  Hardware is progressing and software is regressing.  In computer graphics, the increasing memory footprint and CPU power can be easily justified by the fact that as hardware progresses, so do real-time rasters; CPU and GPU power are so important for computer graphics that, in 2013, we're still counting cycles in tight loops and leveraging CPU speed, GPU speed, memory bandwidth, and bus bandwidth.  If the engineering requirements for computer graphics were applied to everything else, today we would be able to load an entire operating system into RAM from secondary storage and run it there while committing changes to secondary storage in batches in the background.
 
Someone mentioned application icons earlier, for example, and in response to this I ask: why are we still using bitmaps for application icons, especially considering that their source is almost always vectorial?  Why not use a vectorial format like we've been using for fonts since the 90s?  GPUs are awesome when it comes to dealing with vectors!  They are specifically designed to process vectors!  They can transform millions of vectors per frame when they have nothing else to do!  And they can cache their work in textures for later use, too!
 
This resource waste truly annoys me.  If I had the money, I'd form a company and hire a bunch of other computer graphics engineers to show the world what modern technology can actually do if we tackle software engineering problems like we did 20 years ago.
post #35 of 38

I don't know about new MacBook Pros, but I predict a new Mac mini on Tuesday. The current generation is showing 5-7 days shipping on several online Apple Stores.

post #36 of 38
Quote:
Originally Posted by Epsico View Post

Someone mentioned application icons earlier, for example, and in response to this I ask: why are we still using bitmaps for application icons, especially considering that their source is almost always vectorial?  Why not use a vectorial format like we've been using for fonts since the 90s?  GPUs are awesome when it comes to dealing with vectors!  They are specifically designed to process vectors!  They can transform millions of vectors per frame when they have nothing else to do!  And they can cache their work in textures for later use, too!

The Mac might not use vectors for application icons, but it does for toolbar icons. Open a Finder window on /System/Library/ and type .pdf in the search box and you will see lots of them.
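
And rasterizing one of those PDFs at whatever scale you want is only a few lines of Core Graphics. A rough sketch (pass it any .pdf that Finder search turns up; something like cc icon.c -o icon -framework ApplicationServices should build it):

#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    if (argc < 2) { fprintf(stderr, "usage: %s file.pdf\n", argv[0]); return 1; }

    CFURLRef url = CFURLCreateFromFileSystemRepresentation(
        NULL, (const UInt8 *)argv[1], strlen(argv[1]), false);
    CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL(url);
    if (!doc) { fprintf(stderr, "couldn't open %s as a PDF\n", argv[1]); return 1; }

    CGPDFPageRef page = CGPDFDocumentGetPage(doc, 1);
    CGRect box = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);

    /* Draw the vector art into an offscreen bitmap at 2x -- the "render once,
       cache it as a bitmap/texture, reuse it forever" step. */
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
        (size_t)(box.size.width * 2), (size_t)(box.size.height * 2),
        8, 0, cs, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextScaleCTM(ctx, 2, 2);
    CGContextDrawPDFPage(ctx, page);

    printf("rendered a %.0fx%.0f pt page at 2x\n", box.size.width, box.size.height);

    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    CGPDFDocumentRelease(doc);
    CFRelease(url);
    return 0;
}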

post #37 of 38
Quote:
Originally Posted by ascii View Post

I don't know about new Macbook Pros but I predict new Mac mini on Tuesday. The current generation is 5-7 days shipping on several online Apple Stores.

 

Apple's traditional Summer release date is the first week of August. So you are probably off by seven days.

post #38 of 38
I'm going to upgrade to the 15" Retina MacBook Pro. I like the bigger screen.