Quote:
Originally Posted by mdriftmeyer
What kind of crap is this? Text editor vs. full-blown word processor. That word processor dumps to hundreds of MBs in an instant when you actually use it for more than typing a Dear Mom letter.
The memory usage comes from the shared frameworks being pre-loaded. More importantly, those Core Data frameworks are shared across the whole of OS X. Please just stop while you're all behind in understanding shared resources.
In 1994, personal computers didn't have hundreds of megabytes of RAM (or even disk) to spare.
Also note that VPRVT does not account for shared memory, so no, the memory that I am reporting here is not being used by shared frameworks.
Take the time to read the thread before posting.
VPRVT does contain parts of the shared libraries copied on write to private space. Each app thinks it can access the entire stack of frameworks it has linked against, and the shared code is copied to private memory on write. What's gained is apps which can't crash the OS.
Quote:
Originally Posted by asdasd
VPRVT does contain parts of the shared libraries copied on write to private space. Each app thinks it can access the entire stack of frameworks it has linked against, and the shared code is copied to private memory on write. What's gained is apps which can't crash the OS.
It appears that you don't fully understand what copy on write is. Copy on write and shared memory are completely different concepts, with the former only happening in private writable allocations. Since shared libraries are neither writable nor private, copy on write does not apply to them, and it is because they are not private that they are not accounted for in VPRVT (if you want to account for them as well, check VSIZE).
While, yes, copy on write would show in VPRVT, there's no copy on write associated with shared libraries, so your point is moot. The most common occurrence of copy on write on a system is in fork().
> It appears that you don't fully understand what copy on write is. Copy on write and shared memory are completely different concepts, with the former only happening in private writable allocations. Since shared libraries are neither writable nor private, copy on write does not apply to them, and it is because they are not private that they are not accounted for in VPRVT (if you want to account for them as well, check VSIZE).
> While, yes, copy on write would show in VPRVT, there's no copy on write associated with shared libraries, so your point is moot. The most common occurrence of copy on write on a system is in fork().
Um, no. Forking will duplicate the memory into another process. Copy on write in OS X duplicates shared memory into private memory. It is specifically designed for memory protection. Every application has access to the shared libraries, but when a write is called on shared memory the kernel copies the shared memory into the caller's space. If that didn't happen, shared memory would be corrupted.
Some more explanation in this book:
http://books.google.ie/books?id=K8vUkpOXhN4C&pg=PA862&lpg=PA862&dq=copy+on+write+OS+X&source=bl&ots=OKojX-WsXz&sig=AWqJi34ZUjOvBRnMFoZ1__ET80M&hl=en&sa=X&ei=5zzzUZfGA8WRhQetxYDQBw&redir_esc=y#v=onepage&q=copy on write OS X&f=false
New Mac Pros sound good. A 32 GB RAM option would sound great too, along with the higher 1 TB flash option (if the Mac Pro packs it, the MacBook Pro can). Wonder if we'll see any other goodies in it?
Quote:
Originally Posted by Curtis Hannah
A 32 GB RAM option would sound great too, along with the higher 1 TB flash option (if the Mac Pro packs it, the MacBook Pro can). Wonder if we'll see any other goodies in it?
It would be nice if RAM and storage remained easily upgradeable with aftermarket parts. CTO options on simple stuff often carry fairly extreme embedded markups. The only rule with aftermarket RAM is to test it prior to use. You can easily install 32GB today. Memory footprints can be abhorrent at times, so there are valid use cases.
> Um, no. Forking will duplicate the memory into another process. Copy on write in OS X duplicates shared memory into private memory. It is specifically designed for memory protection. Every application has access to the shared libraries but when a write is called on shared memory the kernel copies the shared memory into the callers space. If that didn't happen shared memory would be corrupted.
While your lack of understanding of operating system design is shameful if you consider yourself a software engineer, even worse is your unfounded belief in your correctness. If a book has taught you that way (which I seriously doubt), it taught you wrong.
Copy-on-write is precisely a duplication method, which is why modern operating systems use it in fork(). The point of copy-on-write is twofold: it optimizes the duplication process and reduces actual memory usage. The optimization is achieved by deferring the actual duplication until a change is made to a region of memory, and the reduction of memory usage is achieved by sharing the unmodified regions of memory and marking them read-only, so that when a process attempts to write to one of these regions of memory, the system catches a page fault, copies the page, creates a writable page descriptor for the new copy, and returns control to the process, which is now free to write. This, obviously, only applies where the operating system is told that the region of memory that the process attempted to modify is supposed to be writable, which is not the case for shared libraries, so copy-on-write does not apply to them, and the system will kill your process if you attempt to write there.
Darwin goes a little further than most operating systems and actually re-uses a single set of page descriptors system-wide to map all global shared libraries into memory. This optimizes application launch time even further since the dynamic linker does not have to map any of the global shared libraries when the application is executed, which is why even the simplest Hello World! program contains the entire set of global shared libraries mapped into its memory. The protection is enforced at the hardware level by specifying that the memory referenced by those page descriptors is read-only; you don't need to duplicate the contents of a memory region in order to protect it.
While copy-on-write is also used to optimize a specific kind of memory allocation (see MAP_PRIVATE vs. MAP_SHARED in mmap()'s manpage), that does not mean copy-on-write itself has anything to do with memory protection; it is only used in that case because you are effectively telling the system that writes are not shared, a semantic that would otherwise require duplication of the entire mapped region. Since you NEVER have a reason to write to a shared library, and in fact all shared libraries are loaded into read-only memory, copy-on-write and private semantics do not apply to them, which is why they don't appear in VPRVT.
Take the time to actually understand what you read and how it is implemented. I don't know what's in that book, but what you understood from it is completely wrong. Not only do you demonstrate lack of understanding for basic operating system implementation concepts, but you also demonstrate lack of understanding of how memory management units (MMUs) in modern CPUs work, which means you probably don't have a clue of what virtual memory actually is.
EDIT: Fixed some stuff.
EDIT #2: Just for kicks, I actually went and read the definition in the book you linked, because I couldn't believe anyone could have published that kind of misinformation, and now I'm absolutely sure that you just lack reading comprehension. You read that copy-on-write uses shared memory and for some reason automatically assumed that all uses of shared memory are related to copy-on-write, which is far from being the case. Read my explanation above and you will understand how it actually works.
I didn't assume that all uses of shared memory are related to copy on write; I claimed that it happens on write to shared memory which is read-write. Which is hardly "all cases". The fact that you make this straw man argument is in itself telling.
The Mac OS X Internals book, on the page I linked to, clearly does say that the kernel shares read-write memory copy-on-write between processes. So we agreed on that. You then answered with the straw man argument that I believe all memory is like that.
Not true; however, while I make no claims about all shared memory being writable, it is your claim that ALL shared libraries are read-only and that the shared memory which is copied on write is not in shared libraries, which makes me wonder where you think it would be.
Just to be clear on what we are saying.
Ok, so here's another page from the utterly misleading Mac OS X Internals: A Systems Approach. There can be no claims of misreading here, it's fairly simple. Nevertheless, I am not the expert, it could be lies, and I think we should all get our money back until you decide to write your own book. But I am basing what I say on this book. Here is what it says, at this page:
http://books.google.ie/books?id=K8vUkpOXhN4C&pg=PA924&lpg=PA924&dq=OS+X+write+to+shared+libraries&source=bl&ots=OKojYVRtVB&sig=UCaOcToRtVL3NeORsiMswMSYWaI&hl=en&sa=X&ei=8cXzUdm5LMWr7Aa31IHIDg&redir_esc=y#v=onepage&q=OS X write to shared libraries&f=false
which says - and I couldn't copy and paste this for obvious reasons -
Mac OS X uses these [shared memory sub maps] to support shared libraries. A shared library on Mac OS X can be compiled such that its read-only and read-write segments are split and relocated at offsets relative to specific addresses.
and later.
Now, a split-segment library can be mapped so that its text segment is completely shared between tasks with a single physical map, whereas the data segment is shared copy-on-write.
In short, changes to the shared library data - probably like setting something - copy the data into the calling process's space.
I don't claim to be an expert. I have read this book, however, and it contradicts you. You'll probably come back with some verbose answer about how of course the data segment was writable, you never said that it wasn't, and so on. But you did say that all shared libraries were not writable. And that is wrong, or the canonical book on OS X is wrong. I'd go with you being wrong.
> I didn't assume that "all shared" uses are related to copy on write, I claimed that it happens on write to shared memory which is read-write. Which is hardly "all cases". The fact that you make this straw man argument is in itself telling.
Nope, you claimed that it happens in "shared frameworks", which are NOT read-write. Don't move the goal posts now.
> Not true; however while I make no claims about all shared memory being writable, it is your claim that ALL shared libraries are read-only and that the shared memory which is copied-on-write is not in shared libraries, which makes me wonder where you think it would be.
I did? Where? Because I distinctly recall stating the existence of a difference between private and shared memory, and that the difference only applies to writable memory.
> Ok, so here's another page from the utterly misleading Max OS X Internals: A Systems approach . There can be no claims of misreading here, its fairly simple. Nevertheless, I am not the expert, it could be lies, and I think we should all get our money back until you decide to write your own book. But I am basing what I say on this book.
I have already established that the problem lies with you, not the book; you simply did not understand what you read.
> Mac OS X uses these [shared memory sub maps] to support shared libraries. A shared library on Mac Os X can be compiled such that its read-only and read-write segments are split and relocated at offsets relative to the specific addresses.
The read-write memory sections are not shared and are actually extremely small (as I will demonstrate below). Copy-on-write is used there in order to avoid expensive static initializations resulting from having a bunch of unnecessary pre-loaded libraries mapped in every process.
If you write the following useless program:
int main() {return 0;}
Then compile it using:
cc -o nop nop.c
Then use lldb to set up a breakpoint, such as:
$ lldb ./nop
Current executable set to './nop' (x86_64).
(lldb) b main
Breakpoint 1: where = nop`main, address = 0x0000000100000f60
Then find its PID in top (in another terminal): you will see that VPRVT (which includes 8MB for the main thread's deferred-allocated stack and the rest for static initializations) for that process sits under 20MB, whereas its VSIZE (which also includes all the shared libraries) is over 2GB. You can see which libraries are loaded at which addresses using vmmap; I won't post the output here since it's huge.
EDIT: Removed [code] tags which don't work on this forum, added 8MB of stack to VPRVT's accounting, which I forgot originally.
Of course, I had the luxury of making this market-timing decision because my notebook allowed the memory to be replaced.
Yeah, a Lenovo W530.
That model also accepts 4 SO-DIMMs. MacBook Pros accept 2. With the Air and rMBP it's soldered in. I completely understand the advantage of as much RAM as possible for multiple VMs.
> Does a program use more or less, how much is used where and when?
Just imagine what you would be able to accomplish today considering how fast and inexpensive memory is compared to what it was 20 years ago. It's not just about the memory footprint, it's also about the memory bandwidth required by modern applications. Hardware is progressing and software is regressing. In computer graphics, the increasing memory footprint and CPU usage can be easily justified by the fact that as hardware progresses, so do real-time rasterizers; CPU and GPU power are so important for computer graphics that, in 2013, we're still counting cycles in tight loops and leveraging CPU speed, GPU speed, memory bandwidth, and bus bandwidth. If the engineering requirements for computer graphics were applied to everything else, today we would be able to load an entire operating system into RAM from secondary storage and run it there while committing changes to secondary storage in batches in the background.
Someone mentioned application icons earlier, for example, and in response to this I ask: why are we still using bitmaps for application icons, especially considering that their source is almost always vectorial? Why not use a vectorial format like we've been using for fonts since the 90s? GPUs are awesome when it comes to dealing with vectors! They are specifically designed to process vectors! They can transform millions of vectors per frame when they have nothing else to do! And they can cache their work in textures for later use, too!
This resource waste truly annoys me. If I had the money, I'd form a company and hire a bunch of other computer graphics engineers to show the world what modern technology can actually do if we tackle software engineering problems like we did 20 years ago.
The Mac might not use vectors for application icons, but it does for toolbar icons. Open a Finder window on /System/Library/ and type .pdf in the search box and you will see lots of them.
Is there any chance that they will bring back the 17" Macbook Pro with this refresh?
VPRVT does contain parts of the shared libraries copied on write to private space. Each app thinks it can access the entire stack of frameworks it has linked against and the shared code is copied to the private memory on write. Whats gained is apps which cant crash the OS.
Quote:
Originally Posted by asdasd
VPRVT does contain parts of the shared libraries copied on write to private space. Each app thinks it can access the entire stack of frameworks it has linked against and the shared code is copied to the private memory on write. Whats gained is apps which cant crash the OS.
It appears that you don't fully understand what copy on write is. Copy on write and shared memory are completely different concepts, with the fpr,er only happening in private writable allocations. Since shared libraries are neither writable nor private, copy on write does not apply to them, and it is because they are not private that they are not accounted for in VPRVT (if you want to account for them as well, check VSIZE).
While, yes, copy on weite would show in VPRVT, there's no copy on write associated with shared libraries, so your point is moot. The most common occurrence of copy on write on a system is in fork().
Um, no. Forking will duplicate the memory into another process. Copy on write in OS X duplicates shared memory into private memory. It is specifically designed for memory protection. Every application has access to the shared libraries but when a write is called on shared memory the kernel copies the shared memory into the callers space. If that didn't happen shared memory would be corrupted.
Some more explanation in this book.
http://books.google.ie/books?id=K8vUkpOXhN4C&pg=PA862&lpg=PA862&dq=copy+on+write+OS+X&source=bl&ots=OKojX-WsXz&sig=AWqJi34ZUjOvBRnMFoZ1__ET80M&hl=en&sa=X&ei=5zzzUZfGA8WRhQetxYDQBw&redir_esc=y#v=onepage&q=copy on write OS X&f=false
Quote:
Originally Posted by Curtis Hannah
A 32 gb ram option would sound great too, with the higher 1 TB of flash option too (if the Mac Pro packs it the Mac book pro can) wonder if we see any other goodies in it?
It would be nice if ram and storage remained easily addressable with after market parts. CTO things on simple stuff often have fairly extreme embedded markups. The only rule with after market ram is test it prior to use. You can easily install 32GB today. Memory footprints can be abhorrent at times, so there are valid use cases.
> Um, no. Forking will duplicate the memory into another process. Copy on write in OS X duplicates shared memory into private memory. It is specifically designed for memory protection. Every application has access to the shared libraries but when a write is called on shared memory the kernel copies the shared memory into the callers space. If that didn't happen shared memory would be corrupted.
While your lack of understanding of operating system design is shameful if you consider yourself a software engineer, even worse is your unfounded belief in your correctness. If a book has taught you that way (which I seriously doubt), it taught you wrong.
Copy-on-write is precisely a duplication method, which is why modern operating systems use it in fork(). The point of copy-on-write is twofold: it optimizes the duplication process and reducing actual memory usage. The optimization is achieved by deferring the actual duplication until a change is made to a region of memory, and the reduction of memory usage is achieved by sharing the unmodified regions of memory and marking them read-only, so that when a process attempts to write to one of these regions of memory, the system catches a page fault, copies it, creates a writable page descriptor for the new copy, and returns control to the process, which is now free to write. This, obviously, only applies where the operating system is told that the region of memory that the process attempted to modify is supposed to be writable, which is not the case in shared libraries, so copy-on-write does not apply to them, and the system will kill your process if you attempt to write there.
Darwin goes a little further than most operating systems and actually re-uses a single set of page descriptors system-wide to map all global shared libraries into memory. This optimizes application launch time even further since the dynamic linker does not have to map any of the global shared libraries when the application is executed, which is why even the simplest Hello World! program contains the entire set of global shared libraries mapped into its memory. The protection is enforced at the hardware level by specifying that the memory referenced by those page descriptors read-only, you don't need to duplicate the contents of a memory region in order to protect it.
While copy-on-write is also used to optimize a specific kind of memory allocation (see MAP_PRIVATE vs. MAP_SHARED in mmap()'s manpage), that does not mean copy-on-write itself has anything to do with memory protection; it is only used in that case because you are effectively telling the system that writes are not shared, a semantic that would otherwise require duplication of the entire mapped region. Since you NEVER have a reason to write to a shared library, and in fact all shared libraries are loaded into read-only memory, copy-on-write and private semantics do not apply to them, which is why they don't appear in VPRVT.
Take the time to actually understand what you read and how it is implemented. I don't know what's in that book, but what you understood from it is completely wrong. Not only do you demonstrate lack of understanding for basic operating system implementation concepts, but you also demonstrate lack of understanding of how memory management units (MMUs) in modern CPUs work, which means you probably don't have a clue of what virtual memory actually is.
EDIT: Fixed some stuff.
EDIT #2: Just for the kicks, I actually went on and read the definition in the book you linked, because I couldn't believe anyone could have published that kind of misinformation, and now I'm absolutely sure that you just lack readong comprehension. You read that copy-on-write uses shared memory and for some reason automatically assumed that all uses of shared memory are related to copy-on-write, which is far from being the case. Read my explanation above and you will understand how it actually works.
I didn't assume that "all shared" uses are related to copy on write, I claimed that it happens on write to shared memory which is read-write. Which is hardly "all cases". The fact that you make this straw man argument is in itself telling.
The Mac OS X Internals book, on the page I linked to, clearly does say that the kernel shares read-write memory copy-on-write between processes. So we agree on that. You then answered with the straw man argument that I believe all memory is like that.
Not true. While I make no claims about all shared memory being writable, it is your claim that ALL shared libraries are read-only and that the shared memory which is copied on write is not in shared libraries, which makes me wonder where you think it would be.
Just to be clear on what we are saying.
Ok, so here's another page from the utterly misleading Mac OS X Internals: A Systems Approach. There can be no claims of misreading here, it's fairly simple. Nevertheless, I am not the expert, it could be lies, and I think we should all get our money back until you decide to write your own book. But I am basing what I say on this book.
Here is what it says, on this page:
http://books.google.ie/books?id=K8vUkpOXhN4C&pg=PA924&lpg=PA924&dq=OS+X+write+to+shared+libraries&source=bl&ots=OKojYVRtVB&sig=UCaOcToRtVL3NeORsiMswMSYWaI&hl=en&sa=X&ei=8cXzUdm5LMWr7Aa31IHIDg&redir_esc=y#v=onepage&q=OS X write to shared libraries&f=false
I couldn't copy and paste this for obvious reasons, so I've typed it out. It says:
Mac OS X uses these [shared memory submaps] to support shared libraries. A shared library on Mac OS X can be compiled such that its read-only and read-write segments are split and relocated at offsets relative to specific addresses.
and later:
Now, a split-segment library can be mapped so that its text segment is completely shared between tasks with a single physical map, whereas the data segment is shared copy-on-write.
In short, changes to the shared library's data (probably something like setting a variable) copy the data into the calling process's private memory.
I don't claim to be an expert. I have read this book, however, and it contradicts you. You'll probably come back with some verbose answer about how of course the data segment was writable, you never said that it wasn't, and so on. But you did say that all shared libraries were not writable. And that is wrong, or the canonical book on OS X is wrong. I'd go with you being wrong.
> I didn't assume that all "shared" uses are related to copy-on-write; I claimed that it happens on writes to shared memory which is read-write. Which is hardly "all cases". The fact that you make this straw man argument is in itself telling.
Nope, you claimed that it happens in "shared frameworks", which are NOT read-write. Don't move the goal posts now.
> Not true. While I make no claims about all shared memory being writable, it is your claim that ALL shared libraries are read-only and that the shared memory which is copied on write is not in shared libraries, which makes me wonder where you think it would be.
I did? Where? Because I distinctly recall stating the existence of a difference between private and shared memory, and that the difference only applies to writable memory.
> Ok, so here's another page from the utterly misleading Mac OS X Internals: A Systems Approach. There can be no claims of misreading here, it's fairly simple. Nevertheless, I am not the expert, it could be lies, and I think we should all get our money back until you decide to write your own book. But I am basing what I say on this book.
I have already established that the problem is you, not the book; you simply did not understand what you read.
> Mac OS X uses these [shared memory submaps] to support shared libraries. A shared library on Mac OS X can be compiled such that its read-only and read-write segments are split and relocated at offsets relative to specific addresses.
The read-write memory sections are not shared and are actually extremely small (as I will demonstrate below). Copy-on-write is used there in order to avoid expensive static initializations resulting from having a bunch of unnecessary pre-loaded libraries mapped in every process.
If you write the following useless program:
int main() {return 0;}
Then compile it using:
cc -o nop nop.c
Then use lldb to set up a breakpoint such as:
$ lldb ./nop
Current executable set to './nop' (x86_64).
(lldb) b main
Breakpoint 1: where = nop`main, address = 0x0000000100000f60
(lldb) run
Process 591 launched: './nop' (x86_64)
Process 591 stopped
* thread #1: tid = 0x1c03, 0x0000000100000f60 nop`main, stop reason = breakpoint 1.1
frame #0: 0x0000000100000f60 nop`main
nop`main:
-> 0x100000f60: pushq %rbp
0x100000f61: movq %rsp, %rbp
0x100000f64: movl $0, %eax
0x100000f69: movl $0, -4(%rbp)
(lldb)
And then check its PID in top (in another terminal): you will see that VPRVT (which includes 8MB for the main thread's deferred-allocated stack and the rest for static initializations) for that process sits under 20MB, whereas its VSIZE (which also includes all the shared libraries) is over 2GB. You can see which libraries are loaded at which addresses using vmmap; I won't post the output here since it's huge.
EDIT: Removed [code] tags which don't work on this forum, added 8MB of stack to VPRVT's accounting, which I forgot originally.
Wow, all this consternation over memory.
Does a program use more or less, how much is used where and when?
You know, a few months ago when memory was cheap, I avoided this consternation.
For $100 total delivered price, I upgraded my laptop to 32 GB of 1600 CAS 10 RAM.
Of course, I had the luxury of making this market-timing decision because my notebook allows the memory to be replaced.
Yeah, a LENOVO W530.
My only concern is using all that memory for RAMDISK, VM, and desktop applications.
Quote:
Originally Posted by mdk777
Of course, I had the luxury of making this market-timing decision because my notebook allows the memory to be replaced.
Yeah, a LENOVO W530.
That model also accepts 4 SO-DIMMs. MacBook Pros accept 2. With the Air and rMBP it's soldered in. I completely understand the advantage of as much RAM as possible for multiple VMs.
> Wow, all this consternation over memory.
> Does a program use more or less, how much is used where and when?
Just imagine what you would be able to accomplish today considering how fast and inexpensive memory is compared to what it was 20 years ago. It's not just about the memory footprint, it's also about the memory bandwidth required by modern applications. Hardware is progressing and software is regressing. In computer graphics, the increasing memory footprint and CPU power can be easily justified by the fact that as hardware progresses, so do real-time rasterizers; CPU and GPU power are so important for computer graphics that in 2013 we're still counting cycles in tight loops and leveraging CPU speed, GPU speed, memory bandwidth, and bus bandwidth. If the engineering requirements for computer graphics were applied to everything else, today we would be able to load an entire operating system into RAM from secondary storage and run it there while committing changes to secondary storage in batches in the background.
Someone mentioned application icons earlier, for example, and in response to this I ask: why are we still using bitmaps for application icons, especially considering that their source is almost always vectorial? Why not use a vectorial format like we've been using for fonts since the 90s? GPUs are awesome when it comes to dealing with vectors! They are specifically designed to process vectors! They can transform millions of vectors per frame when they have nothing else to do! And they can cache their work in textures for later use, too!
This resource waste truly annoys me. If I had the money, I'd form a company and hire a bunch of other computer graphics engineers to show the world what modern technology can actually do if we tackle software engineering problems like we did 20 years ago.
I don't know about new Macbook Pros but I predict new Mac mini on Tuesday. The current generation is 5-7 days shipping on several online Apple Stores.
Quote:
Originally Posted by Epsico
Someone mentioned application icons earlier, for example, and in response to this I ask: why are we still using bitmaps for application icons, especially considering that their source is almost always vectorial? Why not use a vectorial format like we've been using for fonts since the 90s? GPUs are awesome when it comes to dealing with vectors! They are specifically designed to process vectors! They can transform millions of vectors per frame when they have nothing else to do! And they can cache their work in textures for later use, too!
The Mac might not use vectors for application icons, but it does for toolbar icons. Open a Finder window on /System/Library/ and type .pdf in the search box and you will see lots of them.
Quote:
Originally Posted by ascii
I don't know about new Macbook Pros but I predict new Mac mini on Tuesday. The current generation is 5-7 days shipping on several online Apple Stores.
Apple's traditional Summer release date is the first week of August. So you are probably off by seven days.