Comments
Originally posted by hymie
Yes, this is absolutely true. If any of you type "top" in a Terminal window, you will see what wizard69 is talking about. At the moment, on my 'Book with 1.25GB of physical RAM, I have over nine and a half GB of virtual memory:
VM: 9.61G + 170M
You know not of what you speak.
Every process has its own independent 4GB virtual address space. Top reports physical allocations as RSHRD and RSIZE, where RSHRD is the set of shared physical memory addresses that must be subtracted from RSIZE to determine how much memory has actually been physically allocated to the process by the OS. You can't just add up RSIZE, though, because some of it may have been paged out to the backing store on disk and is no longer in RAM. RPRVT reports the size of a mask over the address space that is private, but may not have actually been physically allocated yet.
Those VM numbers you are adding up, for each independent process, only represent the portion of that process's 4GB virtual address space the OS is currently allocating. The OS keeps track of it this way so it does not have to maintain exhaustive page tables for every process's full address space.
In reality you have exactly (4GB * numberOfProcesses) of virtual memory (address space) in your computer, but the OS only needs to do the accounting for what you are using, plus a reasonable ready reserve of addresses per process. Because most processes use far less than their available 4GB, you are seeing a much smaller amount reported by the OS as being in VM use/ready to allocate.
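A quick way to see the reserved-versus-resident distinction for yourself is a minimal sketch like the one below. It assumes Mac OS X or another POSIX system with MAP_ANON; the 512MB and 64MB figures are arbitrary, and the sleeps are only there to give you time to look at top.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 512UL * 1024 * 1024;            /* 512 MB of address space */

    /* Anonymous, private, untouched pages: the kernel records the reservation
       but hands out no physical frames yet. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("Reserved %zu MB: VSIZE jumps, RSIZE barely moves.\n", len >> 20);
    sleep(15);                                   /* check top now */

    memset(p, 1, 64UL * 1024 * 1024);            /* touch 64 MB of the region */
    printf("Touched 64 MB: RSIZE/RPRVT grow by roughly that much.\n");
    sleep(15);                                   /* check top again */

    munmap(p, len);
    return 0;
}

While it runs, the process's VM figure includes the whole 512MB reservation, but only the 64MB that was actually touched ever shows up as resident memory.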
Originally posted by Hiro
You know not of what you speak.
Every process has its own independent 4GB virtual address space. Top reports physical allocations as RSHRD and RSIZE, where RSHRD is the set of shared physical memory addresses that must be subtracted from RSIZE to determine how much memory has actually been physically allocated to the process by the OS. You can't just add up RSIZE, though, because some of it may have been paged out to the backing store on disk and is no longer in RAM. RPRVT reports the size of a mask over the address space that is private, but may not have actually been physically allocated yet.
Those VM numbers you are adding up, for each independent process, only represent the portion of that process's 4GB virtual address space the OS is currently allocating. The OS keeps track of it this way so it does not have to maintain exhaustive page tables for every process's full address space.
In reality you have exactly (4GB * numberOfProcesses) of virtual memory (address space) in your computer, but the OS only needs to do the accounting for what you are using, plus a reasonable ready reserve of addresses per process. Because most processes use far less than their available 4GB, you are seeing a much smaller amount reported by the OS as being in VM use/ready to allocate.
"A solid state disk (SSD) is electrically, mechanically and software compatible with a conventional (magnetic) hard disk or winchester. The difference is that the storage medium is not magnetic (like a hard disk) or optical (like a CD) but solid state semiconductor such as battery backed RAM, EPROM or other electrically erasable RAMlike chip. This provides faster access time than a disk, because the data can be randomly accessed and does not rely on a read/write interface head synchronising with a rotating disk. The SSD also provides greater physical resilience to physical vibration, shock and extreme temperature fluctuations. The only downside is a higher cost per megabyte of storage."
From the Webopedia SSD article
"Abbreviated SSD, a solid state disk is a high-performance plug-and-play storage device that contains no moving parts. SSD components include either DRAM or flash memory boards, a memory bus board, a CPU, and a battery card. Because they contain their own CPUs to manage data storage, they are a lot faster (18MBps for SCSI-II and 44 MBps for UltraWide SCSI interfaces) than conventional rotating hard disks ; therefore, they produce highest possible I/O rates. SSDs are most effective for server applications and server systems, where I/O response time is crucial. Data stored on SSDs should include anything that creates bottlenecks, such as databases, swap files, library and index files, and authorization and login information."
For the rest of this particular article, we will compare the HDDs and Flash-based SSDs. The latter are the most popular type of SSDs employed by the military, aerospace, industrial and embedded systems industries.
Read the rest of the article; it compares hard disks to the upcoming flash-based disks quite well:
http://www.storagesearch.com/bitmicro-art3.html
Originally posted by webmail
I assume the systems we have at work are in 4GB chunks? I believe we have two Intel servers with 8GB of memory in them, or maybe it was 7GB. I just remember it was over the 4GB threshold... Maybe 4 x 2GB chips?
There are several variants of servers that have more than one processor and multi-gigabyte RAM arrays, where the hardware is [firmware] partitioned into multiple independent machines all inside the same box. Each virtual machine conforms to the OS's basic hardware requirements. HP, Sun, SGI, and IBM all make servers that follow this design. Not all of their servers do, but they all have lines that do.
If they were 64-bit CPU servers, then the >4GB memory could follow the shared model, as a 64-bit CPU can natively implement an address space of 16 exabytes.
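That 16-exabyte figure is just the arithmetic of a 64-bit pointer. A throwaway check in plain C (nothing platform-specific assumed):

#include <stdio.h>

int main(void) {
    /* 2^64 bytes, kept as a double to avoid integer overflow */
    double bytes = 18446744073709551616.0;
    double eib   = bytes / (1024.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0);

    printf("Pointer width on this build: %zu bits\n", sizeof(void *) * 8);
    printf("2^64 bytes = %.0f EiB (the \"16 exabytes\" above)\n", eib);
    return 0;
}

On a 32-bit build the first line prints 32, which is exactly where the 4GB-per-process ceiling in this thread comes from.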
Originally posted by webmail
That makes a Flash-based SSD approximately 100 times faster than a rotating disk.
"A solid state disk (SSD) is electrically, mechanically and software compatible with a conventional (magnetic) hard disk or winchester. The difference is that the storage medium is not magnetic (like a hard disk) or optical (like a CD) but solid state semiconductor such as battery backed RAM, EPROM or other electrically erasable RAMlike chip. This provides faster access time than a disk, because the data can be randomly accessed and does not rely on a read/write interface head synchronising with a rotating disk. The SSD also provides greater physical resilience to physical vibration, shock and extreme temperature fluctuations. The only downside is a higher cost per megabyte of storage."
From the Webopedia SSD article
"Abbreviated SSD, a solid state disk is a high-performance plug-and-play storage device that contains no moving parts. SSD components include either DRAM or flash memory boards, a memory bus board, a CPU, and a battery card. Because they contain their own CPUs to manage data storage, they are a lot faster (18MBps for SCSI-II and 44 MBps for UltraWide SCSI interfaces) than conventional rotating hard disks ; therefore, they produce highest possible I/O rates. SSDs are most effective for server applications and server systems, where I/O response time is crucial. Data stored on SSDs should include anything that creates bottlenecks, such as databases, swap files, library and index files, and authorization and login information."
For the rest of this particular article, we will compare the HDDs and Flash-based SSDs. The latter are the most popular type of SSDs employed by the military, aerospace, industrial and embedded systems industries.
Read the rest of the article; it compares hard disks to the upcoming flash-based disks quite well:
http://www.storagesearch.com/bitmicro-art3.html
Yes, someday SSDs will be affordable enough to replace rotating media as the primary storage for a PC. SSD is a well-engineered solution that does not require radically rewriting any OS code.
Some of the arguments well above haven't hinged on this, though. Flash and SSDs aren't bad, but there are ways of trying to use flash that are bad, or at least create more potential problems than they are worth.
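If you want to put your own number on the random-access gap, a crude probe like the sketch below will do. It is only an illustration: the file name is a placeholder, it assumes a POSIX system, and the OS page cache will flatter the result unless the test file is much larger than RAM.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(int argc, char **argv) {
    /* Pass the path of a large file living on the device you want to probe. */
    const char *path = (argc > 1) ? argv[1] : "testfile.bin";   /* placeholder */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t size = lseek(fd, 0, SEEK_END);
    if (size < 4096) { fprintf(stderr, "file too small\n"); return 1; }

    char buf[4096];
    const int reads = 1000;
    srand(42);

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < reads; i++) {
        /* Jump to a random 4 KB block; on a rotating disk each jump pays a seek. */
        off_t off = (off_t)(rand() % (size / 4096)) * 4096;
        if (pread(fd, buf, sizeof buf, off) < 0) { perror("pread"); return 1; }
    }
    gettimeofday(&t1, NULL);

    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                (t1.tv_usec - t0.tv_usec) / 1000.0;
    printf("%d random 4 KB reads: %.1f ms total, %.3f ms per read\n",
           reads, ms, ms / reads);
    close(fd);
    return 0;
}

Run it once against a file on the rotating disk and once against a same-sized file on flash; the per-read figure is where the order-of-magnitude difference the article describes shows up.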
Well, the answer depends on the specifics of the server. It could be an SMP system using extended addressing on 32-bit hardware. It could be an Itanium-based system natively addressing the RAM. Or it could be a machine with two independent 32-bit processors maxed out on RAM.
My guess would be an SMP-based system, possibly using a Xeon chip of some sort and extended addressing. This would give each process 4 gigs of virtual RAM, or possibly real RAM. The OS would manage the memory beyond 4 gigs without the processes knowing anything about it.
This is similar to (but not quite the same as) what graphical processes experience under OS X on PPC64. On 32-bit Intel server hardware the extra memory has proven to be very useful in some applications. None of these schemes, however, really makes up for 64-bit real addressing for applications that need it.
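To make the "process sees 4GB, the OS juggles the rest" idea concrete, here is one way a 32-bit process typically chews through more data than it can address at once: map a window, work on it, unmap it, move on. This is only a sketch under assumed names; "bigdata.bin" and the 256MB window are made up, and it uses plain mmap rather than any vendor-specific extended-addressing API.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void) {
    int fd = open("bigdata.bin", O_RDONLY);      /* hypothetical large file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    const size_t window = 256UL * 1024 * 1024;   /* 256 MB slice at a time */
    unsigned long long checksum = 0;

    for (off_t off = 0; off < st.st_size; off += (off_t)window) {
        off_t remaining = st.st_size - off;
        size_t len = remaining < (off_t)window ? (size_t)remaining : window;

        unsigned char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        for (size_t i = 0; i < len; i++)         /* do some work on this slice */
            checksum += p[i];

        munmap(p, len);                          /* give the window back before
                                                    mapping the next slice */
    }
    printf("checksum: %llu\n", checksum);
    close(fd);
    return 0;
}

The process never holds more than one window's worth of address space, while the OS is free to keep as much of the data cached in physical RAM above 4GB as it likes.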
Thanks
Dave
Originally posted by webmail
I assume the systems we have at work are in 4GB chunks? I believe we have two Intel servers with 8GB of memory in them, or maybe it was 7GB. I just remember it was over the 4GB threshold... Maybe 4 x 2GB chips?