Those numbers refer to the "CAS latency" of the chips. Memory is always returned in bytes (wider buses multiplex the chips together), and usually you are looking for a whole series of bytes in a row. So some bright engineer had the idea to make the chips assume that you want a string of byte groups in a row, and fetch them automatically. Even if you don't want the subsequent bytes, there is rarely a penalty for fetching them.
Your 30330 memory says that it takes 3 clock cycles to fetch the first byte; it will have the second one ready on the next cycle. Then you will have to wait 3 cycles for the third, another 3 for the fourth, and the fifth will be ready on the next cycle.
Your new 30440 DIMM is a little slower: it takes 3 clock cycles to fetch the first byte, and it will have the second one ready on the next cycle. Then you will have to wait 4 cycles for the third, another 4 for the fourth, and the fifth will be ready on the next cycle.
I don't know the specifics of the memory controller you are working with, but in many cases it is a "low man wins" situation, so the whole sub-system runs at the speed of the slowest component. Now this might not be that bad a thing: we are not talking about a huge difference, and if we compare this difference to the amount of time spent pulling stuff from virtual memory on the hard drive, there are some enormous gains to be made from the extra memory.
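To get a feel for how small the difference is, the cycle counts described above can be turned into a back-of-the-envelope calculation. This is only a rough sketch: the per-byte cycle patterns are taken straight from the description (3-1-3-3-1 for the 30330 module, 3-1-4-4-1 for the 30440), `burst_cycles` is a made-up helper, and real memory controllers overlap and pipeline requests in ways this simple model ignores.

```python
# Cycles per byte in a five-byte burst, as described above.
fast_dimm = [3, 1, 3, 3, 1]   # the "30330" module
slow_dimm = [3, 1, 4, 4, 1]   # the "30440" module

def burst_cycles(pattern, n_bytes):
    """Total clock cycles to read n_bytes, repeating the burst pattern."""
    full_bursts, remainder = divmod(n_bytes, len(pattern))
    return full_bursts * sum(pattern) + sum(pattern[:remainder])

print(burst_cycles(fast_dimm, 5))   # 11 cycles for a five-byte burst
print(burst_cycles(slow_dimm, 5))   # 13 cycles for a five-byte burst

# Over a large sequential transfer the slower module costs
# only about 18% more cycles -- not a huge difference.
print(burst_cycles(slow_dimm, 5000) / burst_cycles(fast_dimm, 5000))
```

Compare that ~18% penalty with a page fault to the hard drive, which costs milliseconds rather than nanoseconds, and the case for simply having more memory is clear.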
Comments
Originally posted by Karl Kuehn
Thank you Karl, for a very knowledgeable and clear explanation!