The AppleInsider Dictionary


Comments

  • Reply 21 of 30
    eskimo Posts: 474 member
    [quote]Originally posted by PookJP:

    Here's another one:

    L2 and L3 cache.

    What are they? What's the difference? Which is better? Why?

    - Pook[/quote]



    There is no single concrete definition for L2 and L3 cache. L2 stands for Level 2 cache and L3 for Level 3 cache; the numerical designation simply indicates the relative proximity of that memory to the processor. Information residing in L2 cache can be accessed more quickly than information in L3 cache. The implementation of either is variable: L2 cache can be found on the physical CPU silicon itself (known as on-die) or exist as a separate chip connected to the processor by a data path on the computer's motherboard. The same holds true for L3. In the past, manufacturing and cost issues prevented most consumer-priced CPUs from having on-die cache. The general trend has been to move the cache on-die, as this reduces the physical distance to the memory, which in turn reduces the time it takes for information to travel from the cache to the CPU.

    The time it takes to access information that resides in the cache is called latency. Latency depends on the cache's design as well as on physical distance. Another measure of cache performance is bandwidth, the rate at which data can be transferred to the CPU, usually measured in GB/s. Bandwidth can be increased in several ways, two of which are widening the data path, measured in bits (64-bit, 256-bit, etc.), and raising its frequency (clock speed).
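
    As a rough illustration of how bus width and clock speed combine into bandwidth, here's a back-of-the-envelope sketch in C. The 256-bit / 1 GHz figures are made up for the example, not the specs of any particular CPU.

    [code]
    /* Peak bandwidth = (bus width in bytes) x (transfers per second).
       The figures below are hypothetical, purely for illustration. */
    #include <stdio.h>

    int main(void)
    {
        double width_bits = 256.0;   /* hypothetical 256-bit cache bus */
        double clock_hz   = 1.0e9;   /* hypothetical 1 GHz bus clock   */

        double bytes_per_transfer = width_bits / 8.0;
        double peak_gb_per_s = bytes_per_transfer * clock_hz / 1.0e9;

        printf("Peak bandwidth: %.1f GB/s\n", peak_gb_per_s);  /* 32.0 */
        return 0;
    }
    [/code]

    Doubling either the width or the clock doubles that peak figure, which is why both numbers show up on spec sheets.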



    Which is better is a somewhat subjective question and depends on the application. If there were no limitations on cost or manufacturability, you would want as much cache as possible, as close to the processor as possible, with the lowest possible latency: L1 or even L0. Since cost usually limits the size of the L2 cache, any task whose dataset is larger than the physical size of the cache will take a performance hit, because the processor has to swap the rest of the dataset in and out of main memory. In that case a decent-sized L3 cache provides a performance boost, as its latency is much lower than main memory's.





    Note in this graph of performance as a function of dataset size that once the dataset surpasses the size of the L1 cache on the Pentium (32KB) and Athlon (128KB) there is a decrease in performance, and performance suffers a severe drop-off when the size exceeds the L2 cache size (256KB). This is due to the added latency and reduced bandwidth available to fetch the information from ever slower levels of memory.
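
    For anyone who wants to reproduce that kind of graph on their own machine, here's a minimal sketch of the idea in C: time repeated passes over arrays of increasing size and watch throughput fall as the working set outgrows each cache level. The sizes and the timer are rough; it's only meant to show the shape of the curve, not to be a rigorous benchmark.

    [code]
    /* Working-set sweep: time summing arrays of increasing size.
       Throughput typically drops as the array outgrows each cache level. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const size_t max_bytes  = 8 * 1024 * 1024;      /* sweep up to 8 MB     */
        const size_t work_bytes = 256 * 1024 * 1024;    /* total bytes per size */

        for (size_t bytes = 4 * 1024; bytes <= max_bytes; bytes *= 2) {
            size_t n = bytes / sizeof(int);
            int *a = malloc(bytes);
            if (!a) return 1;
            for (size_t i = 0; i < n; i++) a[i] = (int)i;

            size_t passes = work_bytes / bytes;
            volatile int sink = 0;
            clock_t start = clock();
            for (size_t p = 0; p < passes; p++)
                for (size_t i = 0; i < n; i++)
                    sink += a[i];
            double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
            if (secs <= 0.0) secs = 1e-9;               /* avoid divide-by-zero */

            printf("%7zu KB working set: %8.1f MB/s\n",
                   bytes / 1024, work_bytes / (1024.0 * 1024.0) / secs);
            free(a);
        }
        return 0;
    }
    [/code]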
  • Reply 22 of 30
    watermelons and cat?
  • Reply 23 of 30
    Apple's Font: Apple Garamond



  • Reply 24 of 30
    mark Posts: 143 member
    [quote]Originally posted by Arakageeta:

    watermelons and cat?[/quote]







    I love seeing old-timers still hanging around. Cheers.
  • Reply 25 of 30
    macaddict Posts: 1,055 member
    My cat can eat a whole watermelon.



    My watermelon can eat a whole cat.



    My cat is a vegetarian.



    I can eat both a watermelon and a cat.



    My cat is a watermelon.



    ...etc.
  • Reply 26 of 30
    pookjp Posts: 280 member
    Yeah, what was that whole Watermelons and Cat thing? I was new on the boards when that was going on, and didn't feel quite right jumping in. I registered in November 1999.



    Anyway...



    Another question: OpenGL. What's that all about?
  • Reply 27 of 30
    xool Posts: 2,460 member
    [quote]Originally posted by PookJP:

    Yeah, what was that whole Watermelons and Cat thing? I was new on the boards when that was going on, and didn't feel quite right jumping in. I registered in November 1999.

    Anyway...

    Another question: OpenGL. What's that all about?[/quote]



    OpenGL is a 3-D graphics programming API (Application Programming Interface). Basically it's a set of program routines which perform certain useful functions, like drawing lines, changing the state of the renderer, etc. By using this API, programmers do not have to write their programs to interface directly with the graphics hardware; in fact, they don't even need to know how it really works at all!



    So, rather than dealing with all the little bits and specifics, code stays more general. And all a hardware provider has to do is supply an OpenGL implementation for their equipment, and then most OpenGL programs will work with it. (A little simplified, but so what...)
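
    To make that a bit more concrete, here's a minimal sketch in C of what calling the API looks like, using GLUT to open a window. It's only an illustration of the style of the calls (clear the screen, draw one line), not a program anyone would ship; note that on Mac OS X the header lives at <GLUT/glut.h> rather than <GL/glut.h>.

    [code]
    /* Minimal OpenGL sketch: clear the screen and draw one white line.
       The program never touches the graphics hardware directly; the
       vendor's OpenGL implementation underneath handles that. */
    #include <GL/glut.h>        /* <GLUT/glut.h> on Mac OS X */

    static void display(void)
    {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   /* black background     */
        glClear(GL_COLOR_BUFFER_BIT);

        glColor3f(1.0f, 1.0f, 1.0f);            /* white line           */
        glBegin(GL_LINES);                      /* a "draw lines" call  */
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f,  0.5f);
        glEnd();

        glFlush();                              /* hand off for drawing */
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutInitWindowSize(400, 400);
        glutCreateWindow("OpenGL sketch");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }
    [/code]

    The same source compiles against any vendor's OpenGL; that's the whole point of the abstraction.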



    Anyhow, I can go in to more detail later... if its needed.
  • Reply 28 of 30
    pookjp Posts: 280 member
    www.macnn.com is reporting today that OpenGL will never be available for OS X. Is that a big deal?
  • Reply 29 of 30
    daver Posts: 496 member
    [quote]Originally posted by PookJP:

    www.macnn.com is reporting today that OpenGL will never be available for OS X. Is that a big deal?[/quote]



    It's a big deal all right, but you're a little confused. Mac OS X has supported OpenGL for a very long time, but now Apple is saying you need a Mac equipped with an ATI Rage 128 video card (or better) for it to work.



    This news isn't so bad for beige Power Mac G3 owners with a free PCI slot, as they can install a new graphics card. It is a major problem for iMac, iBook and PowerBook users, however, as these models' video hardware isn't upgradable. All iMacs before the iMac DV, all iBooks before the dual USB model and all PowerBooks older than Pismo won't be supported.
  • Reply 30 of 30
    pookjp Posts: 280 member
    Aaaah, interesting. I'm not sure if I agree with Apple's current trend of excluding older Mac users, given their definition of "old." I hardly think a computer 2 years old (dual USB iBook...) should be unable to use certain features and products (iPod...).



    - Pook