G5 : 64 bits or 32 bits ?


Comments

  • Reply 101 of 126
    razzfazz Posts: 728member
    [quote]Originally posted by Eric D.V.H:

    <strong>

    No they didn't! They started off with 1-bit (monochrome), 2-bit (4 colors), 4-bit (16 colors), 8-bit (256 colors) and (later) 16-bit (thousands of colors).

    </strong><hr></blockquote>



    Argh, of course. I meant that people originally used 24 bit wide "true colors".

    (Besides, you forgot the early "high color" video cards which had 15-bit-colors (5 bits per channel). Also, each palette entry in the Amiga was a 12-bit-color.)





    [quote]<strong>I have no idea of why they decided to reserve the last quarter of the byte for alternate channels/maps. Probably to go along with the money-grubbing synthesizer/digitizer manufacturers.

    </strong><hr></blockquote>



    No. They used 32 bits to store 24-bit color values for alignment reasons. Cf. Programmer's comment above.





    [quote]<strong>

    Refer to prior comments to Programmer regarding the likelihood of a pixel's byte being quartered and drawn.

    </strong><hr></blockquote>



    ???





    [quote]<strong>

    Pull out a good, analog laserdisc, hook up the player and your computer to your headphones through a good audio switcher and try it yourself (and make sure the analog stuff is top notch; bad analog sounds MUCH worse than bad digital, but good analog sounds quite a bit better than any digital audio does, has or ever will).

    </strong><hr></blockquote>



    Either you accidentally called a vinyl record a laserdisc, or you just didn't make a lot of sense. Laserdiscs are digital.





    [quote]<strong>

    Don't forget the future. 24/192/8.2 DVD-Audio discs are the end-format of the future, and it never hurts to be prepared.

    </strong><hr></blockquote>



    That's still 40 bits of headroom, which is still completely unrealistic.

    Have you ever seen anybody use more than 100% headroom?



    And again, while the tech stuff gets better all the time, our ears don't.



    Bye,

    RazzFazz
  • Reply 102 of 126
    razzfazz Posts: 728member
    [quote]Originally posted by Eric D.V.H:

    <strong>

    Hmm... your math is correct, but you're forgetting something. Macs originally used whole word sizes indivisible by three, such as eight and sixteen, to represent color. Suddenly switching to a quartered byte to represent color would have proven practically impossible.</strong><hr></blockquote>



    a) How exactly do 1- and 4-bit-colors fall into the "whole word sizes" category?



    b) 8-bit-colors were palettized, and as such a wholly different beast (palettized colors of course can indeed not be subdivided in any sensible way).



    c) 16-bit-colors use 5 bits each for red and blue, and 6 bits for green (IIRC), because our eyes are more sensitive to the latter than to the former two.





    [quote]<strong>

    as well as crippling either the new colors or the old ones.

    </strong><hr></blockquote>



    Whatever that's supposed to mean...



    Bye,

    RazzFazz
  • Reply 103 of 126
    [quote]Originally posted by RazzFazz:

    <strong>a) How exactly do 1- and 4-bit-colors fall into the "whole word sizes" category?</strong><hr></blockquote>



    Simple.
    • 32÷32=1

    • 32÷8=4

    [quote]Originally posted by RazzFazz:

    <strong>b) 8-bit-colors were palettized, and as such a wholly different beast (palettized colors of course can indeed not be subdivided in any sensible way).</strong><hr></blockquote>



    Palettized colors are stored the same way inside the file as non-palettized colors. The only difference is that the _whole screen_ is changed to the custom palette as specified prior to (that is, separately from when) the colors themselves are read. That's why colors that weren't designed with that palette in mind (like your desktop, while playing a windowed game) looked all phreaky.



    [quote]Originally posted by RazzFazz:

    <strong>c) 16-bit-colors use 5 bits each for red and blue, and 6 bits for green (IIRC), because our eyes are more sensitive to the latter than to the former two.</strong><hr></blockquote>



    [Hmmm] :confused: [Skeptical]

    That sounds a little too fishy even for AppleInsider. I'd like a little substantiation on that one before I'll swallow it.



    Eric,



    [ 04-12-2002: Message edited by: Eric D.V.H ]
  • Reply 104 of 126
    razzfazz Posts: 728member
  • Reply 105 of 126
    razzfazz Posts: 728member
    [quote]Originally posted by Eric D.V.H:

    <strong>

    Simple.



    32÷32=1

    32÷8=4

    </strong><hr></blockquote>



    Are you pulling my leg or something?

    You can't be serious here, can you?

    Now one bit is supposed to be "whole byte size" because 8 divided by 8 is 1?





    [quote]<strong>

    Palettized colors are stored the same way inside the file as non-palettized ones.

    </strong><hr></blockquote>



    What do you mean?

    Of course they are just bytes (we're talking about DACs and VRAM here, not files, BTW, even though the situation is the same there), but that's of course true for anything stored anywhere inside your computer. The difference is that they are interpreted in different ways, depending on whether they are direct or indexed colors.





    [quote]<strong>

    the only difference is that the _whole screen_ is changed to the custom palette as specified before (that is, separately from when) the colors themselves are read.

    </strong><hr></blockquote>



    The difference between palettized colors and direct colors is that the former ones are translated using the color lookup table, and the corresponding values from the CLUT are then passed on to the RAMDACs. Direct colors, on the other hand, are passed straight to the DACs.

    For example, in 8-bit-mode, each pixel can be any one of 256 possible colors stored in the CLUT. Each one of the 256 CLUT entries contains three bytes, one each for red, green and blue.

    So color no. 1 might translate to (0,0,255), meaning bright blue, whereas color 15 might translate to (255,0,255), meaning violet. That way, your palette contains 256 colors out of 16.7mio., but you can only ever have at most 256 of them on your screen simultaneously.
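    The CLUT mechanism described above can be sketched in a few lines (a hypothetical palette; the entries for colors 1 and 15 are just the examples from the post):

```python
# Sketch of 8-bit indexed color, following the CLUT description above.
# The palette contents are made up for illustration.

clut = [(0, 0, 0)] * 256   # 256-entry color lookup table: index -> (R, G, B)
clut[1] = (0, 0, 255)      # color no. 1: bright blue
clut[15] = (255, 0, 255)   # color no. 15: violet

def indexed_to_rgb(pixel_index):
    """Palettized mode: the framebuffer value is an index into the CLUT."""
    return clut[pixel_index]

print(indexed_to_rgb(1))   # (0, 0, 255)
print(indexed_to_rgb(15))  # (255, 0, 255)
```

    In direct-color modes there is no lookup step at all: the channel values in the framebuffer go straight to the DACs.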





    [quote]<strong>that's why the colors that weren't designed with that palette in mind (like your desktop, while playing a windowed game) looked all phreaky.

    </strong><hr></blockquote>



    No, that happens because your game overwrites CLUT entries with its own values, and suddenly the pixel colors translate to different ones than before (i.e. color no. 1 might now have become (63, 63, 63), or dark grey). Since the desktop still "thinks" color no. 1 is bright blue, all bright blue areas on the screen surrounding the game turn into dark grey.



    Read the links I mentioned above, I hope they'll clear some things up.





    [quote]<strong> [Hmmm] :confused: [Skeptical]

    That sounds a little too fishy even for AppleInsider. I'd like a little substantiation on that one before I'll swallow it.

    </strong><hr></blockquote>



    What part of it do you fail to understand?



    Imagine a 16 bit color value in VRAM:

    xxxxxxxxyyyyyyyy

    --1 byte--1 byte



    If the graphics card wants to put this on screen, it interprets it in the following way:



    rrrrrggggggbbbbb

    --red-green-blue



    Example:

    Let's assume a value of 52647 at some given address in VRAM. In binary representation, this is:



    1100110110100111



    Now, when the graphics card reads that value, it takes the first 5 bits and sends them to the red DAC, then sends the next 6 bits to the green DAC, and finally the remaining bits to the blue DAC:



    Red: 11001 = 25

    Green: 101101 = 45

    Blue: 00111 = 7





    In a hypothetical 16 bit indexed color mode, the graphics card would look up the 52647th entry in the color lookup table, and then load the individual DACs with the values stored there.
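    The decoding walkthrough above can be written out as a short sketch (bit positions per the RGB565 layout described):

```python
def decode_rgb565(value):
    """Split a 16-bit RGB565 value into its (red, green, blue) channels."""
    red = (value >> 11) & 0x1F    # first 5 bits
    green = (value >> 5) & 0x3F   # next 6 bits
    blue = value & 0x1F           # last 5 bits
    return red, green, blue

# The worked example from the post: 52647 = 0b1100110110100111
print(decode_rgb565(52647))  # (25, 45, 7)
```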



    Got it?



    Bye,

    RazzFazz



    [ 04-12-2002: Message edited by: RazzFazz ]
  • Reply 106 of 126
    OK. I don't know about all the programming stuff going on here, but Eric is correct about the audio stuff.



    CDs are recorded at 16bit/44.1kHz (not 48kHz), and this was specified as a minimal standard. 44.1kHz is just about double the highest frequency that we can hear (up to 20kHz). In theory, you can get back the original analog signal, but it requires a lot of processing power and is never achieved in practice. Newer audio "standards" (in quotes because they haven't really caught on) such as SACD and DVD-Audio go up to 24bit/192kHz.
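    (A back-of-the-envelope sketch, not from the post itself: the standard rule of thumb is that each bit of sample depth buys roughly 6 dB of theoretical dynamic range.)

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit quantizer (~6.02 dB per bit)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB -- the CD format
print(round(dynamic_range_db(24), 1))  # 144.5 dB -- 24-bit formats
```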



    Why bother? Because the difference is audible. Probably not on a $40 boom box, but with good headphones or a high-end stereo you can hear the difference. The SACDs just sound more "natural", more like a good-quality vinyl LP, than standard CDs do.
  • Reply 107 of 126
    programmer Posts: 3,458member
    [quote]Originally posted by Eric D.V.H:

    <strong>

    [Hmmm] :confused: [Skeptical]

    That sounds a little too fishy even for AppleInsider. I'd like a little substantiation on that one before I'll swallow it.

    </strong><hr></blockquote>



    Eric, I'm sorry but you clearly don't know much about computer graphics hardware. I've been doing this stuff for ~18 years now (gawd, has it been that long?!!).



    1-bit allowed a 32-bit CPU to manipulate 32 pixels per word.

    4-bit was palettized (16 entries) and the actual colour values were stored in the palette as 32-bit words (actually on the Mac it was 3 16-bit values, IIRC).

    8-bit was palettized (256 entries) and the actual colour values were stored in the palette in the same form as for 4-bit.

    16-bit on the Mac was always 1-bit unused, and 5 bits each of red, green, and blue. On the PC they commonly used 5 bits of red and blue and 6 bits of green.

    32-bit for the first few years was just 8-bits each of red, green, and blue but aligned to 32-bits for performance reasons. Some PC cards stored the 8-bit channels separately, and some stored only 24-bits, but this was very inefficient to draw to. Some early Mac video cards did hardware tricks to align it to 32-bits but not provide the actual RAM for the unused 8-bits. All modern hardware actually has the extra 8-bits, and new graphics systems use it as the alpha or stencil buffer.



    If you want to see this oddness, paint a pure colour gradient (i.e. 0-100% blue, 0% red, 0% green) and count the bands. In 16-bit you'll see 32, in 32-bit you'll see 256... assuming your video card actually has an 8-bit DAC. Many of the older cards, especially on the PC, only had 6 or 7 bit DACs. Older LCD panels' display hardware could only display 5-7 bits of colour precision (I don't know about the newer ones, I haven't been paying much attention lately). Also note that the colour channels from the frame buffer each go through their own "gamma lookup table" (glut) before going to the DAC. This is used by Apple's ColorSync to adjust the monitor's output to match the standard(s). This GLUT is a table of 256 entries, each of which is 8-bits, IIRC. This means if it's anything but linear then you may have duplicate entries when in 32-bit mode, resulting in fewer effective possible colours.
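    The band-counting test can also be simulated rather than eyeballed (a sketch; `steps` just has to be much finer than the coarsest quantization):

```python
def count_bands(bit_depth, steps=4096):
    """Quantize a 0-100% single-channel gradient and count distinct levels."""
    levels = (1 << bit_depth) - 1
    quantized = {round(i / (steps - 1) * levels) for i in range(steps)}
    return len(quantized)

print(count_bands(5))  # 32 bands: 16-bit mode's 5 bits of blue
print(count_bands(8))  # 256 bands: 32-bit mode's 8 bits per channel
```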



    Pixel memory alignment is critical for processing performance. Early video subsystems would read on a byte or 16-bit word at a time and thus the individual channels had to be an even number of bits. Newer systems generally read one bus word (64 or 128 bits) at a time and get multiple pixels out of it. For CPU graphics, however, it is still far better to have the individual colour channels be arranged so that the processor can easily get at them and manipulate them -- i.e. individual byte or word per channel. If this is not done then the CPU has to "pack" and "unpack" pixels every time it wants to change them, which is wasteful and slow (and you think Aqua is bad now!!). Graphics hardware can have a special circuit to do this, and some has even had a memory interface where this is done auto-magically (i.e. the CPU writes one format and a different format is stored).
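    The pack/unpack overhead described above looks roughly like this in code (a sketch assuming the common 32-bit xRGB layout, 8 bits per channel; with byte-aligned channels each step is a cheap shift-and-mask):

```python
def unpack_xrgb(pixel):
    """Pull the byte-aligned R, G, B channels out of a 32-bit xRGB word."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

def pack_xrgb(r, g, b):
    """Pack the channels back into one aligned 32-bit word."""
    return (r << 16) | (g << 8) | b

r, g, b = unpack_xrgb(0x00FF8040)
print((r, g, b))                # (255, 128, 64)
print(hex(pack_xrgb(r, g, b)))  # 0xff8040
```

    With an unaligned 24-bit or odd-bit-width layout, the same operation would need extra shifts, masks and cross-byte fixups on every pixel, which is the cost being described.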



    [ 04-12-2002: Message edited by: Programmer ]
  • Reply 108 of 126
    razzfazz Posts: 728member
    [quote]Originally posted by Bozo the Clown:

    <strong>OK. I don't know about all the programming stuff going on here, but Eric is correct about the audio stuff.



    CDs are recorded at 16bit/44.1kHz (not 48kHz), and this was specified as a minimal standard. 44.1kHz is just about double the frequency that we can hear at (up to 20kHz). In theory, you can get back the original analog signal, but it requires a lot processing power, and is never observed in reality. Newer audio "standards" (in quotes because they haven't really caught on) such as SACD and DVD-Audio go up to 24bit/192kHz.

    </strong><hr></blockquote>



    If you re-read my posts, you'll note that I never disagreed about the fact that CDs are 16/44.1, or that 24/96 audio makes some sense.



    Rather, I didn't agree with Eric's claim that there's a strong necessity for 64-bit audio.





    [quote]<strong>

    Why bother? Because the difference is audible. Probably not on a $40 boom box, but with good headphones or a high-end stereo you can hear the difference. The SACDs just sound more "natural" more like an good quality vinyl LP than do standard CDs.</strong><hr></blockquote>



    I'm not arguing that.



    Bye,

    RazzFazz
  • Reply 109 of 126
    programmer Posts: 3,458member
    [quote]Originally posted by RazzFazz:

    <strong>

    If you re-read my posts, you'll note that I never disagreed about the fact that CDs are 16/44.1, or that 24/96 audio makes some sense.



    Rather, I didn't agree with Eric's claim that there's a strong necessity for 64-bit audio.</strong><hr></blockquote>



    I think audio should go to 32-bit floating point. This gives about 24 bits of precision, plus it handles way out-of-bound values... and all mixing software these days runs in floating point anyhow. 24-bit is an awkward size for the machines to deal with (same alignment issues as for graphics), but it is beyond the ability of the human ear to distinguish... most people can't detect the flaws in 48kHz/16bit. 24bit is 256 times more refined, and at a higher sampling rate it should be pretty much perfect to any human. Greater internal precision is useful for avoiding losses while mixing or doing processing on the audio, but the actual output doesn't need to be better.
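    The "about 24 bits of precision" figure can be checked directly: a 32-bit float has a 24-bit significand (23 stored bits plus an implicit leading 1), so integers round-trip exactly only up to 2^24. A quick sketch:

```python
import struct

def to_float32(x):
    """Round-trip a number through a 32-bit float."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_float32(2**24) == 2**24)          # True: 16777216 is exact
print(to_float32(2**24 + 1) == 2**24 + 1)  # False: rounds to 16777216.0
```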



    Audio is a single channel of data, whereas video is 3 channels. 24-bit audio is kind of like 72-bit graphics.
  • Reply 110 of 126
    [quote]Originally posted by RazzFazz:

    <strong>Argh, of course. I meant that people originally used 24 bit wide "true colors".

    (Besides, you forgot the early "high color" video cards which had 15-bit-colors (5 bits per channel). Also, each palette entry in the Amiga was a 12-bit-color.)</strong><hr></blockquote>



    I wasn't referring to that. I was referring to "<a href="http://www.webopedia.com/TERM/t/true_color.html" target="_blank">True Color</a>" (I misspelled it), as in 24-bit graphics in general.



    [quote]Originally posted by RazzFazz:

    <strong>No. They used 32 bits to store 24bit color values for alignment reasons. Cf. Programmer's comment above.</strong><hr></blockquote>



    What I meant was: "I have no idea of why they decided to reserve the last quarter of the byte for alternate channels/maps instead of just using all 32 bits for color".



    [quote]Originally posted by RazzFazz:

    <strong>Either you accidentally called a vinyl record a laserdisc, or you just didn't make a lot of sense. Laserdiscs are digital.</strong><hr></blockquote>



    No they're not. Laserdiscs are (or at least were) 100% analog. They're optical like CDs (to increase ruggedness and media lifespan) and analog like film and records (to increase recording quality). Some of the most recent laserdiscs have alternate digital audio tracks in addition to the analog tracks, but the video is still analog.



    [quote]Originally posted by RazzFazz:

    <strong>That's still 40 bits headroom, which is still completely unrealistic.

    Have you ever seen anybody use more than 100% headroom?



    And again, while the tech stuff gets better all the time, our ears don't.</strong><hr></blockquote>



    Like I said, heavy editing can lead to heavy damage, and more overhead allows you to do more damage without anyone noticing.





    Eric,



    [ 04-12-2002: Message edited by: Eric D.V.H ]
  • Reply 111 of 126
    [quote]Originally posted by RazzFazz:

    <strong>Eric, regarding pixel formats and RGB555, RGB565 and RGB888, you might want to take a look at this stuff:



    <a href="http://grafi.ii.pw.edu.pl/gbm/matrox/ramdac.html" target="_blank">http://grafi.ii.pw.edu.pl/gbm/matrox/ramdac.html</a>

    <a href="http://www.americanpredator.com/downloads/information/Video815E.pdf" target="_blank">http://www.americanpredator.com/downloads/information/Video815E.pdf</a>

    <a href="http://www.law.emory.edu/fedcircuit/aug2001/00-1257.wp.html" target="_blank">http://www.law.emory.edu/fedcircuit/aug2001/00-1257.wp.html</a>



    (Look for "direct color".)



    </strong><hr></blockquote>



    Hmm. I guess I was wrong. um? oops.





    Eric,
  • Reply 112 of 126
    [quote]Originally posted by RazzFazz:

    <strong>Are you pulling my leg or something?

    You can't be serious here, can you?

    Now one bit is supposed to be "whole byte size" because 8 divided by 8 is 1?</strong><hr></blockquote>



    Yup, it's perfectly valid.



    [quote]Originally posted by RazzFazz:

    <strong>What do you mean?

    Of course they are just bytes (we're talking about DACs and VRAM here, not files, BTW, even though the situation is the same there), but that's of course true for anything stored anywhere inside your computer. The difference is that they are interpreted in different ways, depending on whether they are direct or indexed colors.</strong><hr></blockquote>



    What I mean is that actual palettized colors are stored in precisely the same manner as nonpalettized colors, aside from the header dictating how the synthesizer is supposed to interpret a certain binary value.



    [quote]Originally posted by RazzFazz:

    <strong>The difference between palettized colors and direct colors is that the former ones are translated using the color lookup table, and the corresponding values from the CLUT are then passed on to the RAMDACs. Direct colors, on the other hand, are passed straight to the DACs.

    For example, in 8-bit-mode, each pixel can be any one of 256 possible colors stored in the CLUT. Each one of the 256 CLUT entries contains three bytes, one each for red, green and blue.

    So color no. 1 might translate to (0,0,255), meaning bright blue, whereas color 15 might translate to (255,0,255), meaning violet. That way, your palette contains 256 colors out of 16.7mio., but you can only ever have at most 256 of them on your screen simultaneously.</strong><hr></blockquote>



    Like I said.



    [quote]Originally posted by RazzFazz:

    <strong>No, that happens because your game overwrites CLUT entries with its own values, and suddenly the pixel colors translate to different ones than before (i.e. color no. 1 might now have become (63, 63, 63), or dark grey). Since the desktop still "thinks" color no. 1 is bright blue, all bright blue areas on the screen surrounding the game turn into dark grey.</strong><hr></blockquote>



    Like I said.



    [quote]Originally posted by RazzFazz:

    <strong>Read the links I mentioned above, I hope they'll clear some things up.</strong><hr></blockquote>



    They sure did.



    [quote]Originally posted by RazzFazz:

    <strong>What part of it do you fail to understand?



    Imagine a 16 bit color value in VRAM:

    xxxxxxxxyyyyyyyy

    --1 byte--1 byte



    If the graphics card wants to put this on screen, it interprets it in the following way:



    rrrrrggggggbbbbb

    --red-green-blue



    Example:

    Let's assume a value of 52647 at some given address in VRAM. In binary representation, this is:



    1100110110100111



    Now, when the graphics card reads that value, it takes the first 5 bits and sends them to the red DAC, then sends the next 6 bits to the green DAC, and finally the remaining bits to the blue DAC:



    Red: 11001 = 25

    Green: 101101 = 45

    Blue: 00111 = 7



    In a hypothetical 16 bit indexed color mode, the graphics card would look up the 52647th entry in the color lookup table, and then load the individual DACs with the values stored there.



    Got it?</strong><hr></blockquote>



    When I said that (prior to perusing the links you gave), I understood the above perfectly. I just thought it was all poppycock.





    Eric,
  • Reply 113 of 126
    [quote]Originally posted by Programmer:

    <strong>Eric, I'm so?SNIPPED?nd a different format is stored).</strong><hr></blockquote>



    The only reason I was arguing was because, in the absence of any verifiable proof like RazzFazz's links and your gradient striation counting trick, I couldn't accept something so silly sounding.



    (So THAT'S the reason why the "Thousands of Shades of Gray" and "Millions of Shades of Gray" modes weren't any different, and why my beautiful, ditherless 24-bit texture maps would always be screwed up by dithered 8-bit bump maps as a result.)



    Eric,
  • Reply 114 of 126
    [quote]Originally posted by RazzFazz:

    <strong>I didn't agree with Eric's claim that there's a strong necessity for 64-bit audio.</strong><hr></blockquote>



    [quote]Originally posted by Programmer:

    <strong>I think audio should go to 32-bit floating point. This gives about 24 bits of precision, plus it handles way out-of-bound values... and all mixing software these days runs in floating point anyhow. 24-bit is an awkward size for the machines to deal with (same alignment issues as for graphics), but it is beyond the ability of the human ear to distinguish... most people can't detect the flaws in 48kHz/16bit. 24bit is 256 times more refined, and at a higher sampling rate it should be pretty much perfect to any human.</strong><hr></blockquote>



    Yeah, but 32-bit only affords about a 33% safety net over 24-bit for editing/mixing, as opposed to the much larger 100% and 170% safety nets for 48-bit and 64-bit audio (respectively).



    [quote]Originally posted by Programmer:

    <strong>Greater internal precision is useful for avoiding losses while mixing or doing processing on the audio, but the actual output doesn't need to be better.</strong><hr></blockquote>



    Yes it does. The professionals that make the mixes need 48-64-bit digitizers so they have _something_ to work with, and as hardware downsampling/dithering can't possibly compare to the hand-downsampled mix the audience will hear, those professionals also need 48-64-bit synthesizers to go with them. And these professionals will be using Macs. And as you can perceive the extra 16-32 bits of quality (if not consciously, then at a deeper, but still perceived, level), you might as well bump up the I/O on normal Macs for music lovers, garage bands, gamers and numerous others. Or maybe make it a BTO option and let them decide.



    Eric.
  • Reply 115 of 126
    And as for whether or not a 64-bit CPU is needed for 48-64-bit video, even with quarter-byte (or tri-byte, in the case of even 64-bit bytes) video, my opinion is still yes.



    Assuming a 48-bit byte of video data had to be passed through the integer unit of a 32-bit CPU in 16-bit-long chunks, it could only eat through two of them in a cycle, thus bringing back that nasty two-cycles-per-pixel penalty, whereas the integer unit of a 64-bit CPU would happily take all three pieces in one cycle.



    For this reason, the need for the G5 to be 64-bit (which it already is) in order to properly use both 48-64-bit audio AND video still stands as before.



    Eric,
  • Reply 116 of 126
    razzfazz Posts: 728member
    [quote]Originally posted by Eric D.V.H:

    <strong>

    No they're not. Laserdiscs are (or at least were) 100% analog. They're optical like CDs (to increase ruggedness and media lifespan) and analog like film and records (to increase recording quality). Some of the most recent laserdiscs have alternate digital audio tracks in addition to the analog tracks, but the video is still analog.

    </strong><hr></blockquote>



    Could you provide a link or something? I find this a little hard to believe. How would you store analog data on optical media?



    EDIT: Never mind, looked it up myself. They indeed seem to have analog tracks. My apologies. Still wondering how this is stored optically, though... :confused:





    [quote]<strong>Like I said, heavy editing can lead to heavy damage, and more overhead allows you to do more damage without anyone noticing.

    </strong><hr></blockquote>



    And 40 bits of overhead give you a little more than 1 trillion operations on a single sample before you have rounding errors. Can anyone say overkill?





    [quote]<strong>

    What I mean is that actual palettized colors are stored in precisely the same manner as nonpalettized colors, aside from the header dictating how the synthesizer is supposed to interpret a certain binary value.

    </strong><hr></blockquote>



    And your point was?





    [quote]<strong>

    Yeah, but 32-bit only affords about a 33% safety net over 24-bit for editing/mixing, as opposed to the much larger 100% and 170% safety nets for 48-bit and 64-bit audio (respectively).

    </strong><hr></blockquote>



    It seems to escape you that the resolution grows exponentially with bit depth.

    With 24 bit audio, you can distinguish 2^24=16.8mio levels of amplitude ("sonic pressure"?). Adding another 8 bits doesn't just add 5.6mio (one third of 16.8mio) to that number, but rather multiplies it by 2^8=256. 64 bit audio would multiply the number by more than one trillion.
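    The multiplication here is easy to check (the "more than one trillion" factor is 2^40):

```python
levels_24 = 2 ** 24  # amplitude levels at 24-bit: 16,777,216
levels_32 = 2 ** 32
levels_64 = 2 ** 64

print(levels_32 // levels_24)  # 256: 8 extra bits multiply the levels by 2**8
print(levels_64 // levels_24)  # 1099511627776: 2**40, over a trillion times
```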





    [quote]<strong>

    Yes it does. The professionals that make the mixes need 48-64-bit digitizers so they have _something_ to work with, and as hardware downsampling/dithering can't possibly compare to the hand-downsampled mix the audience will hear, those professionals also need 48-64-bit synthesizers to go with them.

    </strong><hr></blockquote>



    48-bit digitizers? As in 48-bit ADCs? I don't think something like that even exists in the audio field.



    Also, it would be nice if you could provide a link or anything that hints towards any existing piece of audio gear that has 48 or 64 bits of internal resolution.





    [quote]<strong>

    And as for whether or not a 64-bit CPU is needed for 48-64-bit video, even with quarter-byte (or tri-byte, in the case of even 64-bit bytes) video, my opinion is still yes.

    Assuming a 48-bit byte of video data had to be passed through the integer unit of a 32-bit CPU in 16-bit-long chunks, it could only eat through two of them in a cycle, thus bringing back that nasty two-cycles-per-pixel penalty, whereas the integer unit of a 64-bit CPU would happily take all three pieces in one cycle.

    </strong><hr></blockquote>



    Did you actually read my previous posts at all?

    I hate to repeat myself, but as I already said at some point earlier in this thread, AltiVec can do exactly that, and can even do it for two such 64-bit-pixels at a time, in a single cycle, right now, with the current G4. This is exactly what it excels at.



    Bye,

    RazzFazz



    [ 04-12-2002: Message edited by: RazzFazz ]
  • Reply 117 of 126
    airsluf Posts: 1,861member
  • Reply 118 of 126
    programmer Posts: 3,458member
    [quote]Originally posted by Eric D.V.H:

    <strong>

    Yeah, but 32-bit only affords about a 33% safety net over 24-bit for editing/mixing, as opposed to the much larger 100% and 170% safety nets for 48-bit and 64-bit audio (respectively).

    </strong><hr></blockquote>



    As Razz has already pointed out, you have missed the fact that adding bits exponentially increases the data range. Going from 16-bit samples to 32-bit float samples increases the precision by a factor of 25600%, and it lets you represent very large and very small values precisely. This should be more than sufficient for any input/output, although heavy processing might use larger internal formats to avoid computational errors.



    I'm even a little skeptical whether current technology allows equipment to be built (at sub-million-dollar prices, anyhow) that doesn't introduce noise caused by the circuitry at the level of 24-bit precision.



    Note that a 5.1 audio signal at 32-bits per channel would be 192-bits per sample.
  • Reply 119 of 126
    powerdoc Posts: 8,123member
    As I have said here, the analog circuits of the Power Mac are too bad to make a difference between 24 and 16 bits. If I want to hear good sound I do not listen to my Mac, even with my i stiks and subwoofer; I listen to my hi-fi system. Both are 16-bit, but the hi-fi is much, much better.

    If I change the CD player of my hi-fi system to a DVD-Audio player it will make an improvement, but 24-bit audio on my Mac will make no difference; the analog audio circuits are too basic. $5 worth of analog audio circuitry cannot match a $1000 one.
  • Reply 120 of 126
    mmaster Posts: 17member
    LaserDiscs are not analog. This is a big myth. LD video may not be like DVD video, but it is definitely not analog.



    LD video, and sometimes sound, is stored as an analog signal converted to a digital format using a form of pulse FM. Basically, you pick a base frequency, then take your signal and clip around it to get the 0's and 1's to store on the disc. During playback, you use the knowledge of the base frequency and the 0's and 1's to reform the signal. Of course some info is lost, since there is no way you can store something continuous (an analog signal) in a discrete format (digital) without losing something. Just like you can't make a mapping of R to Z that is a bijection.



    So as you can see, LD data is discrete and thus is not analog. The method takes a video signal (NTSC for the US) and encodes it in binary. It is no more analog than CD audio is.



    [ 04-13-2002: Message edited by: mmaster ]


