When everything is 64 bit...


Comments

  • Reply 21 of 59
    Quote:

    Originally posted by Tidris

    The G5 is faster with floating point operations than with integer operations, so I would expect to see more use of floating point operations, not less.



    Right, except if you need accuracy or anything like that.



    (I'm a firm believer in the non-existence of real numbers anyway, so to me floating point is useless.)
  • Reply 22 of 59
    tidris Posts: 214 member
    Quote:

    Originally posted by Anonymous Karma

    Right, except if you need accuracy or anything like that.





    What do you mean by that? Aren't the floating point units in the G5 double precision?
  • Reply 23 of 59
    amorph Posts: 7,112 member
    Quote:

    Originally posted by Tidris

    What do you mean by that? Aren't the floating point units in the G5 double precision?



    He's comparing integers to floats. Integers are guaranteed 100% accurate at the expense of range. Floats offer a much larger range at the expense of accuracy - there's approximation involved which is essentially negligible toward the center of the range, getting worse toward the edges. 64 bit floats offer greater precision and better range, but it's still "lossy."



    For example, one of the main reasons audio is moving to 24 bit is that 16 bit introduces distortion at low amplitudes, because of the paucity of values near minimum.
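
    A minimal C sketch of that "lossy" behavior, assuming ordinary IEEE 754 floats:

        #include <stdio.h>

        int main(void)
        {
            /* Every whole number up to 2^24 fits exactly in a single's
               24-bit significand; the next one gets rounded away. */
            float exact   = 16777216.0f;   /* 2^24     */
            float rounded = 16777217.0f;   /* 2^24 + 1 */

            printf("%.1f\n", exact);    /* 16777216.0                   */
            printf("%.1f\n", rounded);  /* 16777216.0 -- the +1 is lost */

            /* A 64-bit double pushes the same cliff out to 2^53. */
            printf("%.1f\n", 9007199254740993.0);  /* 9007199254740992.0 */
            return 0;
        }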
  • Reply 24 of 59
    tidris Posts: 214 member
    Quote:

    Originally posted by Amorph

    He's comparing integers to floats. Integers are guaranteed 100% accurate at the expense of range. Floats offer a much larger range at the expense of accuracy - there's approximation involved which is essentially negligible toward the center of the range, getting worse toward the edges. 64 bit floats offer greater precision and better range, but it's still "lossy."



    I am very familiar with the integer and floating point formats. To say floating point numbers aren't useful when one needs accuracy is wrong. The opposite is in fact most often the case. Try representing the value of PI or the interest rate for your mortgage loan with an integer and see what kind of accuracy you get. Try calculating jet engine noise or a fast Fourier transform using integer values and see what kind of garbage results you end up with. Of course I know that floating point values aren't optimal for all purposes, such as when one needs a simple counter in a loop or an index into an array. However, in cases when an integer is the best choice, a 32-bit integer usually has more than enough dynamic range to do the job. If a 32-bit integer has enough dynamic range, then switching to a 64-bit integer does nothing but waste memory and possibly slow the computation to some extent.
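
    A quick C sketch of the pi example, with the double carrying about 16 significant digits:

        #include <stdio.h>

        int main(void)
        {
            int    pi_int = 3;                    /* the best a bare integer can do */
            double pi_dbl = 3.141592653589793;

            /* Circumference of a circle with radius 1000. */
            printf("integer pi: %d\n",   2 * pi_int * 1000);      /* 6000        */
            printf("double pi:  %.6f\n", 2.0 * pi_dbl * 1000.0);  /* 6283.185307 */
            return 0;
        }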
  • Reply 25 of 59
    whisper Posts: 735 member
    Quote:

    Originally posted by Tidris

    I am very familiar with the integer and floating point formats. To say floating point numbers aren't useful when one needs accuracy is wrong. The opposite is in fact most often the case. Try representing the value of PI or the interest rate for your mortgage loan with an integer and see what kind of accuracy you get. Try calculating jet engine noise or a fast Fourier transform using integer values and see what kind of garbage results you end up with.



    What about fixed point math?
  • Reply 26 of 59
    programmer Posts: 3,457 member
    Quote:

    Originally posted by Amorph

    For example, one of the main reasons audio is moving to 24 bit is that 16 bit introduces distortion at low amplitudes, because of the paucity of values near minimum.



    This has nothing to do with integer vs. floating point. Audio hardware is typically all integer representations anyhow. Audio software that uses floating point generally uses 32-bit floats, which carry a 24-bit significand (23 stored bits plus an implied leading 1) and hence are as "accurate" as a 24-bit integer representation.



    The real source of problems with lower resolution audio is that the sensitivity of the human ear is non-linear while the distribution of digital values across the spectrum is linear. At least that's what I think.
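
    A brute-force check of that equivalence, assuming IEEE 754 single precision:

        #include <stdio.h>

        int main(void)
        {
            /* Every signed 24-bit sample value survives a round trip
               through a 32-bit float unchanged. */
            long mismatches = 0;
            for (long s = -(1L << 23); s < (1L << 23); ++s)
                if ((long)(float)s != s)
                    ++mismatches;

            printf("mismatches: %ld\n", mismatches);  /* prints 0 */
            return 0;
        }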
  • Reply 27 of 59
    programmer Posts: 3,457 member
    Quote:

    Originally posted by Whisper

    What about fixed point math?



    Fixed point math is pretty much just integer math except using different units. e.g. instead of working in meters you work in millimeters. If you divide your smallest unit by two you end up with a round-off error. Floating point is only different in that each number also tracks its "units" in the exponent so if you divide by two the units of the number shrink. Rounding errors still occur all over the place when combining numbers of different exponents, or when reaching the limits of the exponent's representation. Floating point isn't inherently less accurate than integer, it's just that people don't understand where accuracy is lost.
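
    A tiny sketch of that round-off, using millimeters as the fixed unit:

        #include <stdio.h>

        int main(void)
        {
            /* Fixed point: store lengths as integer millimeters. */
            int length_mm = 5;              /* 0.005 m                  */
            int half_mm   = length_mm / 2;  /* 2 mm; the 0.5 mm is lost */

            printf("half of 5 mm, fixed point: %d mm\n", half_mm);

            /* Floating point shrinks the "units" instead of rounding here. */
            printf("half of 0.005 m, double: %g m\n", 0.005 / 2.0);  /* 0.0025 */
            return 0;
        }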
  • Reply 28 of 59
    bigc Posts: 1,224 member
    Lost me there, Units?
  • Reply 29 of 59
    programmer Posts: 3,457 member
    Quote:

    Originally posted by Bigc

    Lost me there, Units?



    Fixed point math is just a matter of doing what looks like fractional calculations with integers. I was just trying (poorly, apparently) to explain this in terms of the units of measurement used. A program that is tracking a number, say distance for the sake of argument, can choose to represent this in units of meters. If the number is an integer then the program cannot represent anything less than one meter, and it cannot represent any distance that falls between whole-meter values -- i.e. only 0 meters, 1 meter, 2 meters, etc. can be represented. If you have a distance that is 0.2 meters then the program must choose to either represent this as 0 meters or 1 meter, neither of which is quite right. One way around this is to change the program so that it represents distances in millimeters instead of meters -- now the 0.2 meter distance is represented by the integer 200 and is quite precise. A distance of 0.0002 meters, however, is either 0 or 1 millimeter. If this matters you could represent your distances with even smaller units (nanometers, or something).



    This is all fine and dandy until you have to measure a larger distance -- not only are integers in a computer limited to integral values, they are also limited in magnitude based on how many bits they are (~2 billion in the case of a signed 32-bit integer). If you are representing distances as 32-bit signed integers in units of millimeters and somebody gives you a distance of 300 million kilometers, you have a problem. The smaller you make your units, the smaller the biggest measurement you can represent. The larger the units, the less accurate the smallest one.
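
    In numbers, assuming a signed 32-bit integer holding millimeters:

        #include <stdio.h>
        #include <limits.h>

        int main(void)
        {
            /* 300 million kilometers expressed in millimeters. */
            long long needed_mm = 300000000LL * 1000 * 1000;  /* 3 x 10^14 */

            printf("needed:     %lld mm\n", needed_mm);
            printf("32-bit max: %d mm\n", INT_MAX);  /* 2147483647 -- not even close */
            return 0;
        }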



    Floating point gets around this by taking the available bits (32 for single precision, 64 for double precision) and dividing them into two groups: the mantissa and the exponent. The mantissa is essentially an integer, and the exponent is essentially the "units". "Scale" is a better word in this case, but it really refers to the same thing.
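
    A sketch of that split for an IEEE 754 single (1 sign bit, 8 exponent bits, 23 stored mantissa bits):

        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>

        int main(void)
        {
            float f = 6.5f;                 /* 1.625 x 2^2 */
            uint32_t bits;
            memcpy(&bits, &f, sizeof bits);

            unsigned sign     = bits >> 31;
            unsigned exponent = (bits >> 23) & 0xFF;   /* stored with a +127 bias     */
            unsigned mantissa = bits & 0x7FFFFF;       /* fraction; leading 1 implied */

            printf("sign=%u exponent=%d mantissa=0x%06X\n",
                   sign, (int)exponent - 127, mantissa);
            /* sign=0 exponent=2 mantissa=0x500000 */
            return 0;
        }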



    --------

    Related topic:



    And before somebody else pipes up and mentions variable precision integers... the integers (and floating point numbers, for that matter) discussed above are all in terms of the representation that the processor supports as a basic type. A 32-bit processor has registers that hold 32-bit values, and it (usually) has operations that allow it to add, subtract, multiply and divide those 32-bit values. This is the "native word size" of the machine. A 64-bit machine can deal with 64-bit numbers in hardware and so it can efficiently deal with much larger numbers. If you think back to your basic elementary school arithmetic, however, you'll remember that kids are (usually) first taught how to deal with single digit numbers (0..9). When you've got those licked they teach you a bunch of rules about how to add, subtract, multiply and divide numbers made of several digits. You're also taught that each digit represents ten times what the digit to the right of it does. By applying those rules you can do math on very large numbers.



    I hope they still teach this stuff in school rather than relying on calculators because it is really important... here's why: your teacher probably taught you only about a number system where there are ten digits and each "place" in the number represented a change of a factor of ten. If you generalize this, however, you realize that you can choose any "base", not just ten. Humans chose ten a long time ago, probably due to the number of fingers we have, but it is no more valid than any other base. Current computer hardware uses "base 2" where each digit is either zero or one, and each digit is half/double the value of the one to the left/right of it (this is called binary). Programmers often use base 8 (called "octal") and base 16 (called "hexadecimal") because writing in binary is very tedious and hard to read. Converting binary <-> decimal is messy whereas octal and hexadecimal convert quite nicely since 8 and 16 are powers of 2, so as a result 3 binary digits ("bits") is 1 octal digit and 4 binary digits is 1 hexadecimal digit.
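
    For example, the same value written in all three programmer-friendly bases:

        #include <stdio.h>

        int main(void)
        {
            unsigned n = 2003;
            /* 3 bits per octal digit, 4 bits per hex digit. */
            printf("decimal %u = octal %o = hex %X\n", n, n, n);
            /* decimal 2003 = octal 3723 = hex 7D3 */
            return 0;
        }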



    In software it is common to use a base where the size of a "place" (or "digit" if you prefer) in the larger number is the native word size of the machine. In a 32-bit machine this is a 32-bit number. Instead of each place having ten possible values, therefore, each place has 4 billion possible values. Moving one "place" to the left increases the value of your "digit" by a factor of approximately 4 billion. A two "digit" 32-bit number is equivalent to a 64-bit number, so by doing essentially the long-hand math you learned in school a 32-bit computer can do 64-bit integer calculations. A 64-bit machine, of course, can do those 64-bit calculations directly in hardware, and doesn't have to go through the work of doing long hand math. If it wanted to do a 128-bit integer, of course, then it would need two of its "digits"... but a 32-bit machine would need 4. There is no reason to stop there, and this is exactly what software like Mathematica does -- they have a generalized "variable precision" implementation which tracks numbers that are as many digits long as they need to be to represent the number in question (i.e. just like we write big numbers... you only write down the digits you need to correctly represent the number). This longhand math is quite slow, however, and most computer languages don't provide good support for it so very few developers use it (or need it, since most numbers are more than small enough to fit in a 32-bit integer).
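
    A minimal sketch of that longhand carry, adding two 64-bit numbers one 32-bit "digit" at a time:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            /* Each operand is two base-2^32 "digits": {hi, lo}. */
            uint32_t a_lo = 0xFFFFFFFFu, a_hi = 0x00000001u;  /* 0x1FFFFFFFF */
            uint32_t b_lo = 0x00000002u, b_hi = 0x00000000u;  /* 2           */

            uint32_t sum_lo = a_lo + b_lo;
            uint32_t carry  = (sum_lo < a_lo);      /* did the low place wrap around? */
            uint32_t sum_hi = a_hi + b_hi + carry;

            printf("longhand: 0x%X%08X\n", sum_hi, sum_lo);
            printf("native:   0x%llX\n", 0x1FFFFFFFFULL + 2ULL);
            /* both print 0x200000001 */
            return 0;
        }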



    Note that essentially this same technique can be used with floating point numbers, but it gets quite a bit more confusing!



    The use of the word "digit" above is also a bit confusing since it commonly refers to 0..9, but we don't have another good word for the basic contents of a "place" in a number.



    There you go, "remedial math 101".



    (that's "math 0x5" for the real geeks in the crowd, who I'm sure will post plenty of corrections)
  • Reply 30 of 59
    amorph Posts: 7,112 member
    Quote:

    Originally posted by Programmer

    This has nothing to do with integer vs. floating point.



    Um, no it doesn't.



    It has something to do with 16 bit FP being inadequate to the task, as I said. I was using audio as an example of FP's "lossy" characteristics.
  • Reply 31 of 59
    programmer Posts: 3,457 member
    Quote:

    Originally posted by Amorph

    Um, no it doesn't.



    It has something to do with 16 bit FP being inadequate to the task, as I said. I was using audio as an example of FP's "lossy" characteristics.




    I'm not aware of any 16-bit floating point hardware on the audio market...? The latest GPUs have support for it as a storage format, but they use higher precision during computations to preserve image quality. As far as I know all the 16-bit audio hardware is based on integer DSPs, or else it uses 32-bit floating point internally and converts down to 16 bit integer resolution... losing accuracy at that stage, hence demonstrating that floating point is more accurate than integer.
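
    A sketch of that final conversion step (simplified: no dithering, positive sample only):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            /* A float sample in [-1, 1] reduced to 16-bit integer resolution:
               this conversion is where the precision actually goes. */
            float   in  = 0.000030f;                        /* a very quiet sample */
            int16_t out = (int16_t)(in * 32767.0f + 0.5f);  /* crude rounding      */

            printf("float %.6f -> int16 %d -> back %.6f\n",
                   in, out, out / 32767.0f);
            /* float 0.000030 -> int16 1 -> back 0.000031 */
            return 0;
        }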
  • Reply 32 of 59
    chris cuilla Posts: 4,825 member
    I remember...way back when...no one could ever imagine why anyone would need 32-bit computers...and before that...16-bit computers.



    It's about moving data around faster. More memory. Greater precision in computations. All of these things will be useful in a variety of applications, but most notably those that...need to move data around faster...more memory and greater precision.
  • Reply 33 of 59
    whisper Posts: 735 member
    Quote:

    Originally posted by Programmer

    Fixed point math is pretty much just integer math except using different units. e.g. instead of working in meters you work in millimeters. If you divide your smallest unit by two you end up with a round-off error. Floating point is only different in that each number also tracks its "units" in the exponent so if you divide by two the units of the number shrink. Rounding errors still occur all over the place when combining numbers of different exponents, or when reaching the limits of the exponent's representation. Floating point isn't inherently less accurate than integer, it's just that people don't understand where accuracy is lost.



    Heh, I just looked up the format for an IEEE floating point number and found out that I've been wrong about a rather important bit (1+sig, not 1.sig). Ok, I like FP math now. (I still want a fast hardware implementation of arbitrary precision numbers though).
  • Reply 34 of 59
    programmer Posts: 3,457 member
    Quote:

    Originally posted by Whisper

    Heh, I just looked up the format for an IEEE floating point number and found out that I've been wrong about a rather important bit (1+sig, not 1.sig). Ok, I like FP math now. (I still want a fast hardware implementation of arbitrary precision numbers though).



    http://developer.apple.com/hardware/...libraries.html



    Not quite arbitrary, but certainly big and fast. I suspect that is going to be as close as you're going to get to hardware support for it.
  • Reply 35 of 59
    whisper Posts: 735 member
    Quote:

    Originally posted by Programmer

    http://developer.apple.com/hardware/...libraries.html



    Not quite arbitrary, but certainly big and fast. I suspect that is going to be as close as you're going to get to hardware support for it.




    Cool, thanks. On a somewhat related note, how long before we have Mathematica on a chip?
  • Reply 36 of 59
    programmer Posts: 3,457 member
    Quote:

    Originally posted by Whisper

    Cool, thanks. On a somewhat related note, how long before we have Mathematica on a chip?



    I don't think that is the direction hardware is going. Instead we'll get more cores and more threads per core... each with SIMD units for processing long streams of data. A proper variable length number representation using something like AltiVec is possible, and with bandwidth oriented architectures they should do a reasonable job of grinding through these sorts of computations. Mathematica is a very small portion of the overall computing market, so that's not where I expect dedicated hardware to be applied. Graphics, on the other hand...
  • Reply 37 of 59
    bigc Posts: 1,224 member
    Quote:

    Originally posted by Programmer

    Fixed point math is just a matter of doing what looks like fractional calculations with integers.....



    OK, got ya. I never thought of doing anything in other than floating point, with integers just as counters -- but that's the Fortran world and programs for my own calculations.





    Thanks for your time
  • Reply 38 of 59
    tidris Posts: 214 member
    Quote:

    Originally posted by Whisper

    What about fixed point math?



    I am familiar with it. I used fixed point math once in a program meant to run on an embedded 8-bit CPU. The CPU didn't support floating point math in hardware, nor did it have enough memory to hold the libraries needed to do floating point math in software.



    One does fixed point math using integer operations. Compared to floating point math on a G5, fixed point math is guaranteed to be slower. That is because the G5 can do the floating point math faster and also because fixed point math often requires additional scaling steps not needed when using floating point math.
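
    A sketch of one of those extra scaling steps, using a Q16.16 format purely as an illustration:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            /* Q16.16 fixed point: every value is scaled by 2^16.  A multiply
               yields a double-scaled result, so an extra shift is needed to
               rescale -- a step floating point hardware does implicitly. */
            int32_t a = (int32_t)(1.5  * 65536);  /* 1.5  */
            int32_t b = (int32_t)(2.25 * 65536);  /* 2.25 */

            int32_t product = (int32_t)(((int64_t)a * b) >> 16);  /* rescale */

            printf("1.5 * 2.25 = %f\n", product / 65536.0);  /* 3.375000 */
            return 0;
        }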
  • Reply 39 of 59
    Quote:

    Originally posted by Tidris

    I am very familiar with the integer and floating point formats. To say floating point numbers aren't useful when one needs accuracy is wrong. The opposite is in fact most often the case. Try representing the value of PI or the interest rate for your mortgage loan with an integer and see what kind of accuracy you get.



    Wow, looks like I set off a flamewar when I wasn't paying attention.



    Pi doesn't exist anyway, so it's not an issue.



    In other words, what I'm claiming is that everything is discrete, so floating point numbers are simply a useless attempt at approximating something that has no basis in reality. This, however, is not the right forum for a philosophical debate, and my comment should be interpreted as a joke.
  • Reply 40 of 59
    mr. me Posts: 3,221 member
    Quote:

    Originally posted by Anonymous Karma

    Wow, looks like I set off a flamewar when I wasn't paying attention.



    Pi doesn't exist anyway, so it's not an issue.



    In other words, what I'm claiming is that everything is discrete, so floating point numbers are simply a useless attempt at approximating something that has no basis in reality. This, however, is not the right forum for a philosophical debate, and my comment should be interpreted as a joke.




    Pi doesn't exist? Are you serious? That's not philosophy, it's just silly. I would suggest that you go to high school and take some math. Then you might do well to take some science and engineering courses.