Arbitrary Precision Arithmetic

Discussion in 'Engineering Concepts' started by Amit Ray, Jul 30, 2004.

  1. Amit Ray

    Amit Ray New Member


    Arbitrary Precision



    In most computer programs and computing environments, the precision of any calculation (even simple addition) is limited by the word size of the computer, that is, by the largest number that can be stored in one of the processor's registers. As of mid-2002, the most common processor word size is 32 bits, corresponding to the integer 2^32 = 4294967296. General integer arithmetic on a 32-bit machine therefore allows addition of two 32-bit numbers to get 33 bits (one word plus an overflow bit), multiplication of two 32-bit numbers to get 64 bits (although the most prevalent programming language, C, cannot access the higher word directly and depends on the programmer either to write a machine-language function or to emulate the wide multiply in much slower C code, at an overhead of roughly nine extra multiplies), and division of a 64-bit number by a 32-bit number, yielding a 32-bit quotient and a 32-bit remainder/modulus.
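
    To make the multiplication point concrete, here is a minimal sketch (not part of the original article) of how a 32 x 32 -> 64-bit product can be emulated in portable C using only 32-bit operations: split each operand into 16-bit halves and recombine the four partial products. The function name mul32_wide is just an illustrative choice.

    #include <stdio.h>
    #include <inttypes.h>

    /* Multiply two 32-bit values into separate high and low 32-bit words
     * using only 32-bit arithmetic, by splitting each operand into
     * 16-bit halves and adding the partial products with explicit carries. */
    void mul32_wide(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
    {
        uint32_t a_lo = a & 0xFFFFu, a_hi = a >> 16;
        uint32_t b_lo = b & 0xFFFFu, b_hi = b >> 16;

        uint32_t p0 = a_lo * b_lo;   /* contributes to bits  0..31 */
        uint32_t p1 = a_lo * b_hi;   /* contributes to bits 16..47 */
        uint32_t p2 = a_hi * b_lo;   /* contributes to bits 16..47 */
        uint32_t p3 = a_hi * b_hi;   /* contributes to bits 32..63 */

        /* middle column: upper half of p0 plus lower halves of p1 and p2 */
        uint32_t mid = (p0 >> 16) + (p1 & 0xFFFFu) + (p2 & 0xFFFFu);

        *lo = (p0 & 0xFFFFu) | (mid << 16);
        *hi = p3 + (p1 >> 16) + (p2 >> 16) + (mid >> 16);
    }

    int main(void)
    {
        uint32_t hi, lo;
        mul32_wide(0xFFFFFFFFu, 0xFFFFFFFFu, &hi, &lo);
        /* (2^32 - 1)^2 = 0xFFFFFFFE00000001 */
        printf("high word: %08" PRIX32 ", low word: %08" PRIX32 "\n", hi, lo);
        return 0;
    }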

    Arbitrary-precision arithmetic consists of a set of algorithms, functions, and data structures designed specifically to deal with numbers that can be of arbitrary size. These functions often modify standard paper-and-pencil arithmetical techniques (such as long division) and apply them to numbers broken into word-size chunks.
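
    A minimal sketch of that idea, assuming numbers stored as little-endian arrays of 32-bit "limbs" and a hypothetical helper named bignum_add, is the schoolbook addition below: corresponding words are added column by column and the carry is propagated to the next word, exactly as in paper-and-pencil addition.

    #include <stdio.h>
    #include <inttypes.h>

    #define LIMBS 4   /* fixed size for illustration; a real library grows its arrays */

    /* Add two multi-word numbers limb by limb, returning the final carry. */
    uint32_t bignum_add(uint32_t result[], const uint32_t a[],
                        const uint32_t b[], int n)
    {
        uint32_t carry = 0;
        for (int i = 0; i < n; i++) {
            uint32_t sum = a[i] + b[i];
            uint32_t c1  = (sum < a[i]);   /* did a[i] + b[i] wrap around? */
            sum += carry;
            uint32_t c2  = (sum < carry);  /* did adding the carry wrap around? */
            result[i] = sum;
            carry = c1 | c2;
        }
        return carry;
    }

    int main(void)
    {
        /* 0xFFFFFFFFFFFFFFFF + 1, least-significant limb first */
        uint32_t a[LIMBS] = { 0xFFFFFFFFu, 0xFFFFFFFFu, 0, 0 };
        uint32_t b[LIMBS] = { 1, 0, 0, 0 };
        uint32_t r[LIMBS];
        bignum_add(r, a, b, LIMBS);
        printf("%08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 "\n",
               r[3], r[2], r[1], r[0]);   /* 00000000 00000001 00000000 00000000 */
        return 0;
    }

    Multiplication and long division follow the same limb-by-limb pattern, only with wider intermediate products and more involved carry handling.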

    A major difficulty in creating good arbitrary-precision arithmetic is knowing where to stop a computation. A simple example of this problem is illustrated by the binary expansion of 1/3, which is the nonterminating binary fraction 0.01010101... (the pair of bits 01 repeating forever). Because many exact numbers do not have terminating binary fraction expansions, additional functionality must be built into an arbitrary-precision computation system. This can take the form of a failsafe or of a configurable 'maximum precision' at which the computation always stops once terms become smaller than some threshold. There are also other ways of storing such binary-unfriendly numbers without losing precision: for example, one could design a data structure that stores quantities such as square roots symbolically, and then write code to handle the quirks of such a representation. Mathematica and high-end calculators use such a system.
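
    The "where to stop" problem can be seen directly by expanding 1/3 one binary digit at a time. The sketch below (with an illustrative helper named print_binary_fraction) keeps producing bits until either the remainder becomes zero, meaning the expansion terminates, or a configurable maximum number of bits is reached, which is exactly the kind of cutoff described above.

    #include <stdio.h>

    /* Print the binary expansion of p/q (assuming 0 <= p < q for simplicity),
     * stopping when the expansion terminates or when max_bits digits
     * have been produced. */
    void print_binary_fraction(unsigned p, unsigned q, unsigned max_bits)
    {
        unsigned r = p % q;
        printf("%u/%u = 0.", p, q);
        for (unsigned i = 0; i < max_bits && r != 0; i++) {
            r *= 2;                        /* shift remainder one binary place */
            putchar(r >= q ? '1' : '0');
            if (r >= q)
                r -= q;
        }
        if (r != 0)
            printf("... (cut off after %u bits)", max_bits);
        putchar('\n');
    }

    int main(void)
    {
        print_binary_fraction(1, 3, 16);   /* 0.0101010101010101... never terminates */
        print_binary_fraction(1, 4, 16);   /* 0.01 terminates exactly */
        return 0;
    }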

