Can it be done?

Discussion in 'Physics & Math' started by Waiter_2001, Feb 13, 2016.

  1. Waiter_2001 Registered Senior Member

    Messages:
    459
    Can one ever be reached with the following equation?

    a=0
    b=1
    while (a<b)
    b=b/2
    a=a+(b/2)
    wend

    The program IS slowing, but it will never reach the end: I believe it will take forever to get there...
     
    Last edited: Feb 13, 2016
  3. ajanta Registered Senior Member

    Messages:
    611
    And I don't understand it (b=b/2, a=a+(b/2)). How?! Maybe it is possible when a=b=0.
     
    Last edited: Feb 13, 2016
  5. Fednis48 Registered Senior Member

    Messages:
    725
    Two things:

    1. That's a program, not an equation. Equations are static things, with no notion of "steps" or update rules.

    2. Using an idealized computer with infinite memory, you're right that the program will approach 1 but never get there. Using any real computer, the result will eventually get too close to 1 to keep track of the difference using the available memory, so it will round off to 1.
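    A minimal sketch of that second point (Python, using a < 1 as the loop test so the sum keeps running; this is not the OP's exact loop condition):

```python
# Sketch: sum 1/2 + 1/4 + 1/8 + ... in IEEE double precision.
# With exact arithmetic a never reaches 1; in doubles the last
# addition rounds a up to exactly 1.0 and the loop halts.
a, b, n = 0.0, 1.0, 0
while a < 1.0:
    b = b / 2
    a = a + b
    n += 1
print(n, a)  # 54 1.0
```

    The loop stops at step 54 because 1 - 2**-54 is not representable in a 64-bit double and rounds to 1.0.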
     
  7. quantum_wave Contemplating the "as yet" unknown Valued Senior Member

    Messages:
    6,677
    Yes, it is like the old and controversial thread that went around with the statement that .999... = 1. In the end, it is a convention that does the rounding, lol.
     
  8. ajanta Registered Senior Member

    Messages:
    611
    Thanks. It's about logic gates.
     
  9. danshawen Valued Senior Member

    Messages:
    3,951
    This is Zeno's paradox.

    For a view that is in opposition to mine, see:

    http://www.chronon.org/articles/ZenoLynds.html

    If you continue to divide distance, time, or anything else indefinitely, what you eventually end up with is something static, or a frozen snapshot of an instant (not an interval) of time. As Peter Lynds has pointed out, this process eventually robs any such scenario of all but virtual energy, which needs an extra push either to become real or to propagate, which amounts to the same thing that Zeno's paradox really deals with. The speed of light is the same to all observers. You cannot subdivide it. Also, you cannot subdivide the inertia of bound energy that is matter (the crux of the paradox). Subdividing such things indefinitely is the mathematical equivalent of a moving wall that never really stops completely, but close enough.

    Because we live in a universe of energy transfer events, this paradox is every bit as impossible as the idea of motion in a Euclidean space. Thus, it is altogether fitting that Zeno's is a paradox conceived of in Ancient Greece. To Hades with them, their alphabet, their misbegotten static solid geometry that is misapplied to explain quantum mechanics in a universe that consists solely of energy exchange events, and their polytheistic influence on Western civilization which resulted in Saints, parables of the Apostles, prophets and the holy trinity, among other mistakes violating the basis of the first commandment.

    Please let me know if I have left anything out of the litany above relating to Greek culture deserving of lambasting. Zeno's paradox really drove me far around the bend.
     
    Last edited: Feb 13, 2016
  10. Bruinthor Registered Member

    Messages:
    37
    Did you enter this correctly? First step (0,1), 2nd (1/4,1/2), 3rd (1/4+1/8,1/4), finis
     
    danshawen likes this.
  11. danshawen Valued Senior Member

    Messages:
    3,951
    Which is the way you end up reasoning only if someone convinces you that there is a deeper reality in numbers than in actual reality. Is Zeno's predicament a reality or not?

    Your program will terminate when the calculating engine encounters a situation in which it is no longer capable of dealing with the last digit of your calculation. This problem has no other correspondence to reality at all, and I no longer find such conundrums even interesting.

    Are there numbers so large or so small, or calculations so lengthy or so complex that they are as intractable for a computer as they are for a human mind to deal with? Yes. So, what?
     
    Last edited: Feb 13, 2016
  12. mathman Valued Senior Member

    Messages:
    2,002
    A computer program will end (assuming properly programmed) when the new a equals the old a, because every computer is precision limited.
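    That stopping condition can be demonstrated in a couple of lines (a Python sketch, assuming IEEE double precision):

```python
# Once b falls far enough below a's unit in the last place,
# adding it no longer changes a at all (absorption).
a = 1.0 - 2**-53   # the largest double strictly below 1.0
b = 2**-55         # far below a's last representable digit
assert a + b == a  # "new a equals the old a", so a loop testing this stops
```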
     
  13. Bruinthor Registered Member

    Messages:
    37
    This algorithm never ends; in the real world it would run out of memory or precision.
    a=0
    b=1
    while (a<b)
    c=b-a
    b=b+c/2
    a=a+c
    wend

    Zeno's non-Paradox (in special relativity):

    If two world lines intersect their invariant separation is zero as seen by an inertial observer passing through the intersection point.
    If the two world lines pass the origin together with a non-zero invariant separation then that will be the maximum invariant separation.
     
  14. Waiter_2001 Registered Senior Member

    Messages:
    459
    I did not enter it correctly:

    a=0
    b=1
    while(a<b)
    b=b/2
    a=a+b
    wend

    Computers round off, as someone posted, but in real mathematics it never reaches one.

    1/2+1/4
     
  15. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    If you write the (a,b) updates as a single operation this is
    (a,b) = (0,1)
    while (a<b) {
    (a,b) = (a + b/2, b/2)
    }

    Or written less ambiguously with a temporary variable:
    (a,b) = (0,1)
    while (a<b) {
    t = b/2
    (a,b) = (a + t, t)
    }

    which stops, not because of rounding, but because a grows toward 1 and b shrinks to zero. Thus if both monotonically approach those limits there will come a time when \( b \leq a \). That happens really fast.

    Bruinthor also noticed this disconnect between what you wrote in the original program in the earliest version of the OP and the concept of computer imprecision.

    Step 1: (a,b) = (0,1)
    Step 2: Check if 0 < 1 ✅
    Step 3: (a,b) = (1/2, 1/2)
    Step 4: Check if 1/2 < 1/2 ❎
    Stop.
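    Run as written, the trace above is confirmed immediately (a Python sketch of the corrected program from post #14, keeping the a < b condition):

```python
# The corrected program (b = b/2; a = a + b) with the a < b test
# stops after a single pass, with no rounding involved at all.
a, b, steps = 0.0, 1.0, 0
while a < b:
    b = b / 2
    a = a + b
    steps += 1
print(steps, a, b)  # 1 0.5 0.5
```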

    So it looks like you want to make your condition while (a < 1) if you want to test floating point rounding.

    Here is a perl (scripting language) one-liner where finite internal precision means the computer can't distinguish between \((1 - 2^{-53} ) + 2^{-54}\) and 1. This is common for computers using 64-bit IEEE-754 binary double precision floating point numbers. I have augmented a and b with n for the number of steps and t for a temporary variable. The choice of printf precision was sufficient to distinguish 1 from the closest numbers that IEEE double precision floating point can represent.
    Code:
    perl -e '
    my ($a, $b, $n) = (0, 1, 0); 
    printf "%32.30f, %32.30f, %2d\n", $a, $b, $n; 
    while ( $a < 1 ) { 
        my $t = $b/ 2 ;
        ($a, $b, $n) = ($a + $t, $t, $n+1);
        printf "%32.30f, %32.30f, %2d\n", $a, $b, $n;
    }
    '
    Code:
    0.000000000000000000000000000000, 1.000000000000000000000000000000,  0
    0.500000000000000000000000000000, 0.500000000000000000000000000000,  1
    0.750000000000000000000000000000, 0.250000000000000000000000000000,  2
    0.875000000000000000000000000000, 0.125000000000000000000000000000,  3
    0.937500000000000000000000000000, 0.062500000000000000000000000000,  4
    0.968750000000000000000000000000, 0.031250000000000000000000000000,  5
    0.984375000000000000000000000000, 0.015625000000000000000000000000,  6
    0.992187500000000000000000000000, 0.007812500000000000000000000000,  7
    0.996093750000000000000000000000, 0.003906250000000000000000000000,  8
    0.998046875000000000000000000000, 0.001953125000000000000000000000,  9
    0.999023437500000000000000000000, 0.000976562500000000000000000000, 10
    0.999511718750000000000000000000, 0.000488281250000000000000000000, 11
    0.999755859375000000000000000000, 0.000244140625000000000000000000, 12
    0.999877929687500000000000000000, 0.000122070312500000000000000000, 13
    0.999938964843750000000000000000, 0.000061035156250000000000000000, 14
    0.999969482421875000000000000000, 0.000030517578125000000000000000, 15
    0.999984741210937500000000000000, 0.000015258789062500000000000000, 16
    0.999992370605468750000000000000, 0.000007629394531250000000000000, 17
    0.999996185302734375000000000000, 0.000003814697265625000000000000, 18
    0.999998092651367187500000000000, 0.000001907348632812500000000000, 19
    0.999999046325683593750000000000, 0.000000953674316406250000000000, 20
    0.999999523162841796875000000000, 0.000000476837158203125000000000, 21
    0.999999761581420898437500000000, 0.000000238418579101562500000000, 22
    0.999999880790710449218750000000, 0.000000119209289550781250000000, 23
    0.999999940395355224609375000000, 0.000000059604644775390625000000, 24
    0.999999970197677612304687500000, 0.000000029802322387695312500000, 25
    0.999999985098838806152343750000, 0.000000014901161193847656250000, 26
    0.999999992549419403076171875000, 0.000000007450580596923828125000, 27
    0.999999996274709701538085937500, 0.000000003725290298461914062500, 28
    0.999999998137354850769042968750, 0.000000001862645149230957031250, 29
    0.999999999068677425384521484375, 0.000000000931322574615478515625, 30
    0.999999999534338712692260742188, 0.000000000465661287307739257812, 31
    0.999999999767169356346130371094, 0.000000000232830643653869628906, 32
    0.999999999883584678173065185547, 0.000000000116415321826934814453, 33
    0.999999999941792339086532592773, 0.000000000058207660913467407227, 34
    0.999999999970896169543266296387, 0.000000000029103830456733703613, 35
    0.999999999985448084771633148193, 0.000000000014551915228366851807, 36
    0.999999999992724042385816574097, 0.000000000007275957614183425903, 37
    0.999999999996362021192908287048, 0.000000000003637978807091712952, 38
    0.999999999998181010596454143524, 0.000000000001818989403545856476, 39
    0.999999999999090505298227071762, 0.000000000000909494701772928238, 40
    0.999999999999545252649113535881, 0.000000000000454747350886464119, 41
    0.999999999999772626324556767941, 0.000000000000227373675443232059, 42
    0.999999999999886313162278383970, 0.000000000000113686837721616030, 43
    0.999999999999943156581139191985, 0.000000000000056843418860808015, 44
    0.999999999999971578290569595993, 0.000000000000028421709430404007, 45
    0.999999999999985789145284797996, 0.000000000000014210854715202004, 46
    0.999999999999992894572642398998, 0.000000000000007105427357601002, 47
    0.999999999999996447286321199499, 0.000000000000003552713678800501, 48
    0.999999999999998223643160599750, 0.000000000000001776356839400250, 49
    0.999999999999999111821580299875, 0.000000000000000888178419700125, 50
    0.999999999999999555910790149937, 0.000000000000000444089209850063, 51
    0.999999999999999777955395074969, 0.000000000000000222044604925031, 52
    0.999999999999999888977697537484, 0.000000000000000111022302462516, 53
    1.000000000000000000000000000000, 0.000000000000000055511151231258, 54
    With exact math and higher precision output the last two lines would be
    Code:
    0.999999999999999888977697537484345957636833190917968750, 0.000000000000000111022302462515654042363166809082031250, 53
    0.999999999999999944488848768742172978818416595458984375, 0.000000000000000055511151231257827021181583404541015625, 54
    but IEEE cannot represent every real number because it only has 64 bits. Thus rounding is both necessary and expected for programs which rely on finite representations.


    If you want to determine the exact result of the inner loop, you can use linear algebra. The resulting matrix is a left stochastic matrix in that every column sums to 1 thus it leaves the sum of a column vector unchanged. The use of eigenvectors allows us to write in closed form the result of any number of left multiplications by the matrix on any column vector:
    \( M = \begin{pmatrix} 1 & \frac{1}{2} \\ 0 & \frac{1}{2} \end{pmatrix} \\ M \begin{pmatrix} 1 \\ 0 \end{pmatrix} = 1 \times \begin{pmatrix} 1 \\ 0 \end{pmatrix} \\ M \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \frac{1}{2} \times \begin{pmatrix} -1 \\ 1 \end{pmatrix} \\ M^n \begin{pmatrix} a \\ b \end{pmatrix} = M^n \left( (a + b ) \begin{pmatrix} 1 \\ 0 \end{pmatrix} + b \begin{pmatrix} -1 \\ 1 \end{pmatrix} \right) = (a + b ) M^n \begin{pmatrix} 1 \\ 0 \end{pmatrix} + b M^n \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} a + (1 - 2^{-n}) b \\ 2^{-n} b \end{pmatrix} \\ M^n \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 - 2^{-n} \\ 2^{-n} \end{pmatrix}\)

    This is a very simple closed form.
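    The closed form \(M^n (0, 1)^T = (1 - 2^{-n}, 2^{-n})^T\) is easy to check with exact rational arithmetic (a Python sketch using the stdlib fractions module):

```python
from fractions import Fraction

# One application of M sends (a, b) to (a + b/2, b/2).
a, b = Fraction(0), Fraction(1)
for n in range(1, 21):
    a, b = a + b / 2, b / 2
    # matches the eigenvector-derived closed form exactly at every step
    assert a == 1 - Fraction(1, 2**n)
    assert b == Fraction(1, 2**n)
print(a)  # 1048575/1048576 after 20 steps
```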
     
  16. James R Just this guy, you know? Staff Member

    Messages:
    39,397
    waiter_2001:

    Are you trying to write a program to calculate the following sum?

    \(a=\frac{1}{2}+ \frac{1}{4} + \frac{1}{8} + \dots\)

    As you keep adding terms to this sum, the value of \(a\) gets closer and closer to 1, but it would take an infinite number of terms to reach 1.

    On a computer, the value of the term you add at each step, which you call b, keeps halving. At some point, the arithmetic precision of the computer will hit the limit of the small numbers it can express accurately, at which point the machine will equate b with zero and a with 1. This will occur unless you use a software system that allows arithmetic to unlimited precision; if that is the case, your program will never terminate.
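    The unlimited-precision behavior can be exhibited directly (a Python sketch; the stdlib fractions module plays the role of an arbitrary-precision system, bounded only by memory):

```python
from fractions import Fraction

# Exact arithmetic: a stays strictly below 1 after any finite number of steps.
a, b = Fraction(0), Fraction(1)
for _ in range(500):
    b = b / 2
    a = a + b
assert a < 1       # still true, and would be for any finite step count
assert 1 - a == b  # the remaining gap is exactly the last term added
```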
     
    danshawen likes this.
  17. Waiter_2001 Registered Senior Member

    Messages:
    459
    My argument, JamesR! a will never reach b: it takes forever.
     
    danshawen likes this.
  18. James R Just this guy, you know? Staff Member

    Messages:
    39,397
    Err... I'm not sure I understand what you just said. I take it we are in agreement. Are we?
     
    danshawen likes this.
  19. DaveC426913 Valued Senior Member

    Messages:
    18,935
    danshawen likes this.
  20. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    Are you saying that there is no positive integer, n, such that \(2^{-n} = 0\)? Everyone who has studied the issue over the past 3500 years agrees on that.
    That's because the integers have the property that there is no such thing as a largest integer. The proof is a classic proof by contradiction.

    1) Assume there is such a thing as a largest positive integer, call it M.
    2) 1 > 0.
    3) 1 + x is an integer for all integers x.
    4) 1 + x > 0 + x = x for all integers x.
    5) It follows from 2, 3 and 4 that 1 + M is an integer larger than M.
    6) Therefore the assumption that there is a largest integer is false.
    7) This works just as well if we reason (1 + 1) = 2 > 1, therefore M + M = 2 M > M.

    Because there is no such thing as a largest positive integer, it follows that there is no such thing as a smallest reciprocal of a positive integer.
    1) Assume there is such a thing as a smallest reciprocal of a positive integer, call it 1/M.
    2) Then M is the largest positive integer.
    3) but 2M is a larger positive integer
    4) therefore 1/(2M) is a smaller reciprocal of a positive integer than 1/M.
    5) Therefore the assumption that there is a smallest reciprocal of a positive integer is false.

    Because \(2^{-n} = \frac{1}{2^n}\) what you are claiming is a statement about rational numbers (or integers), without requiring the full mathematics of the real number system.

    But in the real numbers, we have the concept of the limit of an infinite sequence and can write \(\lim_{n\to\infty} 2^{-n} = 0\) without at any point invoking an infinite number of stepwise operations.



    Or are you saying in standard computer representations there are minimum integers, \(\ell, m, n\) such that \(\sum_{k=1}^{\ell} 2^{-k} , \; 1 + 2^{-m} , \; 2^{-n}\) evaluate to 1, 1 and 0 respectively? That's a function of the IEEE-754 double precision floating point representation, which can exactly represent \(1 - 2^{-53} \lt 1 \lt 1 + 2^{-52}\) and \(-2^{-1074} \lt 0 \lt 2^{-1074} \) but not numbers in between.

    Code:
    Intel representation / Motorola representation / Conventional representation
    000000000000f07f / 7ff0000000000000 / "+∞" (Denotes results too large to represent)
    ffffffffffffef7f / 7fefffffffffffff / ( 2^53 - 1 ) * 2^971
    ...
    010000000000e07f / 7fe0000000000001 / 2^1023 + 2^971
    000000000000e07f / 7fe0000000000000 / 2^1023
    ffffffffffffdf7f / 7fdfffffffffffff / 2^1023 - 2^970
    ...
    0100000000004043 / 4340000000000001 / 2^53 + 2
    0000000000004043 / 4340000000000000 / 2^53 (largest contiguously representable integer)
    ffffffffffff3f43 / 433fffffffffffff /  2^53 - 1
    ...
    010000000000f03f / 3ff0000000000001 / 1 + 2^-52
    000000000000f03f / 3ff0000000000000 / 1
    ffffffffffffef3f / 3fefffffffffffff / 1 - 2^-53
    ...
    0300000000000000 / 0000000000000003 / 2^-1073 + 2^-1074
    0200000000000000 / 0000000000000002 / 2^-1073
    0100000000000000 / 0000000000000001 / 2^-1074
    0000000000000000 / 0000000000000000 / 0
    0000000000000080 / 8000000000000000 / "-0"  (Denotes results with magnitude too small to represent when the sign is negative)
    0100000000000080 / 8000000000000001 / - 2^-1074
    
    The solution is to use a computer representation without a fixed number of bits in its representation. Then exact arithmetic in rational numbers can be done to arbitrary precision, provided enough memory exists.
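    Both halves of that picture can be spot-checked from the standard library (a Python sketch; struct exposes the raw IEEE-754 bit pattern, and fractions stands in for an arbitrary-precision representation):

```python
import struct
from fractions import Fraction

# Bit patterns match the conventional representations in the table above.
assert struct.pack('>d', 1.0).hex() == '3ff0000000000000'            # 1
assert struct.pack('>d', 1.0 + 2**-52).hex() == '3ff0000000000001'   # 1 + 2^-52
assert struct.pack('>d', 1.0 - 2**-53).hex() == '3fefffffffffffff'   # 1 - 2^-53

# In fixed-width doubles, repeated halving underflows to exactly 0.0 ...
b, n = 1.0, 0
while b > 0.0:
    b = b / 2
    n += 1
print(n)  # 1075: 2**-1074 is the smallest subnormal; one more halving gives 0.0

# ... while exact rationals only ever shrink, limited by memory alone.
q = Fraction(1)
for _ in range(5000):
    q = q / 2
assert q > 0
```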
     
    Last edited: Feb 16, 2016
    danshawen likes this.
  21. Waiter_2001 Registered Senior Member

    Messages:
    459
    Possibly, but how is the world going to understand or accept your statements, given their complexity? They should be easily accessible to all... (easy to play, difficult to master). I'm going to calculate the stated formula to see if 0.999999999 can be reached. THEN I will accept rounding.
     
  22. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    No one knows what assertion you are responding to.
    James R, DaveC, and Fednis48 have been telling you what informed people in the world know. To the extent that some ignorant people in the world don't want to learn from the informed people, there is no point counting them as part of the problem. To the extent that some ignorant people haven't yet learned this, the demonstrations are trivial and have been in this thread for about a week.
    What reasoning led you to believe this, when you do not demonstrate knowledge of computer arithmetic?
    It's not an equation, it's not a formula. It's an algorithm.
    What precisely do you mean by 0.999999999 ? Do you mean any number between \(1 - 10^{-9}\) and \(1\)? Do you mean the real number \(9 \times \sum_{k=1}^{\infty} 10^{-k} = 0.\bar{9} = 1\)? Do you mean the computer representation of the largest representable floating point number less than \(1\) ( \(1 - 2^{-24}, \; 1 - 2^{-53}, \; \textrm{or} \; 1 - 2^{-113}\) for IEEE-754 binary single, double, and quadruple precision numbers, respectively) ?
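    For the double-precision case, that boundary value can be computed directly (a sketch using math.nextafter, available in Python 3.9+):

```python
import math

# The largest IEEE double strictly below 1.0 is 1 - 2**-53.
below_one = math.nextafter(1.0, 0.0)
assert below_one == 1.0 - 2**-53
# ... and the next double above it is exactly 1.0: nothing lies in between.
assert math.nextafter(below_one, 2.0) == 1.0
```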
    Rounding is a computer operation when programmed to work with imprecise arithmetic. See post #12. Unless you study the details of what computers actually do, you don't know if the computer is rounding or not. It's a machine. It's just following its programming.
     
    Last edited: Feb 20, 2016
    danshawen and krash661 like this.
  23. Waiter_2001 Registered Senior Member

    Messages:
    459
    I'm actually going to perform the calculation manually and try to reach 0.9999999

    0.9*10^x

    Anything with all nines will be sufficient to justify rounding.
     
