# 1+1=?

el-half said:
This statement contradicts the concept of infinity. If you say "at infinity", infinity is a marked point and thus not infinite anymore.
You can phrase it as "When the string of zeros before the one is infinitely long" if it makes you feel better.

I'm still maintaining that infinity does not really exist, even with numbers. Yes, to the capacity of the human mind an infinite amount of numbers exists, but I am saying that when time stops, something that is infinite will then become finite.

The sum could have been meant in binary; namely, 1+1 can't equal 2. It could only equal 1 or 0, depending on the type of logic gate it's submitted through.

+ means 'or' in the logic algebra I was taught, so 1 + 1 would = 1
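The two claims above (1+1 = 1 through an OR gate, 1+1 = 0 through an XOR gate) can be checked directly with Python's bitwise operators. A minimal sketch; the `half_adder` helper is mine:

```python
# OR gate: "+" as logical OR gives 1 + 1 = 1.
# XOR gate: "+" as addition modulo 2 gives 1 + 1 = 0.
# A half adder combines both: XOR is the sum bit, AND is the carry.

def half_adder(a, b):
    """Add two bits: return (sum_bit, carry_bit)."""
    return a ^ b, a & b

assert (1 | 1) == 1                # OR gate: 1 "+" 1 = 1
assert (1 ^ 1) == 0                # XOR gate: 1 "+" 1 = 0
assert half_adder(1, 1) == (0, 1)  # ordinary binary: 1 + 1 = 10
```

So both forum answers are right for their own meaning of "+"; only ordinary binary addition, which keeps the carry bit, recovers 1 + 1 = 2.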

As for infinity, it is more meaningful to use it as a state rather than a quantity.

Well, maybe infinity does not exist. It's possible that infinity is just used to represent an extremely large finite point. Could someone give me an example of something that is actually infinite?

1 divided by 3 is an infinite number. 1/3 in decimal form, that is.

1/3 isn't infinite, making 0.333r not infinite.

el-half said:
I still can't agree with you, but you are correct in saying that my 0.0r1 does not make sense.

Sure it makes sense... it's just not particularly useful, since it is equal to zero.

0.0r1 = 0.0r + 1 x 10<sup>-(infinity+1)</sup>

= 0 + 1 x 10<sup>-(infinity+1)</sup>
= 0
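The claim that 0.0r1 collapses to zero can be made precise as a limit: the n-th approximation, with n zeros between the point and the 1, is 10^-(n+1), which shrinks below any positive bound. A quick sketch with exact rational arithmetic; the `tail` name is mine:

```python
from fractions import Fraction

def tail(n):
    """The number 0.00...01 with n zeros between the point and the 1."""
    return Fraction(1, 10 ** (n + 1))

assert tail(2) == Fraction(1, 1000)       # 0.001
# each term is smaller than the last, and drops below any fixed bound,
# so the only consistent value for "infinitely many zeros" is 0
assert tail(100) < Fraction(1, 10 ** 100)
```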

0.9r = A = 0.9 + 0.09 + 0.009 + ... + 9*10^(-n) + ...

A/10 = 0.9/10 + 0.09/10 + 0.009/10 + ... + 9*10^(-(n+1)) + ... = 0.09 + 0.009 + 0.0009 + ... + 9*10^(-n) + ...

A - A/10 = 9A/10 = (0.9 + 0.09 + 0.009 + ... + 9*10^(-n) + ... ) - (0.09 + 0.009 + 0.0009 + ... + 9*10^(-n) + ...) = 0.9

so A = 0.9*10/9 = 1.
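The algebra above can be double-checked numerically: with exact fractions, the partial sum of n nines differs from 1 by exactly 10^-n, which vanishes as n grows. A sketch; `partial_sum` is my name for A truncated at n terms:

```python
from fractions import Fraction

def partial_sum(n):
    """0.99...9 with n nines, as an exact fraction."""
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

# 1 - A_n is exactly 10^-n, which goes to 0, so 0.9r = 1
assert 1 - partial_sum(5) == Fraction(1, 10 ** 5)
assert 1 - partial_sum(50) == Fraction(1, 10 ** 50)
```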

Odin'Izm said:
Well, maybe infinity does not exist. It's possible that infinity is just used to represent an extremely large finite point. Could someone give me an example of something that is actually infinite?

1 divided by 3 is an infinite number. 1/3 in decimal form, that is.

"nothing" is infinite. (What a paradox!!!)

lol! that's a good one

and yes, 1/3 is an infinite number, the 3's go on forever.

Odin'Izm said:
lol! that's a good one
and yes, 1/3 is an infinite number, the 3's go on forever.

That's not what is usually meant by "an infinite number".
Let's be specific:

The decimal expansion of 1/3 contains an infinite number of digits.
The number 1/3 is finite.
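The distinction can be seen in code: the value of 1/3 is perfectly finite, while its decimal digits repeat forever. A small long-division sketch; the function name is mine:

```python
def decimal_digits(p, q, n):
    """First n digits after the decimal point of p/q (0 <= p < q), by long division."""
    digits = []
    remainder = p
    for _ in range(n):
        remainder *= 10
        digits.append(remainder // q)   # next digit of the expansion
        remainder %= q                  # carry the remainder forward
    return digits

assert decimal_digits(1, 3, 6) == [3, 3, 3, 3, 3, 3]  # 0.333333...
assert decimal_digits(1, 7, 6) == [1, 4, 2, 8, 5, 7]  # 0.142857 repeating
```

The generator never ends for 1/3 because the remainder 1 keeps recurring, yet every partial result stays below the finite value 1/3 + a tiny bit; the number itself never grows.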

ODIN, omg, 1/3 is not an infinite number. Why can't you understand? Infinity does not exist.

Now calvin, "nothing is infinite", what an idea. If absolute nothing exists, then it may be infinite, unless something can come from absolute nothing. So if something came from nothing, then the statement "nothing is infinite" is true, but ("nothing" is infinite) would be false. So really it is not a paradox; it is either one or the other. (You might have to read that over a few times.)

Well, it could be argued that 1+1=1 if it is acknowledged that 1 represents a wholeness, and so any quantity greater than 1 is really a new form of oneness.

Re the original question, there actually was a researcher who managed to prove 1+1=2. It did take 300+ pages of set theory. I don't remember his name, but I remember seeing a program (Nova maybe?) that touched on his work. He was trying to put math on a solid logical foundation. When he submitted for publication, an error was found on page 150 or so. It was fixable, but by that time he was so worn down that he didn't continue beyond 1+1=2.

If a = b then a*a = b*a, so a^2 - ab = 0.

Infinitely recurring decimals occur when 1 is divided by a number that has a prime factor not shared with the base. In base 10, the only primes dividing the base are 2 and 5. So 1/2 = 0.5 and 1/5 = 0.2 terminate, as do 1/4 = 0.25 and 1/8 = 0.125, because the only prime factor of 4 and 8 is 2. By contrast, 1/3 = 0.333r, 1/6 = 0.1666r, 1/7 = 0.142857r and 1/9 = 0.111r all recur. In base 3, however, 0.1 + 0.1 + 0.1 = 1.0, therefore 1/3 is 0.1. In base 9, 1/3 = 0.3. In those bases it's other numbers which have infinite expansions.
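The rule above — 1/n terminates in base b exactly when every prime factor of n also divides b — can be sketched as a small check. The function name is mine:

```python
from math import gcd

def terminates(n, b=10):
    """True if 1/n has a finite expansion in base b.

    Repeatedly strip from n the factors it shares with the base;
    1/n terminates exactly when nothing else is left.
    """
    while (g := gcd(n, b)) > 1:
        n //= g
    return n == 1

assert terminates(8)        # 1/8 = 0.125
assert not terminates(3)    # 1/3 = 0.333r
assert not terminates(6)    # 1/6 = 0.1666r
assert terminates(3, b=9)   # 1/3 = 0.3 in base 9
```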

Irrational numbers are numbers which cannot be represented by a fraction, so their digit expansions are infinite and never settle into a repeating pattern, in every base. The best-known example of an irrational is sqrt(2).
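That sqrt(2) is not a fraction can be probed by brute force: for each denominator q, the only plausible numerator is the integer nearest q*sqrt(2), and squaring that candidate with exact rational arithmetic never gives 2. A sketch, not a proof; the function name is mine:

```python
from fractions import Fraction

def sqrt2_as_fraction(max_q):
    """Search for p/q with q <= max_q whose square is exactly 2."""
    for q in range(1, max_q + 1):
        p = round(q * 2 ** 0.5)          # nearest candidate numerator
        if Fraction(p, q) ** 2 == 2:     # exact comparison, no float error
            return Fraction(p, q)
    return None

assert sqrt2_as_fraction(10_000) is None  # no exact fraction found
```

The classic proof shows the search can never succeed: p^2 = 2*q^2 would force p even, then q even, contradicting a fraction in lowest terms.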

Transcendental numbers are numbers which are not the root of any finite polynomial equation with rational coefficients. sqrt(2) is not transcendental, because x<sup>2</sup> - 2 = 0 has the answer sqrt(2). No such equation exists for pi or e; they can only be pinned down by infinite series sums. Again, a corollary of this property is an infinite, non-repeating digit expansion in any number base.

kevinalm said:
Re the original question, there actually was a researcher who managed to prove 1+1=2. It did take 300+ pages of set theory. I don't remember his name, but I remember seeing a program (Nova maybe?) that touched on his work. He was trying to put math on a solid logical foundation. When he submitted for publication, an error was found on page 150 or so. It was fixable, but by that time he was so worn down that he didn't continue beyond 1+1=2.
I just read in Dr Riemann's Zeroes, the recently published book about the Riemann Hypothesis, a proof that 1 + 1 = 2 which was provided by Russell and Whitehead in their magisterial tome Principia Mathematica (not to be confused with Newton's classic of the same name). It's about 6 lines long, though many of those lines are citations of other theorems in PM. Perhaps if it was completely spelled out it would cover 300 pages, I don't know.

This was some other guy. I wish I could remember his name. He wanted to rigorously prove the logical foundations of math, proving everything as he went. The program I saw was about "What do we know?" talking about how the foundations of all math and science ultimately have unproven assumptions. Pretty sure it was an episode on Nova, but I don't remember the title. Some years ago.

Anyone heard of wheels? You can divide by zero!
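In a wheel, division is a total operation: x/0 is a special element usually written ∞, and 0/0 is a separate "bottom" element ⊥ that absorbs everything. The toy sketch below only mimics that surface behavior with string markers; a faithful wheel implementation would also redefine the arithmetic laws:

```python
def wheel_div(a, b):
    """Toy total division in the spirit of a wheel: never raises.

    x/0 for x != 0 maps to the marker "inf"; 0/0 maps to "bottom".
    These are illustrative labels, not a real wheel structure.
    """
    if b != 0:
        return a / b
    return "bottom" if a == 0 else "inf"

assert wheel_div(4, 2) == 2.0
assert wheel_div(1, 0) == "inf"
assert wheel_div(0, 0) == "bottom"
```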

Well, the guy who wanted to rigorously prove the logical foundations of math was David Hilbert. PM was an attempt to create a totally verifiable mathematical system out of which all proofs could proceed. In 1931 Kurt Gödel showed that this is impossible: no sufficiently powerful mathematical system can be both complete and consistent, so there must always be undecidable statements in any such system.