How can you tell the difference between adding zero and zero that was there all along?
I wear glasses which can see small details like that
0 + 0 = 0 - 0 = 0
=> +0 = -0 = 0
Which implies that zero is the only integer whose additive inverse is itself.
(i.e. +1 ≠ -1, ...)
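A one-line check of that claim, in ordinary integer arithmetic: an integer that is its own additive inverse must satisfy
\[
n = -n \;\Longrightarrow\; n + n = 0 \;\Longrightarrow\; 2n = 0 \;\Longrightarrow\; n = 0,
\]
and -0 = 0 does hold, since 0 + 0 = 0.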
You cannot have a naked +0 or -0
But I can have a naked +1 or -1?
+1 and -1 are integers. 0 is an integer.
Are you saying I can write: 1 - 1 = 0, but I can't write 1 + (-1) = 0?
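For the record, ordinary notation treats those two as the same number: subtraction is defined as adding the additive inverse,
\[
a - b := a + (-b),
\]
so 1 - 1 and 1 + (-1) are two ways of writing the same thing, namely 0.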
Computers can divide by zero. They can return either infinity or just "not a number".
Um, what? Division by zero generally causes an exception fault: for example, fixed-point [0x09] or floating point divide [0x0f] exceptions. This is generally a hardware fault, in my experience. What am I missing here?
That certainly shouldn't happen. Software should never let hardware just blow up.
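Both sides here are describing real behaviour, just for different kinds of division. A minimal C sketch, assuming IEEE 754 floating point (which essentially all current processors provide): floating-point division by zero does not trap by default and yields infinity or NaN, while integer division by zero is undefined behaviour in C and on most hardware raises a divide exception, so it appears only in a comment.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Under IEEE 754, floating-point division by zero does not trap by
       default; it produces +/- infinity, or NaN for 0.0 / 0.0. */
    double pos_inf      = 1.0 / 0.0;
    double neg_inf      = -1.0 / 0.0;
    double not_a_number = 0.0 / 0.0;

    printf(" 1.0 / 0.0 = %f\n", pos_inf);
    printf("-1.0 / 0.0 = %f\n", neg_inf);
    printf(" 0.0 / 0.0 = %f (isnan: %d)\n", not_a_number, isnan(not_a_number));

    /* Integer division by zero is a different story: undefined behaviour
       in C, and on most processors a hardware divide exception (e.g.
       SIGFPE on x86/Linux), so it is not executed here:
           int boom = 1 / 0;
    */
    return 0;
}
```

With a typical gcc or clang build this prints inf, -inf and nan; the exception path the other poster describes only comes into play for integer division, or when floating-point traps are explicitly enabled.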
n+0=n
n-0=n
n*0=n
therefore n/0=n
Um... what?!
No. Because imagine that we multiply both sides of that equation by zero. We then get:
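The reply breaks off at the colon. The intended conclusion is presumably the standard one: if n/0 really were the number n, multiplying both sides of n/0 = n by zero would give
\[
\frac{n}{0}\cdot 0 = n \cdot 0 \;\Longrightarrow\; n = 0,
\]
since the left side would collapse back to n if dividing by zero and multiplying by zero undid one another, while the right side is zero. That would force every n to equal 0, which is absurd; this is exactly why n/0 is left undefined.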
Zero is ignored because there is NOTHING THERE...it does not exist.
Zero is a number. It is only ignored if you ignore it.
-0|0|+0
This is meaningless rubbish.
In 1+0 and 1-0 the signs are both operators AND signs...
Context is important when you're doing mathematics. So is knowing some mathematics.
As I said, "Computers can divide by zero. They can return either infinity or just "not a number". If you tell the hardware to divide by zero, it will do it. It's just electronic circuitry. When you put voltages in here, something has to come out there. The actual electronic result of division by zero could vary from processor to processor. Any error that is raised has to be arbitrarily added, doesn't it? It could just as well raise an error if you tried to multiply by six.The hardware doesn't 'blow up,' the hardware detects the error. With a few exceptions, hardware faults are those detected by hardware during the execution of software instructions. In the example, above, the ALU [Arithmetic Logic Unit] detects the fault in the execution of the instruction, puts the error code in a register (along with other pointers, depending on the actual hardware), raises the error by causing an interrupt & exits.
As I said, "Computers can divide by zero. They can return either infinity or just "not a number". If you tell the hardware to divide by zero, it will do it. It's just electronic circuitry. When you put voltages in here, something has to come out there. The actual electronic result of division by zero could vary from processor to processor. Any error that is raised has to be arbitrarily added, doesn't it? It could just as well raise an error if you tried to multiply by six.
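The fault-detection sequence described above (the divide logic flags the error, the processor takes an interrupt, the operating system surfaces it to the program) can be watched from user code. A hedged POSIX/Linux sketch, not specific to any poster's hardware: install a SIGFPE handler, then perform an integer division by zero, which is undefined behaviour in C but on x86/Linux reliably arrives at the handler as the hardware-detected error.

```c
#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Runs after the CPU raises its divide fault and the kernel converts the
   trap into SIGFPE (si_code FPE_INTDIV for an integer divide by zero). */
static void on_fpe(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)info; (void)ctx;
    static const char msg[] = "caught SIGFPE: divide fault detected by hardware\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);  /* async-signal-safe */
    _exit(1);  /* returning would re-execute the faulting instruction */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_fpe;
    sigaction(SIGFPE, &sa, NULL);

    volatile int zero = 0;       /* volatile: keep the compiler from folding it */
    volatile int r = 1 / zero;   /* CPU divide exception -> kernel -> handler */
    (void)r;
    return 0;
}
```

Run on a typical Linux machine, this prints the handler's message instead of the default "Floating point exception" abort; the "error" the thread is arguing about is exactly this hardware trap handed back to software.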
"Zero" is just a concept that WE have superimposed on some electronic condition. Put zero in and SOMETHING - some electronic condition - has to come out. We can superimpose the concept of "error" on that condition if we choose, or not.You have used hardware I've never encountered.
"Zero" is just a concept that WE have superimposed on some electronic condition. Put zero in and SOMETHING - some electronic condition - has to come out. We can superimpose the concept of "error" on that condition if we choose, or not.
What hardware have you used that doesn't work that way?
Why are you desperately arguing some esoteric, philosophical position?
Huh? I'm talking about strictly physical phenomena.
And that "hardware-detected error" is what some engineer decided was an "error".Zero IS NOT some concept or any other crap we have imposed on some electronic condition. It is a specific gate in a gate-array that causes the result to fall through when 'true' to a hardware-detected error condition.