SciForums.com > Science > Physics & Math > Testing a coin for bias? Dinosaur01-11-11, 04:24 PMThe NY Times Science Section (11 January 2011) has an article relating to the testing of a coin suspected of being biased in favor of Heads. It discusses the following hypothetical results. A coin is tossed 1000 times & lands heads 527 times. A true coin would result in 526 or fewer Heads about 95.3% of the time. 5% is usually considered the cutoff indicating a statistically significant result. The above would be traditionally viewed as indicating that the coin is biased. The article makes the following statement (paraphrase, not true quote). The above 4.7% includes the probability of throwing 527, 528, 529 . . . . 1000 Heads, when only one result (527 Heads) was actually obtained. Some experts claim that it is more accurate to calculate the probability of tossing exactly 527 heads with a biased coin & comparing it with the probability of tossing exactly 527 heads with a true coin. The latter probability is approximately 0.587%. The article does not indicate how to calculate the probability of tossing exactly 527 heads with a biased coin. I have no idea how to calculate this probability, which obviously requires assumptions or data relating to the characteristics of the allegedly biased coin. Does anyone here have an idea relating to such a calculation? It would be incredibly time consuming, but one could make 1000 sets of tosses with 100 tosses in each set or 100 sets with 1000 tosses in each set. Having done all that coin tossing, the results could be plotted & compared with the bell-shaped curve associated with tossing a true coin. BTW: The article mentions ESP Testing originally done at Duke University & later done at various other colleges/universities. The testers generally claimed statistically significant results, which I (& many others) do not accept due to considering the experimental design to be flawed. Stryder01-11-11, 04:31 PMAlthough this doesn't answer your query, it is on the subject of coin tosses. I've noticed that if you have a sufficiently bright light to reflect off the top of the coin, you can stop the flip any way up you choose. You just have to wait for the visual cue. So I guess I've just added the parameter that the coin has to be flipped and free fall to the ground with no further assistance to designate an outcome. (Of course then you can start querying the rate of torque applied, the arc and the height of the flip, since this too would influence the outcome.) kurros01-11-11, 08:58 PMI have lots to say about this since I do work on related things (statistics wise, not coin wise :)), but don't have time to get into it just now, maybe I'll come back later. For now though, I'll answer your actual question: "calculate the probability of tossing exactly 527 heads with a biased coin" So, the probability of getting exactly some sequence of tosses HHTTHT...HHTH etc for n tosses, in which H of them are heads, where the probability of tossing a head is PrH=whatever, is P=(PrH)^H (PrT)^T, where of course PrT = 1-PrH and T=n-H. But you don't care which exact sequence it is, so you have to multiply by the number of such sequences which give you the same number of heads.
This number is mult=\frac{n!}{H!T!}, or C^n_H (n choose H) if you prefer (the n,H binomial coefficient). So overall the probability is prob(H heads in n throws) = P \times mult. So for the fair coin PrH=0.5, so the calculation is prob(527 heads in 1000 throws) = (0.5)^{1000} \times C^{1000}_{527} = 0.58742% Also quickly, regarding the hypothesis test, keep in mind that 5% is 1 in 20, so in 1 in 20 such experiments you expect to see this result. It is "statistically significant" according to the doctrine, but really that just means it is worth doing the experiment again to see if it happens again. I wouldn't base any important decisions on it. And prior information is very important, for instance if the coin just came out of a change machine it is pretty damn unlikely to be biased a priori, so the probability that the observed result is just a fluke is much much higher than if you are watching some magician flip a coin. Dinosaur01-12-11, 12:03 AMKurros: Some of your notation is a bit unfamiliar to me. You moved a decimal point on your calculation of probability(527 Heads in 1000 tosses). It is 0.5874% not 5.874% kurros01-12-11, 12:09 AMKurros: Some of your notation is a bit unfamiliar to me. You moved a decimal point on your calculation of probability(527 Heads in 1000 tosses). It is 0.5874% not 5.874% Whoops, so I did, fixed. What in particular concerns you? I can explain any part in more detail if you like or resolve whatever notation issues there are. Was it the choose? i.e. C^a_b=\frac{a!}{b!(a-b)!}? BenTheMan01-12-11, 01:08 AMHere's a somewhat related, pretty easy interview question, slightly rephrased. (I got it right.) Suppose you have a coin which you know is not fair. Using this unfair coin, and only this coin, how can you simulate a toss from a fair coin? kurros01-12-11, 05:02 AMHere's a somewhat related, pretty easy interview question, slightly rephrased. (I got it right.) Suppose you have a coin which you know is not fair. Using this unfair coin, and only this coin, how can you simulate a toss from a fair coin? What if we pick a sequence of two tosses instead, HT vs TH? Throw away HH and TT as garbage if they come up. Maybe you can do it with less wastage though *shrugs*. RJBeery01-12-11, 10:17 AMHere's a somewhat related, pretty easy interview question, slightly rephrased. (I got it right.) Suppose you have a coin which you know is not fair. Using this unfair coin, and only this coin, how can you simulate a toss from a fair coin? How about choosing any sequence with parity? (e.g. TH vs HT, HHTT vs HTHT vs THTH vs TTHH...) The problem with this is that it requires an unknown, potentially infinite, number of tosses. Here's an idea: toss the coin, player A chooses "H or T" while at the same time player B chooses "keep or conjugate" which decides what player A's actual choice is...hah QuarkHead01-12-11, 11:15 AMToss zero times or infinitely many times. The former seems more practical for us mere mortals. przyk01-12-11, 01:08 PMKurros: Some of your notation is a bit unfamiliar to me. If it helps to see the same thing in slightly different notation, then if p is the probability of tossing the biased coin and getting heads, the probability of tossing N coins and getting k heads is C^{N}_{k} p^{k} (1-p)^{N-k}. As kurros explained, the p^{k} (1-p)^{N-k} bit is the probability of getting a particular sequence with k heads in it (eg. HHTT) and the C^{N}_{k} bit is the total number of sequences with k heads (eg. HHTT, HTHT, HTTH, ...).
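For anyone who wants to check the two numbers in this thread, kurros's 0.58742% and the article's 4.7% tail, here is a minimal Python sketch of the calculation kurros and przyk describe, using only the standard library (the variable names are just illustrative, not from the article):

from math import comb

n, k, p = 1000, 527, 0.5

# P(exactly k heads) = C(n, k) * p^k * (1-p)^(n-k), the binomial formula above
p_exact = comb(n, k) * p**k * (1 - p)**(n - k)

# One-sided tail: P(k or more heads) from a fair coin, the article's ~4.7%
p_tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

print(f"P(exactly {k} heads) = {p_exact:.5%}")  # ~0.58742%
print(f"P({k} or more heads) = {p_tail:.5%}")   # ~4.7%

For a suspected bias you would substitute the assumed heads probability for p = 0.5; that substitution is the "probability of tossing exactly 527 heads with a biased coin" that the article leaves uncalculated, and as Dinosaur notes it requires assuming a value for the bias.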
Here's an idea: toss the coin, player A chooses "H or T" while at the same time player B chooses "keep or conjugate" which decides what player A's actual choice is...hah Note that this only works if both players commit their choices before tossing the coin (eg. by noting them down and putting them in envelopes). But if you think the players will make good random bit generators then you don't even need a coin: just have each player choose a random bit and xor them to get the result. RJBeery01-12-11, 02:08 PMNote that this only works if both players commit their choices before tossing the coin (eg. by noting them down and putting them in envelopes). But if you think the players will make good random bit generators then you don't even need a coin: just have each player choose a random bit and xor them to get the result. Yep I thought of that, which is why I said they have to choose "at the same time", which must occur before the coin lands. You're also right that this is no different than one of the players choosing "odds or evens" followed by both players throwing up either 1 or 2 fingers. Maybe we could say the coin toss throws in an extra layer of randomness, whereas a game that relies completely on human decision could be more easily exploited (e.g. a really rotten "rock paper scissors" player could be beaten every time). Is there another answer to Ben's question that isn't possibly subject to infinite coin tosses? phyti01-12-11, 02:18 PMThe NY Times Science Section (11 January 2011) has an article relating to the testing of a coin suspected of being biased in favor of Heads. It discusses the following hypothetical results. A coin is tossed 1000 times & lands heads 527 times. A true coin would result in 526 or fewer Heads about 95.3% of the time. 5% is usually considered the cutoff indicating a statistically significant result. The above would be traditionally viewed as indicating that the coin is biased. ... If a coin tossed 1000 times resulted in 1000 heads, would it be biased? If yes then why. RJBeery01-12-11, 02:22 PMNo, the coin is fair. Stryder is tossing it from beneath a sufficiently bright light source. ;) przyk01-12-11, 06:47 PMMaybe we could say the coin toss throws in an extra layer of randomness You could, though remember you're doing this in the first place because the coin is biased. Your idea could dampen that bias, but it wouldn't eliminate it unless the players themselves made perfectly unbiased random choices. Is there another answer to Ben's question that isn't possibly subject to infinite coin tosses? What's wrong with the idea kurros suggested? Granted there's no guaranteed upper bound on the number of coins you'd have to toss, but in practice it should produce a result reasonably fast unless the coin is heavily biased. Personally I doubt it's going to be possible to come up with a protocol much better than the one he suggested that actually uses the coin as the source of randomness. Actually, here's a shot at a proof (by contradiction) of that: suppose I have a general protocol that could always produce a perfectly fair result in a guaranteed finite number n(p) of coin tosses, where I'm allowing that number to depend on the probability p that a single toss of the coin produces the result "heads". More formally, I can think of a protocol as a function f_p, intended for use if the coin has bias p, that associates either 0 or 1 to each binary string of length n(p) (representing the possible sequences of n(p) coin tosses).
Now, the problem is that there are an uncountable infinity of possible values of p, but only a countable infinity of possible values of n. That means that at least some values of n are going to have to be shared by different values of p. Even there, for a fixed number n of coin tosses, there are only a finite number of possible protocols (specifically, 2^{2^n} of them), so there will exist protocols that have an (uncountably) infinite number of biases p associated with them. Now take such a protocol f, that has an infinite number of biases p associated with it, and say it employs n coin tosses. Given f and p, the probability of obtaining the result "1" is P(1) \,=\, \sum_{ \{ a_{k} \} } f(a_{1},\, \ldots,\, a_{n}) \, p^{\sum_{k=1}^{n} a_{k}} \, (1-p)^{n - \sum_{k=1}^{n} a_{k}} where the outer sum is over all the possible binary sequences where each a_k takes on the values 0 or 1. If the protocol works, then I'm claiming that P(1) = 1/2 for this fixed protocol f for the uncountably infinite number of biases p this protocol is supposed to work for. Or stated differently, I'm claiming the finite polynomial \sum_{ \{ a_{k} \} } f(a_{1},\, \ldots,\, a_{n}) \, p^{\sum_{k=1}^{n} a_{k}} \, (1-p)^{n - \sum_{k=1}^{n} a_{k}} \,-\, \frac{1}{2} has uncountably many roots. Of course, a finite polynomial of degree n has at most n real roots, so this isn't possible. Conclusion: if you have a biased coin that has a probability p of producing "heads" when tossed, there are necessarily values of p for which it is impossible to simulate a perfectly fair coin toss in a guaranteed finite number of tosses of the biased coin. (It's also obvious that these values of p will include all the transcendental numbers.) Pete01-13-11, 06:27 AMHere's a somewhat related, pretty easy interview question, slightly rephrased. (I got it right.) Suppose you have a coin which you know is not fair. Using this unfair coin, and only this coin, how can you simulate a toss from a fair coin? Is this a lateral thinking type question? Is there an answer for the extreme case: P(head)=1? Does "using... only this coin" mean you can't mark the coin (you can use your hands, obviously, so why not the marker pen I usually carry)? RJBeery01-13-11, 09:20 AMPersonally I doubt it's going to be possible to come up with a protocol much better than the one he suggested that actually uses the coin as the source of randomness. Actually, here's a shot at a proof (by contradiction) of that: suppose I have a general protocol that could always produce a perfectly fair result in a guaranteed finite number n(p) of coin tosses, where I'm allowing that number to depend on the probability p that a single toss of the coin produces the result "heads". More formally, I can think of a protocol as a function f_p, intended for use if the coin has bias p, that associates either 0 or 1 to each binary string of length n(p) (representing the possible sequences of n(p) coin tosses). Now, the problem is that there are an uncountable infinity of possible values of p, but only a countable infinity of possible values of n. That means that at least some values of n are going to have to be shared by different values of p. Even there, for a fixed number n of coin tosses, there are only a finite number of possible protocols (specifically, 2^{2^n} of them), so there will exist protocols that have an (uncountably) infinite number of biases p associated with them. Now take such a protocol f, that has an infinite number of biases p associated with it, and say it employs n coin tosses.
Given f and p, the probability of obtaining the result "1" is P(1) \,=\, \sum_{ \{ a_{k} \} } f(a_{1},\, \ldots,\, a_{n}) \, p^{\sum_{k=1}^{n} a_{k}} \, (1-p)^{n - \sum_{k=1}^{n} a_{k}} where the outer sum is over all the possible binary sequences where each a_k takes on the values 0 or 1. If the protocol works, then I'm claiming that P(1) = 1/2 for this fixed protocol f for the uncountably infinite number of biases p this protocol is supposed to work for. Or stated differently, I'm claiming the finite polynomial \sum_{ \{ a_{k} \} } f(a_{1},\, \ldots,\, a_{n}) \, p^{\sum_{k=1}^{n} a_{k}} \, (1-p)^{n - \sum_{k=1}^{n} a_{k}} \,-\, \frac{1}{2} has uncountably many roots. Of course, a finite polynomial of degree n has at most n real roots, so this isn't possible. Conclusion: if you have a biased coin that has a probability p of producing "heads" when tossed, there are necessarily values of p for which it is impossible to simulate a perfectly fair coin toss in a guaranteed finite number of tosses of the biased coin. (It's also obvious that these values of p will include all the transcendental numbers.) That's pretty impressive przyk. I'm not sure where the line of pedantry lies but since we're trying to relate abstract theory to the real world, and given the finite number of quantum states of a coin, are we sure of the following? Now, the problem is that there are an uncountable infinity of possible values of p Putting it another way, if p were limited to rational values are we still unable to come up with a method that is guaranteed to terminate? When I asked the question I suspected the answer was no because of the case when the coin produces nothing but H, for example (Pete mentioned this also). Notice that this also breaks Kurros' method. This is why I suggested what I did, even if it was tongue-in-cheek, because my method survives the limiting case even if it's a bit of a hack :D phyti01-13-11, 12:19 PMMy point relates to the article cited in post 1. That the results differ by 1 from the 'ideal' statistical pattern is no reason to suspect bias (maybe in a paranoid world). The randomness of coin tosses, even assuming 'ideal' coins, is time independent, therefore you can't predict when any specific pattern occurs. If a lottery produced the same numerical sequence x days in a row, many would suspect the results are manipulated. What they're really saying, based on a misunderstanding of randomness and a false sense of determinism, is 'it can't happen in my lifetime'. An appeal to long periods of time, and the notion of a 'rare' event. This is based on an actual event many years ago involving a state lottery and the reaction by a group of university students, which was published in the local city paper. For quality control purposes, if you plot the results of a process in real time using sampling, and notice changes that form a trend, you can then isolate the factors to determine the cause. That would be my suggested method for resolution if needed. In a world history with dishonesty at so many social levels, suspicion will usually require/demand it. Dinosaur01-13-11, 04:31 PMPhyti: Do you really need an answer to the following? If a coin tossed 1000 times resulted in 1000 heads, would it be biased? If yes then why.I would call it biased. 1000 tosses of a true (or unbiased coin) would be expected to result in approximately 500 Heads & 500 Tails. A true coin would occasionally favor Heads over Tails (or vice versa) by 520 (or more) to 480 (or less). Perhaps 1000 to zero might be described by some word other than biased (Like 2-Headed), since 600 Heads to 400 Tails would indicate a definite bias in favor of Heads.
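As a side note, kurros's discard procedure (the classic von Neumann trick) is easy to simulate. Here is a small Python sketch; the bias p = 0.7 is just an assumed example and the function names are mine. It illustrates both halves of the exchange above: the output is fair for any 0 < p < 1 because P(HT) = P(TH) = p(1-p) for independent tosses, but the loop has no guaranteed finite bound, and it never terminates at all in the limiting cases p = 0 and p = 1 that Pete and RJBeery raise.

import random

def biased_toss(p):
    # One toss of a coin with heads probability p
    return 'H' if random.random() < p else 'T'

def von_neumann_fair_toss(p):
    # Toss twice; HT counts as heads, TH as tails, HH and TT are discarded
    while True:
        pair = biased_toss(p) + biased_toss(p)
        if pair == 'HT':
            return 'H'
        if pair == 'TH':
            return 'T'

results = [von_neumann_fair_toss(0.7) for _ in range(100_000)]
print(results.count('H') / len(results))  # ~0.5 despite the biased coin

Each round succeeds with probability 2p(1-p), so the expected number of double-tosses per fair bit is 1/(2p(1-p)): about 2.4 for p = 0.7, but roughly 50 for p = 0.99, which is why a heavily biased coin makes the procedure slow.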
przyk01-13-11, 08:38 PMgiven the finite number of quantum states of a coin Er, why should a coin only have a finite number of quantum states? (You might have meant "discrete" rather than "finite". Even there, I'm not sure I'm convinced.) Putting it another way, if p were limited to rational values are we still unable to come up with a method that is guaranteed to terminate? Intuitively I'd expect it to be possible for rational biases. I haven't sat down to try to figure out a specific method, though for a rational bias of p/q (in simplest terms) I'd expect the number of necessary coin tosses to be q or some multiple or power of it, which would be consistent with the protocol breaking for a perfectly biased coin. (Thinking about perfectly biased coins is also what eventually got me to the impossibility proof I posted earlier. It seems just about everyone had that idea. Obviously a protocol that still works if the coin is perfectly biased isn't actually using the coin as the source of randomness.) przyk01-14-11, 05:48 AMActually: Intuitively I'd expect it to be possible for rational biases. I take that back. It's clearly impossible for P(Heads) = 1/3, since the probability of any sequence of n tosses will always be (some integer)/3^n, and there will be no way of adding these to get 1/2. RJBeery01-14-11, 08:50 AMEr, why should a coin only have a finite number of quantum states? (You might have meant "discrete" rather than "finite". Even there, I'm not sure I'm convinced.) Because I'm presuming that we are disallowing coins of infinite size. I take that back. It's clearly impossible for P(Heads) = 1/3, since the probability of any sequence of n tosses will always be (some integer)/3^n, and there will be no way of adding these to get 1/2. We don't need to get to 1/2, we just need two distinguishable sequences that are of equal probability. That's why I said How about choosing any sequence with parity? (e.g. TH vs HT, HHTT vs HTHT vs THTH vs TTHH...) You'll note that Kurros' suggestion is just a special case of this. RJBeery01-14-11, 10:15 AMJust pondering other ways to do this. If P(H) = 1/3, toss the coin 4 times. P(even # T's) = P(odd # of T's). In this case, 0 T's doesn't count as even, you must start over. :( RJBeery01-14-11, 10:22 AMActually, przyk, I think your proof is valid so I stand corrected. If we could add the probabilities to 1/2 we wouldn't have a "start over" condition, and such a condition is exactly why we cannot guarantee termination. RJBeery01-14-11, 10:58 AMI just thought of a trivial solution to Ben's problem. We know that the coin is not fair, but we apparently don't know in which manner. He asked how we could simulate "a [singular] toss from a fair coin". Probability is based on knowledge (or rather lack of it). Since we don't know the nature of the coin's bias we can fairly simulate a single toss by simply doing it and choosing H or T in the usual manner...our lack of knowledge guarantees we would be correct 1/2 of the time! Ben, you haven't responded with the "correct" answer. Have we already covered it? phyti01-15-11, 12:17 PMPhyti: Do you really need an answer to the following?I would call it biased. 1000 tosses of a true (or unbiased coin) would be expected to result in approximately 500 Heads & 500 Tails. A true coin would occasionally favor Heads over Tails (or vice versa) by 520 (or more) to 480 (or less).
Perhaps 1000 to zero might be described by some word other than biased (Like 2-Headed), since 600 Heads to 400 Tails would indicate a definite bias in favor of Heads. The sequence is an allowable one, thus it violates nothing. A toss of 3 coins resulting in HHH is not biased for 1 toss. The problem is the statistical distribution (curve) is being used to predict the sequence, which is independent of time. The curve cannot predict when any specific sequence will appear. In the example the next 1000 tosses might be all tails, giving an average of 1/2. You don't know how long it will take to produce a distribution that matches the curve. In summary, a 'rare' random event can happen anytime! kurros01-15-11, 06:22 PMI just thought of a trivial solution to Ben's problem. We know that the coin is not fair, but we apparently don't know in which manner. He asked how we could simulate "a [singular] toss from a fair coin". Probability is based on knowledge (or rather lack of it). Since we don't know the nature of the coin's bias we can fairly simulate a single toss by simply doing it and choosing H or T in the usual manner...our lack of knowledge guarantees we would be correct 1/2 of the time! Ben, you haven't responded with the "correct" answer. Have we already covered it? Haha, how Bayesian of you, I like it. Too bad it only works once :p. Or once on each person anyway :). phyti01-16-11, 03:47 PMThe example of 1000 heads in 1000 tosses could be a problem in perspective. With 1000 tosses as the population, the sample could be 10 sets of 100 tosses, which may appear suspicious. If you consider the sample as a subset of a larger population, then it's less significant. Given 1000 tosses as one set, there are 2^1000 possible outcomes. The first trial just happened to be the least frequent, and there is no predicting the order of the outcomes. If the trials were continued the results should approximate the normal distribution. kurros01-16-11, 04:22 PMThe example of 1000 heads in 1000 tosses could be a problem in perspective. With 1000 tosses as the population, the sample could be 10 sets of 100 tosses, which may appear suspicious. If you consider the sample as a subset of a larger population, then it's less significant. Given 1000 tosses as one set, there are 2^1000 possible outcomes. The first trial just happened to be the least frequent, and there is no predicting the order of the outcomes. If the trials were continued the results should approximate the normal distribution. Not sure what you are trying to say here. Sure, it is possible that the coin is fair and you still flip it heads 1000 times in a row, but as you say this only happens in a (0.5)^1000 fraction of such experiments. That is pretty damn unlikely. Sure, it almost certainly won't happen again if the coin is actually fair, but the fact that it happened once is fairly excellent evidence for a biased coin. It is not a guarantee though, maybe this was your argument. RJBeery01-16-11, 05:06 PMThis thread is delving into the basis of probability theory, which I've always found dubious. "Absolute" probability really only exists in idealized settings, certainly not in the real world. We can only postulate that a coin is fair or biased.
In the real world this is something that we could never be sure of; we could only test a coin and come to conclusions with levels of certainty that never reached 100%...in other words, omnipotent "granting" of knowledge that we could never actually have in the real world is the only way probability theory allows us to make any absolute claims about the subject. But if we're in the business of omnipotent granting of knowledge to ourselves, why not include other information that isn't practically available to us such as the precise manner in which the coin will be tossed? Either way it's an arbitrary assignment of information rather than something intrinsic to the coin. My conclusion is that intrinsic (or "absolute") probability is meaningless, and is replaced by Bayesian (or "relative") probability. The confusion over the Monty Hall problem is a great example of the difference between the two. Dinosaur01-16-11, 07:20 PMConsider your real reaction to 1000 tosses resulting in 1000 heads. Assume that you did the tossing. Now you are asked to toss the coin one more time and are forced to bet your net worth on the result of toss 1001. If you do not decide in 60 seconds, you will be shot. Would you bet on Heads? How confident would you be in your choice? Would anyone posting here consider betting on Tails? RJBeery01-16-11, 07:26 PMIf I was presented with the above scenario and given "even odds" on my net worth as to the outcome of the next coin toss I would seriously consider the possibility that I was being had and that the next toss would in fact be Tails. Dinosaur01-17-11, 08:20 PMRJ Beery: Note the following phrases in my recent post. . . . . Assume that you did the tossing . . . . you are asked to toss the coin one more time . . . . I expected those phrases to preclude thoughts like the following. If I was presented with the above scenario and given "even odds" on my net worth as to the outcome of the next coin toss I would seriously consider the possibility that I was being had and that the next toss would in fact be Tails. You must be seriously paranoid to think you are being scammed when you are doing the tossing. BTW: I would not be surprised if some experiments showed strong evidence that coins, dice, roulette wheels, etc. did not exactly match the calculated probabilities in textbooks. I would be astonished if the experimental probabilities differed from textbook calculations enough to make casino owners nervous. I would still consider such processes to be random, with the experimental results supporting the notion that the textbook probabilities were not matched due to slight biases in the coins, dice, wheels, et cetera. Experimental verification of textbook probabilities might not be possible for a given coin, die, or wheel. For a coin, it could take perhaps 1000 sets of 1000 tosses to test it. A million tosses is likely to cause enough wear & tear to change the physical properties of the coin. Since dice & roulette wheels have far more possible outcomes, more than a million trials might be required. Modern technology can verify the statistics resulting from various random quantum processes such as tunneling & radioactive decay. A gram molecular weight of a radioactive substance contains about 6*10^23 molecules. If the half-life is a year, half the molecules would decay in a year. This would be 3*10^23 molecules. With that many trials the statistics of decay can be verified with considerable precision.
While the randomness of coins, dice, et cetera is often disputed, the statistics of random quantum level processes are well supported by considerable experimental evidence. RJBeery01-18-11, 11:05 AMDinosaur, I was being facetious. I'm not saying predictive capability has no value, I'm saying it's necessarily an inexact science (or rather, it cannot provide answers with certainty because those answers don't exist). Experimental verification of textbook probabilities might not be possible for a given coin, die, or wheel. For a coin, it could take perhaps 1000 sets of 1000 tosses to test it. A million tosses is likely to cause enough wear & tear to change the physical properties of the coin. Since dice & roulette wheels have far more possible outcomes, more than a million trials might be required. There is no number of tests that could be used to "verify" textbook probabilities, and it isn't due to wear and tear or some practical barrier. In my opinion it's because intrinsic probability doesn't exist. Rather, probability is a measure of our knowledge of a system, and knowledge is inherently subjective. Modern technology can verify the statistics resulting from various random quantum processes such as tunneling & radioactive decay. A gram molecular weight of a radioactive substance contains about 6*10^23 molecules. If the half-life is a year, half the molecules would decay in a year. This would be 3*10^23 molecules. With that many trials the statistics of decay can be verified with considerable precision. I've already given you this link (http://news.stanford.edu/news/2010/august/sun-082310.html) which suggests that radioactive decay is subject to external conditions. The Zeno Effect (http://en.wikipedia.org/wiki/Zeno_effect) is more proof. If a process is subject to external conditions (e.g. distance between the Earth and the Sun, whether or not it's being observed, etc) then the behavior of that process is not intrinsic. While the randomness of coins, dice, et cetera is often disputed, the statistics of random quantum level processes are well supported by considerable experimental evidence. My point above, which I made to you many months ago and was apparently not absorbed, refutes this belief. phyti01-18-11, 07:43 PMdinosaur-30 Would you bet on Heads? How confident would you be in your choice? If a fair coin, it's still 1/2 for either. Each toss is independent of the others. dinosaur-32 Experimental verification of textbook probabilities might not be possible for a given coin, die, or wheel. For a coin, it could take perhaps 1000 sets of 1000 tosses to test it... This seems closer to the truth. The ideal distribution applies to any number of events greater than the number of possible outcomes. Because it can't predict when any specific event (or sequence) will occur, the results can wander around the ideal, and may never match it regardless of the number of trials. You must be seriously paranoid to think you are being scammed when you are doing the tossing. Does that mean he doesn't trust himself? Dinosaur01-22-11, 11:57 PMRJ Beery: The Peter Sturrock article (link in a recent post of yours) was very interesting. It was dated 25 August 2010. I read the entire article at that link and did a lot of web searching for related articles. Did you read (rather than scan) the article and/or do other relevant searches? Peter Sturrock seems to have serious academic credentials, indicating that he is not a crackpot.
BTW: I was able to find some articles indicating that similar data has been found to support the Sturrock article. There is also some naysaying by establishment physicists suggesting possible measurement errors. The difference in decay rate is 0.1% (about one part in 1000), a figure oddly missing from the article at the link you provided. I was not able to find any indication of full acceptance (by mainstream physicists) of the claim that solar activity affects decay rates. The naysayers might merely be reluctant to consider the conclusion since it requires acknowledging the existence of a currently unknown particle or accepting that neutrinos can affect radioactive nuclei. Note that only one in millions (billions?) of neutrinos has any measurable effect on matter. Those that neither accept nor deny the conclusion are probably waiting for further experiments confirming or refuting the conclusion. If solar activity is shown to affect radioactive decay rates, it will indicate a gap in our knowledge of the nature of solar radiation. It will not indicate that radioactive decay is a deterministic process rather than a probabilistic (id est: random) process. A difference in decay rate (if definitely established) only indicates that the probabilistic data has a different mean. The claim that radioactive decay is not affected by environmental conditions is not quite true. It will not be affected by: heating to 1000 K or cooling to minus 100 K (I do not know the effects, if any, of more extreme changes in temperature); hitting it hard with a hammer; or chemical reactions. Firing high speed neutrons or other nuclear particles at a radioactive nucleus will speed up the decay rate. I think that the laser implosion techniques used for hot fusion reactors would increase the decay rate. Anyone with a knowledge of atomic bombs knows that the shape/density of a radioactive substance will change its decay rate. More than a critical mass of U235 (a workable metal) in the form of a thin flat sheet will decay at the normal rate; it will decay faster if the shape is a rectangular parallelepiped with one long dimension; if a solid sphere, it will decay exponentially fast (atom bombs are ignited by implosive forces acting on a spongy sphere, compressing it into a dense sphere). We have been antagonists on the issue of randomness in several threads. Arguing with you on this issue seems a bit like arguing about religious dogma with a theist. You have what appears to be a faith-based belief in a reality governed by deterministic cause/effect laws. This belief cannot be valid if random processes exist: Ergo, you deny the possibility of random processes. Aside from radioactive decay consider the following. When one particle at a time is released in a single or double slit experimental apparatus, the pattern resulting on a detecting screen is built in what appears to be a random order. Repeated experiments do not show any pattern in the building of the final image (either a circle with radially symmetric brightness for one slit or a series of light & dark areas parallel to two slits). Experiments with polarizing filters result in random data. The data associated with quantum tunneling is probabilistic. I expect that an expert in Quantum Theory could provide more examples. I assume you know that I do not consider myself an expert.
The Heisenberg Uncertainty Principle refutes deterministic causality & strongly implies a probabilistic reality at the quantum level which supports the classical reality of our senses. This suggests that the macro level is also probabilistic rather than deterministic/causal. Mainstream science has accepted the existence of random processes for over 50 years. To refute the mainstream view, one must find a deterministic explanation for the random data produced by various quantum level processes. You cannot refute the mainstream view by claiming that randomness is an admission of ignorance of the pertinent deterministic laws. This view is merely stating an unsupported belief that future scientists will discover deterministic laws. To support a belief in determinism, you must find a way to refute the Heisenberg Uncertainty Principle. It is reasonable to believe that we will have laws of physics which are qualitatively better and/or quantitatively more precise than our current laws. It is wild speculation to believe that currently well supported concepts will be completely overturned without some supporting evidence and/or cogent arguments. BTW: I have the following books relating to quantum theory in my library. Quantum Physics in a Nutshell (Mahan). I have not read this one due to its being quite advanced. I hope to some day have the time and determination to study it (a project which will require 40-60 or more hours of concentrated effort). Compendium of Quantum Physics. I refer to this a bit like one would use an encyclopedia as a starting place for further investigation. The following I have read & in several cases reread more than once.
Quantum Reality (Herbert)
Schrodinger’s Kittens (Gribbin). I lost his previous book (Schrodinger’s Cat in the title).
Uncertainty (Lindley)
The Quantum Zoo (Chown)
The Meaning of Quantum Theory (Baggott)
How Physics Confronts Reality (R. G. Newton).
I wonder how many books on the subject you have read. Have you read enough to get a feeling for the counter-intuitive weirdness of quantum level reality? EG: Entangled particles (spooky non-local effects), Bose/Einstein condensates, seemingly discontinuous particle paths, quantum tunneling, polarized light phenomena, simple & complex 2-slit experiments, and other anomalies. RJBeery01-23-11, 07:39 PMArguing with you on this issue seems a bit like arguing about religious dogma with a theist. You have what appears to be a faith-based belief in a reality governed by deterministic cause/effect laws. This belief cannot be valid if random processes exist: Ergo, you deny the possibility of random processes. Whether "true" randomness exists is not a question which can be definitively answered by science at this time so taking any position on the topic will require subjectivity. It's cute how you color me the theist, however, while apparently you consider yourself to be guided by laser-cut logic and reason. A difference in decay rate (if definitely established) only indicates that the probabilistic data has a different mean. You are confusing probabilistic behavior with randomness (again). I could make the same statement about the biased coins we've been discussing or even a PRNG. You cannot refute the mainstream view by claiming that randomness is an admission of ignorance of the pertinent deterministic laws. This view is merely stating an unsupported belief that future scientists will discover deterministic laws. If randomness is not an admission of ignorance, could you please define it?
The ultimate example of this is the Zeno effect...continual observation of a radioactive material stops its decay completely. In other words, when our ignorance of the material's state is removed, the randomness of its decay process is also removed. I wonder how many books on the subject you have read. Have you read enough to get a feeling for the counter-intuitive weirdness of quantum level reality? EG: Entangled particles (spooky non-local effects), Bose/Einstein condensates, seemingly discontinuous particle paths, quantum tunneling, polarized light phenomena, simple & complex 2-slit experiments, and other anomalies. Ahh yes. When we were discussing PRNGs you asked for my qualifications in the area. When I supplied them under protest (because my logic should stand on its own, as should yours) you DOUBTED them. Now you want to know whose library is bigger? How far can you pee? How big is your house? Why don't you just post your IQ test results and to whomever the highest score belongs shall be declared the victor in this debate?:mufc: przyk01-23-11, 08:46 PMWhether "true" randomness exists is not a question which can be definitively answered by science at this time Er, nothing can ever be definitively answered by science. No matter what we discover, there could always turn out to be a deeper underlying reality. The best we can do is talk about the most fundamental picture of the world we have at any given time. Saying that our fundamental picture might be wrong is redundant unless you've got good evidence there's something wrong with it. Your "strong belief" that randomness is only apparent and due to our ignorance isn't evidence. If randomness is not an admission of ignorance, could you please define it? The ultimate example of this is the Zeno effect...continual observation of a radioactive material stops its decay completely. In other words, when our ignorance of the material's state is removed, the randomness of its decay process is also removed. You're presenting a false dichotomy which ignores a third possibility: nature can be fundamentally non-deterministic but still have influenceable statistical properties. Quantum mechanics is and always has been of this third variety: in most cases, the probabilities of various outcomes occurring depend on certain (often controllable) parameters, and the probabilities can even become deterministic in some cases. For a simple example, if you send a photon through a polarization filter then the probability that the photon gets through depends on the angle between the photon's polarization and the filter. In the special case where they're perfectly aligned the result is deterministic: the photon always gets through. The Zeno effect is another of the special cases where QM happens to predict deterministic behaviour. Being a prediction of QM, it doesn't work as an argument against the QM worldview. RJBeery01-23-11, 09:45 PMEr, nothing can ever be definitively answered by science. Er, definitive no's arise in science every day; that's how we make progress. I'm saying that our ability to claim whether or not radioactive decay (or some portion thereof) occurs intrinsically is simply something that we cannot answer at this time, except to the extent that we can influence it with external factors. This fact alone is enough for me to believe that it is not an intrinsically random process. You're presenting a false dichotomy which ignores a third possibility: nature can be fundamentally non-deterministic but still have influenceable statistical properties.
Dinosaur and I have somewhat of a history on this subject so you're probably not really up to speed on what we're even discussing. Dinosaur endows radioactive decay (along with most or all quantum processes) with "true randomness". This is apparently a sacred property whose aggregate behavior could never be reproduced by any deterministic process such as a PRNG. I disagree with him strongly on both of these points. Anyway, if a quantum process is able to be influenced by externalities then it is neither wholly intrinsic nor wholly random (which was Dinosaur's stance), by definition. This is where the subjectivity comes into play: if we can explain macro-level physical behavior with physical laws, and we can explain quantum-level processes under certain "special" conditions, should we not presume that the rest of their behavior is also explainable? This is logical to me. You can choose to think otherwise with Dinosaur if you wish, but why don't the two of you come up with a definition of random that isn't related to knowledge of a system before we discuss this further... przyk01-24-11, 12:03 AMEr, definitive no's arise in science every day But not for something as general as whether nature is deterministic or random. Suppose we came up with a deterministic model for radioactive decay that seemed to work. How would you know that model wasn't an approximation to some deeper reality? And what would be preventing that deeper reality from being non-deterministic? Dinosaur endows radioactive decay (along with most or all quantum processes) with "true randomness". It doesn't make much sense to analyze radioactive decay. Radioactive nuclei are macroscopic particles composed of hundreds of nucleons that interact via a complicated force that, as far as I know, no-one knows how to describe accurately. It's complicated enough that we're going to get apparently random behaviour no matter the details of the underlying interactions. You (and Dinosaur) have picked a really bad example to discuss: nuclei are not fundamental particles and nuclear decay is not a fundamental interaction. You may as well be talking about dice throws. I disagree with him strongly on both of these points. Are you disagreeing because you know something about the nature of quantum processes that the rest of us don't, or because you just really really want nature to be deterministic and you hope determinism will be restored to mainstream physics some day? This is where the subjectivity comes into play: if we can explain macro-level physical behavior with physical laws, and we can explain quantum-level processes under certain "special" conditions, should we not presume that the rest of their behavior is also explainable? No, not necessarily, especially since all the examples you've given are predictions of the theory whose worldview you're disagreeing with. And I wouldn't use the word "explain". Quantum mechanics makes predictions in the form of probabilities, and in some cases it predicts that the probability of an event is 1. That happens when it predicts that a system will evolve into a state which happens to be an eigenstate of the observable you intend to measure. There is no more or less "explaining" going on in that case than for any other QM prediction. why don't the two of you come up with a definition of random that isn't related to knowledge of a system before we discuss this further... Why would I want to subject myself to that condition?
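A short numerical aside on przyk's polarization-filter example above, simulated, of course, with a pseudorandom generator: the standard single-photon rule (Malus's law) gives a transmission probability of cos^2(theta), with theta the angle between the photon's polarization and the filter axis, so the statistics are influenceable through theta and become deterministic in the aligned case. This Python sketch is purely illustrative and the function name is made up.

import math
import random

def photon_passes(theta):
    # Single-photon transmission probability: cos^2(theta) (Malus's law)
    return random.random() < math.cos(theta) ** 2

# Aligned filter (theta = 0): deterministic, every photon gets through
print(all(photon_passes(0.0) for _ in range(10_000)))  # True

# Filter at 60 degrees: irreducibly statistical, P(pass) = 0.25
passes = sum(photon_passes(math.radians(60)) for _ in range(100_000))
print(passes / 100_000)  # ~0.25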
Spectrum01-24-11, 04:18 AMIsn't there a difference in weight between the two patterns on the coin's faces (the heads or the tails)? RJBeery01-24-11, 09:09 AMBut not for something as general as whether nature is deterministic or random. Suppose we came up with a deterministic model for radioactive decay that seemed to work. How would you know that model wasn't an approximation to some deeper reality? And what would be preventing that deeper reality from being non-deterministic? True but Reductionism can't continue ad infinitum, right? It doesn't make much sense to analyze radioactive decay. Radioactive nuclei are macroscopic particles composed of hundreds of nucleons that interact via a complicated force that, as far as I know, no-one knows how to describe accurately. It's complicated enough that we're going to get apparently random behaviour no matter the details of the underlying interactions. You (and Dinosaur) have picked a really bad example to discuss: nuclei are not fundamental particles and nuclear decay is not a fundamental interaction. You may as well be talking about dice throws. This is why I commented that you probably don't have sufficient context to participate in this discussion. This is from an older PRNG thread: From http://mathworld.wolfram.com/PseudorandomNumber.html “A slightly archaic term for a computer-generated random number. The prefix pseudo- is used to distinguish this type of number from a "truly" random number generated by a random physical process such as radioactive decay.” MathWorld is a highly respected site. Note the implication that radioactive decay is a truly random process. And he didn't say it one time. He said it many times with authority. The core of our discussion is really centered around randomness, PRNG's and probability rather than quantum behavior, but as I said radioactive decay was chosen as the sacred process which was unassailable by deterministic analysis. As to why I don't believe in "true" randomness, you're right that it stems from my personal QM interpretation. why don't the two of you come up with a definition of random that isn't related to knowledge of a system before we discuss this further... Why would I want to subject myself to that condition? Umm, because if we're discussing the existence of TRUE randomness then it is intrinsic to nature and not a subjective phenomenon based on a knowledge set. It seems cowardly to defend a word like "random" while at the same time refusing to define it. If your definition of random includes the concept of lack of knowledge of a system then how can you disagree with me? przyk01-24-11, 01:06 PMTrue but Reductionism can't continue ad infinitum, right? Why not? There is no point at which we'll be able to know for certain that we've discovered the most fundamental laws of nature. We will never come up with a theory that isn't open to being disproved. As to why I don't believe in "true" randomness, you're right that it stems from my personal QM interpretation. Can you show that this interpretation reproduces the statistical predictions of QM? It seems cowardly to defend a word like "random" while at the same time refusing to define it. I'm not trying to defend anything. I'm merely objecting to the rush you seem to be in to do away with "randomness".
I don't need to be defending it to criticise your arguments: saying science can't definitively answer whether true randomness exists "at this time" is redundant because it never will, and pointing out that QM makes some deterministic predictions or that its statistical predictions are subject to various influences doesn't necessarily suggest everything should be perfectly deterministic. Maybe nature is random to the degree QM says it is where it says it is, for example. If your definition of random includes the concept of lack of knowledge of a system then how can you disagree with me? When you say that randomness is due to lack of our knowledge of the state of a system, you are assuming that the additional state information we're ignorant of actually exists. There's plenty of room for disagreement and a definition of "true randomness" in there. RJBeery01-24-11, 01:45 PMThere is no point at which we'll be able to know for certain that we've discovered the most fundamental laws of nature. We will never come up with a theory that isn't open to being disproved. If we could model the world accurately down to the theoretical limit of knowledge available (i.e. Planck scale) then any "deeper physics" that may or may not behave non-deterministically would have as much relevance to Science as discussions of Unicorns. In other words, it is a world that would not exist for Science whether or not it "actually existed". If you want me to agree that a definitive "yes" is impossible in science for a given theory then of course I would. When I said "the question cannot be definitively answered by science at this time" it was a poor choice of words - what I meant was that science doesn't currently have the tools to analyze the question. Can you show that this interpretation reproduces the statistical predictions of QM? As far as I can tell, yes. I may start a thread on it at some point in the near future and you'll be welcome to critique away. When you say that randomness is due to lack of our knowledge of the state of a system, you are assuming that the additional state information we're ignorant of actually exists. There's plenty of room for disagreement and a definition of "true randomness" in there. Agreed 100% but I endow the world with hard Reality, which means the information exists whether we know about it or not. Pointing out that there are other positions to take on this topic is fruitless because I've already said it's a subjective issue. przyk01-24-11, 02:36 PMIf we could model the world accurately down to the theoretical limit of knowledge available (i.e. Planck scale) then any "deeper physics" that may or may not behave non-deterministically would have as much relevance to Science as discussions of Unicorns. Key point bolded: any theory that claims there's a "theoretical limit of knowledge available" is open to being disproven. Are you really going to claim a theory is "definitively" fundamental just because it says it is??? Agreed 100% but I endow the world with hard Reality, which means the information exists whether we know about it or not. God...? RJBeery01-24-11, 03:03 PMKey point bolded: any theory that claims there's a "theoretical limit of knowledge available" is open to being disproven. Are you really going to claim a theory is "definitively" fundamental just because it says it is??? Point taken but we already agree that nothing is definitive in Science. 
If we reach a point where we are unable to detect any deeper underlying Physics, have a plausible theory to explain why that is, and can explain everything that we can observe, precisely how do the Unicorns affect us? The second half of the 19th century had people making the claim that Physics was essentially "solved" except for some minute details...until Relativity came along, then QM, etc. What happens when NOTHING ELSE comes along? If we were ever able to "solve Physics" in this manner we would eventually stop looking...at which point any deeper Reality is irrelevant. God...? Yes, my son, and I come straight from the Old Testament which means I'm not above smiting out of annoyance. :spank: przyk01-24-11, 04:00 PMPoint taken but we already agree that nothing is definitive in Science. If we reach a point where we are unable to detect any deeper underlying Physics, have a plausible theory to explain why that is, and can explain everything that we can observe, precisely how do the Unicorns affect us? If we reached such a point and all we had was a stochastic theory, would you stop there? RJBeery01-24-11, 04:21 PMIf we reached such a point and all we had was a stochastic theory, would you stop there? No, I don't believe I would. Or rather, I hope I wouldn't. I'd like to think that I'm a critical thinker, and if our ultimate explanation for certain aspects of Nature was "it is simply inexplicable" it would be an invitation for analysis to me. What about you? "Accepting the inexplicable" is how I characterize the advocates of the Copenhagen Interpretation... Dinosaur01-24-11, 08:58 PMRJ Beery: From your last post. "Accepting the inexplicable" is how I characterize the advocates of the Copenhagen Interpretation. The above implies that you disagree with the Copenhagen interpretation. What interpretation do you advocate? RJBeery01-25-11, 01:20 PMWhat interpretation do you advocate? I'm not sure my QM interpretation has a name, I really don't know. I wasn't really seeking discussion on it at this point but a very short summary is:
Relativity + Hard Reality = Block Time
Block Time + Principle of Least Action = Determinism*
*My definition of Determinism differs a little bit from the usual one but, basically, there's no wiggle room for multiple "futures". przyk01-25-11, 02:12 PMNo, I don't believe I would. Or rather, I hope I wouldn't. I'd like to think that I'm a critical thinker You're aware that simply choosing to believe something doesn't qualify as critical thinking, right? and if our ultimate explanation for certain aspects of Nature was "it is simply inexplicable" it would be an invitation for analysis to me. Which might be reasonable - we'd prefer a deterministic model of reality to a non-deterministic one, all else being equal. But in the hypothetical situation I'm giving you, not all else is equal: a non-deterministic unified model of nature was discovered two centuries ago. It was accepted rather grudgingly by the scientific community, which was hoping for a deterministic model, but no-one had a better idea, and it has since survived several generations of new physics graduates and 200 years of increasingly precise experimental verification. Thirty years after it was first proposed, an ingenious person came up with an interpretation of the theory that restored determinism by postulating the existence of additional theoretically undetectable hidden variables (i.e. "unicorns"), and extra laws governing them that turned out to be quite convoluted.
The interpretation generated a lot of interest and maybe even a few variants, but otherwise never caught on as more than a curiosity. Question: in that case, do you still feel justified telling people on the internet that you "strongly believe" in determinism? What about you? In the above hypothetical situation? I'd see it as my duty to restore determinism to physics and generally be the fresh pair of eyes that sorted out the mess the physics community had gotten itself into and got it back on track. I'd do it by thinking for myself instead of just accepting what others told me. Then I'd graduate from high school and actually learn the theory in university. After a few years of thinking about the theory, manipulating it, and developing an intuitive feel for it on its own terms, I'd find the non-deterministic aspect of it didn't bother me so much after all. I'd work on other physics problems under the guidance of a great thesis advisor and start learning Lisp on the side. Adapted from a true story. RJBeery01-25-11, 04:57 PMIn the above hypothetical situation? I'd see it as my duty to restore determinism to physics and generally be the fresh pair of eyes that sorted out the mess the physics community had gotten itself into and got it back on track. I'd do it by thinking for myself instead of just accepting what others told me. Then I'd graduate from high school and actually learn the theory in university. After a few years of thinking about the theory, manipulating it, and developing an intuitive feel for it on its own terms, I'd find the non-deterministic aspect of it didn't bother me so much after all. I'd work on other physics problems under the guidance of a great thesis advisor and start learning Lisp on the side. You and I are very much alike, except I learned Lisp (among other languages) at the University and started thinking about Physics on the side. :) Also, your background should allow you to at the very least empathize with my desire to understand Physics; and by that, I mean study the lab experiments' results and understand it on my own terms rather than simply accepting the (sometimes incompatible) explanations. I don't know if you are aware of this but I'm back at the University taking Physics classes when my life allows me to. I actually fear the day that the parts of QM that bother me today no longer "bother me so much after all". I assume this comes from prolonged exposure, like a mathematician that becomes comfortable with infinity... Question: in that case, do you still feel justified telling people on the internet that you "strongly believe" in determinism? Justified? If my interpretation is not invalidated by the facts then the rest of it is a matter of aesthetics. What more justification do I need? If you don't like what my interpretation implies (regarding time, free will, etc) then you are free to choose another one. If there were a process that was somehow proven to be random (and since no one here will supply me with a definition, I'm going with "acausal") then Determinism goes away. Discussing randomness opens the door for actual debate though, as opposed to arguing over subjective choices in interpretations, which is why it interests me. Absane01-26-11, 01:37 PMHere's a somewhat related, pretty easy interview question, slightly rephrased. (I got it right.) Suppose you have a coin which you know is not fair. Using this unfair coin, and only this coin, how can you simulate a toss from a fair coin? Toss the coin twice. If the sequence is HT, count it as H.
If it's TH, count it as T. Otherwise, throw out HH and TT and toss again. What if you have some weird coin that when you toss it twice, you get this: P(HH) = 0.28, P(HT) = 0.33, P(TH) = 0.16, P(TT) = 0.23. What then? (A code sketch of the two-toss trick, and one way to handle this weird coin, appears a little further down.) przyk01-26-11, 01:43 PMAlso, your background should allow you to at the very least empathize with my desire to understand Physics; and by that, I mean study the lab experiments' results and understand it on my own terms rather than simply accepting the (sometimes incompatible) explanations. Usually you shouldn't have to do this. Knowing the established mainstream theory and having a general idea of the experimental domain over which the theory is confirmed is usually enough. Even if you disagree with the theory itself and your only aspiration is to supplant it, its mathematical formulation still acts as a convenient summary of all the observations you need to explain. I don't know if you are aware of this but I'm back at the University taking Physics classes when my life allows me to. Yes, you mentioned it in another thread. Incidentally, I'd imagine introductory QM is something you should be covering soon if you're in your second year. I actually fear the day that the parts of QM that bother me today no longer "bother me so much after all". I assume this comes from prolonged exposure. There's really nothing to fear. It's not like you'll accept just anything after "prolonged exposure". A logical inconsistency will remain a logical inconsistency no matter how many times you're exposed to it for instance. What changes after "prolonged exposure" is that your repertoire of intuitive concepts expands. There are really two different meanings to the word "explain": the first is the logical one in which you show that a conclusion follows logically from a set of postulates. In that sense, since any conceivable physical theory will need postulates there will always be something left "unexplained" in any theory. The second is a colloquial meaning, where you consider you've "explained" something if you can relate it to the intuitions you already have. This meaning of "explain" is subjective since people's intuitions differ and are mostly shaped by their experiences. Living in an apparently classical world, you might accept "the particles collided and deflected off of one another" as an "explanation" for some observation, since it's the sort of thing you're used to dealing with in everyday life. It turns out that intuition is malleable, so you may well one day accept "the photon's quantum state decohered" as an intuitive "explanation" for something, once you've built up a familiarity with quantum states and the sort of behaviour, like decoherence, that they can exhibit. like a mathematician that becomes comfortable with infinity... What bothers you about infinity? Mathematicians actually have a number of definitions related to the concept of infinity that are applicable in different contexts. Mathematicians have a rigorous definition of what they mean when they say the set of natural numbers is an "infinite set" for example. JackBlack01-26-11, 01:49 PMDid someone ask, "Can we make a fair coin?" I forget who it was... however in theory we can. The only thing that makes a coin truly biased is an unfair weight-distribution favouring one side over another. A fair coin would be a two-flat-sided round metallic substance, with two slits applied to both sides, one acting horizontally and the other vertically, with exactly the same length on both sides.
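To make both schemes concrete, here is a minimal Python sketch (mine, not from the thread; the 0.7 bias is an arbitrary illustration, and the treatment of Absane's weird coin assumes successive two-toss blocks are independent and identically distributed, which his numbers alone don't guarantee):

import random

def biased_flip(p_heads=0.7):
    # One toss of a biased but independent coin (0.7 is an illustrative bias).
    return 'H' if random.random() < p_heads else 'T'

def von_neumann_flip():
    # The two-toss trick: keep HT as H and TH as T, retry on HH or TT.
    # Fair whenever tosses are independent with a constant (unknown) bias.
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a  # HT -> 'H', TH -> 'T'

def weird_pair():
    # One two-toss block of the correlated coin, with the stated weights.
    return random.choices(['HH', 'HT', 'TH', 'TT'],
                          weights=[0.28, 0.33, 0.16, 0.23])[0]

def fair_flip_from_blocks():
    # Generalised von Neumann: draw two whole blocks and compare them under
    # a fixed ordering. For i.i.d. blocks, (x, y) and (y, x) are equally
    # likely, so 'H' and 'T' come out with equal probability.
    while True:
        x, y = weird_pair(), weird_pair()
        if x != y:
            return 'H' if x < y else 'T'

n = 100_000
print(sum(von_neumann_flip() == 'H' for _ in range(n)) / n)       # ~0.5
print(sum(fair_flip_from_blocks() == 'H' for _ in range(n)) / n)  # ~0.5

The block trick works because, for two independent draws from any distribution, the pair (x, y) is exactly as likely as (y, x) whenever x and y differ; the correlations inside a block never matter, only the independence between blocks does.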
RJBeery01-26-11, 02:05 PMWhat if you have some weird coin that when you toss it twice, you get this: P(HH) = 0.28, P(HT) = 0.33, P(TH) = 0.16, P(TT) = 0.23. What then? The following would work for a standard biased coin: count the number of tosses it takes each player to produce both Heads and Tails. The player to do so in fewer tosses wins. (A quick simulation of this two-player version appears a couple of posts below.) This can be extended to your situation above: the player that can produce ALL FOUR combinations while tossing the coin twice at a time in the fewest tosses is the winner. A fair coin would be a two-flat-sided round metallic substance, with two slits applied to both sides, one acting horizontally and the other vertically, with exactly the same length on both sides. So both sides would have 2 parallel slits? You couldn't discern which side was which! You could paint each side a different color but tossing a coin requires a "tosser", traditionally human, so an unbiased coin still doesn't ensure unbiased results. JackBlack01-26-11, 02:19 PMThe following would work for a standard biased coin: count the number of tosses it takes each player to produce both Heads and Tails. The player to do so in fewer tosses wins. This can be extended to your situation above: the player that can produce ALL FOUR combinations while tossing the coin twice at a time in the fewest tosses is the winner. So both sides would have 2 parallel slits? You couldn't discern which side was which! You could paint each side a different color but tossing a coin requires a "tosser", traditionally human, so an unbiased coin still doesn't ensure unbiased results. No, I also said, my friend, that the slits would be placed horizontally and vertically. So one side would be horizontal and the other vertical. However, to make the flip fully appreciable, and fair, you could only flip this coin in a homogeneous gravitational field with no acting forces from non-local bodies. JackBlack01-26-11, 02:21 PMIn fact, come to think of it, if you orientate a coin correctly in a perfectly homogeneous gravitational field, with the correct force distribution, there should be a chance to flip the coin perfectly so it falls exactly on the same side every time. Perhaps the perfect coin does not exist...? RJBeery01-26-11, 02:39 PMUsually you shouldn't have to do this. Knowing the established mainstream theory and having a general idea of the experimental domain over which the theory is confirmed is usually enough. I completely disagree with this! More than once have I come up with a theory or explanation only later to find that it's already been postulated in the Scientific community. Some might be discouraged by this but I see it as vindication. Also, coming to a realization on your own internalizes a concept more completely than reading about it ever could. Consider the following paragraph from "The Nobel Prize: A History of Genius, Controversy and Prestige" by Burton Feldman: The physicist Abraham Pais recalls that in 1946 he had dinner with Pauli who, rhythmically rocking as usual in his strange way, complained that he had trouble finding a physics topic to work on. He fell silent and added, "Perhaps that is because I know too much." He wasn't boasting, it was simply true. Knowing too much can sometimes handcuff one's inner freedom and originality. Richard Feynman studiously did not keep up with the flood of current literature, fearing it might stifle him. Feynman read the beginning of an article to see the direction, then set it aside to work the answer out for himself. There's really nothing to fear.
It's not like you'll accept just anything after "prolonged exposure". A logical inconsistency will remain a logical inconsistency no matter how many times you're exposed to it for instance. Unfortunately, I don't agree with this either. An example is the black hole discussion we were having - you're the only one, out of dozens of respondents on multiple websites, that gave me the impression that you knew specifically what you were talking about. Maybe it's because you're the only one that could communicate in a manner that I could understand; but it's also possible that the other respondents didn't understand what they were defending and had simply accepted it after prolonged exposure.* Another example is a logical inconsistency between QM and SR (http://www.sciforums.com/showthread.php?t=91001). If you read through the linked thread you'll see that many Physicists (or at least Physics students) were completely unaware that there was any problem whatsoever. What bothers you about infinity? It's not a number, just a concept that exists on a plane outside of our Physical existence. We can do nothing but imagine it. It may be a useful tool while manipulating other objects on this imaginary informational plane but I don't trust it and would never invite it over for dinner. *This does not mean that a black hole inconsistency exists, just that *I* don't understand it yet. I finally made it to the University library and I still have questions on the subject of Kruskal coordinates, specifically what qualifies a coordinate as being a temporal one and whether or not that cleanly applies to our concept of Time. This is obviously a topic for another thread and another, uh, time. RJBeery01-26-11, 02:42 PMNo, I also said, my friend, that the slits would be placed horizontally and vertically. Yes...and you still don't see a problem with this? How would you orientate the coin after tossing it such that you could tell which side was which? It would just be a coin with lines on each side which happened to be perpendicular to each other! haha JackBlack01-26-11, 02:55 PMYes...and you still don't see a problem with this? How would you orientate the coin after tossing it such that you could tell which side was which? It would just be a coin with lines on each side which happened to be perpendicular to each other! haha A flip which favors one side because some density in the medium affects the orientation of the flip is not a fair toss. The flip needs to make sure that the poles follow a 0-to-180-degree top-to-bottom path. Of course, we are trying to perform the perfect coin toss. These conditions should be appreciated. Not that I underlined that part or anything :) RJBeery01-26-11, 03:14 PMOf course, we are trying to perform the perfect coin toss. These conditions should be appreciated. Discussing a perfect coin TOSSER is a different subject. This is why I said: "You could paint each side a different color but tossing a coin requires a 'tosser', traditionally human, so an unbiased coin still doesn't ensure unbiased results." I'm not even sure what a "perfect coin tosser" would be... - Is perfection defined by perfectly random or perfectly consistent force application? If random, what's wrong with a blind person? - Who would load it? A "perfect coin-tosser loader"? The coin's initial state is important. If you had a way to randomize the loading, why not just use that method as the coin toss results?!
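A quick check of RJBeery's two-player scheme from a few posts up (each player tosses until they have seen both faces; the lower count wins): the two counts are independent draws from the same distribution, so the game is symmetric and therefore fair for any fixed bias. A minimal simulation sketch, assuming ties are simply replayed (the post doesn't say how to settle them) and an illustrative bias of 0.7:

import random

def tosses_to_see_both(p_heads=0.7):
    # Toss a biased coin (illustrative bias) until both faces have appeared;
    # return how many tosses that took.
    seen = set()
    n = 0
    while len(seen) < 2:
        seen.add('H' if random.random() < p_heads else 'T')
        n += 1
    return n

def play_round(p_heads=0.7):
    # One round: fewer tosses wins; ties are replayed (an assumption).
    while True:
        a, b = tosses_to_see_both(p_heads), tosses_to_see_both(p_heads)
        if a != b:
            return 'A' if a < b else 'B'

n = 100_000
print(sum(play_round() == 'A' for _ in range(n)) / n)  # ~0.5 for any bias

The fairness costs tosses, though: the more biased the coin, the longer each player typically waits to see the rarer face.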
Also, I'd imagine there are infinite ways to cut grooves into each side of a coin such that they are discernible to the eye yet perfectly balanced. I'm simply too caffeine-deprived at the moment to come up with one. Dinosaur01-26-11, 04:24 PMPrzyk: Your name reminds me of an Al Capp character with a name I could not pronounce. His name was something like Job Bltpsk. It doesn't make much sense to analyze radioactive decay. Radioactive nuclei are macroscopic particles composed of hundreds of nucleons that interact via a complicated force that, as far as I know, no-one knows how to describe accurately. It's complicated enough that we're going to get apparently random behavior no matter the details of the underlying interactions. You (and Dinosaur) have picked a really bad example to discuss: nuclei are not fundamental particles and nuclear decay is not a fundamental interaction. You may as well be talking about dice throws. I tend to discuss radioactive decay because it is a process better known to most people than various quantum processes. While it might be close to a border between the quantum & the classical levels of reality, I think it is on the quantum side of that undefinable border. Being influenced by Zeno measurements seems to strongly imply that radioactive decay is a quantum level process. Watched pots boil in spite of repeated temperature measurements. While radioactive decay might not be a fundamental process, it seems to be a random process. If future physicists discover some fundamental cause, I expect it to merely push the randomness down to a lower level (perhaps the quark level), rather than providing a deterministic explanation. Perhaps quantum tunneling or some analogous process might be the explanation. BTW: The Zeno effect astonishes me. This is not the first time I have been astonished by new (to me) experiments relating to quantum theory. While a disturbance explanation of the Uncertainty Principle is not valid, I wonder if a disturbance explanation of the Zeno effect has merit. Note that I bolded part of your above quoted remark. Not sure of the intent of that phrase other than the implication that dice throws are a classical level process. The issue of the randomness of dice throws is an interesting subject. The prediction of the outcome of a die throw is such a formidable task that few (if any) would consider it possible in practice. Many consider a die throw to be predictable in principle. I think a good argument could be mounted to deny predictability in principle. RJBeery01-26-11, 04:44 PMYour name reminds me of an Al Capp character with a name I could not pronounce. His name was something like Job Bltpsk. His name reminds me of one of Superman's nemeses named Myxlplyx. Anyway, Przyk, did you see the summary of my interpretation about 10 posts ago? I would probably be more interested in discussing that specifically than discussing the merits of coming up with new ones in general. przyk01-26-11, 06:37 PMI completely disagree with this! More than once have I come up with a theory or explanation only later to find that it's already been postulated in the Scientific community. So did I. When I was thirteen I guessed invariance of c for instance. I largely guessed the Everett interpretation for myself too (after actually studying a QM textbook and disagreeing with something I read there, by the way). But I was also born in the late 1980s in a world where relativity and quantum mechanics were part of popular culture, which I'm sure made a big difference.
For example, I'd already heard about time dilation and "you can't go faster than light", and one day I just thought it would be really neat if time dilation worked out in such a way that the speed of light would always be the same for all observers. As far as keeping track of what's going on in the scientific community, Richard Hamming once gave a talk (http://www.paulgraham.com/hamming.html) on the topic of doing first-rate original research. Personally, overall, I find it a lot more realistic than simplistic advice like "don't study textbooks, invent it all for yourself". One of the interesting observations he made was: Another trait, it took me a while to notice. I noticed the following facts about people who work with the door open or the door closed. I notice that if you have the door to your office closed, you get more work done today and tomorrow, and you are more productive than most. But 10 years later somehow you don't quite know what problems are worth working on; all the hard work you do is sort of tangential in importance. He who works with the door open gets all kinds of interruptions, but he also occasionally gets clues as to what the world is and what might be important. Now I cannot prove the cause and effect sequence because you might say, "The closed door is symbolic of a closed mind." I don't know. But I can say there is a pretty good correlation between those who work with the doors open and those who ultimately do important things, although people who work with doors closed often work harder. Somehow they seem to work on slightly the wrong thing - not much, but enough that they miss fame. The entire transcript is worth a read. Actually, scanning this talk again: There's another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. This is also something that has changed about me in the last 6-7 years, which might be surprising to a non-scientist. In this thread I haven't said I "strongly believed" anything, and that's because I really don't. I grant myself the right to be completely agnostic about whether "true randomness" exists if I don't see a good way of getting a handle on the issue either way. So sure, show drive and independence, and work a few things out for yourself if you can. But don't tune the rest of the world out. it's also possible that the other respondents didn't understand what they were defending and had simply accepted it after prolonged exposure. I doubt this, for two reasons. First, I also read those same posts, and it was quite plain to me that the other posters did know what they were talking about. Second, there's the problem itself. Apparently it all came down to this: This talk of alternate coordinate choices to solve problems seriously had me feeling like higher mathematics demanded a forfeiture of reasoning. I've been interpreting some posters' comments as implying that a change in coordinates allows for a change in prediction of what "actually happens", and this was deeply troublesome for me to understand. If you haven't studied GR from a course or textbook you won't know this, but GR is actually carefully constructed right from the beginning in such a way that it is coordinate system independent.
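To make "coordinate system independent" concrete with a standard textbook example (mine, not from the thread): the predictions of GR are built out of invariants. The proper time a clock accumulates along a timelike worldline x^\mu(\lambda), in the (-,+,+,+) sign convention, is \tau = \int \sqrt{-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}}\,d\lambda, and under a change of coordinates the metric components g_{\mu\nu} transform in exactly the compensating way, so \tau - the thing a clock actually reads - comes out the same. Swapping between Schwarzschild, Kruskal or Painleve coordinates changes the bookkeeping, never the measurable prediction.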
In general, mathematicians and mathematical physicists are extremely careful about this sort of thing. Alphanumeric, prometheus, Guest, etc. would have known this so well they'd practically take it for granted. My own assessment is that they probably couldn't pin down exactly what problem you were having, but from their own understanding of how GR was constructed they were confident it couldn't be a real one anyway, and you'd probably solve your own problem if you studied GR first. Whenever someone says "Go read a textbook!", that's probably what they're thinking. They're not treating the textbook like a Bible or asking you to. Another example is a logical inconsistency between QM and SR (http://www.sciforums.com/showthread.php?t=91001). If you read through the linked thread you'll see that many Physicists (or at least Physics students) were completely unaware that there was any problem whatsoever. You didn't actually show an inconsistency. Funkstar pointed this out quite early on: First of all, simultaneity isn't meaningless in str, but it is relative. So you could accuse the usual "wave-function collapse" interpretation of QM of being ambiguous for not specifying in which frame the collapse occurs "instantaneously", but that in itself doesn't constitute an inconsistency. You could pick any frame or even claim that when the collapse occurred was "frame dependent" (ie. a different "instantaneous" collapse in every frame). The latter would be particularly bizarre, but even there it isn't necessarily inconsistent. The way you drew your diagram, it also looks like your observers are instantaneously performing measurements on distant photons, which is problematic in relativity. This is the sort of thing that looks like it could have multiple possible answers, and at least from a first glance at the thread, it looks like some participants had a shot at making up their own. It's not a number, just a concept that exists on a plane outside of our Physical existence. You could say the same about the sine function, or even the number 3. I know what it means to have 3 apples, but what would it mean for the number 3 itself to reside on our plane of physical existence? przyk01-26-11, 07:01 PMI tend to discuss radioactive decay because it is a process better known to most people than various quantum processes. I suppose that's reasonable. I was just pointing out that if you're trying to convince someone of the existence of "true randomness", the first thing they're going to do is suggest that the random system only looks random but is actually governed by internal state information. And we all know nuclei have internal state information anyway. Maybe pick on muon decay or something. While it might be close to a border between the quantum & the classical levels of reality, I think it is on the quantum side of that undefinable border. Being influenced by Zeno measurements seems to strongly imply that radioactive decay is a quantum level process. Possible - I'm really not familiar with nuclear physics so I wouldn't know. Actually, the fact that radioactive decay follows an exponential decay law might suggest you're right, since it suggests no memory effects. Perhaps quantum tunneling or some analogous process might be the explanation. I thought radioactive decay was already explained in terms of quantum tunnelling. przyk01-26-11, 07:44 PMPrzyk: Your name reminds me of an Al Capp character with a name I could not pronounce. His name reminds me of one of Superman's nemeses named Myxlplyx. It isn't my name.
I got it from a couple of highschool friends of Russian/Ukrainian/Polish origin. It's supposedly an onomatopoeia for the "ksssk" sound you get when you open a beer bottle. Actually, I've been thinking of having it changed for years. I don't want people to get the false impression I'm Russian or something (I'm British/Belgian). Anyway, Przyk, did you see the summary of my interpretation about 10 posts ago? Do you mean this: I'm not sure my QM interpretation has a name, I really don't know. I wasn't really seeking discussion on it at this point but a very short summary is: Relativity + Hard Reality = Block Time; Block Time + Principle of Least Action = Determinism* *My definition of Determinism differs a little bit from the usual one but, basically, there's no wiggle room for multiple "futures". ? This doesn't really resonate with me. What's "block time"? I would probably be more interested in discussing that specifically than discussing the merits of coming up with new ones in general. Well you've claimed you've got a deterministic interpretation of QM. It should be interesting to see what you've come up with. I should warn you from the start: I don't expect you to have succeeded. If this is in some way related to the idea of retrocausality, then I've also already expressed some of my own (not very favourable) opinions about that in earlier threads. RJBeery01-26-11, 08:55 PMThis is also something that has changed about me in the last 6-7 years, which might be surprising to a non-scientist. In this thread I haven't said I "strongly believed" anything, and that's because I really don't. I grant myself the right to be completely agnostic about whether "true randomness" exists if I don't see a good way of getting a handle on the issue either way. I like the idea of "tolerating ambiguity", but I don't think it should be practiced to the point that you find yourself unable to have a strong conviction about something. I always assumed that scientists were trepidatious about issuing any subjective opinions because a future change in facts could affect their credentials. This career-induced constraint is also why I had always assumed that it was the boldness of cranks, rather than the possible veracity of their claims, that agitated scientists. I personally have nothing to lose by being wrong on anything (not that I enjoy it - I've reluctantly changed my opinion on the nature of Time for example). I doubt this, for two reasons. First, I also read those same posts, and it was quite plain to me that the other posters did know what they were talking about. "The other posters", implying every respondent, is a pretty sweeping statement. You may want to go back and read them (3 threads here plus a couple at PhysicsForums). Recall that I accumulated the following list of explanations from respondents: 1) Backreaction occurs before EH crossing 2) Unruh radiation 3) Vaidya metric 4) Kruskal coordinates 5) The finite proper time of the in-falling body proves the BH's existence 6) "A very famous man told me so" (this is my favorite) 7) Speculative "mystery mechanisms" 8) "Here, check out my class notes" 9) Illusions 10) Painleve coordinate frame, so unless you are claiming that all 10 of them are equivalent I think you might want to recant. You didn't actually show an inconsistency. Funkstar pointed this out quite early on: "First of all, simultaneity isn't meaningless in str, but it is relative." The inconsistency is exposed if you presume an objective reality, which is why I included Einstein's quote in the OP.
In other words, if a quantum particle P is calculated to have attribute X at time T from frame A, then P actually has X at T. No other frame can dispute this if we are demanding an objective reality. To say that frame B makes the claim that P is still in a state of superposition at T is a contradiction based on the very definition of a wavefunction; the probability amplitude of values other than X for P at T is necessarily zero if we are looking for global consistency. The result of this is block time (or, at a minimum, retrocausality where the quantum states are determined at the time of emission based on future measurements). przyk01-27-11, 12:55 PMI like the idea of "tolerating ambiguity", but I don't think it should be practiced to the point that you find yourself unable to have a strong conviction about something. I meant within reason of course. I don't mind playing about with theory, but I'll have strong convictions against anyone who says "relativity is wrong!" for instance. This career-induced constraint is also why I had always assumed that it was the boldness of cranks, rather than the possible veracity of their claims, that agitated scientists. What agitates scientists about cranks is really two things. First, the crank is obviously wrong about something. The scientist is sure of this, or at least that if the crank turns out to be right, it was for the wrong reason (eg. there is always the possibility relativity might be wrong, but not because it's logically or philosophically inconsistent, and at worst it'll always be a useful approximation). Second, the crank won't listen to corrections. They also often insulate themselves from understanding why they're wrong by refusing to learn more about what they're wrong about - i.e. they think they know "enough" and assume the rest is just details that can't affect their opinions (this often comes in the form of the "it's just abstract math" excuse). Basically, it's really the "backseat driver" attitude of cranks that's irritating - the attitude that they can tell physicists how to do their jobs without knowing anything about that job. "The other posters", implying every respondent, is a pretty sweeping statement. Fine: I generally remember Alphanumeric and Guest as clearly knowing what they were talking about. I wasn't trying to be exhaustive. I haven't even read the PhysicsForum threads. The inconsistency is exposed if you presume an objective reality. Which itself exposes why the problem is really one of ambiguity: the contradiction depends on you presuming things. which is why I included Einstein's quote in the OP. I thought it looked familiar. It's almost exactly the definition of "reality" given in the EPR paper. I'm not saying EPR wasn't a good argument, but Einstein didn't define the mainstream interpretation of QM. RJBeery01-27-11, 01:29 PMI thought it looked familiar. It's almost exactly the definition of "reality" given in the EPR paper. I'm not saying EPR wasn't a good argument, but Einstein didn't define the mainstream interpretation of QM. This is why I'm calling my view of the world an "interpretation"...it's just another way to look at things. I have an affinity for it, though, because objective reality is preserved without breaking relativity or locality, and it plays nicely with QM. PLUS it rescues the possibility of Determinism. Do you not see any value to this?
Basically, the only thing that is sacrificed is a unidirectional "flow" of time, but I was calling that into question while reading about nothing but SR...only later did I realize that QM further supported this notion (or rather, this could explain QM's "spooky action at a distance" without invoking infinite branching dimensions, poofy electron clouds of probability, etc). przyk01-27-11, 02:29 PMThis is why I'm calling my view of the world an "interpretation"...it's just another way to look at things. I have an affinity for it, though, because objective reality is preserved without breaking relativity or locality, and it plays nicely with QM. PLUS it rescues the possibility of Determinism. Do you not see any value to this? You'd have to actually explain your interpretation for me to judge that. I would also take into account other things like its complexity. And of course, you've got to be able to show that it actually works (ie. reproduces QM predictions). Basically, the only thing that is sacrificed is a unidirectional "flow" of time Then you probably won't get anywhere. "Flow" of time isn't much more than a figure of speech that describes our subjective experience. I don't know any useful rigorous definition of "flow of time" that modern physics - and especially the more "fundamental" theories - are actually dependent on. To give an example, you mentioned the action principle. In a classical Lagrange theory, you specify initial and final positions, and the action principle tells you what path the system has to take between the two. The path is also uniquely determined if you specify initial positions and initial velocities. You could also determine everything from the final positions and velocities. The second point of view is more in line with how we apply physics (we want to predict the future using information we have available now), but otherwise mainstream theories don't give it any preferential treatment. So by the sound of it you want to sacrifice a notion we've never really been dependent on anyway. RJBeery01-27-11, 02:57 PMThen you probably won't get anywhere. "Flow" of time isn't much more than a figure of speech that describes our subjective experience. I don't know any useful rigorous definition of "flow of time" that modern physics - and especially the more "fundamental" theories - are actually dependent on. To give an example, you mentioned the action principle. In a classical Lagrange theory, you specify initial and final positions, and the action principle tells you what path the system has to take between the two. The path is also uniquely determined if you specify initial positions and initial velocities. You could also determine everything from the final positions and velocities. The second point of view is more in line with how we apply physics (we want to predict the future using information we have available now), but otherwise mainstream theories don't give it any preferential treatment. So by the sound of it you want to sacrifice a notion we've never really been dependent on anyway. Exactly my point! Physics doesn't demand or even prefer a temporal direction. With very few exceptions (kaon decay?), it allows one to choose a direction arbitrarily. It's rather odd, because while stating that sacrificing time flow is a non-issue, you've already claimed that you would have problems accepting any notion of retrocausality. So which is it? 
Are you going to stick to your human notions of unidirectional time flow or consider that they have no place in the world of Physics? Look at this response from another forum: If such experiments could conclusively prove somehow that the law of cause and effect concerning the normal sequence of time was invalid, then I'd be off to Aruba concentrating on my perception of beauty in its many forms since my theoretical days concerning cosmology and physics would have ended. The likelihood of such theories being valid, in my opinion, is close to impossible. This gent was saying that if his human perception of time were disrupted he would quit his career because his world would be shattered. To dismiss this hurdle off-handedly is very admirable of you but also a bit naive if you believe that the rest of the scientific community would do so as easily. Anyway, maybe I'll make a new thread if you really want to discuss it... przyk01-27-11, 03:41 PMIt's rather odd, because while stating that sacrificing time flow is a non-issue, you've already claimed that you would have problems accepting any notion of retrocausality. The first part of this sentence should really explain the second: the problem I had with retrocausality was that it didn't seem well defined. If physics came with some preferred fundamental notion of a "flow" or "arrow" of time that causality depended on (and I'm not sure I even know what that would mean), then you could easily imagine turning that around and getting something worth calling "retrocausality" that was fundamentally different than "ordinary" causality. But in a physics that is already pretty gender neutral about any arrow of time, what's the fundamental difference between "retrocausality" and the type of deterministic causality in Lagrange theories I described to you? The problem is that if there is no real fundamental difference, then "retrocausality" has no more explanatory power than the sort of determinism we've already been considering for centuries. In that case, you could adopt retrocausality as a valid point of view, but you couldn't rely on it as an explanation for anything. RJBeery01-27-11, 04:55 PMThe first part of this sentence should really explain the second: the problem I had with retrocausality was that it didn't seem well defined. If physics came with some preferred fundamental notion of a "flow" or "arrow" of time that causality depended on (and I'm not sure I even know what that would mean), then you could easily imagine turning that around and getting something worth calling "retrocausality" that was fundamentally different than "ordinary" causality. But in a physics that is already pretty gender neutral about any arrow of time, what's the fundamental difference between "retrocausality" and the type of deterministic causality in Lagrange theories I described to you? The problem is that if there is no real fundamental difference, then "retrocausality" has no more explanatory power than the sort of determinism we've already been considering for centuries. In that case, you could adopt retrocausality as a valid point of view, but you couldn't rely on it as an explanation for anything. IMO, the apparent flow of time is strictly a function of entropy; I believe there is a connection between extracting information from a system and increasing that entropy.
Block time and/or retrocausality presume that the future already exists...given this, yet acknowledging the fact that we are apparently unable to access information in that future, it becomes clear that this is due to the entropy gradient which happens to exist along the time dimension. Under these rules, retrocausality would be defined in terms of linked events that are either thermodynamically reversible or which actually reduce entropy (in our experience); both of which are extraordinarily rare events, which is exactly why EPR-type experiments are so difficult to set up and become virtually impossible as you further leave the quantum domain. przyk01-27-11, 06:56 PMBlock time and/or retrocausality presume that the future already exists...given this, yet acknowledging the fact that we are apparently unable to access information in that future, it becomes clear that this is due to the entropy gradient which happens to exist along the time dimension. This still won't help you: I've already told you that entropy is an emergent property of large ensembles of particles. We use the concept of entropy in order to explain how we get apparent irreversibility and "flow of time" on a macroscopic level out of physics that has no preferred notion of "flow of time" on the microscopic level. Entropy in itself is never a fundamental explanation for anything. Under these rules, retrocausality would be defined in terms of linked events that are either thermodynamically reversible or which actually reduce entropy (in our experience); both of which are extraordinarily rare events, which is exactly why EPR-type experiments are so difficult to set up and become virtually impossible as you further leave the quantum domain. I don't see a link. My own experiences are different: EPR-type experiments aren't just passive experiments where we sit and wait for an unlikely "entropy reversal" to occur somewhere in the universe. Most of the difficulties involved in manipulating entangled states are technological in nature: detectors have limited efficiency, controlling dispersion and attenuation in optical fibers becomes increasingly difficult with distance, and so on. How is any of this supposed to get you a deterministic QM anyway? RJBeery01-29-11, 12:04 AMThis still won't help you: I've already told you that entropy is an emergent property of large ensembles of particles. We use the concept of entropy in order to explain how we get apparent irreversibility and "flow of time" on a macroscopic level out of physics that has no preferred notion of "flow of time" on the microscopic level. Entropy in itself is never a fundamental explanation for anything. When did you say this, and what do you mean by "it won't help me"? If there's a connection between information availability and entropy then it's a bold statement to say that entropy isn't a fundamental explanation for anything. Information availability (or the lack thereof) is exactly what is preventing us from recognizing objective reality, if it exists. Is there a link between entropy and information? I can't prove it, but I also can't wait to take a course on information theory. How is any of this supposed to get you a deterministic QM anyway? Mostly through macro-physical analogies. Once you accept retrocausality the quantum world seems a lot less foreign, and that allows us to question (in my opinion) any and all of the standard assumptions previously held about it that differed from classical physics.
First of all, it allows us to postulate that all photons have a preexisting absorption target, which is what allows their quantum state to be determined at the time of their emission. I turn the question around back to you: water flows downhill following the path of least resistance, much as light travels between two targets along the shortest path - no randomness; by your admission, the Zeno effect and the polarized filters provide more situations of certain quantum behavior. What is the reason we would attribute "all other quantum behavior" to acausality, particularly when SOME quantum behavior is deterministic as is macro physics? Surely you're not suggesting that humans hold the special power to somehow pin quantum behavior in a corner, such that it behaves predictably, and just after we turn our back it again begins to behave with no cause whatsoever? This is the kind of solipsism that questions whether the moon exists when we are not observing it - unprovable one way or the other, yet unappealing to common sense... przyk01-29-11, 02:40 AMWhen did you say this Here. It was in an earlier thread. and what do you mean by it won't help me? If there's a connection between information availability and entropy then it's a bold statement to say that entropy isn't a fundamental explanation for anything. There's nothing bold about it at all. Entropy isn't a fundamental interaction or degree of freedom. It doesn't even exist for fundamental particles. We don't attribute an entropy to a single photon for instance. It's only when you have a large group of particles that you can talk about a configuration being more or less likely. As such, entropy has very little to do with determinism. When you appeal to entropy, you are appealing to statistical arguments and ignoring the details of interactions where determinism might reside. For example, if you toss a fair coin a hundred times, you typically expect to get roughly as many heads as tails, just because there are many more configurations with this property than any other. Entropy is just a way of formalizing this sort of argument. If you instead give a deterministic account of all the coin tosses, relating all the outcomes to details about how the coins were tossed, the atmosphere they were tossed in, and anything else relevant, entropy becomes completely irrelevant. Is there a link between entropy and information? Sure: there's a measure of entropy that appears in information theory. But "there's a link" isn't a logical argument. So far, to paraphrase, all you've presented is "I believe the randomness in QM is due to our lack of information. Information has something to do with entropy, and entropy, which is always held to increase, is related to our concept of "flow of time". So if we allow retrocausality, QM is deterministic again." Every step of this reasoning is handwaving at best. If that's all you've got, you are a long way from being able to claim you have a deterministic interpretation of QM. light travels between two targets along the shortest path - no randomness No it doesn't. According to the action principle, classical light is predicted to travel along the shortest path (and even that is an oversimplification). Sure, that's deterministic, and if QM used the action principle it would be a deterministic theory. But it doesn't. The closest analogue of it in QM is the path integral formulation, where the transition amplitude of eg. 
a photon from one place to another depends on contributions from all the possible classical paths the photon could take. In some circumstances the dominant contribution comes from the shortest path and contributions from paths that deviate much from it largely cancel out, so QM is compatible with the classical action principle applying on the classical scale, but that's it. by your admission, the Zeno effect and the polarized filters provide more situations of certain quantum behavior. What is the reason we would attribute "all other quantum behavior" to acausality, particularly when SOME quantum behavior is deterministic as is macro physics? This might be a sensible argument if physicists had completely thrown out the idea that anything might be predictable to any degree, and then found they'd been too hasty and they could get some degree of predictability back. But that isn't the case. QM is no more or less deterministic now than it was in 1930. You're taking predictions that QM makes and trying to use them as arguments against the world view of QM. That's a remarkably silly position to take: the QM world view is not an argument against itself. Surely you're not suggesting that humans hold the special power to somehow pin quantum behavior in a corner, such that it behaves predictably, and just after we turn our back it again begins to behave with no cause whatsoever? No, I never said that. According to the "textbook" interpretation of QM the way the Zeno effect works is that you constantly "observe" a quantum state and keep forcing it to collapse before it has time to evolve into something else. There are interpretations (eg. Everett) which give an account of measurement and the appearance of wavefunction collapse without needing to attribute any special power to humans. I don't particularly want to start a mini discussion about that, so I'll stick to the textbook vocabulary for this sort of thing. RJBeery02-01-11, 01:42 PMIf that's all you've got, you are a long way from being able to claim you have a deterministic interpretation of QM. If you were hoping for a rigorous mathematical treatise proving Determinism I think you're holding the bar a bit high. I'm explaining the concepts of my interpretation in the same way that you might explain a variant of Many Worlds. According to the action principle, classical light is predicted to travel along the shortest path (and even that is an oversimplification). Sure, that's deterministic, and if QM used the action principle it would be a deterministic theory. But it doesn't. The closest analogue of it in QM is the path integral formulation, where the transition amplitude of eg. a photon from one place to another depends on contributions from all the possible classical paths the photon could take. In some circumstances the dominant contribution comes from the shortest path and contributions from paths that deviate much from it largely cancel out, so QM is compatible with the classical action principle applying on the classical scale, but that's it. This is another interpretation. The path integral formulation is a mathematical model, not necessarily an explanation of reality. I once watched some videos of Feynman giving a series of talks about QED and the most striking lesson I pulled from it was that, while a given lab experiment may not give the "simplest" expected result, all results measured had complete objective-reality explanations. IOW, there was no QED result which did not have a purely physical analogue in theory. 
To say that light only moves from A to B in the shortest path possible because its various detours to the local pub and/or grocery store all cancel each other out seems dubious. This is testable, I'd imagine. Off the top of my head, I wonder if shadows are perfectly dark in a vacuum...I'd wager they are save for diffraction effects of the wavelength which, again, is an objective reality explanation. This doesn't help you. It amounts to saying that the measurement results are predetermined by initial conditions, and Bell's theorem already covers this case. This is not true. Initial conditions do not predetermine anything when considering retrocausality, as "initial" demands a temporal dependence. If you really understood my interpretation you would agree that the world *could* be deterministic. IOW, the spin orientation of a pair of entangled particles is determined at the time of their emission because of their future measurements. The fact that I can (under specific circumstances) account for their spin axes but not a given particle's spin direction should not mean that the interpretation fails. I can say the same thing about Many Worlds: specifically provide me with the mechanism that determines the "thickness of the branches" of each possible world. To date, I've never even seen a discussion on this, yet MWI is one of the most subscribed-to interpretations. Retrocausality/Determinism is superior to MWI in this regard - MWI is, almost by design, too hand-wavy to ever be falsified. Not only have I given a falsification test for my interpretation (i.e. provide me with a provably acausal process), I've also suggested a very preliminary explanation for the area that needs to be fleshed out (i.e. apply least action to the quantum world...and your claim that "QM doesn't use the action principle" sounds awfully definitive for a Physicist that tolerates ambiguity). If you want to claim that it's all hogwash based on aesthetic appeal it is your prerogative, but you must differentiate that from denouncing it as factually untenable. przyk02-01-11, 05:30 PMIf you were hoping for a rigorous mathematical treatise proving Determinism I think you're holding the bar a bit high. I'm not holding the bar any higher than I'd hold it for myself. I'm explaining the concepts of my interpretation in the same way that you might explain a variant of Many Worlds. There are two differences: First, I am not making an original presentation of the MWI. It was proposed more than half a century ago by someone who did work through the details. Second, the way I typically explain the MWI is in terms of processes such as the formation of entanglement that will already be familiar to people who have studied QM. If you know QM, it should be reasonably easy to work out the details for yourself, and if you couldn't or didn't want to, there's always Everett's thesis. (And if you don't already know QM, you'd need to learn that first anyway.) By contrast, I have no idea how to apply your interpretation to make predictions based on what you've said about it. The path integral formulation is a mathematical model, not necessarily an explanation of reality. I never said it was an explanation of reality. It's an alternative way of calculating transition amplitudes in the quantum formalism. In principle you could express the Everett interpretation in the path integral formalism if you wanted. What I said was that the path integral formulation was the closest analogue of the action principle used in QM.
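To see the stationary-phase point numerically, here is a toy Python sketch (mine, not przyk's: a free particle of unit mass, a single sinusoidal family of trial paths, and an artificially small hbar chosen only to make the phases oscillate visibly; the real path integral sums over far more than one family):

import numpy as np

# Free particle going from x = 0 at t = 0 to x = 1 at t = 1.
# Trial paths: the straight line plus a sine wiggle of amplitude a.
hbar = 0.01
t = np.linspace(0.0, 1.0, 2001)

def action(a):
    # x(t) = t + a*sin(pi*t)  =>  xdot = 1 + a*pi*cos(pi*t)
    xdot = 1.0 + a * np.pi * np.cos(np.pi * t)
    return np.mean(0.5 * xdot**2)  # Riemann estimate; the time interval is 1

amps = np.linspace(-0.5, 0.5, 1001)
S = np.array([action(a) for a in amps])
print("action is smallest at a =", amps[np.argmin(S)])  # ~0: the straight path

# Phases from paths near the classical one add coherently; phases from
# strongly deviating paths oscillate rapidly and largely cancel.
near = np.abs(amps) < 0.05
far = np.abs(amps - 0.4) < 0.05
print("|mean phase| near the classical path:", abs(np.exp(1j * S[near] / hbar).mean()))
print("|mean phase| far from it:", abs(np.exp(1j * S[far] / hbar).mean()))

The action is minimised by the straight-line path, and the phases e^{iS/hbar} only add up coherently in its neighbourhood; farther away they wash out, which is the precise sense in which the classical path "dominates" without being the only one that contributes.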
And, as with the classical action principle, it's not as simplistic as it sounds. In both classical and quantum field theory, the paths in question aren't literal particle paths. They're classical field configurations. Initial conditions do not predetermine anything when considering retrocausality, as "initial" demands a temporal dependence. If you really understood my interpretation you would agree that the world *could* be deterministic. If you really understood my objections, you wouldn't feel so confident saying that. Terminology aside, I am only relying on a definition of determinism you apparently agreed to in the earlier thread: namely that there exists a unique correspondence between the initial and final conditions. Also, I'm not necessarily saying a deterministic interpretation is completely impossible - just that you haven't presented one. I'm also a bit mystified as to why you're replying to something I said several months ago. I linked to my old post to highlight the paragraph preceding the one you just replied to. IOW, the spin orientation of a pair of entangled particles is determined at the time of their emission because of their future measurements. The fact that I can (under specific circumstances) account for their spin axes but not a given particle's spin direction should not mean that the interpretation fails. Yes it should, because in QM the randomness is in the spin direction given the spin axis. Restoring determinism means you can make deterministic predictions where QM offers only probabilities. I can say the same thing about Many Worlds: specifically provide me with the mechanism that determines the "thickness of the branches" of each possible world. You'd do that the same way you make any other prediction in QM: you apply the Schroedinger equation. The whole point of the MWI is that it re-uses the standard textbook QM formulation. It explains "branching" in terms of the formation of an entangled state, and comes naturally with the mechanism to describe "branch thickness" (ie. as the square of an amplitude that's fundamentally no different than any other amplitude appearing in QM). I didn't really want to get into the MWI here though. The relevant point is: the validity of the MWI as a QM interpretation does depend on it being able to predict "branch thicknesses". I am not exempting it from conditions I am holding you to. MWI is, almost by design, too hand-wavy to ever be falsified. Only to the extent that no working interpretation of QM can be experimentally distinguished from any other. Not only have I given a falsification test for my interpretation (i.e. provide me with a provably acausal process) Can you give a hypothetical example of a process that would be "provably acausal" by your standards? I've also suggested a very preliminary explanation for the area that needs to be fleshed out (i.e. apply least action to the quantum world...and your claim that "QM doesn't use the action principle" sounds awfully definitive for a Physicist that tolerates ambiguity) It's not my personal opinion. Quantum mechanics does not apply the classical action principle in order to make its predictions. If you disagree, it is your job to show that an interpretation that uses the classical action principle is possible. If you want to claim that it's all hogwash based on aesthetic appeal it is your prerogative, but you must differentiate that from denouncing it as factually untenable. I'm not appealing to aesthetics.
I am saying that you haven't succeeded in presenting a deterministic interpretation of QM. Aside from pointing out the obvious (that you haven't given any real indication of how we recover the statistical predictions QM makes), I've also given reasons I don't find your approach convincing, and you haven't given adequate replies. You ignored the parts where I explained why entropy wouldn't help you for example. You also seem to ignore the parts where I explain why there's no fundamental difference between your "retrocausality" and "ordinary" causality. RJBeery02-01-11, 06:09 PMIn both classical and quantum field theory, the paths in question aren't literal particle paths. They're classical field configurations. Speaking of this, I believe an objective reality would require particle paths. I am saying that you haven't succeeded in presenting a deterministic interpretation of QM. Aside from pointing out the obvious (that you haven't given any real indication of how we recover the statistical predictions QM makes), You need to separate in your head the difference between predictive power and an interpretation postulating objective reality. MWI and Everett offer nothing in terms of predictions that aren't already embedded in the Schroedinger equation, the accuracy of which is not in doubt. I could do the same thing with a deck of cards, and consider the top card to be in a superposition of 52 states which collapse when I turn it over. This has absolutely no bearing whatsoever on the actual physicality of the deck itself. In this case, the appearance of randomness comes from the complexity of shuffling that, when done correctly, leaves us with no knowledge of the deck's arrangement. If something like "least action" applied to quantum behavior, it's a trivial task to imagine certain conditions which must be met before, for example, a radioactive particle decays, conditions that would give the appearance of a purely random Poisson distribution when taken over a large sample. I've also given reasons I don't find your approach convincing, and you haven't given adequate replies. You ignored the parts where I explained why entropy wouldn't help you for example. You also seem to ignore the parts where I explain why there's no fundamental difference between your "retrocausality" and "ordinary" causality. First of all, I brought up entropy to explain why we cannot "see into the future", given that we're discussing its preexistence. You claim you have no problem with the absence of the flow of time so I dropped the subject in the interest of keeping this discussion as terse as possible. Second, you said the problem I had with retrocausality was that it didn't seem well defined. If physics came with some preferred fundamental notion of a "flow" or "arrow" of time that causality depended on (and I'm not sure I even know what that would mean), then you could easily imagine turning that around and getting something worth calling "retrocausality" that was fundamentally different than "ordinary" causality. But in a physics that is already pretty gender neutral about any arrow of time, what's the fundamental difference between "retrocausality" and the type of deterministic causality in Lagrange theories I described to you? The problem is that if there is no real fundamental difference, then "retrocausality" has no more explanatory power than the sort of determinism we've already been considering for centuries.
In that case, you could adopt retrocausality as a valid point of view, but you couldn't rely on it as an explanation for anything. However, I've already provided a definition of retrocausality related to either thermodynamic reversibility or reduction in entropy of a system. I agree that Physics is gender neutral on the subject, but the issue of retrocausality and time flow is absolutely crucial to the acceptance of Determinism precisely because traditional definitions of Determinism and Causality presume a flow of time. It is an adjustment to these definitions that we are discussing, so to act like causality vs. retrocausality is a non-issue is a bit hasty. przyk02-01-11, 08:13 PMSpeaking of this, I believe an objective reality would require particle paths. Well, the more you deviate from the mainstream, the more you need to justify. You need to separate in your head the difference between predictive power and an interpretation postulating objective reality. I already know the difference, thanks. MWI and Everett offer nothing in terms of predictions that aren't already embedded in the Schroedinger equation, the accuracy of which is not in doubt. Of course. Because the Everett interpretation treats quantum states and wavefunctions - which the Schroedinger equation applies to - as axiomatic. You say wavefunctions just describe our ignorance, so it's up to you to give a more detailed account of what's going on. Otherwise you don't really have an interpretation of QM. You're just assuming you can use QM "in practice" and tack your own beliefs on top of it without checking that the mathematics of the theory is actually consistent with your belief system. I could do the same thing with a deck of cards, and consider the top card to be in a superposition of 52 states which collapse when I turn it over. This has absolutely no bearing whatsoever on the actual physicality of the deck itself. In this case, the appearance of randomness comes from the complexity of shuffling that, when done correctly, leaves us with no knowledge of the deck's arrangement. If something like "least action" applied to quantum behavior, it's a trivial task to imagine certain conditions which must be met before, for example, a radioactive particle decays, conditions that would give the appearance of a purely random Poisson distribution when taken over a large sample. But this isn't providing a deterministic interpretation of QM. This is claiming you think there's room for a deterministic theory which would reproduce QM "on average". If you want to believe that and it somehow helps you sleep at night then fine. But don't claim you have an interpretation then. To a physicist, that implies you've got a deterministic model ready. I've already provided a definition of retrocausality related to either thermodynamic reversibility or reduction in entropy of a system. The entropy of what, incidentally? And how do you define it and why? You've never actually mentioned that. First of all, I brought up entropy to explain why we cannot "see into the future", given that we're discussing its preexistence. So? You're explaining the wrong problem. You're trying to explain how we can get a deterministic QM, not explain how macroscopic human beings get a psychological arrow of time or why vases always fall to the ground and break instead of doing the process in reverse. That problem has in principle already been solved for you. As I said and keep saying, entropy is not a fundamental degree of freedom of a system.
I agree that Physics is gender neutral on the subject, but the issue of retrocausality and time flow is absolutely crucial to the acceptance of Determinism precisely because traditional definitions of Determinism and Causality presume a flow of time. As I explained earlier, no they don't. RJBeery02-02-11, 11:42 AMBut this isn't providing a deterministic interpretation of QM. This is claiming you think there's room for a deterministic theory which would reproduce QM "on average". If you want to believe that and it somehow helps you sleep at night, then fine. But don't claim you have an interpretation then. To a physicist, that implies you've got a deterministic model ready. Fair enough...I guess I'll get back to you in a couple of years. :) Actually, determinism isn't the "crux" of my interpretation; block time is. It's a Local, Real explanation that allows for both QM and SR. The fact that retrocausality additionally could open the door to Determinism is a bonus. Curiously, you discounted retrocausality as unpalatable, yet also irrelevant and trivial, which I still don't understand. The entropy of what, incidentally? And how do you define it, and why? As I said, the entropy of the system. You asked for a definition and I provided one. If accepting retrocausality is a trivial task for you, why do you keep asking about this? You need to separate in your head predictive power from an interpretation postulating objective reality. I already know the difference, thanks. Yet the wavefunction is a mathematical tool with no physical significance? So why is it so useful? I'd be careful with this kind of dismissal. If you can agree that the superposed deck of cards would be a useful mathematical tool without any physical significance, why do you seem to be insisting that a QM interpretation must hold wavefunctions as axiomatic? Can't you appreciate that sufficient complexity (i.e. hiding information in obfuscation) generates the same results? przyk02-02-11, 02:48 PMFair enough...I guess I'll get back to you in a couple of years. :) That's probably the best course of action for you at this point. I honestly can't see why you're in such a rush to interpret a theory you haven't really studied yet. You just end up approaching physics with a lot of prejudices in this way. If you can agree that the superposed deck of cards would be a useful mathematical tool without any physical significance, why do you seem to be insisting that a QM interpretation must hold wavefunctions as axiomatic? I didn't. I said that any interpretation that treated the Schroedinger equation as axiomatic was also effectively treating wavefunctions as axiomatic, because the Schroedinger equation describes the dynamics of wavefunctions.
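Written out, the Schroedinger equation is i\hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi - an evolution equation for the wavefunction \psi itself - so postulating the equation unavoidably postulates \psi along with it.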
RJBeery02-02-11, 03:11 PMI honestly can't see why you're in such a rush to interpret a theory you haven't really studied yet. In my defense, I realize my ideas need some more time in the oven... I'm not sure my QM interpretation has a name; I really don't know. I wasn't really seeking discussion on it at this point. You just end up approaching physics with a lot of prejudices in this way. This is exactly backwards IMO though! You've already said that you had aspirations to "make sense of Physics" when you were in HS, but that basically your definition of what "makes sense" changed as you acquired your education. You grew more comfortable with what are traditionally nonsensical ideas...basically (from my POV) you succumbed to the Dark Side! Having "prejudices" (i.e. alternative theories) that can be tested against the data supporting the standard theories while you're being exposed to them is a way to inoculate yourself from this. Many are already lost, przyk, but you still have a chance...I can feel the good in you! przyk02-02-11, 03:47 PMI find your lack of faith... disturbing. :p