Illustrating Olbers' paradox

Discussion in 'Astronomy, Exobiology, & Cosmology' started by humbleteleskop, May 29, 2014.

  1. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    Due to a non-linear property called "gamma" -- a PNG with a constant intensity of #404040 will not be as bright on average as an image in which 3/4 of the pixels are #000000 and 1/4 are #ffffff.

    For a typical gamma of 2.2, the ratio of intensities is \(\frac{ \left(\frac{64}{255}\right)^{2.2} }{ \frac{3}{4} \left(\frac{0}{255}\right)^{2.2} + \frac{1}{4} \left(\frac{255}{255}\right)^{2.2} } = 4 \left( \frac{64}{255} \right)^{2.2} = \frac{256}{255} e^{ -(2.2 - 1) \ln \frac{255}{64} } \approx 2^{ -2(2.2 - 1) } \approx 0.19\)
    So by using unscientific illustrations derived from mainstream paint tools, one would completely mis-illustrate Olbers' paradox for two layers.
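    For anyone who wants to check the figure, here is a minimal Python sketch of the same calculation. It assumes a simple power-law gamma of 2.2 (not the exact sRGB transfer curve):

```python
# Sketch of the gamma argument above (assumed simple power-law gamma 2.2):
# compare the linear-light intensity of a flat #404040 image with an
# image that is 3/4 black (#000000) and 1/4 white (#ffffff).

GAMMA = 2.2

def linear(value_8bit):
    """Convert an 8-bit greyscale value to linear-light intensity."""
    return (value_8bit / 255) ** GAMMA

flat = linear(0x40)                                 # every pixel is #404040
mixed = 0.75 * linear(0x00) + 0.25 * linear(0xFF)   # 3/4 black, 1/4 white

ratio = flat / mixed
print(f"flat/mixed intensity ratio = {ratio:.2f}")  # about 0.19
```

    So the flat grey image carries only about a fifth of the linear-light intensity of the mixed one, matching the ratio worked out above.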
     
  3. btr Registered Member

    Messages:
    93
    My "measuring scale" is counting photons, not something you can adjust.

    Interpreting your greyscale values as being proportional to the number of photons each star sends towards my detector per second, I would conclude, correctly, that both patches of sky send the same number of photons to my detector per unit time (or my retina, or whatever).

    Have you tried summing up an infinite number of layers yet? I think focussing on just one pair of layers is misleading your intuition.
     
  5. btr Registered Member

    Messages:
    93
    You won't see a picture like the famous HDF image in a universe which satisfies the assumptions of Olbers' paradox. That empirical fact has no bearing on the validity of the argument which leads from the assumptions of Olbers' paradox to the conclusion that every patch of the night sky should send as many photons per second (in each wavelength band) towards Earth as an equal solid angle of sky covered entirely, from our vantage point, by the surface of a star, regardless of how far away that star is. That's one way we know that our universe doesn't satisfy the assumptions of Olbers' paradox.
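    The distance-independence claimed here can be checked numerically. The sketch below uses made-up stellar numbers (the photon rate and radius are illustrative, not from the thread) and the small-angle approximation:

```python
# Numerical check: photons per second per unit solid angle from a star's
# surface do not depend on distance, because the received flux and the
# subtended solid angle both fall off as 1/d^2.
# PHOTON_RATE and RADIUS are illustrative, assumed values.

import math

PHOTON_RATE = 1e45   # photons/s emitted by the star (made-up value)
RADIUS = 7e8         # stellar radius in metres (roughly solar)

def flux(d):
    """Photons per second per m^2 at distance d (inverse-square law)."""
    return PHOTON_RATE / (4 * math.pi * d**2)

def solid_angle(d):
    """Solid angle (sr) subtended by the star at distance d (small-angle)."""
    return math.pi * (RADIUS / d)**2

for d in (1e16, 1e17, 1e18):  # three distances, in metres
    print(f"d = {d:.0e} m : flux/solid_angle = {flux(d) / solid_angle(d):.3e}")
# The ratio is the same at every distance: surface brightness is conserved.
```

    This is why, under the paradox's assumptions, a patch of sky covered by stellar surface looks equally bright no matter how far away the covering stars are.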
     
  7. btr Registered Member

    Messages:
    93
    With infinite exposure time, you'd saturate your detector. However, infinite exposure times have no relevance to what I said.
     
    Last edited: Jun 4, 2014
  8. btr Registered Member

    Messages:
    93
    Saturation is modelled by having the greyscale max out at 100%.

    Exposure time is modelled by adjusting the value of K[sub]2[/sub].

    Neither of these facts saves us from the conclusion. Whatever positive values you choose for K[sub]1[/sub], K[sub]2[/sub] and M, the mean greyscale level per layer, K[sub]1[/sub]K[sub]2[/sub]/M, will be non-zero, and so with your suggestion of using additive blending the total greyscale level for all of the (infinitely many) layers will still be infinite.
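    The divergence can be sketched in a few lines of Python. K1, K2 and M are the placeholder constants from the discussion; the values below are arbitrary positive choices:

```python
# Sketch of the divergence argument: each shell of stars contributes the
# same mean greyscale level K1*K2/M, so under additive blending the
# running total grows without bound as shells are added.
# K1, K2, M are arbitrary positive values (placeholders from the thread).

K1, K2, M = 1.0, 0.01, 50.0
per_layer = K1 * K2 / M        # mean greyscale contribution per shell

for n in (10, 100, 1000, 10000):
    total = n * per_layer
    print(f"{n:>6} layers -> mean greyscale {total:.3f}")
# The total passes any finite bound, and in particular passes 1.0
# (100%, the saturation point) once n exceeds M / (K1 * K2).
```

    Whatever the constants, the sum passes the saturation level at some finite number of shells; with infinitely many shells it diverges.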
     
    Last edited: Jun 4, 2014
  9. humbleteleskop Banned Banned

    Messages:
    557
    The number of photons reaching a certain area, indeed. Brightness is a function of projected area. That certain area belongs to the 'sensor'; once reached, it is the final destination of those photons. It's the place where brightness is evaluated. Brightness does not exist without, or outside of, a sensor, and that's why we say brightness is a "subjective property".

    The same number of photons distributed over a smaller sensor area imprints, per unit pixel area, a higher level of pixel colour-brightness than if it were distributed over a larger sensor area. The total number of photons received per unit time is not brightness; that's intensity. Brightness is intensity per unit of incident sensor surface area, obviously.

    [image]




    Brightness is in the eye of the beholder. Always remember.


    You are going in circles. At the beginning of that sentence you already had photons reaching a certain area - the sensor. Then you went on like a picture-in-picture style infinite loop, mistaking "emitted" for "received" photons, luminance for brightness, source for destination, emitter for sensor. The picture above represents the image you would see in your head, the photons' final destination. Brightness is not about reflection or emission, it's only about absorption.


    You fight like a dairy farmer. You need more Jazz.
     
  10. btr Registered Member

    Messages:
    93
    A few test questions for humbleteleskop:

    Let's suppose I have a digital pinhole camera with a circular aperture of fixed radius 1 millimetre, whose (circularly arranged) light-sensor array records the exact number of optical-wavelength photons which hit it during a fixed exposure time of 1 second, and provides the resulting array of numbers to us in a raw bitmap format. Suppose the camera has a fixed and fairly narrow field of view, such that a 1 metre radius sphere will exactly fill the field of view at a distance of 100 metres.

    I take a 1 metre radius sphere and my camera into a (very) large darkroom, and place the sphere at one end of the room. The sphere is luminous, emitting monochromatic green light (500 nm) uniformly in all directions with a total output of 100 watts.

    I mount my camera on a tripod exactly 100 metres from the sphere, so that it exactly fills the field of view. I capture an image - call it image #1 - and save it. I then move my tripod 50 metres closer to the sphere, and capture a second image - call it image #2 - and save it. In both cases, I am careful to switch off or otherwise eliminate other sources of light which could interfere with the measurements.

    Question 1: In image #1, how many photons were recorded by the entire array (to within 5%)?

    Question 2: Suppose that the sensor element at the centre of the array has an area of 1% of the total array area. How many photons did it record in image #1 (to within 5%)?

    Question 3: Repeat questions 1 and 2 for image #2.

    In each case, you may assume that there is negligible scattering by the intervening air, and the only light which reaches the camera comes directly from the sphere (i.e. there are no reflections from surfaces in the room).

    Please show your working.
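    For reference, the arithmetic for questions of this kind can be set up as follows. This is a sketch under the stated idealizations; the point-source treatment of the sphere is an approximation of mine, not part of the setup above:

```python
# Rough estimate of the photon count through a small aperture from an
# isotropic monochromatic source (the 100 W, 500 nm sphere above).
# Idealized: no scattering, no reflections, point-source approximation.

import math

H = 6.626e-34          # Planck constant, J*s
C = 3.0e8              # speed of light, m/s
POWER = 100.0          # source output, W
WAVELENGTH = 500e-9    # m
APERTURE_R = 1e-3      # aperture radius, m
EXPOSURE = 1.0         # exposure time, s

photon_energy = H * C / WAVELENGTH      # J per photon (~4e-19 J)
emitted_per_s = POWER / photon_energy   # photons/s leaving the sphere

def photons_through_aperture(distance):
    """Photons entering the aperture during the exposure, at the given distance."""
    fraction = (math.pi * APERTURE_R**2) / (4 * math.pi * distance**2)
    return emitted_per_s * fraction * EXPOSURE

for d in (100.0, 50.0):
    print(f"d = {d:>5.0f} m : {photons_through_aperture(d):.2e} photons")
```

    The counts come out in the billions, and halving the distance quadruples the count entering the aperture, per the inverse-square law.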
     
    Last edited: Jun 4, 2014
  11. humbleteleskop Banned Banned

    Messages:
    557
    How do I get a multi-quote (nested quote) within the reply text, so I can also see what I said that the quoted reply is responding to? I tried the Multi-Quote checkbox next to the "Reply With Quote" button; I don't see that it does anything at all.


    So if you want me to be specific about image brightness you first need to give me some of those specific numbers.


    A. the image receives different amounts of energy from different stars?
    B. different image pixels receive energy from different stars?

    A+B. therefore, different pixels receive different amounts of energy?

    Where exactly do we disagree, and why do you think otherwise?


    Yes, and the amount of light received from each individual star is proportional to exposure time and inversely proportional to its distance.


    You are struggling to differentiate "variable parameters" during the measurement from "initial setup" parameters which are constant during the measurement. I don't want the exposure time to vary; the problem is that it's not defined. Insufficient data, it does not compute. I cannot tell you how bright the image is if you don't tell me how long the exposure time was. And I told you what you get with a 1 second interval, which is as arbitrary as 37 seconds, or zero or infinity seconds. I know your default value, actually: Olbers' exposure time. It's obviously not zero, so it must be infinity. Which explains all the blinding brightness you see.

    -//-...
     
  12. humbleteleskop Banned Banned

    Messages:
    557
    Yes, each shell adds the same number of photons to the image...

    ...where each star contributes a different amount to different pixels, in proportion to the exposure interval and in inverse proportion to the star's distance.


    So what if we erase all the other shells and leave just these two:

    [image]



    ...would there be any difference in their apparent colour, and what do you call the property of colour they differ in? Brightness, maybe? Perhaps the same "brightness" Olbers was trying to figure out, and which the paradox's conclusion describes and relates to?


    [image]



    Beside their size and location, can you name one more very significant and obvious difference between the two squares?


    You are talking about ME, for some strange reason. It is you who is stuck and unable to answer my question, from several pages ago, which you are amusingly avoiding by demanding the answer from me. [rolls eyes] You said you know the answer, you said you made some image you're going to show us... so what are we still waiting for? Show me the money!
     
  13. humbleteleskop Banned Banned

    Messages:
    557
    Does the mis-illustration lead to a brighter or a darker image? Can you express your conclusion in a more descriptive way, with some example perhaps?
     
  14. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    The mis-illustration from using standard painting tools is that the second image is 80% too dark. The correct analysis is via Markov modelling.

    Say at 1 light-year (D) we have x stars that have radius r. Then a certain fraction of the sky is dark, considering just those stars: \(p_1 = 1 - x \frac{r^2}{4 D^2}\). So the remainder of the sky is the surface of stars, \(q_1 = 1 - p_1\). At \(n\) light-years we have \(n^2\) times as many stars, but they each have reduced angular measure, so the fraction of the sky that is dark, considering just those stars, is \(p_n = 1 - n^2 x \frac{r^2}{4 n^2 D^2} = 1 - x \frac{r^2}{4 D^2} = p_1\).

    Therefore the fraction of the sky that is dark, considering layers 1...n, is \(p = p_1^n\), which tends to zero as n gets large.

    Now starlight totals about 2×10^-4 lumens per square metre, while the Sun has an angular radius of about (π/720) radians and dumps about 128000 lumens per square metre on us.
    So I conclude that about 1 - p = 10^-14 of the night sky is not dark. The universe is about 10^10 years old, so this estimates that at 1 light-year about \(1-p_1\) = 10^-24 of the sky is not dark. If this were just one star, it would have a radius of 2 × 10^-12 light-years, or 19 km. That's only off by about 4 orders of magnitude, which is an illustration of how bad the assumption is that the stars are uniformly distributed. In reality we are living at the edge of a clump of stars. Also, the stars differ hugely in luminous flux and diameter. Finally, because the sky is mostly dark, dust is cool, and being opaque it is dark in a way that this trivial model can't handle.
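    The shell model above can be run numerically. The sketch below takes 1 - p1 = 10^-24 from the estimate in this post (an illustrative figure, not a measurement) and shows how slowly the sky fills in:

```python
# Numerical sketch of the shell model: each 1-light-year shell leaves the
# same dark sky fraction p1, and after n independent shells the dark
# fraction is p1**n. The value of 1 - p1 is the illustrative estimate
# from the post (1e-24), not a measured quantity.

import math

one_minus_p1 = 1e-24   # sky fraction covered by the first shell

def dark_fraction(n):
    """Fraction of sky still dark after n shells, p1**n."""
    # p1**n = exp(n * log(p1)); log1p keeps precision for p1 near 1
    return math.exp(n * math.log1p(-one_minus_p1))

for n in (1e10, 1e24, 1e26):
    print(f"n = {n:.0e} shells: dark fraction = {dark_fraction(n):.3f}")
```

    With only ~10^10 shells inside the light-travel horizon, the covered fraction is a negligible ~10^-14, matching the estimate above; the sky only fills in appreciably once n approaches 1/(1 - p1) = 10^24 shells.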
     
    Last edited: Jun 5, 2014
  15. humbleteleskop Banned Banned

    Messages:
    557
    That's only OK if the amount of light received is the same as you would get with any wider field of view. What do you do when there are many stars in your field of view, how do you avoid measuring their light? So why not use some normal, that is, static and wide, field of view to start with? A smartphone camera on auto settings will do.


    If the condition above is satisfied, then the number of photons recorded will be inversely proportional to distance. With equal exposure time, image #1 will receive fewer photons in total, per entire sensor array, than image #2.


    34? Why would I even start thinking about supposing such a thing? When you were calculating intensity you were looking at all the stars in all the shells, with a 360° field of view, and then instead of snapping a photo you first want to shrink your field of view to an infinitesimally narrow "line of sight"? What are you doing? Why? Seriously, why? Can you not get the correct answer with a static and wide field of view?


    Same answer as for the 1st question.

    [image]



    http://en.wikipedia.org/wiki/Inverse-square_law
     
  16. humbleteleskop Banned Banned

    Messages:
    557
    80% too dark relative to what? Can you show us the "correct" and the "wrong" versions of the image?
     
  17. humbleteleskop Banned Banned

    Messages:
    557
    There is no conclusion. Without time there is nothing. -- So anyway, what positive value do you choose, and why exactly that one and not some other? Can I choose the smallest value for K2 that it can accept?
     
  18. humbleteleskop Banned Banned

    Messages:
    557
    The inverse-square law is the reason why the stars in the Hubble Deep Field are invisible to the naked eye?

    The number of photons received from those stars would be the same if the HDF was in the paradox universe?
     
  19. humbleteleskop Banned Banned

    Messages:
    557
    I'm afraid I'm not familiar with more than a few things I see there. There are too many unknowns for me to even start evaluating what it is you actually said. Could you pick something that was previously discussed in this thread and relate it to that, so I can at least be sure what it is we are actually talking about?
     
  20. PhysBang Valued Senior Member

    Messages:
    2,422
    Well, you have clearly identified yourself as beyond help.
     
  21. Russ_Watters Not a Trump supporter... Valued Senior Member

    Messages:
    5,051
    Good to know, but it isn't something I'm too concerned about; it's a minor handicap. Since the start and end points are the same, it is ok if the middle points are not.
     
  22. Russ_Watters Not a Trump supporter... Valued Senior Member

    Messages:
    5,051
    [Sigh] Could you be any lazier about your own thread?

    I put your images into Photoshop. The stars are all identical and the images are 256-value greyscale. The stars are not single points, which is fine because it simulates a real image, but it makes the calculation less precise. I summed the greyscale values and got 984 per star for the first and 249 for the second. This is within the potential rounding error (particularly for values < 1), so I assume they were intended to be 1000 and 250, matching the inverse-square law.
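    The tallies can be checked against the inverse-square law in a few lines. The sketch assumes the second layer is at twice the distance of the first (which is what a 1000-to-250 ratio implies), and that 984 and 249 are the intended 1000 and 250 with rounding losses:

```python
# Check the measured per-star greyscale sums against the inverse-square
# law. Assumption: the second layer sits at twice the distance of the
# first, so its per-star sum should be 1/4 of the first's.

expected_first = 1000   # intended summed greyscale per star, nearer layer
distance_ratio = 2      # assumed distance ratio between the two layers

expected_second = expected_first / distance_ratio**2
print(expected_second)  # 250.0

measured = {"first": 984, "second": 249}
for name, value in measured.items():
    expected = expected_first if name == "first" else expected_second
    error = abs(value - expected) / expected
    print(f"{name}: measured {value}, expected {expected:.0f}, off by {error:.1%}")
```

    Both measurements land within about 2% of the inverse-square prediction, consistent with rounding of sub-unity pixel values.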

    So far so good?
    In your simulation, the stars in each layer are identical, but differ per layer, so when you combine layers, you get different amounts of light from different stars based on which layer they are in.
    Yes.
    Good.
    It appears to me we agree on this part. I'm already pretty sure your error isn't in the individual bits of logic, only in the final assembly: I'm just being methodical about this, holding your hand the entire way to make sure I don't lose you along the way.
    Fine, but again, for at least the eighth time, we're not going to change the exposure time.
    If you don't want it to vary, then you should stop talking about it varying. But since it doesn't vary, it doesn't matter, because we already have the starting image, so the details of how we got it aren't really that important. Regardless, since you want a number, I've now given you one: 1 second.

    [Edit: lost some of the post in an edit. Will fix later.]
     
    Last edited: Jun 5, 2014
  23. humbleteleskop Banned Banned

    Messages:
    557
    Came angry, left angry. You need more Jazz.
     
