Deep-space communications relay stations

The other is changed data in the data stream (the same as errors in transmission) that causes the decoding algorithm to fail completely.
Assuming NASA uses such a scheme...
Knowing NASA like I do, I can safely say they would do their best never to employ a scheme that "fails completely".

edit:
thanks for the images in post 21, especially the dish and voyager pictures.
 
Assuming NASA uses such a scheme...
Knowing NASA like I do, I can safely say they would do their best never to employ a scheme that "fails completely".

Well, sometimes you do, like when you want to ensure that you got the exact data that was sent with no alterations at all (or use other embedded means, like a checksum, to tell you the data needs to be resent).

But there is a fundamental difference between noise in an analog system (AM or FM, for instance) and digital, as in current TV broadcasting. The good thing about digital data is that you are sending nothing but "zeros and ones", so the amplitude and duration of the "ones" can be set sufficiently high and wide enough that normal noise is not typically a problem. In other words, spiky noise is not recognized as a one (see the sketch below the link).

http://en.wikipedia.org/wiki/Cliff_effect
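
Here is a rough numerical sketch of that threshold behavior (in Python; the +/-1 V levels, the uniform noise model, and the sample count are just illustrative choices, not any real link budget). The bit error rate stays at zero until the noise can cross the decision threshold, then climbs quickly - the "cliff":

Code:
import random

# A bit sent as +1 V or -1 V is still read correctly as long as the
# added noise stays below the 1 V decision margin; errors then appear
# abruptly once the noise can cross the threshold.
random.seed(1)

def bit_error_rate(noise_amplitude, n=10_000):
    errors = 0
    for _ in range(n):
        sent = random.choice([-1.0, +1.0])
        received = sent + random.uniform(-noise_amplitude, noise_amplitude)
        decoded = +1.0 if received > 0 else -1.0    # hard decision at 0 V
        errors += (decoded != sent)
    return errors / n

for a in (0.5, 0.9, 1.1, 1.5, 3.0):
    print(f"noise +/-{a:.1f} V -> BER {bit_error_rate(a):.3f}")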
 
... to ensure that you got the exact data that was sent with no alterations at all (or use other embedded means, like a checksum, to tell you the data needs to be resent). ...
Re-sending is not used for large data streams, especially by NASA, where just telling the originator to resend may require a 5-hour delay. Instead, Hamming error-correction coding is done. (I have encoded a small data set for error recovery by hand as a class exercise, but it is a fully automatic procedure normally.)

NASA (or any other organization with a lot of data to move through an imperfect channel) uses a Hamming error-correction code. It is no good to only know an error has occurred via an extra "check bit" - you want to get the bit sequence correctly despite bit corruption. Hamming-coded transmissions allow this.

See http://en.wikipedia.org/wiki/Hamming_code for a description, or
http://users.cis.fiu.edu/~downeyt/cop3402/hamming.html for examples of how the original data words are transformed so that the original can be recovered even if there is a transmission error.

One way to intuitively understand the idea in Hamming coding is that a "little bit" of each of the original data bits is represented in each bit of the Hamming-coded version, which is what is actually transmitted. Thus an error in the transmitted data converts back to a small error in ALL of the original data's bits - but not enough error to make the system mistake what was sent. I.e. your decoding calculation of a three-bit string which should be 0, 1, 0 might produce 0.2, 1.1, 0.1, but you can confidently guess that it is supposed to be 0, 1, 0.
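
For anyone who wants to see the mechanics, here is a minimal Hamming(7,4) sketch in Python: 4 data bits become 7 transmitted bits, and any single flipped bit can be located and corrected from the recomputed parity checks. (The data values are just an example; real deep-space links use longer and more powerful codes, but the principle is the same.)

Code:
# Hamming(7,4): positions 1, 2, 4 carry parity; 3, 5, 6, 7 carry data.

def hamming74_encode(d):                 # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                    # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                    # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Recompute the parity checks; the three results (the "syndrome"),
    # read as a binary number, give the 1-indexed position of the error.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:                              # nonzero syndrome -> flip that bit
        c = c[:]
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]      # return just the 4 data bits

data = [1, 0, 1, 1]
received = hamming74_encode(data)
received[5] ^= 1                         # simulate one bit corrupted in transit
assert hamming74_decode(received) == data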
 
I wasn't trying to explain all the different methods. In some cases retransmission is done, in others it's not. It depends, as you point out, on the different factors involved and the type of errors expected; there are different error-correcting codes, and each has its own traits that are matched to the need.

A Reed-Solomon error correction code was used on a Voyager mission, but then only for the pictures (which were great).

http://en.wikipedia.org/wiki/Reed–Solomon_error_correction

Uploading of new programmed instructions was not done with error correction but with traditional checksum methods, as that data has to be exact, not just very good.
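
As a toy illustration of that checksum-and-resend style of check (a simple modulo-256 sum here, and a made-up command block; real uplinks would use stronger checks such as CRCs):

Code:
def checksum(data: bytes) -> int:
    return sum(data) % 256               # simple additive checksum

block = b"LOAD 0x1A2B"                   # hypothetical command bytes
sent = (block, checksum(block))

received_block, received_sum = sent      # pretend this came off the link
if checksum(received_block) != received_sum:
    print("checksum mismatch - request retransmission")
else:
    print("block accepted, contents are exact")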
 
... Reed-Solomon error correction code was used on a Voyager mission, but then only for the pictures (which were great).

http://en.wikipedia.org/wiki/Reed–Solomon_error_correction...
Thanks. As an efficient decoding algorithm was not discovered until 1969, I was well out of classes by then.

Your link states: "In Reed–Solomon coding, source symbols are viewed as coefficients of a polynomial p(x) over a finite field." That sounds very much like the 2D Fourier transform of a field of pixels - i.e. of a photograph. There is a very famous FT of a photo of Lincoln. I can see why Reed–Solomon coding would be preferred for photos - a natural for photographs.

In both Reed–Solomon and Hamming coding the approach (the basic idea) is the same: every bit that is transmitted represents information from all of the source. Thus, if a transmission error occurs, the computed decoded version is not much different from the original source, i.e. only very slightly in error everywhere. If that is a picture, you just accept a very slight loss of photo quality.

If the source is known to be 1s and 0s, you can perfectly reconstruct the original binary data. As I illustrated: if when decoded you get 0.2, 1.1, 0.1 for a known-to-be-binary original, you make all the slightly wrong bits perfectly correct - i.e. you know the original was not 0.2, 1.1, 0.1 but 0, 1, 0.
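
To make the "coefficients of a polynomial" phrase concrete, here is a toy sketch of the idea in Python, using ordinary rational arithmetic instead of a finite field, so it is only an analogy to real Reed–Solomon and it only handles erasures (points known to be missing), not corrupted points: k source symbols become the coefficients of a degree k-1 polynomial, the polynomial is evaluated at n > k points, and any k surviving points give all k symbols back.

Code:
from fractions import Fraction

def evaluate(coeffs, x):
    """p(x), where coeffs[i] is the coefficient of x**i."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def recover(points, k):
    """Recover the k coefficients from any k surviving (x, y) points."""
    coeffs = [Fraction(0)] * k
    for xi, yi in points[:k]:
        basis = [Fraction(1)]            # Lagrange basis polynomial for xi
        denom = Fraction(1)
        for xj, _ in points[:k]:
            if xj == xi:
                continue
            denom *= (xi - xj)
            new = [Fraction(0)] * (len(basis) + 1)
            for m, b in enumerate(basis):
                new[m + 1] += b          # multiply the basis by x ...
                new[m] -= b * xj         # ... minus xj
            basis = new
        for m, b in enumerate(basis):
            coeffs[m] += Fraction(yi) * b / denom
    return coeffs

data = [Fraction(2), Fraction(3), Fraction(7)]       # k = 3 source symbols
sent = [(x, evaluate(data, x)) for x in range(6)]    # n = 6 transmitted points
survivors = [sent[0], sent[2], sent[5]]              # any 3 of the 6 suffice
assert recover(survivors, k=3) == data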
 
Your link states: "In Reed–Solomon coding, source symbols are viewed as coefficients of a polynomial p(x) over a finite field." That sounds very much like the 2D Fourier transform of a field of pixels - i.e. of a photograph. There is a very famous FT of a photo of Lincoln. I can see why Reed–Solomon coding would be preferred for photos - a natural for photographs.

I think you might be talking about three different things here.

The first is the use of Fourier transforms for artistic purposes, to change the look of a photograph. (Not specific to that particular transform; lots of them are used for various effects.) I haven't seen the picture of Lincoln you refer to, so I don't know whether, or what sort of, such effects were used.

The second is the use of various sine-based compression schemes to encode visual information. JPEGs use this method. The basic idea is that you transform spatial (X-Y) information into frequency-domain (i.e. energy per frequency) information. JPEGs specifically use a discrete cosine transform to do this. It works well because most images contain regular features that are easily encoded via a frequency-based approach. This is a lossy technique; the encoded image looks similar to, but not identical to, the original. However, 10:1 image compression is possible (see the sketch at the end of this post).

The third is lossless techniques like Reed-Solomon. These always result in MORE bits than the original had - but if you lose a small number of bits you can recover the original signal without any loss of information.

(There are, of course, systems that combine the last two, allowing recovery of a compressed picture even with bit errors.)
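
Here is a minimal sketch of the frequency-domain compression idea from the second point above, on a single row of pixel values (Python with SciPy, assuming it is available; the pixel values are just an example, and real JPEG works on 8x8 blocks with a 2D DCT plus quantization):

Code:
import numpy as np
from scipy.fft import dct, idct          # assumes SciPy is installed

# Transform one row of pixels to the frequency domain, discard the
# high-frequency coefficients, and transform back: the result is close
# to, but not identical to, the original (lossy compression).
row = np.array([52, 55, 61, 66, 70, 61, 64, 73,
                94, 110, 120, 118, 112, 103, 96, 90], dtype=float)

coeffs = dct(row, norm='ortho')
kept = coeffs.copy()
kept[4:] = 0                             # keep only the 4 lowest frequencies
approx = idct(kept, norm='ortho')

print(np.round(approx))                  # similar shape, small per-pixel error
print(np.max(np.abs(approx - row)))      # worst-case pixel error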
 
Here are two images of Pres. Lincoln:

[attached image: F13.medium.gif]
The right version is more like what your brain processes after the processing in V1 is done. In part that explains why and how you recognize things of any size or orientation. I.e. recognition is not by comparison to millions of stored images but by analysis of the single set of "FT-like" transforms that different-sized images would give.

Actually, as spatial FTs extend forever, the types of transforms used in the brain after the eye and V1 are more like Gabor functions.
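
For the curious, a Gabor function is just a sinusoid under a Gaussian envelope - localized in space, unlike a pure FT component that extends forever. A minimal sketch (the wavelength, width, and orientation values are arbitrary):

Code:
import numpy as np

def gabor(x, y, wavelength=8.0, sigma=4.0, theta=0.0):
    """Sinusoid along direction theta, damped by a Gaussian envelope."""
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

yy, xx = np.mgrid[-16:17, -16:17].astype(float)
patch = gabor(xx, yy)        # 33x33 filter, responds to ~8-pixel stripes
print(patch.shape)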

Also, it is a lot of fun to play with optical FTs, which, if I recall correctly, are found at the focal length of a lens (i.e. the transform components are all parallel beams that come to a focus there, each at its own location in the focal plane). The "normal image" is both at the source location and, upside down, at the image location, both more than the focal length from the lens.

I.e. at the focal plane you can stick in the black tip of a match to absorb particular FT components. Then, with a large lens to capture most of the main FT components, re-form the modified image - i.e. invert the FT to get back the normal image and see what it looks like with some FT component absent. This works best with a highly regular original "image" like a piece of screen wire, which makes very discrete FT components that you can remove one at a time with the black match tip.

Few realize it, but lenses do an optical FT-like transform.
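
The match-tip experiment has a simple numerical analogue: take a regular "screen wire" pattern, compute its 2D FFT, zero out one frequency component (and its mirror, to keep the result real), and invert. A rough sketch (the mesh pattern and sizes are just illustrative):

Code:
import numpy as np

n = 64
y, x = np.mgrid[0:n, 0:n]
screen = ((x % 8 < 2) | (y % 8 < 2)).astype(float)   # crude wire mesh

spectrum = np.fft.fftshift(np.fft.fft2(screen))       # DC moved to the center
c = n // 2
spectrum[c, c + 8] = 0        # absorb one horizontal-frequency component
spectrum[c, c - 8] = 0        # and its mirror, so the image stays real

modified = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
print(np.abs(modified - screen).max())   # the "re-imaged" mesh has changed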
 
But there is a fundamental difference between noise in an Analog system (AM or FM for instance) and Digital, as in current TV broadcasting.
:confused:
there is?
The good thing about digital data is you are sending nothing but "zeros and ones" and so the amplitude and duration of the "ones" can be set sufficiently high and wide enough that normal noise is not typically a problem.
In other words, spiky noise is not recognized as a one.
This would be true until the signal strength dropped to at or below the noise level.
It appears that NASA does not use straight binary to transmit to or from Voyager but uses PCM instead.
I've been looking for the methods NASA uses to ensure data integrity, but so far the only things I have seen relate to hardware redundancy.
I DO know data communications with the shuttle were done in triplicate, with a voter circuit to determine the outcome (a small sketch follows).
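
A 2-out-of-3 voter is easy to sketch, for what it's worth (this shows only the general majority-vote idea, not the actual shuttle hardware; the bit streams are made up):

Code:
# Majority vote, bit by bit, across three copies of the same stream.
def vote(a, b, c):
    return [(x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c)]

ch1 = [1, 0, 1, 1, 0]
ch2 = [1, 0, 0, 1, 0]     # one bit corrupted on this channel
ch3 = [1, 0, 1, 1, 0]
assert vote(ch1, ch2, ch3) == [1, 0, 1, 1, 0]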
 
Re-sending is not used for large data streams, especially by NASA, where just telling the originator to resend may require a 5-hour delay. Instead, Hamming error-correction coding is done. (I have encoded a small data set for error recovery by hand as a class exercise, but it is a fully automatic procedure normally.)
What is the source for your statement that "Hamming code" is used?
I've looked for the types of error correction NASA uses and I can't find any.
 
:confused:
there is?

Yes, there is. Any noise in an analog signal affects the outcome.
Only noise above a threshold affects digital.

http://en.wikipedia.org/wiki/Cliff_effect

This would be true until the signal strength dropped to at or below the noise level.
It appears that NASA does not use straight binary to transmit to or from Voyager but uses PCM instead.

PCM is binary.

http://en.wikipedia.org/wiki/Pulse-code_modulation
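
A minimal sketch of what PCM produces: sample an analog waveform, quantize each sample, and send the resulting bits. (The sample rate and the 8-bit quantizer here are just illustrative, not anything NASA-specific.)

Code:
import math

fs = 8            # samples per cycle (illustrative only)
levels = 256      # 8-bit quantizer

for n in range(fs):
    sample = math.sin(2 * math.pi * n / fs)            # analog value in [-1, 1]
    code = round((sample + 1) / 2 * (levels - 1))      # quantize to 0..255
    print(f"{sample:+.3f} -> {code:3d} -> {code:08b}") # the bits actually sent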

I've been looking for the methods NASA uses to ensure data integrity, but so far the only things I have seen relate to hardware redundancy.
I DO know data communications with the shuttle were done in triplicate, with a voter circuit to determine the outcome.


I never heard about the triplicate arrangement on the shuttle.
Source?

http://en.wikipedia.org/wiki/Tracking_and_Data_Relay_Satellite
 
What is the source for your statement that "Hamming code" is used? ...
Personal experience, but it is about 20 years old now. I was in the Space Department at the Johns Hopkins Applied Physics Lab. We built many spacecraft packages for both the US Navy and NASA - not usually the entire spacecraft, but often at least one major scientific package - on more than 100 launches now.

Go here to see some 20 or so of the more important ones starting in 1973:

http://civspace.jhuapl.edu/programs/index.php?sort=Launch&show=all

Those are only the civilian packages - many for the Navy also exist. Some are secret and not even listed.

I cannot promise Hamming code is still the best, but it was.
 
It appears that NASA does not use straight binary to transmit to or from Voyager but uses PCM instead.

PCM (pulse code modulation) is a modulation technique used to transmit digital messages. There are a great many methods of doing so. Saying "NASA does not use binary; they use PCM instead" is akin to saying "Aircraft radios don't use radio waves; they use VHF instead."
 
It seems I have confused PCM with PWM; sorry, guys.
PWM is analog and may be used, for example, to control a model airplane's rudder - set for straight ahead if the + and - square pulses are of equal duration; a longer + might turn the plane to the right, etc.
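
As a toy version of that rudder scheme (the gain figure is made up purely for illustration):

Code:
# Deflection set by how much longer the "+" half of the pulse is than
# the "-" half; equal halves mean straight ahead.
def rudder_deflection(pos_ms, neg_ms, gain_deg_per_ms=30.0):   # hypothetical gain
    return gain_deg_per_ms * (pos_ms - neg_ms)

print(rudder_deflection(1.5, 1.5))   # equal pulses -> straight ahead (0.0)
print(rudder_deflection(1.8, 1.2))   # longer "+"   -> turn right (+18.0 deg)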
 