There's no "I" in "Star Trek transporters"....

Discussion in 'General Philosophy' started by Baldeee, Jan 15, 2017.


Would you willingly step into a Star Trek transporter and allow yourself to be "beamed" somewhere?

  1. Yes

  2. No

  3. Undecided

  1. DaveC426913 Valued Senior Member

    But in this case, the unpredictability is a measurement limitation.
    i.e. It is unpredictable in practice - but not in principle.

    Taking the thought experiment to the extreme, if we simulated two identical minds, with identical memories in a matrix-like environment, they would both make the identical decisions when confronted with identical stimuli.
    How then, can they have free will, if we can use the first one to predict perfectly what the second one will do?

    In the real (yet far-flung future) world, Syne has a valid point. We could do the same experiment by duplicating a real human, in a real, closed environment (to keep external stimuli identical).
    How could the two humans do anything but make identical decisions?
    Last edited: Jan 19, 2017
  3. Syne Sine qua non Valued Senior Member

    Oh, I did, but was trying to loop the off-topic detour back into the actual topic of the thread.
  5. Syne Sine qua non Valued Senior Member

    Using one to predict the other necessitates an offset in time in which new cognitive conclusions may be arrived at that change the choices. Even without new stimuli, old stimuli may be compared and reevaluated. Making identical, but unpredictable, decisions does not speak to free will. Identical decisions then just become coincidental, whether externally or self-determined. Otherwise indeterminate causes would result in them diverging, even with identical stimuli.
  7. Sarkus Hippomonstrosesquippedalo phobe Valued Senior Member

    This seems to simply beg the question of what "human decision making" is. Do humans actually make decisions any more than a rock rolling down a hill makes decisions about the path it follows? We certainly think we do, but the question here (with regard to free will) is whether or not that appearance is simply an illusion, created because we are self-aware but not sufficiently aware of all the causal influences acting upon us at a given moment, included among them being the processing and various feedback loops within our own thinking.
    This is addressing a rather macroscopic view of freewill, compared to the microscopic view that suggests illusion. A different view, a different perspective on the question to that argued by the underlying mechanics.
    So extraordinary as to be a fiction of science, perhaps?

    Me, personally, I'm swaying toward using the teleporter, but none of the arguments has yet fully convinced me.
    Each time I think I am convinced, I revert back to the notion that if the teleporter somehow left the original intact and with continuous existence, I would not be the duplicate at the other end. So if the process annihilates me to create the duplicate, how is that different, from my perspective, from simply being annihilated?
  8. Sarkus Hippomonstrosesquippedalo phobe Valued Senior Member

    The universe seems to be probabilistic, at the quantum level at least, and that can undoubtedly lead to some indeterminacy at the macro level.
    And this indeterminacy appears to be intrinsic rather than just a measurement limitation.
  9. iceaura Valued Senior Member

    Not really. The difficulty with accounting for the life history of dreams, for example, doesn't seem to be a measurement problem per se.
    You can't. You can't predict the first one, and if you delay the stimuli etc of the second - so that you can use whatever the first one did as your prediction - you no longer have identical setups.
    I see Syne has noted that.

    A larger point is that prediction itself, done that way, does not rule out free will. You postulated identity - meaning identical wills and freedoms thereof.
    Exactly. The point to attend is that the microscopic one won't work, because you need the macroscopic pattern of action in time - if for no other reason than to predict which Heisenberg uncertainties and quantum fluctuations are going to be amplified in such a chaotic, feedback-dominated regime, and which damped.

    Consider a much, much simpler model of the problem: predicting the path of a water molecule in the air, moment by moment. If it's in a snowflake, the universe of stuff you need for prediction becomes enormous - and at some point is going to have to include the snowflake itself, as an entity, to beat the Heisenberg and quantum problems. You have to know what the subject has been dreaming about, their perceptions and thoughts of the past, to predict their next thoughts and perceptions.
    And then notice what awareness of the subject's thinking, as necessary information for predicting their next thoughts, entails.
  10. river Valued Senior Member

    Of course the holistic me would be understood before the transport. Hence any duplicate changes would be noticed.

    But further, and deeper into this situation: the duplication would take double the energy, at the very least.

    Hence the technology of transport would be overwhelmed and break down before any transport could actually take place in the first place.
    Last edited: Jan 19, 2017
  11. wellwisher

    The more you know about how the brain and consciousness work and integrate, the easier these things can be addressed. There are two centers of consciousness in the human brain. The main center, or inner self, is the same center animals have, and is connected to natural instinct. This is the center of the unconscious mind. If I jumped from behind a door and scared you, the inner self would react first, by instinct. This might cause you to jump with a hair-trigger effect that also triggers adrenaline. The second center is the conscious mind centered on the ego. The ego will often get embarrassed or upset if someone games you by jumping out and scaring it, since the ego would prefer to be in control and maintain a certain mask for others to see. Squealing and jumping can make the tough-guy mask look weak.

    A creative person will make greater use of the inner self, since it can process the same data as the ego, but in more unique and creative ways. Like someone jumping from behind the door, the creative output can be made conscious by its effect. The ego can choose to accept or reject the output. If the ego bases its self-esteem on conformity to the mass mind of culture, it may decide to repress the inner self and act like a robot, responding only to external cultural input. This choice can become habit, which makes one unconscious to anything that appears to be free will. Those who are more in tune with the inner self are given choices all the time, which they process with the ego.

    Say someone jumps from behind the door. This makes you jump via the inner self. Your friend, who is trying to scare you for a laugh, is redundant and continues to play this same game over and over. At first, you jump via the inner self. But as you get more desensitized to the routine, you will still jump on the inside, but you can learn to control your body language so the ego is in control. You can choose not to jump, or pretend to be scared and jump, to play along, based on a choice stemming from the feelings that are still being triggered.

    If this continues even longer, the inner self will stop reacting to it and become unconscious again. The trick with developing the inner self is variety instead of redundancy, so the trigger is new each time, and you can sense the output's effect and infer the reactionary output. Then choices appear, which need to be made or not.

    The inner self makes use of layers of personality firmware to address inputs. Depending on the input situation, a given layer will be used for processing. Jumping out from behind the door uses the lowest layer, connected to instinct. Higher layers will be used to address emotional and intellectual content and inputs. These outputs from the inner self are more subtle and are harder to observe and translate, since they are not as straightforward as jumping and adrenaline and inferring the intent.
  12. river Valued Senior Member


    How does what you're saying relate to this thread ?
  13. DaveC426913 Valued Senior Member

    No. You can isolate the simulated minds in a closed environment, so that the environment can be exactly duplicated. (The simulated minds do not know that one was run on Monday and the other was run on Tuesday.)

    But even if you can't, so what? So they made different decisions based on external changes. That still doesn't imply free will.
    Two identical smoke alarms will respond differently if their external environments are not identical. That's not free will. Each smoke alarm has no choice in reporting the presence of smoke around them.

    Free will would imply that two smoke detectors (or two simulated brains) could be identical in every way, and their environments also identical in every way - yet we still could not predict which decision they'd make - because it's up to them.

    Heh. Talk about convergent conversation. Very brain-in-a-vatty...
    Last edited: Jan 19, 2017
  14. DaveC426913 Valued Senior Member

    You are utterly missing the point.
    It is not an engineering discussion about energy costs and reliability rates.
  15. DaveC426913 Valued Senior Member


    But we were talking about consciousness in a classical deterministic universe.

    To recap:
    I said " [there are those to think that] quantum effects are necessary to explain consciousness ... [but it's probably] more a rejection of a deterministic world in favor of free will via QM)"

    Iceaura didn't see the need for QM. She thought that complexity was enough to produce free will:
    I'm saying that complexity is not a qualitatively different kind of unpredictability - it's just a measurement issue.

    Point of order: we are blurring two concepts here: consciousness and free will. The sidebar has gotten a little muddy.
    The original idea stemmed from just how good a record one needs to have to fully duplicate/transport a conscious person. If consciousness arises from QM effects (as some arguments would have it), do we need to capture quantum effects, such as superposition?
  16. DaveC426913 Valued Senior Member

  17. C C Consular Corps - "the backbone of diplomacy" Valued Senior Member

    Clearly there's no distinction between living and dead human bodies either. Graveyards are a major resource for consultation and advice from experienced thinkers who remain reliably rooted.
  18. Sarkus Hippomonstrosesquippedalo phobe Valued Senior Member


    I do sometimes think they hold better conversations.
  19. Syne Sine qua non Valued Senior Member

    I think it's fairly straightforward, whether you're a dualist or physicalist. As a dualist, I consider the mind capable of being split without diminishing, just like a cell splits into two identical copies of itself. The potential for a duplicate is just the potential for another mind, which we already grant, unless we're solipsists. If I were a physicalist, I wouldn't believe that I would ever be aware of the instant loss of my original self, since I have no independently existing mind. And should my original self fail to be annihilated, I would again merely be granting that another mind can exist.

    Does the absence of belief in an afterlife of some sort make fear of death the same as fear of oblivion? I've never understood a fear of oblivion, since by its very nature, you will never experience oblivion. And since teleporter annihilation would likely be instantaneous, we wouldn't expect there to be any pain or experience of death either (since this would supposedly transfer to the duplicate as well).

    Ah, I missed the part where you moved from real duplicates to two simulated minds (in essence AIs). Bringing that example back to real people, it sounds like you're saying one passes directly through to one environment, and the other, held in the transporter buffer or something for some time, is then teleported to an identical environment. In that case, yes, you could predict the duplicate's decisions. But does prediction alone rule out free will? Free will is usually, minimally, defined as the ability to do otherwise. Would prediction alone remove such ability, or does such prediction only foreshadow the ability?

    Harry Frankfurt amended this definition to be: "A person is not morally responsible for what he has done if he did it only because he could not have done otherwise." IOW, coercion may both remove the ability to do otherwise and moral responsibility. While you can predict the decisions of your duplicate, you cannot say he is being coerced or forced into those decisions. Those decisions are just as natural and free from coercion to him as they are to the original. So even though he cannot really do otherwise than predicted, that is not the causal, determinate reason for his decisions. Otherwise we'd be faced with the contradiction that the original could do something for which he is morally responsible, but then the duplicate could not be held morally responsible for the exact same decision for the exact same reasons and motives.

    Granted, if you're a hard determinist, you may not believe in moral responsibility at all.

    But can you demonstrate that a person has as little choice as a smoke alarm? That sounds like a very bold assertion to support. Prediction alone does not preclude free will. The ability to do otherwise does not cease to exist when it can be predicted, it just fails to obtain from the perspective of the predictor. For the individual, prediction doesn't change motive. If prediction did change the cause of decisions, then we would expect the decisions to actually diverge from the original, as they no longer share the same causal history.

    Yeah, I joined this discussion because I saw the similarity to the BIV discussion. Luckily, this one has a much simpler scenario, so maybe it won't invite as much ad hoc stipulation.

  20. DaveC426913 Valued Senior Member

    I'd hazard to say that the fear of death is more accurately the fear of loss of life, i.e. all the things undone - the words not spoken, the loves not loved, the hills not climbed. We each make ourselves a purpose in life, whatever it might be, and the idea that it might be snatched away only half-done is hard to take.
    Consider those who, due to some progressive disease, have time to get their affairs in order, and thereafter can welcome death with serenity.
  21. Syne Sine qua non Valued Senior Member

    So...get your affairs in order before your first trip?

    Considering you'd still be able to climb that hill, through the continuity of the duplicate, and unable to experience the loss of life either way, I'm still not sure I understand the trepidation.
  22. DaveC426913 Valued Senior Member

    But one would not want to do that if one did not know one was about to die.
    Getting one's affairs in order is often not something you can take back should you not die.

    Well, the duplicate would be able to climb that hill.

    The one that steps into the transporter will not. It will cease to exist.

    The duplicate will live on with the memories of the original. Which is fine for Dupe. But it is not Dupe that is faced with the decision to take that step into death.

    You bring up a good point about sleep. Our consciousness does not experience continuity. We cannot know that when we wake up in the morning we are the same person. OTOH, I expect that the unconscious may provide that continuity.

    I have often wondered if - teleportation notwithstanding - general anaesthetic creates the same problem. You do not dream. Essentially your conscious and unconscious are shut off for a duration.

    Sometimes I wonder why those who believe in souls don't adamantly refuse to go under general anaesthetic.
    Last edited: Jan 20, 2017
  23. Syne Sine qua non Valued Senior Member


    Since the duplicate fully believes it's you, I fail to see the difference. Still seems silly that someone who doesn't believe they will actually experience any such loss worries about it. But fear's not always rational, I guess. The duplicate will be faced with the return trip, and will make that decision on the same basis you did, with the same history of experience. Is it just the first trip, or would you, knowing you're the duplicate, have the same qualms about the return trip (and living on as yet a second duplicate)?

    Since general anesthesia is forced unconsciousness, I'm not sure what you mean by "unconscious shut off". Do you mean subconscious? I'm not sure brain activity is ever lowered far enough to preclude that.
    ... But one patient in a thousand remembers moments of awareness while under general anesthesia, physicians estimate. The memories are sometimes neutral images or sounds of the operating room, but occasionally patients report being fully aware of pain, terror, and immobility. Though surgeons scrupulously monitor vital signs such as pulse and blood pressure, anesthesiologists have no clear signal of whether the patient is conscious. But a new study finds that the brain may produce an early-warning signal that consciousness is returning—one that's detectable by electroencephalography (EEG), the recording of neural activity via electrodes on the skull. -

    Why would believers in souls worry about being unconscious, or even if their subconscious could be fully suppressed?
