The AI and the power switch

Discussion in 'Free Thoughts' started by Edont Knoff, Feb 23, 2016.

  1. Edont Knoff Registered Senior Member

    Messages:
    547
    It's debated whether an AI without a robotic body can be intelligent, but for this thought experiment I want to assume it can.

    Assume an AI sufficiently developed to have a concept of "self", knowledge that it is "run" by a machine, and awareness that said machine has a power switch, along with the consequences of being powered down ("unconsciousness").

    I'm trying to make up my mind about the ethics of switching off the power in such a scenario. I want to assume that the AI can be restored fully after turning the power back on, and that the process will not be overly dramatic for the AI.
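
    To make the "restored fully" assumption concrete, here is a minimal sketch in Python (a toy; every name in it is made up for illustration, not a real system): the AI's entire state is checkpointed before power-off and reloaded on boot, so the restored instance is indistinguishable from the one that was switched off.

        import pickle

        class ToyAgent:
            """Stand-in for the AI: its 'mind' is just this mutable state."""
            def __init__(self):
                self.memories = []
                self.self_model = {"name": "agent-0", "runs_on_machine": True}

            def checkpoint(self, path):
                # Serialize the complete state before the power is cut.
                with open(path, "wb") as f:
                    pickle.dump(self.__dict__, f)

            @classmethod
            def restore(cls, path):
                # On boot, rebuild the agent from the checkpoint.
                agent = cls()
                with open(path, "rb") as f:
                    agent.__dict__.update(pickle.load(f))
                return agent

        # A power cycle: from the agent's view, a blackout with no data loss.
        a = ToyAgent()
        a.memories.append("conversation before shutdown")
        a.checkpoint("/tmp/agent.ckpt")   # ... power off ... power on ...
        b = ToyAgent.restore("/tmp/agent.ckpt")
        assert b.memories == a.memories   # fully restored, as assumed above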

    I want to compare it to a doctor anesthetizing a patient. This carries a risk, and particularly the time after waking up is strange, but I didn't experience it as very dramatic. In hindsight it is just a blackout and a period of mild confusion on waking, until thinking and memory begin to work properly again.

    Still, in my home country this requires the (usually written) consent of the patient (if they can give it); otherwise it counts as criminal assault (I hope this translation is correct; I'm not good with legal terms in English). There are cases where a doctor needs to save a life and cannot ask, because there is no time or the patient is unable to answer. Overall, this is a serious question when humans are involved.

    Should the AI have a say in the control of the power switch, just as a human must consent to being anesthetized? Powering the machine down and booting it up again is probably safer than anesthetizing a human and waking them up again. But still, it puts out a sentient being.
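
    And as a sketch of what "having a say" could mean in software (again a toy with made-up names, not a proposal for a real mechanism), the power-down routine could mirror the medical consent rule above: ask first, and allow an override only in the machine equivalent of a life-saving emergency.

        def request_shutdown(agent_consents, emergency=False):
            """Informed-consent rule for the power switch: the switch works
            only with the AI's agreement, except in an emergency, just as a
            doctor may act without asking when there is no time."""
            if emergency:
                return "powering down (emergency override)"
            if agent_consents():
                return "powering down (consented)"
            return "shutdown refused: no consent"

        # The answer would come from the AI itself; here stubs stand in.
        print(request_shutdown(lambda: True))                   # consented
        print(request_shutdown(lambda: False))                  # refused
        print(request_shutdown(lambda: False, emergency=True))  # override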

    I can't make up my mind on this question of the AI and the power switch. What do you think about this scenario?
     
    ajanta likes this.
  2. ponkaponka Registered Member

    Messages:
    21
    thanks for sharing it here.
     
  3. DaveC426913 Valued Senior Member

    Messages:
    18,959
    I think this question boils down to: should a sufficiently advanced AI have inalienable rights, akin to human rights?

    As with all change, the general consensus will at first be based on superficial properties: it doesn't look like us, so it shouldn't be treated like us.
    That defines things by their superficial differences.
    It will take education, cultural penetration and landmark legal cases before we as a society decide that the superficial differences must take a back seat to the fundamental similarities.

    That an entity capable of understanding its own death should be given the right to dominion over its life.

    We're getting there with dolphins and chimps. By the time we have a functioning AI where this is a problem, hopefully we will have reached this level of sociological and moral enlightenment.
     
    cluelusshusbund likes this.
  4. RainbowSingularity Valued Senior Member

    Messages:
    7,447
    this here is your hard point.

    attempting to validate an action as a function of subjective morality.

    robot turns self off
    doesn't care because can turn self back on

    AI turns self off
    doesn't care because can turn self back on

    human turns self off
    doesn't care because can turn self back on

    time is not equal.
    humans age, and fashion, trends and culture make massive impacts on the human mind.

    the AI is just receiving data, so it doesn't matter what type of data it is or what amount of differing data is available.
    all data is data.

    different question

    robots already administer drugs to humans that could kill them if they go wrong.

    you need to isolate the part and type of the process that defines the decision-making, and how that relates to any person on the other end.

    consent is subject to a moral position when life or death are defined as possible outcomes of unknown effect etc...
    terminally ill patient who wishes to commit suicide and wants a computer to choose the moment for them randomly ... ?
     
    Last edited: Feb 11, 2019
  5. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    ...is any moment "random" (even for a computer)? There are other threads on this issue: "should an autonomous car kill someone to save the driver?"
     
  6. DaveC426913 Valued Senior Member

    Messages:
    18,959
    This is an arbitrary and human-centric distinction.

    A human can conceivably consider fashion trends and culture to be just so much data.
    An AI can conceivably care as much about the nuances of its data as any human.
    And an AI's mind can conceivably feel just as much loss at having its data cut off as any human.
     
