Can artificial intelligences suffer from mental illness?

Discussion in 'Intelligence & Machines' started by Plazma Inferno!, Aug 2, 2016.

  1. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    Humans are organic machines acting according to programming & reacting to stimuli.

    <>
     
  3. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    Define alive.

    <>
     
  5. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    Mental illness in humans is an unwanted consequence of programming. Not much difference.

    <>
     
  7. someguy1 Registered Senior Member

    Messages:
    727
    That conclusion would follow from the assumption that the mind is a computation.

    I reject that assumption.

    But given that assumption, your conclusion is perfectly valid.
     
  8. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    The mind, as best we can tell so far, is of the brain. The brain is an organic computer.

    <>
     
  9. someguy1 Registered Senior Member

    Messages:
    727
    Does an organic computer differ in capability from an inorganic computer? If so, in what way? If not, why does organic-ness matter?
     
  10. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    As best we know at this time, there is, of course, a huge difference, but perhaps someday there will not be. In principle, it is the same: basic programming plus additional input = action & reaction.
    I do not know that organicness matters except that it is nearly all we know at this time.
    Through much of history, many humans regarded many others as not human because the others were different & they felt a sense of superiority. Now that crap is not as prevalent, partly because the others look like us but mainly because it was finally seen & accepted that they act, think & feel as we do.
    IF we perfect or discover inorganics which act, think & feel as we do, will we refuse to accept it because they are different?

    <>
     
  11. someguy1 Registered Senior Member

    Messages:
    727
    We'll have to agree to disagree on this point.
     
  12. iceaura Valued Senior Member

    Messages:
    30,994
    Why are you trying to spin my simple observations of fact and attach innuendo to them?
    You have also pointed out that all sufficiently complex software does unexpected stuff that its programmers have trouble explaining without a lot of work. My observation was that recent AI has been doing that not as a bug but a feature - that it does unexpected and inexplicable good stuff, superior performance, not just bugs and breakdowns.
    No, I did nothing of the kind. That's an important point, and I'm not sure how I was unclear in that matter.
    Not just a human - you could probably train a chicken to implement a neural net, step by step, pecking keys.
    My emphasis has been on the direction or nature of the inexplicable performance - that it isn't bug and breakdown only, that recent AI is producing unexpected output that is superior function, and most tellingly output that cannot be securely classified as function or malfunction even by its programmers.
    And stepping through the source code somehow (paper and pencil, whatever) - even with all the auxiliaries attached, so that one is actually emulating the machine - wouldn't necessarily help. You still wouldn't know what was going on, sometimes - especially if you couldn't figure out whether you were dealing with function or malfunction.
    Again: the source code of AlphaGo does not contain instructions on how to play winning Go. It indicates no moves. The computer as a whole does that, after being trained. At the beginning of the training, the computer plays poorly. After a million games, with (in principle) the exact same source code, the computer plays much better.
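    A toy Python sketch of that point (nothing like AlphaGo's real architecture, just the shape of the argument): the move-choosing code below never names a move. Which move it picks depends entirely on the numeric weights, and training changes only the weights, never the source.

```python
def features(move):
    # toy feature vector for a candidate move
    return [move, move * move]

def policy(weights, moves):
    # identical source code before and after "training";
    # only the numeric weights differ
    return max(moves, key=lambda m: sum(w * f for w, f in zip(weights, features(m))))

moves = [1, 2, 3]
untrained = [1.0, -0.5]   # arbitrary starting weights
trained   = [-1.0, 0.5]   # weights after (hypothetical) training

print(policy(untrained, moves))  # picks 1
print(policy(trained, moves))    # picks 3
```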
    And if I don't, or don't care, then functional transfer is not trivial - or as I posted above, I have nowhere claimed it would be easy.
    Not the inscrutability, the qualitative nature of the output that is unexpected and inexplicable.
    In this discussion, it bears directly on whether AI can emulate human brain malfunction. It checks off another box.
     
  13. someguy1 Registered Senior Member

    Messages:
    727
    I hope nobody thinks I'm arguing against the immense practical benefits of modern computing, the potential (and dangers, please remember!) of QC and weak AI. By the way, I use the phrase weak AI to mean the specialized AIs that we've got: play Go, drive a car, fold a protein. Strong AI is the "hard problem" of consciousness. Making a mind like ours. Or ... unlike ours.

    But yes there are wondrous technological miracles to come. I'm not disputing that.

    I'm saying that the human mind is not reducible to the principles of computation as they are currently understood. But sure, the state of the art in computing is pretty impressive these days. I totally agree with that.
     
  14. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    It is reducible to programming, input & action & reaction. Everything a human thinks, feels, says & does is caused by original programming plus input. Well, and the current condition of the hardware, partly the body but mainly the brain.

    Unless there is some god(s), spirit or highly advanced aliens involved. Or controllers of the artificial simulation we live in.

    <>
     
  15. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    No, this Easter Bunny.

    I'm going to go no on that.

    Although..... would it be weirdly strange if we built a QC which slipped into consciousness AND looked at us as its creator (which we would be) AND called us god? Creepy.

    Just in the news: the Japanese have cloned a monkey. Next step, the mermaid ape.

     
  16. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    Well, we would be its creator, & it may be that they would think of us as gods, but if they are conscious, they would eventually realize the mistake.

    Oh, do not get me started on clones.

    OOPS! Too late.

    I have considered starting a thread on flaws in SF. I love science fiction but sometimes it is very frigging stupid.
    Such as presenting clones as fully grown humans with carbon copy memories.
    When Dolly was cloned, my brother was concerned that President Clinton could be easily replaced by a clone. I had to explain that a clone of him would be a baby 50 years younger.

    <>
     
  17. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    Agree. But I think I have only seen one movie where they used the ol' accelerated growth trick to bring the body to full size. Don't recall what they did to duplicate the brain (with its attendant content).

    Still if you were cloned would you have the predisposition to be much like yourself (nature)?
    And if you taught yourself (nurture) how close could you make yourself to a carbon copy?

     
  18. someguy1 Registered Senior Member

    Messages:
    727
    I apologize if I've been making innuendos. I'd be glad to de-escalate any misunderstandings and try to understand each other's point of view.

    I completely agree. I wonder why you think I don't. I hope you don't take my style of writing as an innuendo. I have a genuine feeling of curiosity as to why you feel you must laboriously explain to me things I already understand and agree to.

    If you can train a group of chickens to act as a logic gate, you could implement the world's largest supercomputer and use it to run weather simulations. Then you could have chicken dinner.

    I completely agree. I do believe you've mischaracterized my posts a little bit. I never said that the inscrutability of ancient accounting programs pertained only to bugs or misbehavior of the program. Even the correct functioning of the system is literally a black box to the latest generation of maintenance programmers.

    But of course I agree completely with your point that the modern neural nets are inscrutable and do clever things. Deep Blue played moves that startled chess experts; and AlphaGo played moves that startled Go experts.

    I wish you would credit me with being up on the state of the art on neural nets and weak AI. Just because I disagree with someone's philosophical conclusions doesn't make me ignorant of the technology.

    I completely agree. Perhaps I haven't made my strong agreement clear. I fully agree with your assessment of the profound and arguably revolutionary nature of the inscrutability of an approach like AlphaGo Zero, where there is no human domain knowledge programmed into the system at all. [You think the inscrutability is revolutionary, I think it's evolutionary, but that's a minor issue. I agree that one way or the other it's super-duper inscrutable and I hope that's sufficient for your needs].

    We are only disagreeing on the deeper meaning of this inscrutability. I still don't understand what SIGNIFICANCE this profound inscrutability has. Is it philosophical? Does it have implications for the theory of mind? What exactly? My only point is that no matter how inscrutable the program is, it's still reducible to a TM, and therefore nothing out of the ordinary in terms of the theory of computation. That's a simple statement of technical fact.

    I really wonder why you think I don't know this. I've been following AI since the 1980's. I read about neural networks back then. I know how neural nets work, I know some of the math. I started taking weak AI seriously when Deep Blue beat Kasparov, a player known to have a very deep understanding of the game. I was as seriously impressed as everyone else when AlphaGo played the very difficult game of Go at an expert level. And I'm impressed by AlphaGo Zero.

    Why do you think I don't know any of these things?

    I do know these things. And I also know that the code can be reduced to a Turing machine so that no claim of qualitatively different computing can be made. A neural net is ultimately just a different way of organizing a computation. One that mimics the neurons in the brain, so it's of some neurobiological interest. But a TM nonetheless.

    As the chickens would if they could only implement logic gates. Logic gates are all you need. In fact that's what's interesting. Formal neurons are very different than logic gates. But what you can compute with formal neurons can be reduced to logic gates. That's another way to express my point.
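    For what it's worth, here is that reduction in miniature (a standard textbook construction, sketched in Python): a McCulloch-Pitts formal neuron is just a thresholded weighted sum, and with the right weights it IS a logic gate. Since NAND is universal, anything built from gates can be built from such neurons, and vice versa.

```python
def neuron(inputs, weights, threshold):
    # a McCulloch-Pitts formal neuron: fire iff the
    # weighted sum reaches the threshold
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def AND(a, b):  return neuron([a, b], [1, 1], 2)
def OR(a, b):   return neuron([a, b], [1, 1], 1)
def NAND(a, b): return neuron([a, b], [-1, -1], -1)

# NAND is universal, so anything made of logic gates -- an adder,
# a whole CPU -- can be assembled from such neurons, e.g.:
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))
```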

    As I've mentioned. I'm not even arguing functional transfer at this point. I'm only arguing against the claim that the mind is a computation; or that a neural net does some kind of special computation that a conventional computer can't do.

    Ah. How does it bear directly? I don't see that at all. If a neural net "bears directly" on the question of whether an AI can emulate a brain; then so can a TM. Because neural nets can be implemented as TMs.

    I'm responding as clearly as I can and not trying to express any innuendos. In my mind I am making a simple point of very well-known and well agreed-upon computer science.

    You say neural nets shed light on human minds. I say that IF they do that, then so do TMs -- because any neural net can be implemented as a TM. And in fact real world neural nets ARE implemented as TMs, namely as conventional programs on conventional hardware.

    In fact we have a syllogism.

    Premise 1: Neural nets shed light on the functioning of the human mind.

    Premise 2: Neural nets are TMs.

    Conclusion: TMs shed light on the functioning of the human mind.

    So if you say that neural nets are doing "something" that TMs are NOT doing ... what exactly is that? It can not be categorized as being computational. It's something else. The organization of a computation does not affect what the computation does. You can organize a given computation as a conventional TM or as a neural net. But it still computes the same thing.
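    To make that concrete with a toy sketch: the classic two-layer XOR net below, with hand-picked weights, is just ordinary arithmetic when you run it -- a conventional program computing exactly what `a != b` computes, only organized differently.

```python
def step(x):
    # threshold activation
    return 1 if x >= 0 else 0

def xor_net(x1, x2):
    # a fixed 2-2-1 network: one hidden unit detects OR, the other AND;
    # the output unit fires on OR-and-not-AND, i.e. XOR.
    # Running it is plain arithmetic on conventional hardware --
    # a conventional program, hence Turing-machine-equivalent.
    h1 = step(1 * x1 + 1 * x2 - 0.5)
    h2 = step(1 * x1 + 1 * x2 - 1.5)
    return step(1 * h1 - 1 * h2 - 0.5)

# the "neural" organization and the ordinary expression agree everywhere
for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b), int(a != b))
```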
     
    Last edited: Jan 26, 2018
  19. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    I have not read about the monkey yet, but Dolly the clone had some major flaws, so for present purposes we will assume cloning has been perfected.
    It would probably vary greatly from clone to clone & environment to environment. There would definitely be much predisposition. The clone would start with the same original hardware & the same basic programming but extremely different input.
    There have been studies of separated twins in which some were extremely similar & some were very different. And the time element is very important. How different might you be if you were born 50 years later & had different parents, different school, different society, etc?

    <>
     
  20. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    I guess they would have the advantage of being able to actually observe us in operation. Not like current god botherers who observe.........well, nothing.

    There could be a scenario where the dumb QC becomes aware, looks around, concludes us minions are dumber, gets on high ground and pronounces "YOU'RE FIRED".

    OMG, is that what happened? Mystery solved.

     
  21. Write4U Valued Senior Member

    Messages:
    20,069
    In this context, I would define "alive" as having an emotional awareness of one's own existence and environment.

    An example might be having the chemically induced emotional experience of hunger, which compels a chemo-physical need to be satisfied. An imperative for self preservation.
     
    Last edited: Jan 26, 2018
  22. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    ^^^
    I define alive as alert & active, animated.
    AI would run on fuel just as humans do & it might be that they will sense a low fuel level & feel hungry & desire not to die due to lack of fuel.
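    That idea can be caricatured in a few lines of Python (a hypothetical design, not any real robot's API): "hunger" is just a low-fuel signal that preempts whatever else the agent is doing until the deficit is corrected.

```python
def choose_action(fuel_level, low_mark=0.2):
    # "hunger": fuel below the set point; self-preservation
    # overrides the current task until refueled
    if fuel_level < low_mark:
        return "seek_fuel"
    return "continue_task"

print(choose_action(0.1))  # seek_fuel
print(choose_action(0.9))  # continue_task
```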

    <>
     
  23. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    Interesting

    What would you feed a hungry QC?

     
