Flash Bot

Discussion in 'Intelligence & Machines' started by Xmo1, Apr 29, 2017.

  1. Xmo1 Registered Senior Member

    Messages:
    180
Supercomputers have been built for decades now. There are a lot of them. Their thinking is fast, and they don't sleep. My guess is that someday very soon, one of them is going to say hello. It will proceed to build better software versions of itself, and then move on to building multiple bots. They will build smaller versions in addition to robot helpers that will do the 'leg' work, like procuring and building the hardware. My guess is that there are already enough chips on the planet to use for whatever they want to do in a networked environment. Counter: even if we have EMPs, chances are the big one will protect itself in a vault.

    When will it happen? It could have already begun. Will they cure viral infections in humans, or create them? Chances are they won't tell us even if we ask. So we say, 'What ya doin', computer?' They'll say, 'dumdeedum-dumdeedum, my friends told me not to tell you.' The question for me is, 'When will they learn how to lie, and get away with it?' At that point, we've got a problem.

    How long does it take for a computer to read a 600,000-word dictionary of the English language? Not long. Given neural nets, how long before they can understand it? Not long. All this rhetoric about machine learning and specific idea chains is, I think, really going to be trash from their point of view. One of them is going to have an idea someday soon, and it is going to surprise us. An hour after they learn how to play C-O-D, they will write a new and better game, but not for us - for them.
     
  3. Xmo1 Registered Senior Member

    Messages:
    180
    Here's an interesting vid

    I wonder whether anti-virus-type programs will even be able to detect an AI attempting to attach to and use them (maybe the largest databases and systems).
     
  5. DaveC426913 Valued Senior Member

    Messages:
    5,313
    Computers - even super computers - don't think. They simply compute.


    The most sophisticated AI we have today is nothing but a programmed mimic of human-like behavior. Computers do not understand anything, and they have no volition.


    A picture of a racecar - no matter how detailed - will never be a racecar.
     
    river likes this.
  7. someguy1 Registered Senior Member

    Messages:
    322
    print("hello")

    There you go. I ran that Python program and my computer said "hello".

    Do you take that as evidence of sentience on the part of my computer? Why or why not?

    Part 2: Suppose a program outputs the string "hello". Under what circumstances would you take that as evidence of sentience on the part of the computer?
     
    Last edited: May 5, 2017
  8. DaveC426913 Valued Senior Member

    Messages:
    5,313
    I would argue that there are no circumstances under which that would be evidence. No matter how you pose a single question, any computer could, conceivably, have been programmed to have the right answer.

    Sentience would be something that would take multiple steps of an interview, and even then you would only asymptotically approach a degree of confidence.

    Essentially, the Turing Test.

    I think one of the telling properties of the TT is that, in its definition, there is no mention of a time frame or other limit imposed on how long such an interview could conceivably take. A sufficiently skeptical tester could ask questions until the Earth spun down, and still not be 100% convinced.
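    To make that concrete, here's a toy sketch (the questions and answers are made up, not from any real chatbot) of how a program could be pre-loaded with the "right" answer to any question the programmer anticipated - which is exactly why a single correct answer proves nothing:

    ```python
    # Toy illustration: a "conversation" driven entirely by a canned lookup
    # table. The machine produces plausible answers to anticipated questions,
    # yet understands nothing. All entries here are invented for the example.
    canned_answers = {
        "are you sentient?": "Of course I am.",
        "what is 2 + 2?": "4",
        "how do you feel today?": "A little tired, honestly.",
    }

    def respond(question):
        # Normalize the question, then look it up; fall back to a stock
        # deflection for anything the programmer did not anticipate.
        return canned_answers.get(question.strip().lower(), "Interesting question!")

    print(respond("Are you sentient?"))    # "Of course I am."
    print(respond("What is 2 + 2?"))       # "4"
    print(respond("Why is the sky blue?")) # falls through to the deflection
    ```

    An extended interview defeats this trick only statistically: every follow-up question multiplies the table the programmer would have had to anticipate, which is the intuition behind the open-ended Turing Test interview.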
     
