AI and the singularity

Discussion in 'Intelligence & Machines' started by arfa brane, Jun 9, 2017.

  1. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Do you think it will be possible for humans to build machines (computers) that will be able to perform tasks no human can understand?

    Will we build AI systems that, for instance, learn how to compose quantum algorithms that we have difficulty understanding, at the least?
     
  2. Write4U Valued Senior Member

    Messages:
    20,069
    Perhaps deeper knowledge of how our brains function might give insight into a possible structuring of AI. I found this TED Talk, and the presentations that follow it, fascinating and possibly pertinent to AI.
    https://www.ted.com/talks/mehdi_ordikhani_seyedlar_what_happens_in_your_brain_when_you_pay_attention?
     
  3. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    The question, though, isn't so much about AI and whether we can program consciousness into a machine; it's about whether we can write specialist, knowledge-based programs that can "understand" quantum programming better than we do.

    If so, will it discover better quantum algorithms? Will we in fact be compelled to do this, because we just don't understand the quantum algorithms that well? Evidence for this is the relatively small set of quantum algorithms that we are sure about, and the fact that most of them use two or three qubits. Beyond that, there is Shor's algorithm, which works with any number of qubits, and a handful of others.
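
    To make the scale concrete, here is the kind of two-qubit circuit we do understand well: Deutsch's algorithm, which decides in a single oracle query whether a one-bit function is constant or balanced. A minimal sketch in Python, assuming Qiskit is installed; the oracle chosen here (a CNOT implementing f(x) = x) is just one illustrative case.

    Code:
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 1)
    qc.x(1)           # put the ancilla qubit in |1>
    qc.h([0, 1])      # superpose both qubits
    qc.cx(0, 1)       # oracle for the balanced function f(x) = x
    qc.h(0)           # interference step
    qc.measure(0, 0)  # reads 1 here, because f is balanced
    print(qc.draw())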

    If we do get AI systems to do this because of the trouble we have with quantum logic (something a machine can presumably be programmed to ignore), will we understand what they're doing?

    I suppose the thread here is that we already build machines that can do things for us which are hard, and which, in a sense, we therefore "don't understand". Can you understand how Deep Blue can beat a chess grandmaster?
     
  4. Write4U Valued Senior Member

    Messages:
    20,069
    I was not so much talking about consciousness (as we understand it), but about the processing and categorizing of neural wave-functions, which have specific, computable values.
    Example: a little girl lying on the ground, crying. The AI approaches and, after analyzing the visual and sound input, calculates a "high value" (emergency) condition and assumes an "assist mode", overriding another mode of "lesser value".

    Ultimately it comes down to analyzing values and sets of values, IMO.

    I admit I know little of AI structural neural processing systems, but as I understand it we are already able to construct "intuitive" response systems in computers.

    A spell checker, for example, is able to recognize a word with incorrect values, which causes it to signal a possible error and even offer alternative words which show similarities in structure, as in the sketch below.
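
    A minimal illustration of that behaviour in Python, using the standard library's difflib; the word list is invented, and this is not how any particular spell checker is actually built:

    Code:
    import difflib

    dictionary = ["their", "there", "these", "theme", "the"]
    word = "thier"  # a word with "incorrect values"

    if word not in dictionary:
        # flag a possible error and offer structurally similar words
        print("possible error:", word)
        print("did you mean:", difflib.get_close_matches(word, dictionary))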

    I seldom see mention of a learning curve in AI. I know there are chess programs which learn from experience: the more games such a program plays, the better it gets. But it takes time and many games before it recognizes the possible implications of the opponent's strategy and makes the correct counter-moves.

    But IMO, in order to recognize a value, one must have prior cognitive and associative experiences, which can be achieved only after a period of learning.
    A doctor of medicine has to spend eight years in higher education in order to achieve the knowledge to make diagnoses and prescribe treatment. Why should an AI be exempt from having to learn from experience?

    I believe a "learning" AI must also undergo a period of time before it can properly diagnose an existing condition and the variables involved. You cannot expect any form of intelligence to respond to value inputs by their priorities unless it has a kind of mirror function by which it can form comparative and relative abstract perspectives, which enhances cognition.
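
    As a toy illustration of such a learning curve, here is a minimal Python sketch in which a program estimates each opening move's win rate purely from experience; the moves and their "true" win rates are invented for illustration:

    Code:
    import random

    TRUE_WIN_RATE = {"e4": 0.55, "d4": 0.52, "a3": 0.40}  # hidden from the learner
    estimate = {m: 0.5 for m in TRUE_WIN_RATE}            # prior guess
    plays = {m: 0 for m in TRUE_WIN_RATE}

    for game in range(5000):
        move = random.choice(list(TRUE_WIN_RATE))         # try an opening
        won = random.random() < TRUE_WIN_RATE[move]       # play one game
        plays[move] += 1
        estimate[move] += (won - estimate[move]) / plays[move]  # running average

    print(max(estimate, key=estimate.get))                # converges toward "e4"

    The estimates are unreliable early on and only settle down after many games, which is exactly the point about needing a period of learning.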
     
    Christian Dauz likes this.
  5. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I'll rephrase.

    Suppose there is an AI which, by using a quantum oracle, can learn to design quantum circuits (according to some set of rules over a finite set of quantum "gates").
    Suppose we know what we can input, or we know the start state of such a system. Then, after some learning period, it outputs a novel circuit which we can't understand. We implement the circuit on some quantum computer and it outputs exactly what the AI predicted, even though we can't understand how the circuit computes anything beyond a few qubits at a time.
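
    A toy, classical version of that loop, just to fix ideas; the two-qubit gate set, the fidelity score standing in for the "oracle", and the blind random search are all invented for illustration:

    Code:
    import numpy as np

    # Two-qubit gate set; qubit 0 is the left factor in the Kronecker products
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    T = np.diag([1, np.exp(1j * np.pi / 4)])
    I = np.eye(2)
    GATES = {
        "H0": np.kron(H, I), "H1": np.kron(I, H),
        "T0": np.kron(T, I), "T1": np.kron(I, T),
        "CX": np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                        [0, 0, 0, 1], [0, 0, 1, 0]]),  # CNOT, control on qubit 0
    }

    def run(seq, state):
        for name in seq:                  # apply the circuit gate by gate
            state = GATES[name] @ state
        return state

    def oracle_score(state, target):      # stands in for the quantum oracle
        return abs(np.vdot(target, state)) ** 2  # fidelity with the target state

    rng = np.random.default_rng(0)
    start = np.array([1, 0, 0, 0], dtype=complex)                # |00>
    target = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # a Bell state

    best_seq, best_score = None, -1.0
    for _ in range(5000):                 # propose, score, keep the best
        seq = list(rng.choice(list(GATES), size=int(rng.integers(1, 6))))
        score = oracle_score(run(seq, start), target)
        if score > best_score:
            best_seq, best_score = seq, score

    print(best_seq, round(best_score, 4))  # e.g. ['H0', 'CX'] scoring 1.0

    The search can hand back a sequence that scores perfectly against the oracle without the procedure "understanding" anything about why it works.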

    Is that scenario possible and does it constitute a singularity? What kind of singularity is it when we can construct devices that "know more" than we do even though we wrote the code that runs on the AI computer system? What if quantum computers are used for the AI?

    So I'm talking about an AI that can write programs; these programs could be beyond our understanding, but under what circumstances? If it happens, then (possibly) an AI on a quantum computer could rewrite itself in ways we can't fathom and still be useful. What does that mean, really? Such a machine might appear godlike to us.
     
    Last edited: Jun 9, 2017
  6. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    Not sure it would appear godlike

    It would be standing on the shoulders of giants (to coin a phrase)

    The giants of course being the humans who input the seed programs

    And I would be fairly certain those humans would be able to understand any new programs put out by the computer


     
  7. Write4U Valued Senior Member

    Messages:
    20,069
    Key word: "learn".
    Again the key word is "learn", except this would require all of our current knowledge about circuitry.
    That's a contradiction, IMO. If the AI can predict the result, then we should be able to understand how the quantum computer arrives at the predicted result.
    Moreover, a quantum computer still works on energy, and we can track energies. So we can follow the energy distribution to certain parts of the AI, similar to a brain scan, which shows different brain activities by their concentration of energy.
    We don't quite know how our bio-chemical brains work either, but we know approximately where in the brain processing takes place.
    It seems unlikely that an AI can ever know more than what it can observe or learn from existing mathematical and physical natural phenomena.
    As to singularity, one could claim that each individual brain is a singularity, but I don't think that is the commonly held view.
    Why would any program be able to create a "program" beyond our understanding? If it were that "intelligent", it should be able to tell us how it does it, even if in a complicated binary fashion which can be tested by current smart conventional computers; even if we would need a string of computers, each designed for specific analysis and processing powers.
    I qualify my perspective with the fact that I am no expert in programming.
     
    Last edited: Jun 9, 2017
    Michael 345 likes this.
  8. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    I would think from your post you qualify for common sense


     
    Write4U likes this.
  9. C C Consular Corps - "the backbone of diplomacy" Valued Senior Member

    Messages:
    3,324
    That's arguably already the case, if "our ability to understand" is dependent upon traditional guidelines and conceptual tools of science and philosophy. Chris Anderson's predictions made almost a decade ago are indeed coming to pass.

    Can we adapt to the alien approaches that smart machine programs are developing on their own, which seem to rely largely on impromptu invention and correlation rather than pre-planning, coherent models / unified theories, and causal or mechanistic explanations? Since the systems which the West rose to power on were themselves somewhat alien to earlier populations (it's not as if the prescriptions for modern thinking were / are inherent in us), we might adjust after all by rebooting to earlier orientations and then advancing in alternate directions, or by changing the definition of "understanding".

    But we'll never match the processing speed of computers without technologically augmenting ourselves (transhumanism). Those who are of unprivileged / non-relevant status or are just too poor to afford enhancements will be left behind. Regardless, all or some disparaged segment of humankind won't be able to keep up and thus will be towed by the new alien oracles. "We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that 'think' about the world differently than we do." --David Weinberger, technologist

    In terms of AI / robots not only matching our existing or familiar abilities but also outperforming us in them: experts in North America set the date at around three-quarters of a century from now, while those in Asia go with circa 30 years. It will be either something in between those two or earlier than the latter's minimum. Don't expect them to really take over the world as in the movies: aside from lacking such self-interested goals, real-life manufacturing output and manipulation of concrete circumstances will remain incredibly slow in contrast to the speed of computational and virtual creativity. We'll just become heavily dependent upon them for both our reasoning and our work.

    - - - -
     
  10. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    What do you understand this thing called "The Singularity" to be? Is it, roughly, when we develop machines which are more intelligent than we are? And what does "more intelligent" mean anyway?

    As to an AI which learns how to construct quantum circuits: if it does output some design whose computational workings we can't explain--we can't understand the algorithm--then logically we would build a similar AI that can teach us what it has learned. If this AI depends on a quantum oracle, then what of learning itself?
     
  11. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    intelligence
    /ɪnˈtɛlɪdʒ(ə)ns/
    noun: intelligence
    1. the ability to acquire and apply knowledge and skills.
    Google

    I would put it this way

    the ability to acquire = I would equate with gathering information

    apply knowledge = I would equate with putting into operation

    skills = the ability to produce something from the above information and knowledge

    As an example:

    from information about books / music the AI gains

    knowledge that humans enjoy reading and also listening to music, and using its AI

    skills to write a book and produce music


     
  12. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    Computers cannot divide by zero. A human can.
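
    For what it's worth, this is how a typical language behaves when asked; a small Python sketch (NumPy assumed for the floating-point case):

    Code:
    import numpy as np

    try:
        1 / 0                    # integer division by zero: refused outright
    except ZeroDivisionError as e:
        print("refused:", e)

    with np.errstate(divide="warn"):
        print(np.float64(1.0) / np.float64(0.0))  # IEEE 754 floats: inf, not an answer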
     
  13. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    Hey Arf.

    Computers (or robots) are limited in their capabilities because not every sequence of numbers has a relationship.

    For example:

    4673
    1234

    Should there be a relationship, it would be possible to select any term and extrapolate the next number.

    With the correct program it would also be possible to type in any sequence of numbers to discover the pattern. But it is not always possible!

    Sequence...
    12345678...

    Should every sequence have a relationship, it would be most useful: for example, for drawing graphs, smilies, or even a formula describing a human.
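
    One way to make that concrete: any finite sequence can be fitted exactly by a polynomial, so a program can always produce a relationship; whether it is the pattern is another matter. A minimal Python sketch (NumPy assumed):

    Code:
    import numpy as np

    def extrapolate(seq):
        # fit the unique polynomial of degree len(seq)-1 through every term,
        # then evaluate it at the next index: one candidate "relationship"
        x = np.arange(len(seq))
        coeffs = np.polyfit(x, seq, deg=len(seq) - 1)
        return np.polyval(coeffs, len(seq))

    print(extrapolate([1, 2, 3, 4]))   # 5.0, matching the "obvious" pattern
    print(extrapolate([4, 6, 7, 3]))   # a prediction, but is it the pattern?

    A fit always exists, but it need not reveal anything about how the sequence was actually generated.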
     
  14. someguy1 Registered Senior Member

    Messages:
    727
    We already do and we've been doing so for decades.

    Take any big bank: Bank of America, Chase, etc. Their software comprises tens of millions of lines of code, first written in the 1960s and modified over the decades by several generations of programmers.

    No living person has any idea how it all works. They just try not to break things as they add new features.

    It's normal for any working programmer who builds complex systems (say, 10k lines of code and up) not to remember how much of it works a few months later. You have to look things up. When the program does something you don't understand, you dive into the code. It's commonplace for working programmers to no longer understand how their own code works even when looking at the source.

    It's a common cliché these days in AI discussions to say, "Oh, what will happen when we no longer understand our software?" In fact this is a common and perfectly normal situation for large software projects, and it has been this way for decades.

    I'd like to see this cliché retired, or at least put into its proper context. When you go to the ATM and withdraw twenty dollars, no living human knows or understands the entire code path from the ATM through the bank's many layers of intermediate databases back to what they call the System of Record, the single official source of what's in your checking account. That code path and all those software and hardware systems are the work product of thousands of programmers who are long retired or dead. Nobody knows all the details of how these systems work.

    Every time you write a simple program of even relatively nontrivial length -- a few hundred lines of code, say -- it does things you didn't expect. You have to read your own code to figure out why it behaved the way it did. And if it's been a year or two since you wrote it, you often can't understand your own code and sometimes even the documentation you wrote for your future self.

    That's the nature of software and it always has been. AI is nothing new in this regard.
     
  15. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    I think I understand what you are saying about programmers not understanding their own programs a few years later

    So should those programs NOT be designated AI but AD (Artificial Dumb)?

    The program has obviously not kept itself up-to-date to cope with changes

    And if it can't explain to its creator what it does, duh, how dumb is that?

    Thought bubble for programmers

    Don't make reams of notes which will probably be incomprehensible to follow after a few years

    Either construct within the program code a voice capability which, when accessed, will TELL the fixerupper what process it is performing and the expected outcome

    OR

    build an access socket to plug the above-mentioned diagnostic program into, along with the voice capability

    The closest program of that type I can think of is the one which monitors some fighter aircraft and has Bitching Betty as the voice


     
  16. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    "Fixerupper?" Is that the technical term Michael?
     
  17. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    It should be fixeruperuper

    but I don't know how to spell that


     
  18. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    Yeah. I used to say molesterer. Man alive! I can say it now.
     
  19. river

    Messages:
    17,307
    It means the AI adapts
     
  20. Write4U Valued Senior Member

    Messages:
    20,069
    Also called evolves.
     
