Do machines already exceed human intelligence?

Discussion in 'Science & Society' started by Speakpigeon, Oct 7, 2019.

?

Do machines already exceed human intelligence?

Poll closed Nov 6, 2019.
  1. Yes

    33.3%
  2. No

    33.3%
  3. I don't know

    0 vote(s)
    0.0%
  4. The question doesn't make sense

    33.3%
  1. Truck Captain Stumpy The Right Honourable Reverend Truck Captain Valued Senior Member

    Messages:
    1,263
    IMHO - a task-oriented tool created by humans isn't the same thing as intelligence, especially given that:
    1- a human created it
    2- the tool in question only operates on a program or set of algorithms
    3- the tool doesn't get faster as it learns (though this may not hold for all computers)

    this differs from, say, a child: a child learns [x] and can then apply that knowledge to [y], a different situation not always related to [x], whereas a tool like a computer or calculator can only apply [x] within the parameters of its program (usually - I'm not familiar with all the AI out there)


    technically, a rock fired from a trebuchet is "in flight", but that doesn't make it "flight-capable"
    with enough thrust, anything can be "in flight", technically speaking

    but can the tool get faster as it learns? that is the question, IMHO

    not to me. Speed only indicates practice, muscle memory or training - I would call that discipline rather than intelligence.
    My grandson laid tile faster than I did, but I've had to replace every one of his tile jobs in the last 5 years. It looked good at the time, but ...



    mostly offered as my reasoning process ...
     
  3. billvon Valued Senior Member

    Messages:
    20,653
    Right. And with deep learning systems, 2 and 3 are no longer true. (Even 1 is true only to a degree. A human created it, but did not program it or tell it what to do.)
    And deep learning systems do.
    OK.

    Two people take the SAT. One person is not very accurate; they get many questions wrong and cannot finish the test on time. They score a 400. The second is more accurate and faster; they score a 700. Who is more intelligent, based on the test?

    Speed and accuracy certainly don't define intelligence - but they are part of it.
     
  5. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,193
    The whole notion of measuring intelligence in some meaningful way has always struck me as somewhat preposterous. For instance, I know a number of people--well, I'm one of them--who can, with ease, get a perfect, or near perfect, score on these sorts of standardized tests: SAT, GRE, LSAT, etc. (Even back before the SAT was recalibrated, making it now ridiculously easy to get 800s.) Now, I don't think any of us are stupid, but I also, frankly, don't think we're nearly as "intelligent" as our test scores might suggest. Moreover, some--me, especially--are kind of lazy. Laziness and intelligence are hardly mutually exclusive, of course, but they kinda are if you wish to truly reap the fruits of this so-called "intelligence." IOW, taking a test and actually doing something are worlds removed.

    Obviously, there are reasons aplenty for why tests, such as the SAT, are anything but "objective," but I think this one is often overlooked: some of us simply know how to "game the system," in a manner of speaking. What this means for determining whether or not a machine is "more intelligent" than humans, I don't really know. But I do think it is far more difficult, even when assessing for a single, very specific capacity, than any of us can imagine. And focussing on a single, specific capacity--and then concluding that we can deduce something meaningful from this--is, in a manner, a sort of "gaming the system."
     
  7. billvon Valued Senior Member

    Messages:
    20,653
    Absolutely. There are all kinds of intelligence.
    Also agreed. At best we can get a rough estimate of one sort of intelligence from a test.
     
  8. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,193
    I always come back to the final passage of Borges's "Funes, the Memorious".

    Real intelligence is often revealed through limitations--sometimes, the person who "can't see the forest for the trees" will prove to be more useful, more intelligent, more innovative in a certain situation. Other times, the exact opposite is needed: the person who only sees the big picture, and misses all the details. (See season 1, Halt and Catch Fire.)
     
    sideshowbob likes this.
  9. Truck Captain Stumpy The Right Honourable Reverend Truck Captain Valued Senior Member

    Messages:
    1,263
    if that is the case, then Asimov's Laws are pretty useless
    I'm wondering about this one: I don't know of any computer or AI that isn't programmed by a human. If there is one, can you link it or provide a reference to it, please? Thanks
    I rather liked the answer by Parmalee

    I always tested high, and I know people who tested higher than I did on the SAT, but some of those people were also incapable of making it through college, and they're not considered "intelligent" because of their choices in life
    meh
     
  10. billvon Valued Senior Member

    Messages:
    20,653
    They always have been. (Asimov discusses this in his last story in that series.)
    Pretty much all deep learning systems. They aren't programmed; they are trained. They are shown a bunch of pictures of a cow, for example, and told "that's a cow." Then they remember and recognize cows in the future.

    However, the original hardware/software that allows that learning was created by people.
    Yep. There are all kinds of intelligence. Someone who is brilliant at math might be shit at any practical application of it - and vice versa.
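    That "trained, not programmed" distinction can be sketched in a few lines of Python. This is a toy perceptron, nothing like a real deep learning system, and the data and rule it learns are made up for illustration - but note that nowhere in the code is there a rule for classifying the inputs, only a procedure for adjusting weights from labeled examples:

```python
# Toy illustration of "trained, not programmed": a perceptron that learns
# to separate two classes from labeled examples alone. The code never
# states the rule it ends up learning.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += lr * err * x1         # nudge weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, point):
    w1, w2, b = weights
    x1, x2 = point
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# "Show it examples and tell it the answer" -- here, points above versus
# below the line x2 = x1, though the program is never told that rule.
data = [((0, 1), 1), ((1, 2), 1), ((2, 3), 1),
        ((1, 0), 0), ((2, 1), 0), ((3, 2), 0)]
w = train_perceptron(data)
print(predict(w, (0, 5)), predict(w, (5, 0)))  # prints: 1 0
```

    the original hardware/software that allows the learning was written by people, as noted above - but the classification rule itself comes from the examples.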
     
  11. Truck Captain Stumpy The Right Honourable Reverend Truck Captain Valued Senior Member

    Messages:
    1,263
    true that
     
  12. Hercules Rockefeller Beatings will continue until morale improves. Moderator

    Messages:
    2,816
    I’ve been a chess enthusiast since my father taught me when I was a young boy (early 80s). Over the years I’ve watched the evolution of chess computers. Initially they were a bit of a joke, but they improved steadily, far faster than a human can improve their game. In the early 90s they had become very strong but still unable to win a match against a grandmaster. (I’m not referring to an individual game but a match which is usually a best-of contest over several games.)

    In 1996 there was the famous Kasparov vs IBM’s Deep Blue, the most serious attempt to beat a GM to date in a full tournament setting. Kasparov won, but lost the re-match only a year later.

    By the turn of the century, a variety of different algorithms had reached GM level. From about 2010 onwards, the best algorithms (Stockfish, Komodo etc) had become unbeatable by even the strongest human players. These days such algorithms are viewed as training aids rather than opponents as it’s no longer a contest.

    The most interesting thing, to my mind, is the evolution of the algorithms. I’m no computer scientist and cannot comment in detail. But, in a nutshell, the algorithms have evolved from simple brute force computing to a machine learning and heuristic approach. They have become much more ‘human’ in the way they “think”. For instance, Deep Blue was a supercomputer that searched 200 million positions per second. In contrast, modern chess engines running on mobile phones have won human tournaments – chess engine Hiarcs 13 running inside Pocket Fritz 4 on the mobile phone HTC Touch HD won the Copa Mercosur tournament in Buenos Aires, Argentina with 9 wins and 1 draw on August 4–14, 2009. Pocket Fritz 4 searches fewer than 20,000 positions per second.

    https://en.wikipedia.org/wiki/Computer_chess
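    The brute-force side of that evolution can be sketched in a few lines of Python. This is a toy negamax search over a made-up game tree, nothing like a real chess engine: search every line to a fixed depth, then score the horizon positions with a heuristic evaluation.

```python
# Minimal sketch of brute-force game search: explore the tree to a fixed
# depth, scoring leaf positions with a heuristic evaluation function.
# The tree below is a hypothetical toy game, not chess.

def negamax(position, depth, children, evaluate):
    """Best achievable score for the side to move at `position`."""
    moves = children(position)
    if depth == 0 or not moves:
        return evaluate(position)       # heuristic at the search horizon
    # Each child is scored from the opponent's viewpoint, hence the negation.
    return max(-negamax(child, depth - 1, children, evaluate)
               for child in moves)

# Toy tree: nodes map to child positions; leaves map to heuristic scores
# from the viewpoint of the side to move at those leaves (the root player,
# two plies down).
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": -2, "b1": 1, "b2": 4}

best = negamax("root", depth=2,
               children=lambda p: tree.get(p, []),
               evaluate=lambda p: scores.get(p, 0))
print(best)  # prints: 1  (move "b": the opponent then picks b1)
```

    Deep Blue pushed exactly this kind of search to 200 million positions per second; the modern engines mentioned above win with vastly fewer positions searched because their evaluation and pruning are so much better.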
     
    James R likes this.
  13. Write4U Valued Senior Member

    Messages:
    18,612
    Check out the epic five-game match between AlphaGo and a world-champion Go player within the past ten years.
    I am also a chess player, but as I understand it, the 2,500-year-old game of Go is far more complicated than chess: because of the much bigger board, the mathematics are incredibly difficult and the game requires "intuition". AlphaGo won 4 of the 5 games.

    and here is the tourney
     
  14. psikeyhackr Live Long and Suffer Valued Senior Member

    Messages:
    1,205
    Ask a computer whether it prefers to play go or chess.

    Can it give an intelligent answer?
     
  15. Write4U Valued Senior Member

    Messages:
    18,612
    Ok, do you play both chess and Go?
    If so what game do you prefer to play, chess or Go? If not can you give an intelligent answer?

    GPT-3 would be able to do a search on the internet and learn both games in just a few days. At that point it might be able to tell you which game feels most comfortable given its fundamental programming.

    If I understand the difficulty of Go correctly, I am confident that almost all AI would choose chess, as it is highly suited to the average AI's computing algorithms.

    GPT-3 is not your average AI; it is text-based, and all written knowledge is accessible over the internet. But Go is more difficult than can simply be "learned". You need abstract intuition, and that may be a problem for a deterministic neural system.
     
  16. psikeyhackr Live Long and Suffer Valued Senior Member

    Messages:
    1,205
    I play chess, though rarely these days. I don't know how to play Go. But regardless of how well any computer plays any game can it care about playing?

    If it is incapable of caring can it be intelligent?
     
  17. DaveC426913 Valued Senior Member

    Messages:
    16,944
    Pfft. Even 38 years ago it could.

    [image]
     
  18. Write4U Valued Senior Member

    Messages:
    18,612
    That's not the question. Do you prefer to play chess or Go?
    What if it expresses a preference?

    The thing is that GPT-3 can be taught aesthetics, which it has already demonstrated in painting and music composition.
    Apparently it likes to paint in the Van Gogh style, and in music its output is indistinguishable from that of human composers.

    The thing is, you cannot assign it human properties, but that does not mean it cannot have some natural preferences - some shortcuts that take precedence over a longer algorithm.

    Let's be clear: GPT makes choices by anticipating appropriate "next" words in a sentence. This is how it is able to write its own programs; it uses text in addition to algebraic functions.

    According to the developers, the only current limitation is scale, i.e. 175 billion parameters in GPT-3 vs. roughly 250 trillion synaptic connections in the human brain.
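    That "anticipating the next word" idea can be illustrated with a toy bigram model. This is only a sketch of the principle - GPT-3 actually uses a 175-billion-parameter transformer over subword tokens, not a lookup table, and the corpus here is made up:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then always pick the most frequent successor. GPT-3 does this
# with a huge neural network rather than a table, but the task -- predict
# the next token -- is the same.

corpus = ("the unicorn spoke english . the unicorn lived in the andes . "
          "the scientists studied the unicorn").split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Most likely next word, given only the word before it."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints: unicorn
```

    Everything the model "knows" is just statistics of what tends to follow what - the question in this thread is whether, at sufficient scale, that amounts to understanding.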

    Here is a story told by GPT-3 based on the legend of unicorns. The command was something like "tell us a story of the unicorns".
    This is one of the several stories GPT-3 came up with; I have read worse.
    https://github.com/minimaxir/gpt-3-experiments/blob/master/examples/unicorn/output_0_7.md

    Architecture

    [image]

    Humans ------ 250 trillion synaptic connections.

    Other interesting things it can do!

    https://deepaksingh-rv.medium.com/g...elligence-agi-or-just-a-strong-ai-e9df644fade

    I have doubts about that bolded statement. If an AI has choices from which it can make a "considered" selection in the context of the subject, is that not a form of "understanding"?
     
    Last edited: Jul 1, 2021
  19. psikeyhackr Live Long and Suffer Valued Senior Member

    Messages:
    1,205
    Grade school kids could write a program to give an answer or randomly select one.

    You call that intelligence?

    Testing a machine's understanding of something is another story.

    I wrote a python program to count science and fantasy words in text and compute their density. It does this faster and more reliably than a human. That doesn't make it intelligent. von Neumann devices just manipulate symbols according to a program, faster, without comprehending the meaning of the symbols.

    Humans are projecting intelligence onto computers just because they perform complicated activities. Those egotists at Dartmouth should have called it Simulated Intelligence in 1956.
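    A word-density counter of that kind really is just a few lines of symbol manipulation. This is a hypothetical sketch - the word lists are made-up stand-ins, not the actual program:

```python
# Sketch of the kind of word-density counter described above: pure symbol
# manipulation, no comprehension. The vocabularies are hypothetical.

SCIENCE_WORDS = {"energy", "neutron", "orbit"}
FANTASY_WORDS = {"dragon", "wizard", "spell"}

def word_density(text, vocabulary):
    """Fraction of the words in `text` that appear in `vocabulary`."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in vocabulary)
    return hits / len(words)

sample = "The wizard cast a spell on the neutron"
print(word_density(sample, FANTASY_WORDS))  # prints: 0.25 (2 of 8 words)
```

    The program matches strings against sets; at no point does it know what a wizard or a neutron is.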
     
  20. DaveC426913 Valued Senior Member

    Messages:
    16,944

    [image]
    From a movie you're too young to have seen.
     
  21. Write4U Valued Senior Member

    Messages:
    18,612
    Are you suggesting that grade school children are not intelligent?
    How?
    How do you test comprehension of words? By definition, no? Ask GPT-3 the definition of any word and it'll comply instantly.
    Is knowing the definition of a word, in the context of a larger sentence, understanding?
    And that intelligence is less than the "best guess" intelligence of humans? Are humans infallible and always correct? What reality do you live in?

    It's funny you should posit that specific problem of performing complicated activities. You do know that the human brain works on a principle of "best guess" about the meaning contained in incoming data, very much the same as any organism that relies on sensory data, including AI.

    Intelligence and understanding are hierarchically evolved properties of complex neural systems, starting with single celled organisms to Dolphins, Whales, Octopi, Hominids, Humans.

    The value system applies to all sentient organisms.
     
    Last edited: Jul 1, 2021
  22. James R Just this guy, you know? Staff Member

    Messages:
    37,189
    What makes you think your own brain is any different, in principle? Can you point to where the "comprehending" happens in your brain or explain how that works?

    Any sufficiently sophisticated "simulated intelligence" would be indistinguishable from the real deal (whatever that might be). Or do you imagine you can somehow tell the difference?
     
  23. Write4U Valued Senior Member

    Messages:
    18,612
    This may be of interest. Teaching a GPT-3 is not cheap. It's very much like a human attending advanced studies at university level.
    IMO a trained GPT-3 should merit a Doctorate or Master's in languages. Training 175 billion parameters is a daunting task and probably exceeds human capacity in that aspect alone.
    Key Takeaways: What makes GPT-3 So Special?
    https://chatbotslife.com/is-gpt-3-the-adam-of-natural-language-cf59656456f2
     
