Do machines already exceed human intelligence?

Discussion in 'Science & Society' started by Speakpigeon, Oct 7, 2019.

Poll: Do machines already exceed human intelligence?
Poll closed Nov 6, 2019.
  1. Yes: 33.3%
  2. No: 33.3%
  3. I don't know: 0 vote(s), 0.0%
  4. The question doesn't make sense: 33.3%
  1. Write4U Valued Senior Member

    Messages:
    20,069
    OpenAI's GPT-3 Language Model: A Technical Overview

    GPT-3 Key Takeaways
    • GPT-3 shows that language model performance scales as a power law of model size, dataset size, and the amount of computation (see the sketch below the link).
    • GPT-3 demonstrates that a language model trained on enough data can solve NLP tasks it has never encountered. That is, the model is studied as a general-purpose solution for many downstream tasks without fine-tuning.
    • The cost of AI is increasing exponentially. Training GPT-3 would cost over $4.6M using a Tesla V100 cloud instance.
    • The size of state-of-the-art (SOTA) language models is growing by at least a factor of 10 every year. This outpaces the growth of GPU memory. For NLP, the days of "embarrassingly parallel" training are coming to an end; model parallelization will become indispensable.
    • Although there is a clear performance gain from increasing model capacity, it is not clear what is really going on under the hood. In particular, it remains an open question whether the model has learned to reason or simply memorizes training examples in a more sophisticated way.
    ...... more
    https://lambdalabs.com/blog/demystifying-gpt-3/
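    For the curious, here is a rough sketch of the first bullet's power law in code. The exponent and constant are approximately the model-size values reported in "Scaling Laws for Neural Language Models" (Kaplan et al., 2020); treat the numbers as illustrative, not authoritative.

    Code:
    # Sketch: language-model loss as a power law of parameter count N,
    # L(N) ~ (N_c / N) ** alpha_N.  Constants are approximate and illustrative.
    ALPHA_N = 0.076
    N_C = 8.8e13  # "critical" parameter count

    def predicted_loss(n_params: float) -> float:
        """Predicted cross-entropy loss for a model with n_params parameters."""
        return (N_C / n_params) ** ALPHA_N

    for n in (1.5e9, 13e9, 175e9):  # roughly GPT-2 XL, GPT-3 13B, GPT-3 175B
        print(f"{n:.3g} params -> predicted loss {predicted_loss(n):.2f}")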

    Note: Data contamination is very much a problem with human learning.
     
  3. Write4U Valued Senior Member

    Messages:
    20,069
    I have a feeling that GPT3 is already much more than a pure Turing machine. You don't need to write code for it to execute a task. You give it verbal instructions and it will execute the task and write the code for you!
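    As a concrete illustration, here is a minimal sketch using the openai Python client's original Completion endpoint from the GPT-3 era (pre-1.0); the prompt, engine name, and settings are my own illustrative choices, not anything official.

    Code:
    # Sketch: ask GPT-3 to write code from a plain-English instruction.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.Completion.create(
        engine="davinci",  # a GPT-3 engine available at the time
        prompt="Write a Python function that returns the n-th Fibonacci number.\n",
        max_tokens=150,
        temperature=0,
    )
    print(response.choices[0].text)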

    GPT3 is a language-based system. Much like human learning and human thought processes, it is able to make associative decisions, so it is already much more than a pure Turing-style system. It does not blindly crunch numbers: it reads the question or command and begins to assemble related information from memory, very much as humans do.
    From the several candidates it will select the "images" that most closely answer the verbal instructions.

    GPT3 is not programmed with just hand-written algorithms. It is language-based and was trained on a huge swath of the open internet, including the entire Wikipedia in several languages. It has face recognition and creates its own avatar. It does not yet have a body, because its enormous memory requirements exceed those of other deep-learning computers.

    GPT3 has to go to school to learn, just like a human child. It just learns much faster than a human child!

    Can an AI think in context? Check out this little clip of DALL-E, an artistic version of GPT3.



     
    Last edited: Jul 21, 2021
  5. Jerimiah Registered Member

    Messages:
    45
    Machines are nothing more than databases that are programmed by us. They can soon become obsolete if their programming is not kept up to date. For example, if we decided all of a sudden to change math, the calculator would always give a wrong answer!
     
    amy stivens likes this.
  7. Write4U Valued Senior Member

    Messages:
    20,069
    You may want to check out GPT3. Other than building new GPT3s, they don't need humans at all. They will write new programs if you ask them nicely.


    P.S. You cannot change mathematical functions; you can only change the value symbols. Mathematics exists independently of humans.
     
    Last edited: Sep 27, 2021
  8. Write4U Valued Senior Member

    Messages:
    20,069
    Do check out the series of GPT3 interviews.

    This is one of them:
     
  9. Write4U Valued Senior Member

    Messages:
    20,069
    P.S. This is how an AI challenges you to prove your humanness. A GPT3 would have no problem solving the question.

    GPTs are text-based and can read variations on letters and numbers.
     
  10. amy stivens Registered Member

    Messages:
    1
    I agree on that one. There are still many, many jobs which machines can't do and won't be able to do in the near future.
     
  11. Write4U Valued Senior Member

    Messages:
    20,069
    That's a terrible example. If we all of a sudden changed our math, the human world would come to a standstill!

    Moreover, you cannot change mathematics. You can only change its symbolic representations.
    The algebraic functions will always be the same.

    Input --> Function --> Output, and consequently: garbage in --> garbage out.
     
  12. sculptor Valued Senior Member

    Messages:
    8,466
    Base 6 seems to have once been the norm.

    [Image: coins packed in hexagonal rings around a central coin]

    The next ring would be 12, the next 18, etc.; the 6th ring = 36.
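    A little sketch of that pattern: in a hexagonal packing of equal coins, ring n holds 6n coins, and the running total is the "centered hexagonal number" 1 + 3n(n+1). Python, purely illustrative.

    Code:
    # Coins packed around a central coin: ring n contains 6*n coins,
    # so after n rings the total is 1 + 3*n*(n+1) (centered hexagonal numbers).
    def ring_coins(n: int) -> int:
        return 6 * n

    def total_coins(n: int) -> int:
        return 1 + 3 * n * (n + 1)

    for n in range(1, 7):
        print(f"ring {n}: {ring_coins(n)} coins, {total_coins(n)} total")
    # ring 1: 6 coins ... ring 6: 36 coins in that ring, 127 coins in all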
     
    Last edited: Nov 6, 2021
  13. Write4U Valued Senior Member

    Messages:
    20,069


    Right, you can change the symbolism, but you cannot change the maths and mathematical functions.

    Maths can be represented in many different bases, such as base 10 (decimal), base 2 (binary), or bases 3 through 9; these only change the symbolic representation of the same values. You just cannot mix them!
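    A quick sketch of that idea, rendering one value in several bases (Python; the helper is just an illustrative one-off).

    Code:
    # Same value, different symbols: render an integer in an arbitrary base.
    DIGITS = "0123456789abcdef"

    def to_base(value: int, base: int) -> str:
        if value == 0:
            return "0"
        out = []
        while value:
            value, rem = divmod(value, base)
            out.append(DIGITS[rem])
        return "".join(reversed(out))

    for base in (2, 6, 10, 16):
        print(f"45 in base {base:>2}: {to_base(45, base)}")
    # 101101 (base 2), 113 (base 6), 45 (base 10), 2d (base 16) -- one quantity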

    Roger Antonsen shows this flexibility in representing mathematics.


    Take human maths away altogether and nothing changes in the universe, absolutely nothing.

    A "learning" AI with observational abilities would easily be able to fashion a mathematical language that is compatible with the scientific method.

    In fact, you tell it to perform a task and it will write the program for you!

    If an AI can learn to play Go without even being taught the rules of the game, it can figure out naturally occurring mathematics.

    How does GPT learn?
    https://towardsdatascience.com/understanding-gpt-3-in-5-minutes-7fe35c3a1e52
     
    Last edited: Nov 6, 2021
  14. DaveC426913 Valued Senior Member

    Messages:
    18,935
    With whom?

    Not a lot of 2-dimensional objects in nature.

    This is the science of crystal packing structures. In 3D, there are many.

    But I realize in retrospect that this has diverged off-topic. Reporting to have the last six or so posts redirected.
     
    Last edited: Nov 6, 2021
  15. sculptor Valued Senior Member

    Messages:
    8,466
    I do not know/remember.
    I encountered this in an anthropology class over 40 years ago and I blew it off.
    Until I laid a coin on a surface, then surrounded it with a ring of same-size coins, which was 6, then the next row/ring, which was 12 coins, then the next ring, which was 18 coins, then 24, then 30, etc.

    OK, base 6 seemed to make sense,
    but
    a pictorial representation does not mean that I understand the whole system.
    It just makes sense when viewed this way.
    or
    (I could be wrong?)
     
  16. Write4U Valued Senior Member

    Messages:
    20,069
    They all make sense until you consider ease of use and accuracy in decimals. That is why the decimal system has become the preferred scientific tool: its ease of use surpasses all other symbolic notations, and you can use your hands as "handy" calculators.

    Which brings up the question of whether AIs have an internal query system, as all humans do: a program that asks the question "why" and engages the main data-gathering brain in an internal dialogue about the nature of a thing, and why it is so, before filing the data in memory.

    This is the first sign in a human child that it is not satisfied with just observing but wants to know "why" natural phenomena occur, thereby gaining "understanding" and the ability to "reason" by grasping the object from several different perspectives (Anil Seth's "controlled hallucination"), enabling "expectation" and "cognition".

    Let me cite an example of a use for the internal query "why". GPT3 was asked to make a chair using an avocado, and it came up with a whole series of avocado chairs in a great variety of configurations.

    [Images: the generated avocado-chair variations]

    Note that only a few of those chairs are really functional for human use, based on comfort for the human body's configuration.

    A chairmaker would never make all those possible configurations, but would most likely design and produce the single one I picked out as the best candidate, by asking what makes that chair my favorite.
    Q: "Why?"
    A: "Because it looks very comfortable, given my body's configuration."

    I don't know if this is already being used in AI, but if it isn't, this may be a huge step in allowing an AI to learn how to "deliberate" and discover (learn) "reasons" why one design is preferable to others without needing to be instructed.

    AlphaGo already considers its "percentage chance of winning" in the game of Go! That's why it was able to resign game #4 long before the endgame, rather than blindly continuing to play a losing position. That is a remarkable decision-making ability for an artificial intelligence.
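    A toy sketch of the kind of decision rule described above: resign once the engine's own estimate of its winning chances drops below some cutoff. The 10% threshold is an illustrative assumption, not AlphaGo's published setting.

    Code:
    # Resign instead of playing out a clearly lost position.
    RESIGN_THRESHOLD = 0.10  # illustrative cutoff

    def should_resign(estimated_win_probability: float) -> bool:
        return estimated_win_probability < RESIGN_THRESHOLD

    print(should_resign(0.42))  # False: keep playing
    print(should_resign(0.03))  # True: resign, rather than play on blindly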

    The AI learns something new and asks itself "why" this is different from that. Those are the fundamentals of reasoning.
     
    Last edited: Nov 6, 2021
  17. sculptor Valued Senior Member

    Messages:
    8,466
    and then
    we have degrees-minutes-seconds:
    360 degrees = 1,296,000 arcseconds (360 x 60 x 60)
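    A quick sketch of that bookkeeping in Python, converting between degrees-minutes-seconds and decimal degrees (the helper names are just illustrative).

    Code:
    # 1 degree = 60 minutes, 1 minute = 60 seconds,
    # so a full circle is 360 * 60 * 60 = 1,296,000 arcseconds.
    def dms_to_decimal(deg: int, minutes: int, seconds: float) -> float:
        return deg + minutes / 60 + seconds / 3600

    def decimal_to_dms(angle: float):
        deg = int(angle)
        rem = (angle - deg) * 60
        minutes = int(rem)
        seconds = (rem - minutes) * 60
        return deg, minutes, round(seconds, 3)

    print(dms_to_decimal(12, 34, 56))  # 12.58222...
    print(decimal_to_dms(12.582222))   # (12, 34, 55.999)
    print(360 * 60 * 60)               # 1296000 arcseconds in a full circle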

    is there an easier way?
     
  18. Write4U Valued Senior Member

    Messages:
    20,069
    I don't know; ask the scientists who use these symbolic values "why" they use them.....

     
  19. sculptor Valued Senior Member

    Messages:
    8,466
    I read of one ancient culture which measured a year as 360 days, and the extra days belonged to the gods.

    When zeroing a rifle, one uses minutes of arc (MOA).
    If you are off 1 inch at 50 meters (about 2 MOA, not very accurate), then you will be off 2 inches at 100 meters and 4 inches at 200 meters; much beyond that, don't take the shot.
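    A small sketch of that proportionality: the angular error in MOA implied by an offset at a known distance, and the offset it produces farther out (Python; the names are illustrative).

    Code:
    import math

    # 1 MOA = 1/60 of a degree.  For a fixed angular error, the linear
    # offset grows in proportion to distance: 1" at 50 m -> 2" at 100 m.
    INCHES_PER_METER = 39.3701

    def offset_to_moa(offset_inches: float, distance_m: float) -> float:
        offset_m = offset_inches / INCHES_PER_METER
        return math.degrees(math.atan2(offset_m, distance_m)) * 60

    def offset_at(moa: float, distance_m: float) -> float:
        return math.tan(math.radians(moa / 60)) * distance_m * INCHES_PER_METER

    moa = offset_to_moa(1.0, 50)  # about 1.75 MOA ("about 2")
    for d in (50, 100, 200, 400):
        print(f"{d:>3} m: {offset_at(moa, d):.1f} inches off target")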

    ..................................
    hey
    I didn't invent any of this stuff;
    all I do is try to figure out how to use it.
     
  20. Write4U Valued Senior Member

    Messages:
    20,069
    Note that here you are making a category error, mixing unit systems. Meters and inches don't mix too well, especially when calculating how the error grows with distance...

     
    sculptor likes this.
  21. sculptor Valued Senior Member

    Messages:
    8,466
    As a young man, I spent a lot of time adjusting my stride to be about 36 inches: 3 feet, or one yard.
    When I got good at it, I could pace off 120 feet (40 yards) and be accurate to within a foot or two.

    OK question
    Do young men in Europe try to set their strides to a meter?
     
  22. Write4U Valued Senior Member

    Messages:
    20,069
    AFAIK, all Olympic sports are based on the decimal system.
    (1 m = 3.281 ft)
     
    Last edited: Nov 6, 2021
  23. sculptor Valued Senior Member

    Messages:
    8,466
    So were the target ranges in the army circa 1967
    there was a manlike silhouette every 50 meters out to 400
    (I never missed at 400---but did miss once at 50-----go figure)

    ..........................and
    Do young men in Europe try to set their strides to a meter? (39.37008 inches vs. 36)
    and, if so
    is there a noticeable difference in the way they walk?

    or was I the only teen who did this?
     
