Do machines already exceed human intelligence?

Poll closed · Total voters: 6
Nope. A great many embedded systems do very complex calculations without getting the numbers from a human - or any source other than the environment they are in. (Which is the same thing humans do BTW.)
IMHO - a task-oriented tool created by humans isn't the same thing as intelligence, especially given that:
1- a human created it
2- the tool in question only operates on a program or set of algorithms
3- speed isn't always increased by learning (though this isn't the case with all computers)

This differs from, say, a child, because a child learns [x] and can then apply that knowledge to [y], a different situation not always related to [x], whereas a tool like a computer or calculator can only apply [x] within the parameters of its program (usually; I'm not familiar with all the AI out there)


Be hard to argue that asteroids are not in flight
technically, a rock fired from a trebuchet is "in flight", but that doesn't make it "flight-capable"
with enough thrust, anything can be "in flight", technically speaking

Speed is part of intelligence.
But can the tool get faster as it learns? That is the question, IMHO.

I think everyone here would agree that if you have two people, and the first can do a task (say, math problems) more quickly and more accurately than the other, then the first person would be considered more intelligent in that skill.
not to me. Speed only indicates practice, muscle memory or training - I would call that discipline rather than intelligence.
My grandson laid tile faster than I did, but I've had to replace every one of his tile jobs in the last 5 years. It looked good at the time, but ...



All good with me, but my first post wasn't my argument; I just wanted to see whether most people agree with me.

mostly offered as my reasoning process ...
 
IMHO - a task-oriented tool created by humans isn't the same thing as intelligence, especially given that:
1- a human created it
2- the tool in question only operates on a program or set of algorithms
3- speed isn't always increased by learning (though this isn't the case with all computers)
Right. And with deep learning systems, 2 and 3 are no longer true. (Even 1 is true only to a degree. A human created it, but did not program it or tell it what to do.)
But can the tool get faster as it learns? That is the question, IMHO.
And deep learning systems do.
not to me. Speed only indicates practice, muscle memory or training - I would call that discipline rather than intelligence.
OK.

Two people take the SAT. One person is not very accurate; they get many questions wrong and cannot finish the test on time. They score a 400. The second is more accurate and faster; they score a 700. Who is more intelligent based on the test?

Speed and accuracy certainly don't define intelligence - but they are part of it.
 
Two people take the SAT. One person is not very accurate; they get many questions wrong and cannot finish the test on time. They score a 400. The second is more accurate and faster; they score a 700. Who is more intelligent based on the test?

The whole notion of measuring intelligence in some meaningful way has always struck me as somewhat preposterous. For instance, I know a number of people--well, I'm one of them--who can, with ease, get a perfect, or near perfect, score on these sorts of standardized tests: SAT, GRE, LSAT, etc. (Even back before the SAT was recalibrated, making it now ridiculously easy to get 800s.) Now, I don't think any of us are stupid, but I also, frankly, don't think we're nearly as "intelligent" as our test scores might suggest. Moreover, some--me, especially--are kind of lazy. Laziness and intelligence are hardly mutually exclusive, of course, but they kinda are if you wish to truly reap the fruits of this so-called "intelligence." IOW taking a test and actually doing something are worlds removed.

Obviously, there are reasons aplenty for why tests, such as the SAT, are anything but "objective," but I think this one is often overlooked: some of us simply know how to "game the system," in a manner of speaking. What this means for determining whether or not a machine is "more intelligent" than humans, I don't really know. But I do think it is far more difficult, even when assessing for a single, very specific capacity, than any of us can imagine. And focussing on a single, specific capacity--and then concluding that we can deduce something meaningful from this--is, in a manner, a sort of "gaming the system."
 
IOW taking a test and actually doing something are worlds removed.
Absolutely. There are all kinds of intelligence.
But I do think it is far more difficult, even when assessing for a single, very specific capacity, than any of us can imagine. And focussing on a single, specific capacity--and then concluding that we can deduce something meaningful from this--is, in a manner, a sort of "gaming the system."
Also agreed. At best we can get a rough estimate of one sort of intelligence from a test.
 
Absolutely. There are all kinds of intelligence.

Also agreed. At best we can get a rough estimate of one sort of intelligence from a test.

I always come back to the final passage of Borges's "Funes, the Memorious"

I suspect, however, that he was not very capable of thought. To think is to forget differences, generalize, make abstractions. In the teeming world of Funes, there were only details, almost immediate in their presence.

Real intelligence is often revealed through limitations--sometimes, the person who "can't see the forest for the trees" will prove to be more useful, more intelligent, more innovative in a certain situation. Other times, the exact opposite is needed: the person who only sees the big picture, and misses all the details. (See season 1, Halt and Catch Fire.)
 
And with deep learning systems, 2 and 3 are no longer true
if that is the case then Asimov's Laws are pretty useless
Even 1 is true only to a degree. A human created it, but did not program it or tell it what to do
I'm wondering about this one: I don't know of any computer or AI that isn't programmed by a human. If there is one, can you link it or provide a reference to it, please? Thanks
Who is more intelligent based on the test?
I rather liked the answer by Parmalee

I always tested high, and I know people who tested higher than I did on the SAT, but some of those people were also incapable of making it through college, and they're not considered "intelligent" because of their choices in life.
meh
 
if that is the case then Asimov's Laws are pretty useless
They always have been. (Asimov discusses this in his last story in that series.)
I'm wondering about this one: I don't know of any computer or AI that isn't programmed by a human. If there is one, can you link it or provide a reference to it, please? Thanks
I rather liked the answer by Parmalee
Pretty much all deep learning systems. They aren't programmed; they are trained. They are shown a bunch of pictures of a cow, for example, and told "that's a cow." Then they remember and recognize cows in the future.

However, the original hardware/software that allows that learning was created by people.
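To make "trained, not programmed" concrete, here is a minimal sketch in Python/PyTorch. The data is a made-up stand-in (random feature vectors instead of real cow pictures), and the labeling rule is hypothetical; the point is just that nobody writes a rule for recognizing cows - the network adjusts its own weights from labeled examples.

```python
# Minimal "trained, not programmed" sketch: the only human input is
# labeled examples; the weights are adjusted by the training loop.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in data: 64-dim "image features"; label 1 = cow, 0 = not-cow.
X = torch.randn(200, 64)
y = (X[:, 0] > 0).long()  # hypothetical labeling rule, just for the demo

model = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):   # "show it pictures and say 'that's a cow'"
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()       # the network adjusts itself; no hand-written rules
    opt.step()

print("final training loss:", loss.item())
```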
I always tested high, and I know people who tested higher than I did on the SAT, but some of those people were also incapable of making it through college, and they're not considered "intelligent" because of their choices in life.
Yep. There are all kinds of intelligence. Someone who is brilliant at math might be shit at any practical application of it - and vice versa.
 
I’ve been a chess enthusiast since my father taught me when I was a young boy (early 80s). Over the years I’ve watched the evolution of chess computers. Initially they were a bit of a joke, but they improved steadily, far faster than a human can improve their game. In the early 90s they had become very strong but still unable to win a match against a grandmaster. (I’m not referring to an individual game but a match which is usually a best-of contest over several games.)

In 1996 there was the famous Kasparov vs IBM’s Deep Blue, the most serious attempt to beat a GM to date in a full tournament setting. Kasparov won, but lost the re-match only a year later.

By the turn of the century, a variety of different algorithms had reached GM level. From about 2010 onwards, the best algorithms (Stockfish, Komodo etc) had become unbeatable by even the strongest human players. These days such algorithms are viewed as training aids rather than opponents as it’s no longer a contest.

The most interesting thing, to my mind, is the evolution of the algorithms. I’m no computer scientist and cannot comment in detail. But, in a nutshell, the algorithms have evolved from simple brute force computing to a machine learning and heuristic approach. They have become much more ‘human’ in the way they “think”. For instance, Deep Blue was a supercomputer that searched 200 million positions per second. In contrast, modern chess engines running on mobile phones have won human tournaments – chess engine Hiarcs 13 running inside Pocket Fritz 4 on the mobile phone HTC Touch HD won the Copa Mercosur tournament in Buenos Aires, Argentina with 9 wins and 1 draw on August 4–14, 2009. Pocket Fritz 4 searches fewer than 20,000 positions per second.

https://en.wikipedia.org/wiki/Computer_chess
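For the curious, the "brute force" core of a classical chess engine is just a game-tree search. Below is a toy alpha-beta sketch in Python; the `game` object here is a hypothetical interface, not any real engine's API. Deep Blue ran essentially this idea at enormous speed, while engines like the one in Pocket Fritz get far more out of each position via smarter evaluation and pruning than via raw position counts.

```python
# Toy alpha-beta search over a hypothetical `game` interface with
# legal_moves(), play(move), is_over(), and evaluate() methods.
def alphabeta(game, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if depth == 0 or game.is_over():
        return game.evaluate()            # static evaluation of the position
    if maximizing:
        value = float("-inf")
        for move in game.legal_moves():
            value = max(value, alphabeta(game.play(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:             # opponent will avoid this line: prune
                break
        return value
    value = float("inf")
    for move in game.legal_moves():
        value = min(value, alphabeta(game.play(move), depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value
```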
 
I’ve been a chess enthusiast since my father taught me when I was a young boy (early 80s). Over the years I’ve watched the evolution of chess computers. Initially they were a bit of a joke, but they improved steadily, far faster than a human can improve their game. In the early 90s they had become very strong but still unable to win a match against a grandmaster. (I’m not referring to an individual game but a match which is usually a best-of contest over several games.)
Check out the epic five-game tourney between AlphaGo and the world champion Go player in the past ten years.
I am also a chess player, but as I understand it the 2,500-year-old game of Go is far more complicated than chess, and because of the much bigger board the mathematics are incredibly difficult and require "intuition". AlphaGo won 4 of the 5 games.
and here is the tourney
 
Ask a computer whether it prefers to play go or chess.

Can it give an intelligent answer?
 
Ask a computer whether it prefers to play go or chess.
Can it give an intelligent answer?
Ok, do you play both chess and Go?
If so what game do you prefer to play, chess or Go? If not can you give an intelligent answer?

GPT3 would be able to do a search on the internet and learn both games in just a few days. At that point it might be able to tell you which game feels the most comfortable given its fundamental programming.

If I understand the difficulty of Go, I am confident that all AI would choose chess, being that it is highly suited to the average AI's computing algorithms.

GPT3 is not your average AI; it is text based, and all written knowledge is accessible over the internet. But Go is more difficult than can be "learned". You need abstract intuition, and that may be a problem for a deterministic neural system.
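A back-of-envelope calculation shows why Go resists the brute-force approach that works for chess. Using the standard rough figures (average branching factor ~35 over ~80 plies for chess, ~250 over ~150 moves for Go), the code below just does the arithmetic:

```python
# Rough game-tree sizes from standard back-of-envelope figures.
import math

chess = 80 * math.log10(35)   # exponent ~124: Shannon's classic ~10^120 ballpark
go = 150 * math.log10(250)    # exponent ~360: utterly beyond brute force

print(f"chess ~10^{chess:.0f} positions, Go ~10^{go:.0f} positions")
```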
 
Ok, do you play both chess and Go?
If so what game do you prefer to play, chess or Go? If not can you give an intelligent answer?

I play chess, though rarely these days. I don't know how to play Go. But regardless of how well any computer plays any game can it care about playing?

If it is incapable of caring can it be intelligent?
 
Ask a computer whether it prefers to play go or chess.

Can it give an intelligent answer?
Pfft. Even 38 years ago it could.

[Image: WarGames T-shirt: "Greetings Professor Falken. Shall we play a game?"]
 
I play chess, though rarely these days. I don't know how to play Go. But regardless of how well any computer plays any game can it care about playing?
That's not the question. Do you prefer to play chess or Go?
If it is incapable of caring can it be intelligent?
What if it expresses a preference?

The thing is that GPT3 can be taught aesthetics, which it has already demonstrated in painting and music composition.
Apparently it likes to paint in the Van Gogh style, and in music it is indistinguishable from human composers.

The thing is, you cannot assign it human properties, but that does not mean it cannot have some natural preferences, some shortcuts that take precedence over a longer algorithm.

Let's be clear: GPT makes choices by anticipating appropriate "next" words in a sentence. This is how it is able to write its own programs; it uses text in addition to algebraic functions.
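To make that concrete, here is a minimal sketch of the next-word loop in Python. The `model` and `vocab` are hypothetical stand-ins (a function returning a probability per candidate token, and a token list); real GPT-3 samples from the distribution rather than always taking the single likeliest token, but the loop is the same idea.

```python
# A minimal sketch of next-token generation, assuming a hypothetical
# `model(text)` that returns {token: probability} for the text so far.
def generate(model, vocab, prompt, max_tokens=20):
    text = prompt
    for _ in range(max_tokens):
        probs = model(text)                              # P(next token | text so far)
        next_token = max(vocab, key=lambda t: probs[t])  # greedy: pick the likeliest
        text += next_token                               # tokens carry their own spacing
    return text
```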

According to the developers, the only current limitation is in neural numbers and connections, i.e. 175 billion neural switches in GPT3 vs 250 trillion synaptic connections in the human brain.

Here is a story told by GPT3 based on the legend of unicorns. The command was something like "tell us a story of the unicorns".
This is one of several stories GPT3 came up with; I have read worse.
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
The unicorns were discovered by Professor Hans Sigmund, who was traveling with a team of scientists and archaeologists to study the ancient Incan ruins in the area.
"We were walking up the mountain and came across this small valley. The moment we stepped inside, I noticed that the animals there were completely different from the animals we'd seen before. They were, well, unicorns."
The scientists observed the herd of unicorns for two days before they approached the animals. They made no aggressive moves, and instead offered the herd of unicorns bananas and apples. The unicorns cautiously approached the scientists and allowed them to pet them. The scientists were amazed by the unicorns' physical appearance, and even more amazed by their intelligence.
"They're so intelligent. I was able to converse with them about quantum mechanics, which is something I've never even tried to talk to a regular horse about."
Professor Sigmund's discovery has caused a lot of controversy in the scientific community. Scientists have always believed that unicorns were just mythological creatures.
"This discovery has really rocked our world. We thought unicorns were just mythological creatures, but this proves otherwise. It just goes to show how little we know about the world."
When asked if the discovery would change the way we view unicorns, Professor Sigmund responded, "I don't know. I mean, they look a lot like horses, and the horn is just a symbol of their intelligence. I don't know if this will change how we view them."
When asked if they would try to make contact with other herds of unicorns, Professor Sigmund responded, "We'll probably try to make contact with other herds of unicorns. We don't know how long they'll be in the valley, and we don't want to miss our chance to study them."
https://github.com/minimaxir/gpt-3-experiments/blob/master/examples/unicorn/output_0_7.md

Architecture
To put things even more in perspective, consider this: a human brain has approximately a hundred billion neurons, which make roughly 100 to 500 trillion connections. If we think that increasing the number of hidden layers and the number of parameters is the solution for human-like intelligence, then we are still 1,000x behind human intelligence.
However, even being 1,000x behind human intelligence is a big leap (I believe), and close, if we consider the speed of scale-up in the past few decades, as evident from the figure below.
[Figure: growth in model parameter counts over the past decades; humans ≈ 250 trillion parameters]

Other interesting things it can do!

The interesting part is that unlike any other machine learning model, GPT-3 is not specific to some particular task or work. This was trained as a general language model and we know a language can contain anything — from a poem to an article and from coding language to questions and answers.
It turns out that GPT-3 can perform tasks that its creators did not think of, such as creating JavaScript code from English-language instructions. Later, it was found that there was a lot of code accompanied by explanations in the websites/webpages that GPT-3 crawled. As a result, it started converting English instructions to JavaScript, or natural-language instructions to UI elements, similar to how it does in the case of English to French. You can find these examples in the first video of this article; it's worth checking out!
Though the model is just following an algorithm and, unlike us humans, does not have an understanding or sense of what it is saying and doing, it can still prove to be a great assistant, maybe the best one.
https://deepaksingh-rv.medium.com/g...elligence-agi-or-just-a-strong-ai-e9df644fade

I have doubts about that bolded statement. If an AI has choices from which it can make a "considered" selection in the context of the subject, is that not a form of "understanding"?
 
Pfft. Even 38 years ago it could.

[Image: WarGames T-shirt: "Greetings Professor Falken. Shall we play a game?"]
Grade school kids could write a program to give an answer or randomly select one.

You call that intelligence?

Testing a machine's understanding of something is another story.

I wrote a Python program to count science and fantasy words in text and compute the density. It does it faster and more reliably than a human. That doesn't make it intelligent. Von Neumann devices just manipulate symbols according to a program faster, without comprehending the meaning of the symbols.
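Something along these lines, for illustration; the word lists below are hypothetical stand-ins, not the actual program:

```python
# Count "science" and "fantasy" words in a text and report their density.
import re

SCIENCE_WORDS = {"quantum", "neuron", "algorithm", "entropy"}  # stand-in list
FANTASY_WORDS = {"unicorn", "dragon", "wizard", "spell"}       # stand-in list

def word_density(text):
    words = re.findall(r"[a-z']+", text.lower())
    science = sum(w in SCIENCE_WORDS for w in words)
    fantasy = sum(w in FANTASY_WORDS for w in words)
    total = len(words) or 1          # avoid division by zero on empty text
    return {"science": science / total, "fantasy": fantasy / total}

print(word_density("The wizard explained quantum entropy to the unicorn."))
```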

Humans are projecting intelligence on computers just because they perform complicated activities. Those egotists at Dartmouth should have called it Simulated Intelligence in 1956.
 
Grade school kids could write a program to give an answer or randomly select one.
You call that intelligence?
Are you suggesting that grade school children are not intelligent?
Testing a machine's understanding of something is another story.
How?
I wrote a Python program to count science and fantasy words in text and compute the density. It does it faster and more reliably than a human. That doesn't make it intelligent. Von Neumann devices just manipulate symbols according to a program faster, without comprehending the meaning of the symbols.
How do you test comprehension of words? By definition, no? Ask GPT3 for the definition of any word and it'll comply instantly.
Is knowing the definition of a word, in the context of a larger sentence, understanding?
Humans are projecting intelligence on computers just because they perform complicated activities. Those egotists at Dartmouth should have called it Simulated Intelligence in 1956.
And that intelligence is less than the "best guess" intelligence of humans? Humans are infallible and always correct? What reality do you live in?

It's funny you should posit that specific problem of performing complicated activities. You do know that the human brain works on a principle of "best guess" about the meaning contained in incoming data, very much the same as any organism that relies on sensory data, including AI.

Intelligence and understanding are hierarchically evolved properties of complex neural systems, ranging from single-celled organisms to dolphins, whales, octopuses, hominids, and humans.

The value system applies to all sentient organisms.
 
Von Neumann devices just manipulate symbols according to a program faster, without comprehending the meaning of the symbols.
What makes you think your own brain is any different, in principle? Can you point to where the "comprehending" happens in your brain or explain how that works?

Humans are projecting intelligence on computers just because they perform complicated activities. Those egotists at Dartmouth should have called it Simulated Intelligence in 1956.
Any sufficiently sophisticated "simulated intelligence" would be indistinguishable from the real deal (whatever that might be). Or do you imagine you can somehow tell the difference?
 
This may be of interest. Teaching a GPT3 is not cheap; it's very much like a human attending advanced studies at university level.
IMO a trained GPT3 should merit a Doctorate or Masters in languages. Imprinting 175 billion engrams (parameters) is a daunting task and probably exceeds human capacity in that aspect alone.
GPT-3 is a good start toward human-like natural language performance. Perhaps a better analogy might be the "Homo habilis" [1] of Artificial General Intelligence for Natural Language.
The Homo habilis species distinguished itself from the earlier Australopithecus group with a slightly larger braincase and smaller face and teeth. However, the Homo habilis species retained some ape-like features [1].
Most importantly, for the analogy I will make, the Homo habilis species is thought to be the first maker of stone tools [1].
In my analogy, GPT-3 represents the Homo habilis of the Artificial Intelligence of Natural Language. The following is my reasoning:
GPT-3 has a significantly bigger brain (parameters) than predecessor NLP models;
GPT-3 uses tools, for example, transfer learning and fine-tuning;
GPT-3 accomplishes other NLP tasks (makes tools) for which it was not trained;
GPT-3 retains most of the GPT-2 architecture [3,5].

Analogous to the Homo habilis species, GPT-3 retains "some ape-like features" [3].
Key Takeaways: What makes GPT-3 So Special?
The GPT-3 (Generative Pre-Trained Transformer-3) is OpenAI's most massive natural language processing (NLP) model to date (available to the public June 2020).
GPT-3 has approximately 175 billion parameters. In contrast, the human brain has approximately 86 billion neurons with, on average, 7,000 synapses per neuron [2,3];
Comparing apples to oranges, those figures give the human brain about 600 trillion parameters, or roughly 3,000x more parameters than GPT-3. Note: If 10% of the human brain capacity is needed for natural language tasks, then the human brain still has about 300x more parameters than GPT-3.
It is estimated that GPT-3 cost between $4 million and $12 million in cloud compute time and required months to train [3,7]. OpenAI is not saying how much GPT-3 cost to train, and it is not clear they know within 20%. However, the author count on the paper was 31 staff [3]. That is at least an additional $12 million in staff salaries and benefits for a year. GPT-3 trained on several large corpora of text totaling about 500 billion tokens (a token is approximately a word) [3].
https://chatbotslife.com/is-gpt-3-the-adam-of-natural-language-cf59656456f2
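As a quick sanity check on the parameter comparison quoted above, here is the arithmetic redone from the article's own figures (86 billion neurons, ~7,000 synapses per neuron, 175 billion GPT-3 parameters):

```python
# Re-deriving the brain-vs-GPT-3 parameter comparison from quoted figures.
neurons = 86e9               # human brain neurons (quoted figure)
synapses_per_neuron = 7_000  # average synapses per neuron (quoted figure)
brain_params = neurons * synapses_per_neuron  # ~6.0e14, i.e. ~600 trillion
gpt3_params = 175e9

print(f"brain / GPT-3: {brain_params / gpt3_params:,.0f}x")              # ~3,440x
print(f"10% of brain / GPT-3: {0.1 * brain_params / gpt3_params:,.0f}x") # ~344x
```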
 