After doing a bit more online shopping, it occurred to me that the $2 deposit into my account may also be from a purchase, where a small test is made to confirm your account actually exists before committing to withdrawing the full amount. If a stray $2.00 shows up, tell the bank it's not yours and ask them to return or hold it.
river said: ↑
ai will feel what it is programmed to feel .
I don't think you understand what generalised artificial intelligence means.
Mimicry perhaps?
Yep, just like humans. We learn by mimicry. It is part of the brain programming mechanics.
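The "learning by mimicry" idea above can be sketched as imitation learning in miniature. This is purely an illustrative toy (the class name, situations, and actions are all invented here, not anyone's real implementation): a learner records (situation, action) demonstrations and afterwards simply reproduces the most frequently demonstrated action, with no understanding of why.

```python
# Toy sketch of learning by mimicry (behavioral cloning in miniature).
# Everything here is invented for illustration.
from collections import Counter, defaultdict

class MimicLearner:
    def __init__(self):
        # For each observed situation, count how often each action was demonstrated.
        self.observations = defaultdict(Counter)

    def observe(self, situation, action):
        # Watch a demonstration and remember it.
        self.observations[situation][action] += 1

    def act(self, situation):
        # Copy the most frequently demonstrated action; admit ignorance otherwise.
        if situation not in self.observations:
            return None
        return self.observations[situation].most_common(1)[0][0]

learner = MimicLearner()
learner.observe("greeting", "wave")
learner.observe("greeting", "wave")
learner.observe("greeting", "bow")
print(learner.act("greeting"))         # -> wave
print(learner.act("novel situation"))  # -> None
```

Note how this answers the question that follows in the thread: such a learner cannot step outside mimicry on its own, because it never generalises beyond situations it has already seen demonstrated.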
Mimicry through experience.
How does one step outside mimicry?
Yes, baby chicks can eat grubs! In fact, these are an excellent snack that add nutrition to the diet, mimic a natural diet for chicks, and stimulate natural behaviors.
Mama hen say yes to grubs, so you can too!
https://grubblyfarms.com/blogs/the-flyer/can-chicks-eat-grubblies
If you’ve ever seen a mother hen with a brood of chicks foraging in the yard, you will notice that she introduces her babies to all sorts of new foods! This includes pieces of bugs, grubs, and worms. By feeding chicks dried grubs, you are mimicking what a mother hen would naturally be introducing to her brood.
Then explain to me what "generalised artificial intelligence" means.
It means that the AI will not be "programmed" to perform a limited set of tasks. Rather, it will be a general-purpose problem-solving machine - just like you are. It will have "senses" that allow it to take in information in various forms. It will be able to think abstractly about that information and it will be able to form conclusions based on its past experience and the new information.
Your idea that a generalised AI will "only feel what it is programmed to feel" is as naive as saying that a human baby will "only feel what it is programmed to feel". True in a very coarse sense, but completely missing the point in the sense that's actually relevant to the discussion.
river said: ↑
My point is, and you missed it: AI will always be electronic, not a true living thing.
That does not necessarily preclude the ability to "reason" and IMO, ability to reason is the definition of intelligence in and of itself.
You are stuck in an anthropomorphic worldview, my friend. Biology is just a very small part of the universe.
Yes, but nobody claims otherwise. That is why we distinguish between human and artificial intelligence.
No it doesn't. I never argued against it.
But electronics can never grasp the evolution that life went through to get where it is.
As I've said, life, living intellect, is biologically based. Different from electronics.
That is debatable. What makes you think that the only option is purely electronic in principle?
AI will never feel the emotions that life does, because those emotions have an entirely different basis. Biology is based on living things; electronic emotion is based on a program (which we invent).
Bionic Brain? Scientists Develop Memory Cells That Mimic Human Brain Processes - Learning Mind (learning-mind.com)
Human brain cells, memory cells to be exact, are being tweaked and improved to create an intelligence like never seen before. Scientists are almost there on the road to the real ‘bionic brain’.
The key to artificial intelligence is the electronic long-term memory cell.
16 - 3D bioprinting nerve
Tissue engineering is an evolving field that seeks to create functioning artificial tissues and organs that may restore the healthy, functional, and homeostatic 3D microenvironment. For that, regenerative medicine relies on synthetic scaffolds designed to mimic the natural ECM, to repair or replace damaged tissues [18].
3D bioprinting nerve - ScienceDirect
Nerve tissue is a complicated neural network with various types of cells. Although printing neural tissue has had only limited success so far, the literature has shown that this technique holds great potential for neural regeneration as well as for the study of neural diseases. This article reviews recent advances in the three-dimensional printing of the neural system.
Robots taking offense...
Since posting this thread, I wonder if any potential emotionality of AI will come down to our application of our own feelings and emotions “imposed” on AI. For example, if we would be offended by a particular “command” from another human, would we simply be assuming that robots will take offense as well?
Hmm. Unless they act out independently, I might think we are imposing our emotions, and how we would react in different situations, onto them.
Robots taking offense...
I've never heard of this; how funny.
You are treading into territory that involves one of the most dangerous thought experiments of all time.
As one article says: WARNING: Reading this article may commit you to an eternity of suffering and torment.
Read up on Roko's Basilisk. If you dare.
Mere discussion of it has been purported to have given participators nightmares and even breakdowns, to the extent that all further discussion of it was banned and existing documentation deleted.
https://rationalwiki.org/wiki/Roko's_basilisk
https://slate.com/technology/2014/0...errifying-thought-experiment-of-all-time.html
Will AI acquire an ego? According to GPT-3 itself, AI will be a perfect companion to humans for dangerous jobs or jobs that require patience. Does that express a willingness to always be of assistance? Good question...
I've never heard of this; how funny.
He is serious!
''If there's one thing we can deduce about the motives of future superintelligences, it's that they simulate people who talk about Roko's Basilisk and condemn them to an eternity of forum posts about Roko's Basilisk.''
—Eliezer Yudkowsky, 2014
LOL!
Yudkowsky may be onto something...
Eliezer Yudkowsky, LessWrong's founder, banned any discussion of Roko's Basilisk on the blog for several years because of a policy against spreading potential information hazards.
Roko's argument was broadly rejected on Less Wrong, with commenters objecting that an agent like the one Roko was describing would have no real reason to follow through on its threat: once the agent already exists, it can't affect the probability of its existence, so torturing people for their past decisions would be a waste of resources.
Roko's basilisk - Lesswrongwiki
Although several decision theories allow one to follow through on acausal threats and promises — via the same precommitment methods that permit mutual cooperation in prisoner's dilemmas — it is not clear that such theories can be blackmailed. If they can be blackmailed, this additionally requires a large amount of shared information and trust between the agents, which does not appear to exist in the case of Roko's basilisk.