What is life

Robban

Registered Senior Member
The laws of biology and the definition of life lead to another rather interesting question: when, and what, is life really? Must life be biological? If yes, then why?

If we create a complex environment for a mind to inhabit in mechanical/electronic matter, would this be alive?

Can a mind "inhabit" something? Can it move to another "house", or is it bound to its physical matter?

If it's not bound, then what is mind?

If we create an artificial intelligence so complex that we recognize a mind, is there then a mind, or is it an illusion?

Do we (humans) have the knowledge to recognize when there is real life or just an illusion?

I am pretty sure we can already, today, create a computer system that can mimic a human mind to some degree. Will this ever be a mind, and ultimately life?

If we (humans) say we can distinguish real life/mind from illusions, then can we be sure that the life we recognize today really _is_ life...?

If we encountered an alien species that, after some time, turns out according to our definitions to be mechanical... have we finally encountered extraterrestrial life? :)

Reproduce... hmm... robots making robots... well, it's not an impossible thought.

And finally, if we can create life, will that make us Gods?
 
Just for reference, here are biology's basic rules for life on Earth.


According to the "rules" or "laws" of biology, something is only living if it has the following characteristics:

1. It is made up of at least one living cell.

2. It contains DNA or hereditary material.

3. It can reproduce.

4. It can convert energy.

5. It maintains homeostasis, the ability to keep its internal environment stable.

Besides "living", something can be categorized as "non-living" or "dead". Dead is obviously something that was once living, but no longer fulfills the laws required to be considered living. Non-living is anything that has never fulfilled the laws of biology to be considered living.


The problem I see with artificial life is that it must be made up of at least one living cell. It doesn't matter how smart they are; unless real living parts were used to make the robot (and even then), it can't be considered alive unless its cells can convert energy. (Tell me if I'm wrong. I think I confused myself :confused:)
 
What is life?

Life is a box of chocolates!!

A roller coaster, a phenomenon that need not be explained to exist.

Questions such as these (what is life? who created the universe? what is, or who created, existence?)
are ancient mind-subverting gimmicks: invalid, intellectually untenable questions that have no basis in reality.

That false-question maneuver has been used by theologians and other mystics for centuries. The gimmick works by taking an invalid or meaningless idea and then cloaking it in specious but profound-sounding phraseology. That phraseology is then used as an "intellectual" prop to advance false, irrational concepts or doctrines.

But for fun, and hypothetically, I'll give an answer to your questions.

Must life be biological? If yes, then why?

As we know it, life must be biological. Life must contain the signature of DNA in order to identify the species. DNA is biological matter, so by this definition life must be biological. However, what is consciousness, and can we have consciousness in a non-biological existence?

If we create a complex environment for a mind to inhabit in mechanical/electronic matter, would this be alive?

No! Data is not a living being and he knows it; however, what makes him strive to be like a human being is what gives us the illusion of a life!!

Can a mind "inhabit" something? Can it move to another "house", or is it bound to its physical matter?

Mind inhabits my body; when I die it will move elsewhere (don't ask). At this point of human development, which is about two thousand years behind, mind is bound by physical matter. However, if we were to clone our body and transplant our brain into it, we would have inhabited a new physical body made of our own cells.

If we create an artificial intelligence so complex that we recognize a mind, is there then a mind, or is it an illusion?

Neither; it's a program.

Do we (humans) have the knowledge to recognize when there is real life or just an illusion?

Yes, I believe we have the capability to test and experiment on a species or an entity and determine whether it is a life or an illusion!

I am pretty sure we can already, today, create a computer system that can mimic a human mind to some degree. Will this ever be a mind, and ultimately life?

No! But if one were to electronically take over the machine and dispose of the body, in essence becoming a machine with a mind, we would then be a freak of science!!

If we (humans) say we can distinguish real life/mind from illusions, then can we be sure that the life we recognize today really _is_ life...?

NO, I'm a machine!! However, the person typing these words is real life, not an illusion; the illusion is what you see in front of your screen!

If we encountered an alien species that, after some time, turns out according to our definitions to be mechanical... have we finally encountered extraterrestrial life?

Yes & no, we encountered an intelligence that created these machines; however, we've not encountered the programmer!!

And finally, if we can create life, will that make us Gods?

Oh, silly! We create life every day: babies are born. We are Gods!!!!
 
Originally posted by ScrollMaker
The problem I see with artificial life is that it must be made up of at least one living cell.

Maybe the rules need to be modified.
 
Re: What is life?

>biological matter, so by this definition life must be biological. However, what is consciousness, and can we have consciousness in a non-biological existence?


Mmmm, yes; to make an interesting discussion we need to distinguish between "life" and "consciousness", as it is not very likely a bacterium has a "mind".


>No! Data is not a living being and he knows it; however, what makes him strive to be like a human being is what gives us the illusion of a life!!


So you mean the human brain cannot be seen as a complex information structure with the ability to learn by experience and to remember, as with a complex computer?


>it we would have inhabited a new physical body made of our own cells.


The human brain is made of physical material, but the mind is not. The mind could very well be compared to software, and the physical brain to hardware.

Thus maybe your idea about transplanting a brain to another body could also be achieved by "copying" the "software"... hmm, maybe a bit too much sci-fi... I'll drop that one :)


>Neither; it's a program.
How can we know?


>NO, I'm a machine!! However, the person typing these words is real life, not an illusion; the illusion is what you see in front of your screen!

Good point. But what if the words I'm typing are actually written by a robot... :)


>Yes & no, we encountered an intelligence that created these machines; however, we've not encountered the programmer!!

What if the programmer is a machine? And the programmer of that programmer is also a machine... like a mechanical evolution.


Does a machine have to be made of silicon? What if we create a computer out of biological matter?
 
Re: Re: What is life?

Originally posted by Robban
----------
biological matter, so by this definition life must be biological. However, what is consciousness, and can we have consciousness in a non-biological existence?
----------
(Matter can have a biological existence, but until it develops to the point of being viable, it doesn't contain the soul. Let's assume that God created us through evolution, etc., but until we become a living entity viable on our own, we are not a vessel for the One Spirit of God.)
----------
Mmmm, yes; to make an interesting discussion we need to distinguish between "life" and "consciousness", as it is not very likely a bacterium has a "mind".
----------
(The embryo may have a rudimentary developing brain, but it doesn't have a conscious mind.)
----------
(I realize you were comparing life using a computer model, but when I read your post, I had these thoughts. I don't mean to get off track here.)
 
Biological life could be anything from bacteria to a human, and includes all forms of vegetation as well.

As we enter the realm of artificial life then we need to distinguish clearly what makes us different from all other forms of life, and what comes next.

Our single obvious distinguishing difference from other species is our brain complexity and power. This evolved level of complexity has bridged a critical threshold that has enabled what is best described as self-awareness. Some other primates also exhibit some levels of self-awareness, but only at a trivial level. Our ability allows us to state “I think therefore I am”. No other species can do that. And the implications are colossal.

When AI emerges and can make the same statement, then we will no longer be alone. The question now should be whether we are more similar to plant life or to an intelligent, self-aware entity. I hope everyone will agree to the latter. This distinction should make it apparent that the definition of life must absorb two very distinct forms.

(1) Simple life that is not self-aware, but capable of sustaining itself and perhaps even reproducing. All plants and animals and bacteria fit this description but so do computer viruses. Their survival is determined by their living environments on which they are dependent but have little to no controlling influence.

(2) Self-aware life whose capabilities allow them to determine, control, create, and change their own living environments as necessary in order to survive.

I believe there is a third form of life that is on the extreme edge of my imagination and which I can’t effectively define, but is the result of super-intelligence. I.e. that state where AI evolves at such a rate that it exceeds our intelligence by several orders of magnitude. The capabilities of such entities are currently beyond our ability to predict or imagine. I suspect it will be that point where first contact with alien species would occur if they exist.
 
I can observe self-awareness in just about every member of the animal kingdom. The main difference between us and them is that the animals cannot communicate with us. They know that they exist but cannot express their existence in words. Also, I think it is logically impossible to produce a system that knows that it exists without coding it by hand into the machine. Anyone can produce a computer that says "I exist", but no one can produce one that understands the "I exist". In fact this is impossible, because the concept of existence cannot be defined using a non-circular definition and therefore cannot be observed.
 
I would say that the singular quality that distinguishes life from non-life is the capacity for action: the ability of the 'organism' to act on its own and in its own 'interests', rather than simply being affected by its surroundings. As a base definition of life it would require some re-categorization: viruses, considered by some not to be alive, would indeed be considered alive, while prions would not. Some machines and computer programs could be considered alive as well. But it seems a truer definition, IMO, than the biological definition, which is rather biased towards Earth life.

~Raithere
 
Okinrus,

I can observe self-awareness in just about every member of the animal kingdom.
What animal can express the concept “I think therefore I am”?

The main difference between us and them is that the animals cannot communicate with us. They know that they exist but cannot express their existence in words.
Communication is a two-way activity. We certainly have the ability to communicate with them, but they have extremely limited abilities to understand our communications apart from simple commands. Chimps and gorillas have been trained to understand many commands, but that has not been easy. No one has yet been able to debate philosophical issues with a chimp or gorilla.

So I think you miss my point – we have achieved and passed that threshold of self-awareness that enables the “I think therefore I am” condition. There is certainly no evidence that any animal has reached that state.

The necessary conditions of self-awareness are that the system has a conception or internal representation of the self and that it has the capability for reflective awareness. When the internal representation of the self (e.g. body image as perceived in a mirror) is consciously updated, evaluated, described etc., so that the representation of the self is in primary consciousness and taken as an object by reflective consciousness, then the creature is self-aware.

The mirror self-recognition test has revealed that chimpanzees and orangutans have the capability of recognizing themselves in the mirror — evidence of a basic form of self-awareness — and that human children acquire this capability at about the age of 18 months. What most animals seem to lack is the ability to become aware of it as their own body. They cannot reflectively realize that the body representation is a representation of the animal itself.

Also I think it is logically impossible to produce a system that knows that it exists without coding it by hand into the machine. Anyone can produce a computer that says "I exist" but no one can produce one that understands the "I exist".
It is true that we have yet to build such a self-aware machine but then the best computers have a power that is still only a fraction of the processing power of the human brain. Computers are currently at the lizard stage. This diagram indicates we may have to wait until around 2030 before machines become self-aware.

http://www.frc.ri.cmu.edu/~hpm/talks/revo.slides/power.aug.curve/power.aug.html

In fact this is impossible, because the concept of existence cannot be defined using a non-circular definition and therefore cannot be observed.
I couldn’t decipher this. Please try again if it is an important point.
 
The necessary conditions of self-awareness are that the system has a conception or internal representation of the self and that it has the capability for reflective awareness. When the internal representation of the self (e.g. body image as perceived in a mirror) is consciously updated, evaluated, described etc., so that the representation of the self is in primary consciousness and taken as an object by reflective consciousness, then the creature is self-aware.
It's a bit more than that. If that were all, then just about any modern operating system would be self-aware.

So I think you miss my point – we have achieved and passed that threshold of self-awareness that enables the “I think therefore I am” condition. There is certainly no evidence that any animal has reached that state.
I don't think we have a good definition of what thinking means. Do computers think?

It is true that we have yet to build such a self-aware machine but then the best computers have a power that is still only a fraction of the processing power of the human brain. Computers are currently at the lizard stage. This diagram indicates we may have to wait until around 2030 before machines become self-aware.
I have to wonder what computers you're talking about, because while computers are good at crunching numbers, they don't do the complicated things a lizard can do. OK, MIPS is a bad measurement of computer speed because of the difference in instructions between architectures, and it depends entirely on what program we're running. It's a complete joke to extend this to human beings :) And a huge data collection such as what is contained inside the brain would have to be stored in secondary memory, an impossibility at the rate that capacity is increasing. So let's pretend we have a machine that runs every algorithm with no time loss and has unlimited memory; we still could not create an intelligent being, because we don't have algorithms or data structures that could do it. The visual processing and object recognition of a lizard alone have not been solved, AFAIK.
 
Re: What is life?

Originally posted by Godless
Questions such as these: what is life? who created the universe? ... Are ancient mind-subverting gimmicks: invalid, intellectually untenable questions that have no basis in reality.
That an angry, pretentious little mind finds a question difficult to answer does not render the question untenable.
 
Okinrus,

I have to wonder what computers you're talking about, because while computers are good at crunching numbers, they don't do the complicated things a lizard can do.
From your comments I don’t think you work in the computer industry, do you?

OK, MIPS is a bad measurement of computer speed because of the difference in instructions between architectures, and it depends entirely on what program we're running.
We can always normalize for MIPS. There are a number of utilities that will do that regardless of the hardware platform.

It's a complete joke to extend this to human beings.
Why? The human brain has some 100 billion neurons, and each neuron fires around 200 times per second. That’s a total speed of around 20K GHz, noting that all neurons are capable of independent, parallel operation. In terms of current Intel technology, that is about 10,000 x 2 GHz Pentium processor chips. It is theoretically possible to connect that number together now. But practically speaking, assuming Moore’s law holds, then by around 2012 that should be about 40 chips, and connecting those together will be no problem. The hardware will not be any problem.

http://www.jetpress.org/volume1/moravec.htm
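The back-of-envelope arithmetic above is easy to check, taking the post's figures (100 billion neurons, 200 firings per second, 2 GHz chips) at face value:

```python
# Sanity check of the figures quoted above.
neurons = 100e9          # ~100 billion neurons in the human brain
firing_rate_hz = 200     # ~200 firings per second per neuron

total_hz = neurons * firing_rate_hz     # 2e13 firing events per second
total_ghz = total_hz / 1e9              # = 20,000 GHz ("20K GHz")

chip_ghz = 2.0                          # one 2 GHz Pentium
chips_needed = total_ghz / chip_ghz     # = 10,000 chips

print(f"{total_ghz:,.0f} GHz total, or {chips_needed:,.0f} x 2 GHz chips")
# prints: 20,000 GHz total, or 10,000 x 2 GHz chips
```

Note that this comparison equates one neuron firing with one clock cycle, an assumption that is itself disputed later in the thread.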

And a huge data collection such as what is contained inside the brain would have to be stored in secondary memory, an impossibility at the rate that capacity is increasing.
You keep saying things are impossible. How much memory do you think the brain would need? I suspect it is not as much as you might think.

So let's pretend we have a machine that runs every algorithm with no time loss and has unlimited memory; we still could not create an intelligent being, because we don't have algorithms or data structures that could do it.
No time loss? Unlimited memory? I’m not sure what you mean by those conditions. But you are right; the software is lagging behind the hardware. While the hardware will be ready in a few years I suspect that it will take a further 10 to 20 years to perfect the software.

The visual processing and object recognition of a lizard alone have not been solved, AFAIK.
Depends on whether you consider the intelligence of a guppy much like a lizard. Try this:

http://www.frc.ri.cmu.edu/~hpm/proj...1/RoboEvol.presentation/revo.slides/2000.html
 
From your comments I don’t think you work in the computer industry, do you?
No, I'm in school, but this is from a book by the authors of MIPS, so I suspect that it is accurate. Anyway, if I were in the industry, with all those project deadlines gone amiss, I would multiply any estimate of 30 years by 10.

We can always normalize for MIPS. There are a number of utilities that will do that regardless of the hardware platform.
Well, for an inexact measurement you might be able to use it, but other than that it's a real stretch. It doesn't specify the collection of instructions they are timing with, and each machine has different instructions.

Why? The human brain has some 100 billion neurons and each neuron fires at around 200 times per second. That’s a total speed of around 20K GHz noting that all neurons are capable of independent and parallel operation.
A neuron firing is a completely different "instruction" than a CPU instruction.

No time loss? Unlimited memory? I’m not sure what you mean by those conditions. But you are right; the software is lagging behind the hardware. While the hardware will be ready in a few years I suspect that it will take a further 10 to 20 years to perfect the software.
Basically, a machine that can calculate the result of any terminating algorithm. I don't think that we can reliably build a machine as complex as the brain and make it fault-tolerant. By the way, Babbage's model of the computer took quite a while to be built, and that was with a full design.

Depends if you consider the intelligence of a guppy much like a lizard. Try this –
No, what I really mean is the recognition of what it's seeing; for example, can we create a computer that recognizes a generic chair or table? I don't even think OCR systems are fully perfected, and for a machine to learn, it would have to be given an example of an object such as a chair and find out detailed information on what's different and similar about it compared to other objects.
 
Okinrus,

Anyway, if I were in the industry, with all those project deadlines gone amiss, I would multiply any estimate of 30 years by 10.
It doesn’t work that way.

Well, for an inexact measurement you might be able to use it, but other than that it's a real stretch. It doesn't specify the collection of instructions they are timing with, and each machine has different instructions.
It’s sufficiently accurate for the type of estimates we are making here.

A neuron firing is a completely different "instruction" than a CPU instruction.
For sure, but the human brain uses many bio-chemical reactions that are incredibly slow compared to CPU speeds. But even if the estimates are a few decades wrong, the point is that the best scientists working in this field have absolutely no doubt it is going to occur.

I don't think that we can reliably build a machine as complex as the brain and make it fault-tolerant.
Why not? My company designs and builds fault-tolerant hardware that I suspect will fit the need quite nicely.

No, what I really mean is the recognition of what it's seeing; for example, can we create a computer that recognizes a generic chair or table?
Yes, I understand; visual recognition is one of the major areas for AI progress. But I think that was what Hans was claiming in his projects on mobile robots: that these industrial robots are general-purpose within a limited environment and can recognize unexpected objects.

For his full site see http://www.frc.ri.cmu.edu/~hpm/
 
It doesn’t work that way.
Maybe not that drastic, but the AI industry has a history of being overly optimistic. Also, it is said that software complexity grows exponentially with lines of code.

It’s sufficiently accurate for the type of estimates we are making here.
I believe that the professor should make note of all assumptions. To implement a neuron might take 100 instructions, and if it has to access main memory there will be added slowdown.

For sure, but the human brain uses many bio-chemical reactions that are incredibly slow compared to CPU speeds. But even if the estimates are a few decades wrong, the point is that the best scientists working in this field have absolutely no doubt it is going to occur.
I don't think it will occur until we fully understand our own brains. Once we understand our own brains to a fairly good level, we may be able to use this knowledge to produce a human-like robot.

Yes, I understand; visual recognition is one of the major areas for AI progress. But I think that was what Hans was claiming in his projects on mobile robots: that these industrial robots are general-purpose within a limited environment and can recognize unexpected objects.
I think within 30 years we may have a machine that is capable of recognizing objects that it sees and naming them, along with a definition. We may even be able to produce a machine that learns how to detect different objects, but intelligent machines are pretty far away.
 
Hmm. Where do we reach the point of self-awareness?

An AI is basically a series of IF-THEN statements combined with a learning algorithm and a statistics database.

For instance,

Base condition:
The battery is below a certain critical level, so try to recharge.

Locate power source:
1) Check the map in memory for any marked power source.
2) If none, check for obstacles usually seen close to a power source, based on the statistics database.
3) If none, cry for help in a proper human lingo.
4) If none, try locating a source by following the closest wall in a random direction.
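The decision chain above can be sketched as a minimal, runnable Python routine. The plain dicts standing in for the map and the statistics database, and key names like "power_source" and "seen_near_power", are illustrative assumptions, not a real robot API:

```python
# A minimal sketch of the battery/recharge decision chain described above.
# The map and statistics database are plain dicts; the key names
# ("power_source", "seen_near_power") are assumptions for illustration.

def locate_power_source(memory_map, stats_db, visible_obstacles):
    # 1) Check the map in memory for any marked power source.
    if "power_source" in memory_map:
        return ("map", memory_map["power_source"])
    # 2) If none, look for obstacles that the statistics database says
    #    are usually seen close to a power source.
    for obstacle in visible_obstacles:
        if obstacle in stats_db.get("seen_near_power", set()):
            return ("statistics", obstacle)
    # 3) If none, cry for help in a proper human lingo.
    #    (Step 4, wall-following, would be the fallback if no one answers.)
    return ("ask_human", "Excuse me, where can I recharge?")

def tick(battery_level, memory_map, stats_db, visible_obstacles,
         critical_level=0.15):  # assumed 15% threshold
    # Base condition: only start the search when the battery is critical.
    if battery_level < critical_level:
        return locate_power_source(memory_map, stats_db, visible_obstacles)
    return ("idle", None)

print(tick(0.10, {}, {"seen_near_power": {"wall socket"}},
           ["chair", "wall socket"]))  # → ('statistics', 'wall socket')
```

A real robot would loop this check continuously and add the wall-following fallback; sharing `stats_db` between many robots is exactly the networked-knowledge idea mentioned next.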

The logic can be extended to the extreme, especially if several thousand robots are interconnected through the Internet to share the knowledge database.

What makes the human brain so different from this? We also use simple logic, even in our discussions on this forum.

We have a right brain hemisphere that is different, though. No idea how that could be compared to AI; probably pure logic even there, but with more randomness... :)

So... when will a system become self-aware?

Will it just happen once the complexity has reached a certain unknown level?


(Raithere, isn't that Tim the Enchanter you have there as your image? :)
 
I find this post to be quite interesting. I think we do need to change our concept of life, but how I'm not sure. I do know that we are more than just "biological" organisms and I believe advanced studies on consciousness will one day be able to prove this.
 
Robban,

An AI is basically a series of IF-THEN statements combined with a learning algorithm and a statistics database.
OK, but then a BRAIN is basically a set of cells that represent simple IF-THEN functions, i.e. if there is sufficient signal potential on the inputs, then the neuron will fire.

These simple cells combine a volatile, approximating (fuzzy) distributed database function with logic sequencing.

We can consider each of these cells (neurons) as a simplistic microprocessor, complete with local memory, but there are some 100 billion of them: the brain is simply a massively parallel processing system.
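That IF-THEN picture of a neuron can be sketched as a classic threshold unit (a McCulloch-Pitts style neuron). The weights and threshold below are illustrative values, not measurements of real synapses:

```python
# A single neuron as an IF-THEN rule: IF the weighted sum of the inputs
# exceeds a threshold, THEN the neuron fires. Weights and threshold are
# illustrative values only.

def neuron_fires(inputs, weights, threshold):
    potential = sum(x * w for x, w in zip(inputs, weights))
    return potential >= threshold

# Three input signals with different synaptic "strengths":
weights = [0.5, 0.9, 0.2]

print(neuron_fires([1, 1, 0], weights, threshold=1.0))  # True  (0.5 + 0.9 = 1.4)
print(neuron_fires([1, 0, 1], weights, threshold=1.0))  # False (0.5 + 0.2 = 0.7)
```

A brain, in this simplified view, is billions of such units wired to each other, all evaluating their IF-THEN rule in parallel.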
 
Invisbleone,

I do know that we are more than just "biological" organisms
How do you know this? The brain is not yet fully understood.
 