Reality and Mathematical Modelling

Discussion in 'Computer Science & Culture' started by Spellbound, Jun 14, 2014.

  1. Spellbound Banned Valued Senior Member

    Messages:
    1,623
    Brains as Models of Intelligence

    by Eric Hart (aka Chris Langan)

    Intelligence testing has, for some time, been in disrepute. Critics have a number of complaints about it: it is "culturally biased", it "fails to account for creativity", there are "too many kinds of intelligence to measure on one kind of test", and so on. But advances in the theory of computation, by enabling a general mathematical description of the processes which occur within human brains, indicate that such criticisms may overlook more general parameters of intellectual ability... i.e., that there exist mathematical models of human mentation which allow at least a partial quantification of these parameters.

    Neural networks are computational structures analogous in general topology and function to the human cortex. The elements of such networks act like generalized brain cells linked by excitatory and inhibitory "synapses" and conditioned by "learning functions" acting on "sensory" input. Because they are computational machines, they obey the same principles as machine programs, digital computers, and cellular automata.

    While there are certain aspects of human brains which distinguish them from generalized neural nets, it is instructive to assume a close analogy between them; specifically, that everything a brain can do has its formal counterpart in the model. Thus, an algorithmic description of mental processes, translated into the "language" of the model, generates patterns with definite consequences in the model's descriptive logic.


    http://www.scribd.com/doc/30454472/Noesis-Chris-Langan-Comments
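
    As a very rough illustration of the kind of model the excerpt describes (this is not Langan's own formalism, just a generic perceptron-style sketch in Python with made-up numbers), here is a single layer of "cells" whose positive and negative weights play the role of excitatory and inhibitory synapses, trained by a simple error-correction "learning function" on "sensory" input:

        # Sketch only: a single layer of "cells" joined to inputs by positive
        # (excitatory) and negative (inhibitory) weights, trained with a simple
        # error-correction rule. Names and numbers are illustrative.
        import random

        def step(x):
            # Threshold activation: the cell either fires (1) or stays quiet (0).
            return 1 if x >= 0 else 0

        def train(samples, n_inputs, epochs=20, lr=0.1, seed=0):
            rng = random.Random(seed)
            w = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]  # may be + or -
            b = 0.0
            for _ in range(epochs):
                for x, target in samples:
                    y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
                    err = target - y                    # the "learning function"
                    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                    b += lr * err
            return w, b

        # "Sensory" input: learn the logical AND of two inputs.
        samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        print(train(samples, n_inputs=2))

    The only point of such a toy is that the synapse signs and the learning rule are ordinary arithmetic, which is why the passage can speak of an algorithmic description of the model's behavior.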

    Reality as a model. There you have it: human mentation modeled in the above 1989 Noesis paper by Christopher Langan, a perfect isomorphic description of the functions of the human cortex.
     
  2. Spellbound Banned Valued Senior Member

    Messages:
    1,623
    Artificial intelligence

    Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also an academic field of study. Major AI researchers and textbooks define the field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]


    http://en.wikipedia.org/wiki/Artificial_intelligence
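
    The "intelligent agent" definition can be read very literally: perceive a state, then pick the action with the best estimated chance of success. A minimal sketch in Python, with an invented environment and invented value estimates (none of this comes from the cited article):

        # Illustrative only: the states and value estimates below are invented.
        # The agent perceives a state and picks the action its value table says
        # is most likely to succeed.
        ACTIONS = ["move_left", "move_right", "wait"]

        # Hypothetical estimates of the chance of success per (state, action).
        VALUES = {
            ("food_left", "move_left"): 0.9,  ("food_left", "move_right"): 0.1,
            ("food_left", "wait"): 0.2,
            ("food_right", "move_left"): 0.1, ("food_right", "move_right"): 0.9,
            ("food_right", "wait"): 0.2,
        }

        def choose_action(percept):
            # Take the action that maximizes the estimated chance of success.
            return max(ACTIONS, key=lambda a: VALUES.get((percept, a), 0.0))

        for percept in ("food_left", "food_right"):
            print(percept, "->", choose_action(percept))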

    Artificial neural network

    In computer science and related fields, artificial neural networks (ANNs) are computational models, inspired by an animal's central nervous system (in particular the brain), that are capable of machine learning as well as pattern recognition. Artificial neural networks are generally presented as systems of interconnected "neurons" which can compute values from inputs.

    http://en.wikipedia.org/wiki/Neural_network
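
    As a concrete reading of "interconnected 'neurons' which can compute values from inputs", here is a minimal forward pass through a two-layer network in Python; the weights and inputs are arbitrary numbers chosen only for illustration:

        # Illustrative forward pass: each "neuron" computes a weighted sum of its
        # inputs and squashes it with an activation function. Weights are arbitrary.
        import math

        def sigmoid(x):
            return 1.0 / (1.0 + math.exp(-x))

        def layer(inputs, weights, biases):
            # Each row of `weights` feeds one neuron in the layer.
            return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
                    for row, b in zip(weights, biases)]

        x = [0.5, -1.2]                                            # input values
        hidden = layer(x, [[0.8, -0.4], [0.3, 0.9]], [0.0, 0.1])   # two hidden neurons
        output = layer(hidden, [[1.5, -2.0]], [0.05])              # one output neuron
        print(output)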

    Reality is real. This follows from neural networks.
     
  3. Andrus Registered Member

    Messages:
    19
    There is one thing that bothers me about "mathematics" in ANNs.

    While neural-net algorithms are straightforward math, there is a "random" component that is necessary to train them. (The same goes for evolutionary algorithms.)

    So I cannot quite imagine writing a complete mathematical formula for an AI system, because how would you describe the "random" component in a mathematical formula?

    Sidenote: There is a related but more programming-specific, funny aspect of AI systems. In my experience, the programmer cannot fully describe "how" it works, and this has the interesting side effect that you cannot debug and "fix" a trained and working AI machine. A very unusual situation for a programmer.
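
    For what it's worth, the usual way to put the "random" component into a formula is to treat it as a draw from a probability distribution, e.g. initial weights w ~ Uniform(-a, a) and a randomly sampled training example at each step; in code it is a seeded pseudorandom generator, so the same seed reproduces the same trained weights exactly. A small sketch (the numbers and the toy model are invented):

        # Sketch: the "random" component is a draw from a distribution, here via
        # a seeded pseudorandom generator. The toy model (fit y = 2x by
        # stochastic gradient steps) is invented for the example.
        import random

        def train_tiny_model(seed):
            rng = random.Random(seed)
            w = rng.uniform(-0.5, 0.5)                   # w0 ~ Uniform(-0.5, 0.5)
            data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # points on y = 2x
            for _ in range(200):
                x, y = rng.choice(data)                  # randomly sampled example
                w -= 0.05 * (w * x - y) * x              # gradient step on squared error
            return w

        print(train_tiny_model(seed=42))   # same seed -> exactly the same result
        print(train_tiny_model(seed=42))
        print(train_tiny_model(seed=7))    # different seed, different random path (still ~2.0)

    Whether that counts as a "complete mathematical formula" is a fair question, but the randomness itself is mathematically well-defined; it just makes the trained system a sample from a distribution of possible systems rather than a single closed-form expression.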

     
  4. danshawen Valued Senior Member

    Messages:
    3,951
    Roger Penrose long ago pointed out that certain single-celled organisms are capable of intelligent behavior (avoidance of dangerous environments, attraction to nutrient- and resource-rich environments) without any evidence of having a single neuron, or even an analog of one.

    Basically, this level of neural function is the same as for higher animals up to reptiles, which have a repertoire of responses to predator and prey, hot and cold, and not very much else. This pattern of behavior is also seen in teenage human beings, who still largely depend on their amygdalas for certain kinds of reasoning. You may guess what kind that is. For the most part, this is the same basic brain functioning as that of your typical sociopath.

    The mammalian neocortex is where the real workhorse mental activity that put human beings over the top, brain-wise, happens. With the neocortex you start seeing the beginnings of a highly evolved social animal that models and predicts the behaviors of parents, friends, potential mates and co-workers. But more than this, it enables obsessions with other things humans can do, like making and using complex tools, habitats, science, religion, origami, or what-have-you. Far from being a disorder, a predisposition to OCD and everything that goes with it, including modeling complex behaviors and processes, is what serendipitously makes our brains unique and more adaptable than anything less developed. This is where true intelligence lies, not on some stupid IQ test originally developed by the US Army. Trust me, I'm a Mensan, and I can tell you, cognitive prowess is not what those assessments tell you it is. Some of the dumbest people I've ever met made high marks on those things, and some of the most brilliant never tested.

    I noticed in one of your other threads that you seem to think Chris Langan knows something normal humans don't, by virtue of his very high measured IQ. What he knows of philosophy is autodidactic, and it shows. I've given up hope that CTMU or Chris will ever amount to much, frankly.

    Now, about computing engines and Artificial Intelligence:

    The folks who study AI fall into two camps that don't get along very well. One camp, the symbolic logic folks, believes that language, and most particularly the language of mathematics, is the be-all and end-all of machine intelligence. This camp spends most of its time tweaking learning algorithms and getting its machines to try to devise their own languages. The other camp, the behaviorists, believes that it is the engineering of sophisticated electronic versions of our neurosensory apparatus, tactile sensing included, integrated with motion servos, that will eventually enable us to duplicate the functions of a human being cybernetically, the way Mr. Data on STTNG does positronically.

    A machine intelligence built by the symbology camp, if done perfectly, will have not the slightest idea, when finished, what a 'number' actually is, despite the fact that its intelligence is fundamentally based on numbers.

    A machine intelligence built by the neurosensory folks has a decided advantage because, finally, the data it processes will basically be analogous to what humans process. A human mind has not the slightest idea what a 'length' is, even though, as babies, we all first learned how far we could reach, and so the unit distance of an arm's or a leg's length became something we measured the outside world with. Our inner ears contain hairs of different lengths attached to neurons that respond to different frequencies of sound. Our retinas are so well integrated as a neurosensory apparatus that by the time the signal reaches the optic nerve, the retina has already identified what it is tracking. This is a handy integration for avoiding predators or catching prey.

    Newton and Einstein were our greatest minds, and what subjects did they mostly concern themselves with? Length and, optionally, time. Time, in terms of light travel time, is actually a length, you see? These are the foundational ideas common to all human languages. The rest of what appears in the dictionaries and encyclopedias of all languages is just so many circular references. It's all we know, yet we have no idea what it really is. Kind of like a symbology-based Artificial Intelligence trying to understand what a number actually is.

    I don't think you will find these ideas in whole or in part elsewhere unless I have written them. If anyone asks, my finite mind has no clue where these ideas came from.
     
  5. iceaura Valued Senior Member

    Messages:
    30,994


    This is only true if either 1) no properties or capabilities of any interest emerge as the brain increases in size and complexity of connection, or 2) the neural network model is sufficiently large and complex to produce them reliably, i.e. reasonably close to the size and complexity of a human brain.

    I believe that 2) does not hold for any actual model so far; it certainly did not hold in 1989.
     
  6. wellwisher Banned Banned

    Messages:
    5,160
    What is interesting is that the way we surf the internet is similar to the way dreams work. The internet was sort of modeled on a projection of the dynamics of dreams. In dreams, we may start with an initial dreamscape. There may not appear to be any logical sequence to the next stage of the dream. Rather, it is like a hyperlink.

    For example, I might be researching dogs and see a link for a particular breed. While I read that, I see an ad about shoes and click on that. I may get sidetracked away from where I began. These are all connected via search history, but not in a linear or even logical way. Subjectivity plays a role.
     
  7. wellwisher Banned Banned

    Messages:
    5,160
    The functioning of the mind and brain is different from that of computers and networks, because neurons are designed differently from computer memory. Neurons are designed for NI (natural intelligence). If you look at a neuron, it expends considerable energy pumping and exchanging cations (sodium and potassium). The net result is that a neuron at rest is actually at its highest potential, at the top of an energy hill. Computer memory is designed for stability and exists at low potential, much lower down the energy hill.

    If computer memory were designed like a neuron, it would sit at the top of an energy hill, unstable and subject to spontaneous change: firing. With this design, memory would have a very short shelf life and would change itself in a short time. This self-change, if structured, is NI.

    Neurons are designed to restore themselves after each firing. Computer memory built this way would need high-energy cells subject to spontaneous change, along with a rewriting and resetting mechanism to bring them back, while also filtering the previous changes for useful change.

    Neural networks are more like a capacitance built up from previous changes in memory, buffering against change while still remaining pliable to it. This is regulated by personality firmware.
     
    Last edited: Oct 1, 2014
  8. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Wellwisher,
    You do realise that your "beliefs" about how it works aren't the same as what science conjectures, right?
     
