Will the Singularity happen?

Discussion in 'Intelligence & Machines' started by francois, Oct 5, 2010.


Will the Singularity happen?

  1. Probably

    23 vote(s)
    60.5%
  2. No

    9 vote(s)
    23.7%
  3. I don't know, man

    6 vote(s)
    15.8%
  1. Gravage Registered Senior Member

    Messages:
    1,241
    Scientists in the 20th century also made all kinds of predictions about where technology would go, and they overestimated their calculations too; the answer, as always, is somewhere in the middle.
     
  3. Gravage Registered Senior Member

    Messages:
    1,241

    Actually, the last thing I read about artificial intelligence is that scientists have basically given up on it, admitting that human intelligence is much tougher to understand than previously thought, and have refocused their research on self-replicating robots.
     
  5. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    An overestimate does not mean it's impossible. The thread poll asks will the singularity happen, not when; given a million years, and the hope we don't kill ourselves off or regress, it's pretty damn probable. Look at how much we have learned in the last, oh, 500 years; what do you think we will learn in the next 999,500 years?

    If you want to keep track of AI research might I recommend: http://www.sciencedaily.com/news/computers_math/artificial_intelligence/

    The problem is that a direct push toward Strong AI is not fundable: companies and government mandates want near-term goals with near-term commercial applications. As a result the digital computer has become dominant and research into analog neural networks has remained stagnant for decades; only now are we starting to experiment with building artificial neural networks in hardware.
     
  7. Gravage Registered Senior Member

    Messages:
    1,241
    First of all, I fully support scientific and technological development. However, I have to say that both science and technology are becoming way too complex to really understand; we use more and more powerful supercomputers to run simulations that solve our problems, yet even a computer's processing power has an upper limit.
    So no, if we somehow survive the next million years (which, realistically speaking, we won't), we will live long enough to see the end of scientific and technological progress (which will actually happen sooner than we think, within the next 200 years, and that's being optimistic).

    As for AI, yes, I know sciencedaily.com, I read a lot of science news there. However, to show just how complex the human brain and human intelligence really are, I will copy my post from another thread here, from a science journalist who asked a neuroscientist about IBM's claims about simulating the human brain.
    Here is the whole post:
    Russ Altman began his lecture in the Unsolved Mysteries in Medical Research series with a tough question and a snappy answer. "Why can't computers simulate a living cell? That's easy -- because it's too hard. Thank you."

    When the chuckles died down, Altman, MD, PhD, associate professor of medical informatics at Stanford, began the real work of explaining why computers can't yet replace living organisms in medical research.

    During his April 17 lecture, Altman broke down the question into steps, each with its own problems and potential solutions. But first he issued a warning.

    "Most of us are not trained to do this," Altman said of the challenge of reassembling millions of bits of experimental data into a cohesive model system that could, for instance, predict the effects of untested medication on humans. "We're taught to be reductionists, but usually the more simple a model is, the more likely it is to be wrong."

    Altman said the first step in the process is identifying the individual components -- such as proteins and pools of molecules -- that affect cellular functions. Then the interactions between the components and pools must be identified and the results represented in a map format. Finally, it's necessary to translate the relationships represented by the map into equations, which can then be used to analyze input data -- such as the presence of a new drug -- and predict cellular responses.

    The Human Genome Project, a national effort to identify and characterize all human genetic material, has helped to identify many of the players. But Altman emphasized that alternative splicing and multifunctional proteins could inflate the effective number of components beyond the 35,000 genes that have been identified. He also pointed out that differences in the three-dimensional distribution of molecules within a cell can affect their function.

    Identifying interactions between the components is extremely complicated, Altman said. Current methods of calculating interactions between isolated components, such as the Michaelis-Menten equation used in enzyme kinetics, are not accurate when applied to living systems, he said. And it's difficult to precisely quantify interactions between feedback pathways.

    "As soon as you draw both a plus and a minus on the same page of a model, you've bought yourself a quantitative problem," Altman said. These quantitative tussles can hamstring any effort to generate accurate equations.

    Finally, it's not clear whether the computational power exists to crunch the numbers of the billions of interactions that occur in a cell, and whether enough experimental data exists to support this goal, Altman said.

    "We may have to give up our desire to have a computer system that permits 'one-stop shopping' and -- at least for the short term -- scale back our expectations," Altman said.

    When researchers associated with IBM announced that they had created a computer simulation that could be likened to a cat's brain, they hadn't talked beforehand to Ben Barres. They would have profited enormously from the conversation if they had.
    In a widely covered announcement, IBM said that its researchers had simulated a brain with 1 billion neurons and 10 trillion synapses, which it noted was about the complexity of a cat's brain.
    That led many writers to conclude that IBM computers could, as one put it, "simulate the thinking power" of a cat.
    Getting a computer to work like any sort of brain, even little Fluffy's, would be an epic accomplishment. What IBM did, unfortunately, didn't even come close, as was pointed out a day later by other researchers, who published a letter scolding the company for what they described as a cynical PR stunt.

    Any potential over-claiming aside, IBM's brain research follows the same pattern of similar explorations at many other centers. The logic of the approach goes something like this: We know the brain is composed of a network of cells called neurons, which pass messages to each other through connections known as synapses. If we build a model of those neurons and synapses in a computer, we will have a working double of a brain.

    Which is where Ben Barres can shed some light. Barres is a neurobiologist and a specialist in something called glial cells. These are brain cells that are nearly as populous as neurons, but which are usually overlooked by researchers because they are presumed to be of little use; a kind of packing material that fills up space in between the neurons, where all the action is.
    Barres, though, has made remarkable discoveries about glials. For example, if you take them away, neurons basically stop functioning properly. How? Why? We have no idea.

    He does his research in the context of possible treatments for Alzheimer's, but the implications for modeling the brain are obvious, since you can't model something if you don't know how it works.

    "We don't even begin to understand how neural circuits work. In fact, we don't even know what we don't know," he says. "The brain is very far from being modeled."

    The computer can be a tempting metaphor for the brain, because of the superficial similarities. A computer has transistors and logic gates and networks of nodes; the various parts of the brain can be described in similar terms.

    Barres says, though, that engineers seem to have a diminished ability to understand biology in all its messy glory. Glial cells are one example: they occupy much of the brain, yet we barely know the first thing about what they really do.

    Another example, he says, involves the little matter of blood. Blood flow through the brain--its amplitudes and vagaries--has an enormous impact on the functioning of brain cells. But Barres said it's one that researchers have barely even begun to think about, much less model in a computer.

    There are scores of neuroscientists like Barres, with deep knowledge of their special parts of the brain. Most of them will tell you a similar story, about how amazing the brain really is and about the utterly shallow nature of our current understanding of it.

    Remember them the next time you read a story claiming some brain-like accomplishment of a computer. The only really human thing these programs are doing is attracting attention to themselves.

    So no, artificial intelligence is not possible, and the disappointments of the last six decades show it:
    http://www.kurzweilai.net/why-artificial-general-intelligence-has-failed-and-how-to-fix-it (Kurzweil was wrong in this scientific discipline)
    http://web.media.mit.edu/~push/why-ai-failed.html
     
  8. Gudikan Registered Member

    Messages:
    26
    I've been pondering this issue on and off for a few years now, and I have to admit that I don't know what to think... On the one hand you've got the exponential rise of technology (cf. Kurzweil's logarithmic plot), which is more credible and objectively evidenced than the material of all previous "doom" predictions, but on the other hand, it seems so far fetched and unsubstantiated. Predicting the future has become an exercise in science fiction more than anything else, and it's an exercise I personally flounder at. My plan is to keep myself alive for as long as possible to bear witness to whatever will happen. Despite certain reservations, I think of myself as a transhumanist, so from my safely distant viewpoint, I'll be rooting for those elusive superintelligences!
     
  9. Gravage Registered Senior Member

    Messages:
    1,241
    The problem is that, with science and technology, we will reach the end of progress sooner than expected.
     
  10. andy1033 Truth Seeker Valued Senior Member

    Messages:
    1,060
    Like they say in Star Wars, the Jedi order will be a thing of the past, lol.

    After being targeted all my adult life with these sorts of things, I can safely say humans do not know anything but their own minds. So answering the question that the OP asked is a waste of time. Humans do not really know as much as they think they do about the brain and mind.

    Science is jealous of the wonder of the universe, I find. Take Star Wars, when someone asks about the Force: replying that the Force is greater than any human tech is right, and always will be. If you get me, I think a lot of science is jealous that it cannot understand this.
     
  11. NeuronFascinated Registered Member

    Messages:
    1
    My guess is 150 years before we build Data-like androids, and 300 years before the singularity. That's assuming quantum computers work out and biological emotional systems are not just a jumble of seemingly-random wires. No quantum computers, then maybe 3 million years to the singularity?

    I don't think we'll die off in the next million years. Non-human apes have survived for millions of years, and we survived for tens of thousands of years in the same way as other animals. Unless we nuke every place on the planet, we'll survive most disasters. Overpopulation, pollution, plague, famine, etc. are not real threats unless we really mess up and kill every single ecosystem on the planet in the process.

    You said that you think some revolution will happen in 5 years. It already has. (Look up Jeff Hawkins, Numenta, Grok, HTM, CLA.) Grok is a commercial product which makes predictions based on the past.

    It isn't all over the media because Grok is incredibly stupid. Grok has about 30,000 neurons, which is 1/700th the size of the human cortex. It only has 2 of the 4 processing layers of the cortex, so it can only recognize simple spatial patterns and short temporal patterns (the other 2 are for screening recognized aspects and forming sequences of sequences, in my opinion).

    (Feel free to skip this gibberish and go straight to the part which says the general idea.)

    Most importantly, Grok is not hierarchical like the cortex, so it cannot see high level patterns. Grok can lose track of what's going on every 20-200 times it receives data. Although humans lose track of precise nuances often, we can keep track of what's going on for months. (E.g. both Grok and humans can keep track of a falling glass of water until it spills all over the place, in a seemingly-random way. However, only humans can understand why the glass fell after a guy fell, causing a plate to fall, causing a bomb to explode, causing an earthquake, causing the water to spill.)

    The general idea is this: Each region of cortex keeps track of sequences, to understand what's going on. Regions are arranged in a hierarchy, each sending their findings up. So region 1 might recognize an object moving up, then down, then up, and then down. The next highest region would know that the object is moving up and down. Regions higher in the hierarchy are responsible for more and more complex patterns. Grok's hierarchy is only 1 region high. The human cortex's hierarchy is maybe 10 regions high.

    Let's say that each region can learn sequences up to 16 items long. If the human cortex is 10 regions high, the highest region could recognize 16^10 (about a trillion) items in a row. Compare that to the 16 items in a row Grok can recognize, and you'll see why Grok is so stupid.

    The human cortex receives data maybe every 50 milliseconds. On that time scale, Grok could recognize sequences 0.8 seconds long. Grok is generally used to recognize patterns hours to months long. Just input data once a day, and it will recognize sequences up to 16 (?) days long, but no less than 1 day. (A sequence of 0.5 items? What?)
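
    Roughly, the back-of-the-envelope arithmetic above looks like this in Python (just a sketch under my own assumptions: 16-item sequences per region, capacity compounding multiplicatively with hierarchy depth, and one input every 50 ms; the real cortex and Grok are obviously messier):

        # Toy capacity estimate for a sequence-memory hierarchy.
        # Assumptions (mine): each region chains up to ITEMS_PER_REGION items,
        # and capacity multiplies with each extra level of the hierarchy.
        ITEMS_PER_REGION = 16
        INPUT_INTERVAL_S = 0.05  # one input every 50 milliseconds

        def sequence_capacity(levels):
            """Longest sequence a hierarchy `levels` regions deep could track."""
            return ITEMS_PER_REGION ** levels

        for levels, name in [(1, "Grok-like (1 region)"), (10, "cortex-like (10 regions)")]:
            items = sequence_capacity(levels)
            seconds = items * INPUT_INTERVAL_S
            print(f"{name}: {items:,} items, about {seconds:,.1f} seconds of input")

        # 1 region   -> 16 items, about 0.8 seconds
        # 10 regions -> 16**10 (about 1.1 trillion) items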

    Overall, Grok is an idiot, but it works like the human neocortex. It is the first step towards super-intelligent machines. If we had the computing power, we could probably create a super-intelligent machine today. But motor control, instinct, and emotions are other problems to solve before we could create Data from Star Trek. (Yes, Data must have some form of instinct or emotions in order to pursue goals as simple as standing.)
     
  12. spandrel Registered Member

    Messages:
    35
    I'm still not sure what's meant by Singularity. If it's the time at which most people won't understand stuff then we're way past. I still don't understand steam engines, I mean how can steam make things move? It's freaky.
     
  13. Amine Registered Member

    Messages:
    20
    I constantly try to define the singularity in my mind.

    A basic definition is the point at which machine intelligence can pass a Turing test. That is predicted to happen in the late 2020s or 2030s. Ray Kurzweil, if I recall, defines the singularity as being a bit beyond that: the point at which 1000 2013 dollars will buy a computer 1 billion times more powerful than a human mind. He puts that date at 2045 and calls it the singularity as a metaphor, because as with a real black hole, there is a point (the event horizon) beyond which there is no way to see and know what it is like.
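
    As a rough sanity check on that 2045 figure (using my own assumption of a roughly one-year price-performance doubling time, not necessarily Kurzweil's exact numbers): a billion-fold improvement is about 30 doublings, since 2^30 is a bit over a billion, which is why a date around 2043-2045 falls out of the arithmetic.

        # Toy calculation: years until a fixed $1000 of compute improves a
        # billion-fold, assuming (my assumption) price-performance doubles
        # roughly once a year.
        import math

        target_factor = 1e9
        doubling_time_years = 1.0
        doublings_needed = math.log2(target_factor)   # about 29.9
        years = doublings_needed * doubling_time_years
        print(f"{doublings_needed:.1f} doublings, about {years:.0f} years: 2013 + {years:.0f} = {2013 + round(years)}")
        # Prints roughly: 29.9 doublings, about 30 years: 2013 + 30 = 2043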

    Another good definition is what I call the 3 supers: super intelligence, super happiness, and super longevity. So as for your concern about not understanding stuff, I think it will be alright. Imagine being like Watson: having the entirety of Wikipedia in your personal memory... and understanding it too... and more. That's what I reckon we'll be seeing around the time of the singularity.
     
  14. paddoboy Valued Senior Member

    Messages:
    27,543
    Mathematically, a singularity is simply the term for a point where infinities appear in the answer and where current theories and models break down; for example, 1/x blows up as x approaches 0.
    Physically speaking, a singularity need not be infinite, although it may lead to infinite quantities.
     
  15. wellwisher Banned Banned

    Messages:
    5,160
    There are two sides of the brain. The left side of the brain is differential and logical, while the right side is integral and spatial. Science and technology make primary use of the left side of the brain. This is because emotion is processed in the right brain, and that side is factored out to avoid subjectivity. Science is about logic without emotions.

    There could be an eventual singularity with respect to the left side, but the right side of the brain would be up for the task. The singularity could create a push for consciousness to have to migrate to the right brain. We use both sides of the brain, but are only conscious of one side at a time with the left brain more commonly used. I think therefore I am, is a differentiation from the left brain. The right brain is integral and this is not an integral statement.

    As an example of both sides of the brain in action: if we went to a foreign country, the right brain would integrate a wide range of observations into regional face similarities, while the left brain would see data details and differentiate certain people. With more and more data being collected into databases, the real need is not for more detail but for integration into trends that can encompass huge volumes of data. This needs 3-D data processing. The singularity will push toward 3-D.

    Let me try to explain the 3-D data processing of the spatial right brain with a loose analogy. Picture a tennis ball in 3-D. We can approximate this ball with a large number of planes (circles), all on a common center but at a large number of different angles. Each plane is a logic plane from the left brain, while all the planes approach the subject from various angles of understanding. The 3-D is approximated by this. We quickly notice the oriental face in the crowd, which is common to billions of people or data points, via a 3-D assessment. It brings billions of planes into the ball for a simple commonality.

    When the brain processes data with 3-D logic, picture flexing the 3-D ball with a hit of a tennis racket. This will distort the ball in 3-D and cause many logic planes to move out of place into other logic planes. The logical results might appear irrational in the 2-D of any plane, since it does not follow as A to B but now A to Z? (eureka!) In 3-D, this distortion and flex back is entirely logical and results in new ways to see the same things. This can process huge volumes of data in a fraction of the time and will make machines once again look slow. The concern with detail will appear passé and be given to the machines, since they can do that better. But the 3-D is something machines can't do.

    Humans used to be right-brained, but this was too fast for civilization. Humans needed to slow down to the left brain, so we could smell the roses and define the details needed for civilization. But that detail is getting so fine and so huge that we will need to make use of the right brain to get ahead of the curve again.

    This upgrade is not as easy as it sounds, since the right brain is also the gateway to the personality firmware of the brain, and as we migrate, this will initiate a major firmware upgrade. We are not used to this level of conscious speed, and that speed can be so fast that we can't always know what is correct until after the upgrade. But like any upgrade, the system will be on autopilot, since resources are being diverted to avoid errors that can render the OS nonfunctional. The ancients pictured this as the end of the world (the 2-D world).
     
  16. Peregrine Registered Member

    Messages:
    90
    To OP:

    Yes. However, not everyone will be synced up.

    There will be many who will, but not all. Then we wait to see what happens.

    Good Thread!!
     
  17. danshawen Valued Senior Member

    Messages:
    3,951
    Those of you who haven't been paying attention probably missed the fact that Ray Kurzweil has made the "Encyclopedia of American Loons", more for his views on alternative medicine than his ideas about the singularity.

    I've read all of Ray's books, and admired some of his technology when it was fresh. Technology has a way of going stale, no matter how bright it may burn at first.

    I worked on team ENSCO for the DARPA Grand Challenge 2005 (the first ever successful autonomous vehicle race of its kind). Carnegie Mellon placed, and everyone cheered when Red Whittaker's team was bested by his protege Sebastian Thrun, who left CMU for Stanford to develop his own machine learning program. There always seems to be a schism between remote-sensing roboticists like Red Whittaker and true strong-AI proponents like Sebastian. Sooner rather than later, we are going to need real strong AI devices. Remote sensing only works for as long as you can maintain reliable telemetry and communications. In the real world, that isn't anywhere near 24/7, nor should we expect that it ever will be.

    So instead of doing what Sebastian did, driving the car in the desert over the same terrain as the race and letting the machine try to emulate how a human navigates obstacles, Red's team consisted of two trailer loads of programmers for Highlander and Sandstorm, combing every square inch of terrain on maps given 20 minutes before the race and mitigating any contingencies in those vehicles' terrain lookup tables. Which approach do you think was closer to the 'singularity'?

    After the race (we came in 6th, with a flat tire at 80 miles), the young engineer who spearheaded the original Google Driverless Car joined our team for the DGC 2007. He is a brilliant young man, and deserves a lot of credit for advancing the state of the art of autonomous vehicles.
     
  18. krash661 [MK6] transitioning scifi to reality Valued Senior Member

    Messages:
    2,973
    is building a large scale remote control car,
    the highlight of your life ?

    it reminds me of al bundy,
    " i scored four touchdowns in one game "
    of high school football.
     
  19. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Considering that mobile cell antennas produce telemetry and communication 24/7 (with a 99.9% attempted uptime), that would suggest this statement is incorrect. (In fact, I know your statement to be wrong in this instance.)

    The only time the signals drop is if there is a prolonged power outage, such that the UPS is drained and essentials are moved to generator backups, or if there are huge electrical storms, since the naturally occurring electricity produces enough electromagnetic interference to distort signals enough to break telemetry/communication. (In other words, if Google or anyone else ended up with AI-controlled cars, they should park during storms or potentially become a threat on the road.)
     
  20. danshawen Valued Senior Member

    Messages:
    3,951
    The DARPA Grand Challenge 2005 was about building an A U T O N O M O U S vehicle, you stupid M A C H I N E.
     
  21. krash661 [MK6] transitioning scifi to reality Valued Senior Member

    Messages:
    2,973
    oh i'm sorry,
    ok then a self aware remote control car.

    ok al bundy.
    if primitive robotics makes you feel powerful enough to call me a stupid machine,
    that's fine.
     
  22. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Dan,
    It would be best to forget any comments that Krash previously made in retaliation to your calling him an AI, as you'll just end up escalating things further. Posts like this will potentially be seen by Krash as a reason to respond negatively, and from there things spiral further. It would be best if you guys could just realise that you like winding each other up and be done with it.

    Edit:
    Don't make me have to get heavy-handed with you two; moderators aren't supposed to be here to babysit. :spank:
     
    Last edited: Jul 27, 2014
