AI and the singularity

Discussion in 'Intelligence & Machines' started by arfa brane, Jun 9, 2017.

  1. RainbowSingularity Registered Senior Member

    an interesting debate I would like to follow...
    should society be regulated based on the example of Darwinian evolution as a known absolute regulatory mechanism?
    is regulation itself going against Darwinian evolution?
    a panel with the likes of Neil D~T, Richard D, Brian C, Bill G, Steve W and a few others would make an amazing TV series.

    "Divine Evolution"

    removing humans from the labour market by taking away their only means of generating food, clothing, housing, breeding etc. is a regulation process.
    producing robots is a process of regulating the labour market.

    that needs to be addressed as a socio-economic, geopolitical and cultural mandate to prevent extinction by regulation.
    the regulator (the robot) must itself be regulated to prevent extinction as a result of regulation.
  3. Write4U Valued Senior Member

    Personally I would like to call evolution a "probabilistic mathematical function", one whose existence can be shown statistically by 14 billion years of development of ever more complex patterns, eventually resulting in the advent of self-replicating polymers. Mutations in this process (random assimilation of compatible chemicals) may have conferred a survival advantage and begun the competition to dominate, i.e. Darwinian evolution.

    This does not necessarily mean that simpler structures cease to exist altogether; they may well continue to exist, but will take a different path of evolutionary development.

    A great example IMO, is the fusion of chromosome 2 in humans, which marked the split from our hominid precursors.

    If it were not for man's interference by destruction of habitat and hunting for "bushmeat", the great apes would be thriving instead of declining. The same holds true for whales and for coral reefs, which provide a rich environment for a great variety of fish and other life forms.

    "mess with Mother Nature and she will exact a price for destabilizing local and/or global ecosystems"

    We cannot regulate global functions; we can only disturb the symmetries and balance which it took the earth some 4 billion years to establish through self-organization of the ecosphere, driven by the earth's inherent chemical potentials and our stable proximity to the sun.

    The rare probabilistic events were due to "outside" interferences, such as the collision with Theia.

    That collision may have introduced several elements (such as gold) into the earth's chemistry and may have been causal to the eventual emergence of self-replicating polymers, which continued to organize into more complex biochemical structures over the following millions of years.

    This is why I see the creation of life as a probabilistic but statistically demonstrable process of trial and error over some 2 trillion quadrillion quadrillion quadrillion chemical interactions on earth alone (Hazen), in spite of the early chaotic state of the earth's atmosphere, which created the great extinction epochs in which only the hardiest or most sheltered organisms survived and, when conditions settled, went on to repopulate the earth.

    I see this as a "fundamental" process and, as Hazen proposed, there may have been other ways to form self-duplicating biomolecules, but they all must have had something like it in common.
    Last edited: May 9, 2018
  5. river

    Of course we know that all of this has happened in the past, before, Write4U.
  7. river

    Yes, of course, the probability theory of life is simple, but complex.

    But how does this account for Arctic worms?
  8. Write4U Valued Senior Member

    They are simple.

    Water bears (tardigrades) are also very simple.

    And of course there are the extremophiles.

    IMO, with these organisms, the fact that they formed in extreme environments may well have prevented them from evolving beyond the niches where they now thrive.
  9. arfa brane call me arf Valued Senior Member

    On topic, what will the advent of "useful" quantum computers mean for AI and machine learning?

    The current paucity of quantum algorithms (perhaps due largely to our lack of intuition) is being addressed by computer scientists, or at least by those who can grasp what a quantum algorithm is and how very different it is from a classical algorithm. One way to think about the difference: a classical algorithm is like a musical composition in which single notes are played in sequence, whereas a quantum algorithm is like playing all the notes at the same time so that they interfere: the interference "is" the algorithm.
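    The "all the notes at once" picture can be sketched with plain linear algebra. This is a toy illustration of single-qubit interference, not a real quantum-computing API; the function name and structure are my own:

```python
# Toy sketch of quantum interference: put a qubit in superposition,
# apply a phase to one path, then recombine so the paths interfere.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate: creates/recombines superposition

def interfere(phase):
    """Send |0> through H, phase-shift the |1> path, then apply H again."""
    state = H @ np.array([1.0, 0.0])                     # equal superposition of |0> and |1>
    state = np.array([1.0, np.exp(1j * phase)]) * state  # relative phase on the |1> path
    return H @ state                                     # recombine: the two paths interfere

# With no phase shift the paths interfere constructively back to |0>;
# with a pi shift they cancel and the qubit ends in |1>.
p0 = abs(interfere(0.0)[0]) ** 2      # probability of measuring 0 -> 1.0
p1 = abs(interfere(np.pi)[1]) ** 2    # probability of measuring 1 -> 1.0
print(round(p0, 6), round(p1, 6))
```

    The "algorithm" here is nothing but the choice of phases: both computational paths run at once, and the answer is read off from how they reinforce or cancel.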

    But our current understanding of machine learning, and of AI in general, is entirely classical; even the design of a quantum computer follows the classical rules of physics.
    So what will happen when we start to use quantum systems in AI (it is already happening)?

    Does that mean we will somehow need to "lose" our classical perspective, because we will have handed the problem over, so to speak? Will we be able, in that case, to predict what such systems can learn? Will we be able to implement algorithms on quantum computers that can faithfully simulate natural systems like black holes, or even living cells (nobody really knows just now)?

    What could that mean? I'd say right now we have no real idea. But of course we don't; we've always been unable to predict how new technology will change the world as we know it.
  10. arfa brane call me arf Valued Senior Member

    So, intuitively, we can only develop new technology that is based on existing knowledge, and so on what we know about the technology we already have; that is, on theories which "explain" it, its "workings" and its "usefulness".
    Which means it's based on what we conceive of as "information" we can collect from things we designate as "outputs" for abstract signals, which encode this information.

    But physically, information is there whether or not we can collect it. In other words, the nature of information is pretty much a decision we make, based on theories of its collection and storage in some kind of archive. We invoke theories of information transmission from senders to receivers: "observers" of information. Thus, quantum information itself cannot "escape" this paradigm, or so we think.

    So we're in some sense (one we don't really understand) constrained to a context where information is encoded in abstract signals. Because we apply an input-to-output context, we have to choose which is which, and we have to choose an encoding.

    The thing is that a quantum of energy can be encoded in many ways. When an electron "absorbs" a photon, that is equivalent, in terms of the energy, to an electron "emitting" a photon: the encoding is either a change in the electron's momentum or the (equivalent) photon. So our decision about which is an input and which is an output appears to depend on the direction of time (we don't understand photons as being able to "encode momentum" into the past).
    Last edited: May 15, 2018
  11. arfa brane call me arf Valued Senior Member

    Apart, then, from the problems we still have constructing large enough "qubit" ensembles, and then encoding them all at the same time (which it seems will only ever be possible statistically, such that we have a "fiducial" signal of some kind), we don't yet understand why we need to connect the outputs back to the inputs (in a virtual way, i.e. not physically): everything, it seems, must be in a superposition, even when there is no detectable input or output.

    How strange. As if something in the past "knows about" something in the future. Or are we just somehow confused about the nature of information and its transmission and storage?
    Moreover, a qubit doesn't have to stay in a "stable" state for very long in order to "store" a signal. And the output signal corresponds to correlations between outputs, not so much to the outputs themselves, since individually these are random.
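    The point that the signal lives in the correlations rather than in the individual outputs can be sketched with an idealized entangled pair. This is a toy probability model I'm assuming (perfect Bell pair, ideal measurements in a shared basis), not real hardware:

```python
# Sketch: for a maximally entangled pair measured in the same basis,
# each outcome alone looks like a fair coin, but the pair always agrees.
import random

random.seed(0)

def measure_bell_pair():
    """Idealized |00>+|11> state: outcomes are individually random but equal."""
    outcome = random.randint(0, 1)   # 0 or 1, each with probability 1/2
    return outcome, outcome          # perfectly correlated pair

samples = [measure_bell_pair() for _ in range(10_000)]
mean_a = sum(a for a, _ in samples) / len(samples)          # ~0.5: one output alone is noise
agreement = sum(a == b for a, b in samples) / len(samples)  # 1.0: the correlation carries the signal
print(mean_a, agreement)
```

    Looking at either column of outputs on its own tells you nothing; only the joint statistics, the agreement between the columns, is informative.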
    Last edited: May 16, 2018
  12. arfa brane call me arf Valued Senior Member

    About the question (concerning a quantum computer even one we haven't built yet, i.e. one that exists in our "technological" future, along with a classical causal chain of events: an extension of a chain reaching into the past that includes modern electronics, the transistor and other semiconducting devices, a "class" of materials that now includes graphene, many kinds of doped silicon and so on, yada yada):

    If we, let's say, successfully predict that the technological problems with fabricating and then operating large-scale quantum ensembles, such as n x n arrays of qubits, will all be conquered, then would such a computer be able to solve problems like (ta da!) the black hole paradox?

    If the answer is yes, what will that mean? Will the answer depend on the "context switch": a quantum computer with a (large enough) number of qubits can "faithfully" simulate a black hole (akin to a physics model doing this on a classical computer, say in a video game, which projects, as it were, images onto an n x m array of pixels) . . .

    If instead we see problems with the simulations, we might have to assume it can't be done, that we will never have the technology.
    Or what? How might we conceive of testing a simulation of a black hole? What would the architecture of the quantum computer have to be so that we can have classical measurements, which moreover means we must also have classical inputs (unless we want to throw causality away)?
