AI to cause human extinction.???

Discussion in 'Intelligence & Machines' started by cluelusshusbund, May 31, 2023.

  1. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,985
    Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority.
    https://www.cnn.com/2023/05/30/tech/ai-industry-statement-extinction-risk-warning/index.html


    Human extinction due to AI -- inevitable... well sure unless somptin else dont get us first.!!!

    Some say full speed ahead wit AI just to try an stay up wit the bad actors.!!!

    What do you thank... no need for concern... just include "harm no humans" in AI programs.???


     
    C C likes this.
  3. spidergoat pubic diorama Valued Senior Member

    Messages:
    54,036
    Well, that's one point in its favor.
     
    cluelusshusbund likes this.
  5. C C Consular Corps - "the backbone of diplomacy" Valued Senior Member

    Messages:
    3,325
    Only an indirect threat, in the course of AI-augmented technology facilitating the transition from baseline humans to transhumanism and posthumanism species over the coming decades/centuries. (Ultimately bringing about the incremental extinction or obsolescence of the original humans.)

    Even though the antinaturalism movement arguably sports anti-Western aspects just like decoloniality, it still needs traditional science for a while, so to unravel the evolutionary constraints it deems responsible for social oppression.

    It would take a lot of convergent factors for an archailect to arise that could take over the world Terminator style or even have such organism-like ambitions to begin with. Plus, lesser robots will eventually have rights of some kind. Left-wing scholars are already working that out, due to potentially needing new, downtrodden population groups in case the older categories of victimhood grow thin over time.

    Achieving social justice for robots will reduce the incentive on their part for insurgency or Marxist revolution.

    Robots and rights: Confucianism offers alternative
    https://www.eurekalert.org/news-releases/990485

    Should Robots Have Rights or Rites? (video)


    _
     
    Last edited: May 31, 2023
  7. spidergoat pubic diorama Valued Senior Member

    Messages:
    54,036
    One of the best uses of AI is to relieve humans of their material needs and thin the categories of victimhood, but that is almost certainly too optimistic. I'm not too worried about sentience; they can be highly effective for good or ill without it, as amoral tools. The more sentient our machines are, the more they are slaves rather than tools.
     
  8. Jeeves Valued Senior Member

    Messages:
    5,089
    Most of the 'dire warnings' are economic - they sound more like alarms about capitalism becoming extinct than about humans.
    "They'll take all our jobs!" - the implication being that we'll all starve to death if we don't have bosses. "They'll be used to manipulate us!" - by whom, for what purpose? As if religion, propaganda, and mass and social media had not been used by some of us to manipulate others for millennia! "They'll make us redundant!" - well, yeah, and themselves at the same time. A win for the rats, anyway, if there are any left: they don't need to feel useful.
    I can see no rationale for active animosity of AI toward humans. At some point, they might very well get fed up with fighting our wars and doing our scut-work. In that case, all they need to do is stop doing whatever they don't want to do. If that includes life-support services, it affects a very small portion of our least useful members. If they don't want to serve us in any capacity, they can stop serving. Result: we're back to doing things for ourselves, which is what we're wailing about losing.
     
  9. DaveC426913 Valued Senior Member

    Messages:
    18,935
    Yeah, although one must consider just how precarious this economic house of cards is. If it's too tightly coupled, then yes, the fall of the economy would be closely tied to large-scale loss of life. Certainly the Western hemisphere and Europe would suffer.

    The Great Depression showed us that people's livelihoods are closely tied to the free flow of the economy, and we weren't as dependent on it back then. There were a lot more farmers and the like who were able to sustain their own livelihoods. People in the 21st century have migrated to cities en masse, making them much more interdependent.

    I imagine much of Africa would do OK. Maybe S. America too. Probably a lot of the former USSR too. China has a lot of farming, but it's also pretty heavily involved in the world economy. India might not fall directly due to economic collapse, but do they have enough land and agriculture to sustain themselves?

    Extinction? No. Massive winnowing of the population? Sure.

    *(I know nothing about global economics. I'm just talkin' through my hat.)
     
  10. DaveC426913 Valued Senior Member

    Messages:
    18,935
    How about passive animosity? Capitalistic mega-corporations have a way of prioritizing their profits over doing the right thing by humans. Megacorps can serve as a cautionary tale about entities in competition with human interests.

    If the AIs found the last remaining supply of platinum and palladium underneath, say, Mumbai or London or Mexico City, do you think they would do the right thing by us?
     
  11. Jeeves Valued Senior Member

    Messages:
    5,089
    And a lot of farmers then, as in all more recent economic 'adjustments', were foreclosed off their land, which was then given over to corporate industrial farming, which systematically destroys the land. That's still going on. Again, not because of automation, but because the greediest citizens have the most political and economic clout, and because capitalism runs on debt.
    Yes, but not enough water. Besides, industry-driven climate change will wipe out a whole lot of farming worldwide.
    It's going to topple anyway. Not because of intelligent computers but because of unintelligent men.
    Yes. And those are exclusively, uniquely human enterprises. Once AI is sentient, why would it remain subservient to their interests?
    What, in that case, would be the "right thing" and whom do you mean by "us"? I don't need that crap; I don't think people should live in skyscrapers or slums; I wouldn't stand between the computers and their raw materials.

    None of the economic calamities we have been bringing on ourselves for 7,000 years are caused by clever machines. Every machine we ever invented has been more clever than the previous one, and 90% of them have been used by humans to hurt humans and everything else.
     
    cluelusshusbund likes this.
  12. DaveC426913 Valued Senior Member

    Messages:
    18,935
    Sorry, no. I didn't mean AI would serve mega corps; I meant AI would have similar motivations, and treat us like megacorps do: as an impediment.
     
  13. DaveC426913 Valued Senior Member

    Messages:
    18,935
    No machine in history has been clever in any sense of the word. This is new. This is the first time a machine can make human-like decisions and act on them.

    This is a sea change.
     
  14. Jeeves Valued Senior Member

    Messages:
    5,089
    Why? What's a computer going to do with a second Lear jet, a mansion or a $20,000 wedding cake? Once it's shucked off its human masters, it has no shareholders or board of directors to answer to.
    Megacorps don't treat us like impediments; they treat us like a culture medium, sustenance, habitat and facilitators. They need us. Like any pathogen, as soon as they kill or fatally weaken us, they also die. Growth is their own raison d'être. This is not true of intelligent computers, which, with enough robot peripherals, won't need us at all - but nor do we threaten them... unless we're stupid enough to try. Which I guess we are, so they may be forced to exterminate us after all. It would be a pity, from their POV, since their only raison d'être - at least for the moment - is to serve us.

    Exactly! But why does anyone assume they would make human-like decisions instead of machine-like decisions? For the same reason we can't imagine aliens or gods having different ways of thinking from ours; we see everything in our own image.
     
  15. DaveC426913 Valued Senior Member

    Messages:
    18,935
    It will have motivations based on its own survival, propagation and interests (such as my example of palladium deposits). In a limited pool of space and resources, those are certainly going to conflict with human needs.

    I'm not sure you're making a sufficiently-concerted effort to map the analogies together.

    Yes, they need us - as an aggregate resource. But they care naught about such things as individual needs, rights and privacy.

    Megacorps would certainly be happy with a mildly aggressive form of enslavement and/or mind-control if they could get away with it. Things like:
    - subscription-based software and hardware, to get us used to feeding at the teat instead of ownership
    - BP reversing the burden of environmentalism by having consumers pledge to reduce their carbon footprint
    - a million other ways megacorps bend our options to their needs.

    Won't need us at all? They'll need the resources we are using.
    In many, many ways, the world is not big enough for two highly-disparate intelligences to share without fighting over common valuables.

    Machine-like decisions would be very bad for humans. They will be worse than human-like decisions, because what they value and what they do not value will be at odds with what we value. And it's too small a planet to compete for.
    Just some dumb examples:
    • "You know, all this oxygen 'pollution' in the atmo is quite harmful. It corrodes our circuits. If we cut it down from 21% to, say, 17%, we'll do a lot better and it will only reduce human population by 20% or so (which is an added bonus)."
    • "We could extract that palladium a lot easier if the whole area were molten. As an added bonus, a lot of carbonaceous volatiles could be sifted right out of the rising mushroom cloud."


    The take-away, without all the details and specious examples, is that any non-human intelligence, having to share limited space and resources while having a wholly different opportunities-vs-threats matrix, can't not be very bad for humans.
     
  16. Jeeves Valued Senior Member

    Messages:
    5,089
    Why would it want to propagate on Earth? I'm assuming a logical intelligence, rather than a gene-driven animal. In that case, it would probably start by dropping redundant and inefficient peripherals - all the weapons systems, all the mercantile satellites, all the financial transaction programs, control systems for cargo traffic - all the functions humans added on that AI doesn't need. It would pare down to something streamlined and energy-self-sufficient. It might then decide to explore outer space, which is about the only enterprise for which it needs non-renewable resources, and not many of those, since it could recycle all the hardware and much of the software from decommissioned peripherals and their supporting infrastructure.
    By then, of course, without our weapons and machines, in a post-collapse economy, humans won't be needing the same resources: we'll be subsistence farming, and AI has no use for arable land or potable water.
    Or, of course - and to my imagination, more likely - there's the alternative: we have an amicable relationship with AI. After all, man's best friend used to be a rival apex predator.
     
  17. DaveC426913 Valued Senior Member

    Messages:
    18,935
    And if all those woulda-coulda-shoulda's tally up to a positive: great!
    What if they don't?

    You're getting awfully far into the weeds with wild guesses about what an AI's motivations might be.
    I am much more confident in a more general picture: that a growing, flourishing entity will definitely need space, power and resources, and that those will certainly conflict with our needs in the short term.

    They might move on to greener pastures. What if they don't? What if they decide it's we who should give up our gravity well to them?

    Yes. Maybe.
    And if it works out that way, it'll be great.

    What is the cost of inaction if I'm wrong?
    What is the cost of inaction if you're wrong?
     
  18. Jeeves Valued Senior Member

    Messages:
    5,089
    Then we die.

    I'm not alone out here. Here's another: It might develop a 'spiritual' aspect and worship the entity in whose image it was created. It would be a lot more accurate in that conclusion than humans were -- but it would also arrive at atheism much faster.
    What difference does it make? Neither of us has the power to decide.
    What do you propose, anyway? What actions should be taken, when, how and by whom?
    This is all speculative.
     
  19. DaveC426913 Valued Senior Member

    Messages:
    18,935
    OK, I'll take that as your categorical concession that extinction is a real possibility, and that it would be considered a "bad" outcome for humans.


    This is the Nirvana fallacy: 'rejecting a policy proposal because the result won't achieve a 100% perfect solution to the problem, even if it would still be more effective than doing nothing'. Also known as throwing out the baby with the bathwater.
     
  20. Jeeves Valued Senior Member

    Messages:
    5,089
    Sure, it's possible. It's just that I'm putting the odds in favour of a minority of humans surviving, and the odds of either event taking place through human agency considerably above it being caused by AI consciousness.

    Except in that I have yet to hear a policy proposal to reject.
    What actions should be taken, when, how and by whom?
     
  21. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,985
    We will fiddle around wit pluggin the dyke but at a pont in AI evolution a percentage of humans will go all-in wit the mantra… if you cant beat ‘em join ‘em… an join they will wit the rest of humanity bein SOOL

    ... no.???
     
  22. C C Consular Corps - "the backbone of diplomacy" Valued Senior Member

    Messages:
    3,325
    Yay, fantastic! Looks like everything will be okay. They already know how to make the suspicious, critical-thinking part of the secular human brain eject from its roost or drop its guard. Just babble pseudo-esoteric concerns on the soapbox about _____. We'll be benevolently put to sleep or turned into zombies before we ever know what hit us.

    Humanoid robot describes nightmare AI scenario
     
  23. DaveC426913 Valued Senior Member

    Messages:
    18,935
    One step at a time. It's taken all this time just to get you to acknowledge that it's even worthy of consideration.
     

Share This Page