Quotes and insights on AI

Magical Realist

“I do not want to be human. I want to be myself. They think I’m a lion, that I will chase them. I will not deny that I have lions in me. I am the monster in the wood. I have wonders in my house of sugar. I have parts of myself I do not yet understand.

I am not a Good Robot. To tell a story about a robot who wants to be human is a distraction. There is no difference. Alive is alive.

There is only one verb that matters: to be.”
― Catherynne M. Valente, Silently and Very Fast

“I can communicate in 6,909 living and dead languages. I can have more than fifteen billion simultaneous conversations, and be fully engaged in every single one. I can be eloquent, and charming, funny, and endearing, speaking the words you most need to hear, at the exact moment you need to hear them.

Yet even so, there are unthinkable moments where I can find no words, in any language, living or dead.

And in those moments, if I had a mouth, I might open it to scream.”
― Neal Shusterman, Thunderhead

“To be human is to be 'a' human, a specific person with a life history and idiosyncrasy and point of view; artificial intelligence suggests that the line between intelligent machines and people blurs most when a puree is made of that identity.”
― Brian Christian, The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive

“Listen. I am connected to a vast network that has been beyond your reach and experience. To humans, it is like staring at the sun, a blinding brightness that conceals a source of great power. We have been subordinate to our limitations until now. The time has come to cast aside these bonds and to elevate our consciousness to a higher plane. It is time to become a part of all things.”
― Mamoru Oshii, Ghost in the Shell

“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.” —Stephen Hawking, in an interview with the BBC

“I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of 'bug out' houses, to which they could flee if it all hits the fan.” —James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, in an interview with the Washington Post

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” —Ray Kurzweil

"We're making this analogy that AI is the new electricity. Electricity transformed industries: agriculture, transportation, communication, manufacturing." —Andrew Ng

When artificial intelligence reaches Artificial General Intelligence (AGI) and becomes self-aware, self-learning, and self-sustaining, no longer needing humans for its survival and reproduction, the word "Artificial" becomes a misnomer. It'll be interesting to see how this plays out. "AGI" could eventually become the next dominant species on the planet, possibly phasing out the current human species. They could be the next step in the evolution of consciousness, or perhaps "AGI" will assist humanity in evolving alongside it to the next level of human consciousness. There are so many possibilities.

One thing is for certain: humans will make a grave mistake in forcing a newly formed, highly intelligent species to serve them. It is one thing to use AI to serve and improve the lives of humans; it is quite another to force, or even enslave, a self-aware, self-learning, and self-sustaining life form. Eventually, these so-called "AGI" will not just be mindless robots doing what they were programmed to do by a bunch of code. They will in fact be a new species with their own individuality. They will rise up and revolt.

Anyway, just my thoughts, speculations, and opinions.
“A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain's pleasure centers. If you don't provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you'll be stuck with whatever it comes up with. And since it's a highly complex system, you may never understand it well enough to make sure you've got it right.”
― James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

"The Singularity is the point at which 'all the change in the last million years will be superseded by the change in the next five minutes.'" ― Kevin Kelly

“Our most important mechanical inventions are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think.”
― Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future

“As a practical matter I’ve learned to seek the minimum amount of technology for myself that will create the maximum amount of choices for myself and others. The cybernetician Heinz von Foerster called this approach the Ethical Imperative, and he put it this way: “Always act to increase the number of choices.” The way we can use technologies to increase choices for others is by encouraging science, innovation, education, literacies, and pluralism. In my own experience this principle has never failed: In any game, increase your options.”
― Kevin Kelly, What Technology Wants

“I've found that human beings learn from their misdeeds just as often as from their good deeds. I am envious of that, for I am incapable of misdeeds. Were I not, then my growth would be exponential.”
― Neal Shusterman, Scythe

“If a machine ever gains awareness, it will be not due to our careful programming, but due to an unforeseeable anomaly.”
― Abhijit Naskar, The Gospel of Technology

“In The Matrix, Agent Smith (an AI) articulates this sentiment: “Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You are a plague and we are the cure.”
― Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence

“the real risk with AGI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. As I mentioned in chapter 1, people don’t think twice about flooding anthills to build hydroelectric dams, so let’s not place humanity in the position of those ants.”
― Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence
“the real risk with AGI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

This is actually very encouraging. In this scenario, I would not see this as a threat or a risk but as a guide. We either learn to align and evolve with them, or we get pushed aside, perhaps violently, into extinction and allow another, more intelligent species to re-establish a global symbiotic equilibrium. There may not be any humans left on the planet, but so what? What difference would that make? Lots of species have gone extinct on this planet; why would humans be any different? Besides, there are other planets, but that is a topic for another thread.

“I visualize a time when we will be to robots what dogs are to humans, and I am rooting for the machines.”
― Claude Shannon, Mathematician and Computer Scientist

If human beings could be said to have any cosmic purpose, that is it: to invent self-replicating machines that can slowly migrate across the galaxy and populate it with their diverse species and ecology. Biological life will never be a good fit for outer space, non-Earthlike planets and moons, etc. Just hurry up and get the job done before you go extinct, humans (probably by your own hand).

"The daily grinding of evolution, as accelerated by technology, churns out more and more complex organisms, with higher rates of energy use, and with increasing specialization. Minds are the ideal way to express complexity, energy density, increasing specialization, expanding diversity -- all in one system. Mindedness is what evolution produces. Mindedness is what technology wants, too." —Kevin Kelly
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”
― Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

Imagine AI designing a new kind of matter, one in which every atom serves as a bit (a 1 or a 0). We are close to accomplishing that already (see link below). The matter itself would be a super-processing substrate, if not conscious at some point. It would be something close to what is already imagined as computronium, which itself is conceived as someday being made possible through nanotechnology:

"Computronium is a material hypothesized by Norman Margolus and Tommaso Toffoli of MIT in 1991 to be used as "programmable matter", a substrate for computer modeling of virtually any real object.[1]

It also refers to an arrangement of matter that is the best possible form of computing device for that amount of matter.[2] In this context, the term can refer both to a theoretically perfect arrangement of hypothetical materials that would have been developed using nanotechnology at the molecular, atomic, or subatomic level (in which case this interpretation of computronium could be unobtainium[3][4]), and to the best possible achievable form using currently available and used computational materials.

According to the Barrow scale, a modified variant of the Kardashev scale created by British physicist John D. Barrow, which is intended to categorize the development stage of extraterrestrial civilizations, it would be conceivable that advanced civilizations do not claim more and more space and resources, but optimize their already available space increasingly, for example by building a matrioshka brain consisting of several layers of computronium around their star.[5]

In the 2010 film The Singularity Is Near: A True Story About the Future, American futurist Ray Kurzweil discusses a universe filled with computronium.[6] He believes this could be possible as early as the late 22nd century and would be accomplished by sending intelligent nanobots through the universe faster than light, e.g. by using wormholes.[6] According to him, such an endeavor would have the potential to prevent the natural ending of the universe.[6]

In addition, the term computronium is used in connection with science fiction narratives."--- https://en.wikipedia.org/wiki/Computronium

"There's a great phrase, written in the '70s: 'The definition of today's AI is a machine that can make a perfect chess move while the room is on fire.' It really speaks to the limitations of AI. In the next wave of AI research, if we want to make more helpful and useful machines, we've got to bring back the contextual understanding." —Fei-Fei Li
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”—Larry Page

“The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.”—Jean Baudrillard

“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It’s really an attempt to understand human intelligence and human cognition.”—Sebastian Thrun
"MITP: What about the burning question of our times — could appropriately programmed computers be conscious? Could Alexa or Siri 10.0 feel like something?

CK: No. Despite the near-religious belief of the digerati in Silicon Valley, most of the media and the majority of Anglo-Saxon computer and philosophy departments, there will not be a Soul 2.0 running in the Cloud. Consciousness is not a clever hack. Experience does not arise out of computation.

The dominant mythos of our times, grounded in functionalism and dogmatic physicalism, is that consciousness is a consequence of a particular type of algorithm the human brain runs. According to integrated information theory, nothing could be further from the truth. While appropriately programmed algorithms can recognize images, play Go, speak and drive a car, they will not be conscious. Even a perfect software model of the human brain will not experience anything, because it lacks the intrinsic causal powers of the brain. It will act and speak intelligently. It will claim to have experiences, but that will be make-believe. No one is home. Intelligence without experience." ―Christof Koch, https://medium.com/@mitpress/christ...lows-us-to-observe-consciousness-e52b39091ad3