Philosophy Updates


Topic's dual purpose:

1. Provide recent (and sometimes old) items that may serve as candidates for starting separate discussion threads.

2. Provide the overall philosophy section with a source for keeping up with happenings, developments, issues, and interviews.
Philosophy, Bullshit, and Peer Review

ABSTRACT: Peer review is supposed to ensure that published work, in philosophy and in other disciplines, meets high standards of rigor and interest. But many people fear that it is no longer fit to play this role. This Element examines some of their concerns.

It uses evidence that critics of peer review sometimes cite to show its failures, as well as empirical literature on the reception of bullshit, to advance positive claims about how the assessment of scholarly work is appropriately influenced by features of the context in which it appears: for example, by readers' knowledge of authorship or of publication venue. Reader attitude makes an appropriate and sometimes decisive difference to perceptions of argument quality.

This Element finishes by considering the difference that author attitudes to their own arguments can appropriately make to their reception.
With respect to #1, superficial digital cloning is the best you're going to get, unless some future brain implant mediated by AI can actually tap into your thoughts and sensory data and record/organize a crude identity template (that will still fall short of the original).
- - - - - - - - - - -

(Dec 4) Could you move from your biological body to a computer?

EXCERPT: The feasibility of mind uploading rests on three core assumptions.
  • first is the technology assumption – the idea that we will be able to develop mind uploading technology within the coming decades
  • second is the artificial mind assumption – the idea that a simulated brain would give rise to a real mind
  • and third is the survival assumption – the idea that the person created in the process is really “you”. Only then does mind uploading become a way for you to live on.
How plausible is each of these? (MORE - missing details)
How Nietzsche’s insights can help fight fanaticism

EXCERPTS: Fanatical thinking is based on a narrative of resentment toward outgroups. Nietzsche offers two ways of changing the script.

The 19th-century philosopher Friedrich Nietzsche had firsthand experience with fanaticism. In 1885, his sister Elisabeth married Bernhard Förster, a far-Right, antisemitic activist. [...] Nietzsche despised him, telling his sister that her marriage was ‘one of the greatest stupidities you have ever committed … Your association with an anti-Semitic chief expresses a foreignness to my whole way of life which fills me ever again with ire and melancholy.’

[...] Perhaps unsurprisingly, Nietzsche offers us some of the sharpest philosophical insights into the origins and consequences of fanaticism – insights that remain as relevant in our modern social world as they were in his time. By engaging these ideas, we can better understand contemporary fanaticism and develop strategies to undermine it...
I saw only the claim that the "brain is not a computer", without any kind of proof or even an example, such as Wiltshire and nine other savants who rely strictly on memory to describe their reality.
So beyond the bare assertion that thinking is not computing, I saw no explanation of why and how thinking is different.


IMO, I see no problem with identifying human brains as biological computers. To say brains are not computers is misleading.
The problem does not lie in computing power, but in data compatibility and processing utility.

The question is how to convert data or results from one OS to the other, which is what I am doing this very moment.
I have translated my thoughts via the keyboard and deposited them into my computer. The result is now contained in both my biological brain and my computer's memory.

I just read this, and it seems to ask the proper question:

The technology assumption
Trying to simulate the human brain would be a monumental challenge. Our brains are the most complex structures in the known universe. They house around 86 billion neurons and 85 billion non-neuronal cells, with an estimated one million billion neural connections. For comparison, the Milky Way galaxy is home to about 200 billion stars.
That said, there are other obstacles. Creating a static brain map is only one part of the job. To simulate a functioning brain, we would need to observe single neurons in action. It’s not obvious whether we could achieve this in the near future.
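The scale comparison in the excerpt above can be sanity-checked with quick back-of-envelope arithmetic (all figures are the article's own estimates, not measurements):

```python
# Back-of-envelope check of the scale figures quoted above.
neurons = 86e9            # ~86 billion neurons
non_neuronal = 85e9       # ~85 billion non-neuronal cells
connections = 1e15        # ~one million billion neural connections
milky_way_stars = 200e9   # ~200 billion stars

# Neural connections outnumber the galaxy's stars by roughly 5,000 to 1.
print(connections / milky_way_stars)   # → 5000.0
print((neurons + non_neuronal) / 1e9)  # total brain cells, in billions → 171.0
```

So the comparison actually understates things: it is the connections, not the cells, that dwarf the Milky Way's star count.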


If we cannot copy data from Windows OS to Apple OS, what leads one to believe that data can be transferred from the purely electromagnetic OS of AI (artificial intelligence) to the electrochemical OS of BI (biological intelligence)?
Occam's razor is the only feature that differentiates science from pseudoscience, etc?

PRESS RELEASE: Occam's razor – the principle that, when faced with competing explanations, we should choose the simplest that fits the facts – is not just a tool of science. Occam's razor is science, insists a renowned molecular geneticist from the University of Surrey.

In a paper published by Annals of the New York Academy of Sciences, Professor Johnjoe McFadden argues Occam's razor – attributed to the Surrey-born, Franciscan friar, William of Occam (1285 – 1347) – is the only feature that differentiates science from superstition, pseudoscience or fake news.

Professor McFadden said: "What is science? The rise of issues such as vaccine hesitancy, climate scepticism, alternative medicine, and mysticism reveals significant levels of distrust or misunderstanding of science amongst the general public. The ongoing Covid enquiry also highlights how scientific ignorance extends into the heart of government. Part of the problem is that most people, even most scientists, have no clear idea of what science is actually about."

Factors often cited as being the essence of science, such as experimentation or mathematics, are widely used in disciplines as diverse as gardening, accounting, cooking or astrology. Alchemists performed thousands of experiments attempting to transform base metal into gold but got nowhere, whereas astrologers use mathematics to calculate horoscopes. Neither is considered science. But why? [Continued under Spoiler]

William of Occam insisted that science is the search for the simplest solutions. Occam's razor was adopted by Copernicus, Kepler, Galileo and Newton to, for example, argue that Earth orbits the sun, not the other way around, because it is simpler. They used the razor to clear a path through mysticism, superstition, and religion to found modern science. The razor continues to be invaluable, helping to predict, for example, the Higgs boson.

Professor McFadden continued:

"Whereas practitioners of mysticism, alternative medicine, pseudoscience or fake news can invent spirits, demons, conspiracies or Elvis on the moon to make sense of their world, scientists will always adopt the simplest solution to even the most complex problems. That is the beauty of Occam's razor."

"While mysticism, alternative medicine, and fake news often resort to elaborate explanations like spirits or moon-landing conspiracies, scientists seek the simplest solutions to complex problems. Today's world, riddled with pseudoscience and misinformation, partly stems from a poor grasp of science. Often taught as a jumble of obscure theories and complex equations, science can overwhelm students, driving them away. However, portraying science as a method to find simple explanations for our world's complexities, using experimentation, mathematics, and logic, could make it accessible to all, including politicians."
However, portraying science as a method to find simple explanations for our world's complexities, using experimentation, mathematics, and logic, could make it accessible to all, including politicians.
I believe that searching for "common denominators" in different patterns is one way to satisfy Occam's principle.
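One toy way to make that simplicity preference operational (my own sketch, not anything from McFadden's paper) is penalized model comparison: a complex model must fit the data enough better to pay for its extra parameters. All numbers below are hypothetical.

```python
import math

# Occam's razor as a complexity penalty in model selection.
# Two hypothetical models fit the same n data points; the complex one
# fits slightly better but uses far more parameters.
n = 100  # number of observations (hypothetical)

models = {
    "simple":  {"params": 2, "log_likelihood": -120.0},
    "complex": {"params": 9, "log_likelihood": -118.5},
}

def bic(m):
    # Bayesian information criterion: lower is better.
    # Penalty grows with parameter count; reward grows with goodness of fit.
    return m["params"] * math.log(n) - 2 * m["log_likelihood"]

best = min(models, key=lambda name: bic(models[name]))
print(best)  # → simple: the small gain in fit doesn't pay for 7 extra parameters
```

The razor here is the `params * log(n)` term: with these made-up fits, the simpler hypothesis wins even though the complex one explains the data marginally better.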
Consciousness does not require a self

The idea that consciousness requires a self has been around since at least Descartes. But problems of infinite regress, neuroscientific studies, and psychedelic experiences point to a different reality. 'You' may not be what you seem to be, writes James Cooke.

- - - - - - - - - - - - - -

The moral imperative to learn from diverse phenomenal experiences

EXCERPTS: . . . Take the case of Blake Ross, the co-creator of the Firefox web browser. For the first three decades of his life, Ross assumed his subjective experience was typical. After all, why wouldn’t he?

Then he read a popular science story about people who do not have visual imagery. While most people can, without much effort, form vivid images in their ‘mind’s eye’, others cannot – a condition that has been documented since the 1800s but only recently named: aphantasia. Ross learned from the article that he himself had aphantasia.

[...] ‘I have never visualised anything in my entire life,’ Ross wrote in Vox in 2016. ‘I can’t “see” my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on 10 minutes ago… I’m 30 years old, and I never knew a human could do any of this. And it is blowing my goddamn mind.’

There is a kind of visceral astonishment that accompanies these types of hidden differences. We seem wedded to the idea that we experience things a certain way because they are that way. Encountering someone who experiences the world differently (even when that difference seems trivial, like the colour of a dress) means acknowledging the possibility that our own perception could be ‘wrong’.

And if we can’t be sure about the colour of something, what else might we be wrong about? Similarly, for an aphantasic to acknowledge that visual imagery exists is to realise that there is a large mismatch between their subjective experiences and those of most other people.

[...] There is a scientific and moral imperative for learning about the diverse forms of our phenomenology. Scientifically, it prevents us from making claims that the majority experience (or the scientist’s experience) is everyone’s experience. Morally, it encourages us to go beyond the ancient advice to ‘know thyself’ which can lead to excessive introspection, and to strive to know others. And to do that requires that we open ourselves up to the possibility that their experiences may be quite different from our own... (MORE - missing details)
The NASA podcast page also features a transcript.
- - - - - - - -

Episode 121: The Artemis and ethics report explained

INTRO: In this episode, we chat with Dr. Zach Pirtle, a policy analyst for NASA’s Office of Technology, Policy and Strategy about NASA’s Artemis and Ethics workshop, which explored the ethical, legal, and societal implications of its Artemis and Moon to Mars missions.

Join us as Dr. Zach Pirtle delves into the ethical and societal dimensions of NASA’s Artemis and Moon to Mars missions in the podcast “Towards a Lunar Code: The Artemis and Ethics Report Explained.” In this discussion, Dr. Pirtle highlights the workshop’s efforts to address key questions about how NASA should consider the ethical and societal implications of space exploration.

The podcast covers a wide range of topics, including sustainability on the Moon, the balance between humans and robots in space missions, and the importance of engaging underrepresented groups in these ethical discussions. We’ll explore the complex challenges and thoughtful considerations that underpin NASA’s commitment to responsible and value-driven space exploration.

In this episode you’ll learn about:
  • The critical importance of addressing the ethical and societal implications of NASA’s Artemis and Moon to Mars missions, going beyond just the technical and scientific aspects of space exploration.
  • How sustainability on the Moon is a key concern, with a focus on balancing current generational needs with the responsibility of preserving lunar environments for future generations.
  • The challenges and benefits of engaging with historically underrepresented groups, science fiction authors, artists, and experts from the humanities to enrich the dialogue on space exploration ethics.
  • The complexities of international collaborations in space exploration and how deconfliction and safety measures are crucial when multiple actors are present on the lunar surface. (MORE - the podcast/transcript)
As exemplified by some continental philosophers and humanities scholars, "better thinking" wouldn't necessarily ensure less reality impairment. If your original premises or starting dogma is out of sync with how the world works -- or loaded with a biased agenda -- then improved consistency in terms of how you conduct a reasoning process and what you output from it isn't going to turn a rock into an edible.

Akin to the many intellectuals still trying to salvage Marxism. (Which seems especially daft when there are so many flourishing offshoots, thanks to pioneers like Gramsci who abstracted from Marx what the broader theme was that subsumed his ideology.)

- - - - - - - - - - - - -

Does Studying Philosophy Make People Better Thinkers?

ABSTRACT: Philosophers often claim that doing philosophy makes people better thinkers. But what evidence is there for this empirical claim? This paper reviews extant evidence and presents some novel findings.

After discussing the oft-mentioned question of standardized testing scores, we review research on Philosophy for Children and critical thinking skills among college students. We then present new empirical findings, indicating that on average philosophers are better at logical reasoning, more reflective, and more open-minded than non-philosophers.

We also present some preliminary and suggestive evidence that, although some of these differences are due primarily to selection effects, others may not be. Accordingly, a key takeaway is that more data are needed. We conclude with concrete suggestions for philosophers and philosophy departments interested in gathering data to support the claim that studying philosophy makes people better thinkers.
How do reasonable people disagree?

INTRO: U.S. politics is heavily polarized. This is often regarded as a product of irrationality: People can be tribal, are influenced by their peers, and often get information from very different, sometimes inaccurate sources.

Tribalism and misinformation are real enough. But what if people are often acting rationally as well, even in the process of arriving at very different views? What if they are not being misled or too emotional, but are thinking logically?

“There can be quite reasonable ways people can be predictably polarized,” says MIT philosopher Kevin Dorst, author of a new paper on the subject, based partly on his own empirical research...
The "they" pronoun below is singular, not plural; Torres is non-binary.
- - - - - - - -

‘What if everybody decided not to have children?’ The philosopher questioning humanity’s future

EXCERPTS: Émile Torres [...] thinks that it would not be a bad thing if humanity ceased to exist. “...most pro-extinctionists would say that most ways of going extinct would be absolutely unacceptable. But what if everybody decided not to have children? I don’t see anything wrong with that.” [see Voluntary Human Extinction Movement]

Torres has just written a book called Human Extinction: A History of the Science and Ethics of Annihilation. [...] Their basic thesis is that while human extinction is an ancient concern, the rise of Christianity removed it from public discourse. ... For Torres, the happiness of future generations is pure abstraction. As they do not exist, there is no loss if they never do exist. As they write:

“I am, tentatively, inclined to agree with Schopenhauer’s sentiment that Being Never Existent would have been best. Those who disagree with this find themselves in the uncomfortable position of arguing that all the good things that have happened throughout human history can somehow compensate for, or counterbalance, all the bad things that have happened – a claim that, I believe, most people would find difficult or impossible to justify after a few minutes of reflecting on the most horrific crimes and atrocities of our past.”

[...] Torres’s position could be viewed as worryingly nihilistic. They reject that label, pointing out that their concern is the avoidance of human suffering, including any suffering that would occur through humanity’s extinction. While they can appreciate the theoretical benefits of not existing, in practice it’s not an end they wish to see or promote... (MORE - missing details)

RELATED (wikipedia): Antinatalism
AI consciousness: scientists say we urgently need answers

EXCERPTS: “Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress,” says Robert Long, a philosopher at the Center for AI Safety...

[...] In comments to the United Nations, members of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness.

[...] humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don't recognize that an AI system has become conscious, we might inflict pain on a conscious entity. Wrongly attributing consciousness would also be problematic ... because humans should not spend resources to protect systems that don't need protection.

[...] Some of the questions raised by the AMCS to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes... (MORE - missing details)
Does quantum theory imply the entire Universe is preordained?

INTRO: Was there ever any choice in the Universe being as it is? Albert Einstein could have been wondering about this when he remarked to mathematician Ernst Strauss: “What I’m really interested in is whether God could have made the world in a different way; that is, whether the necessity of logical simplicity leaves any freedom at all.”

US physicist James Hartle, who died earlier this year aged 83, made seminal contributions to this continuing debate. Early in the twentieth century, the advent of quantum theory seemed to have blown out of the water ideas from classical physics that the evolution of the Universe is ‘deterministic’. Hartle contributed to a remarkable proposal that, if correct, completely reverses a conventional story about determinism’s rise with classical physics, and its subsequent fall with quantum theory. A quantum Universe might, in fact, be more deterministic than a classical one — and for all its apparent uncertainties, quantum theory might better explain why the Universe is the one it is, and not some other version... (MORE - details)
Another Freud fad cycle. No surprise. In this post-Western era where knowledge is decolonized, meritocracy is fading, and native or regional traditional beliefs revived and elevated once more (after systemic oppression of the past)... There will certainly be a welcoming door for even old pseudoscience of the West to be reinvigorated, too. Even the current tsunami of research paper retractions can be solved in that new spirit by likewise lowering the standards of guidelines and peer review. (The latter are so Eurocentric slash racist and WEIRD to begin with, after all.) ;)
- - - - - - - - - - -

‘Psychoanalysis has returned’: why 2023 brought a new Freud revival

EXCERPTS: He [Freud] is currently emerging from a few decades of denigration, given one slide at most (and only to be called a quack) in introductory psychology classes, reduced to prurient and possibly antisemitic stereotypes in the culture, his ideas brushed aside for their “lack of evidence”.

The idiom of psychoanalysis remains a conventional, shorthand way to discuss romance and parent-child relationships [...] It describes the way we think about categories of working relationships proximate and distant ... and admiring one-sided relationships with intimate strangers ... The power of psychoanalysis remains. As Harold Bloom, whom I would not champion otherwise, bluntly put it: "Throwing Freud out will not get rid of him, because he is inside us. His mythology of the mind has survived his supposed science, and his metaphors are impossible to evade."

This is not the first Christmas that has brought murmurs of a Freud revival. [...] Zooming ahead to our century, we’ve been treated to both documentary and biopic...

Yet this resurrection of Freud feels different, because more and more people are not just interested in what Papa Freud said, but in trying out his method of treatment themselves... (MORE - missing details)
Kurt Gödel's argument for life after death

EXCERPTS: As the foremost logician of the 20th century, Kurt Gödel is well known for his incompleteness theorems and contributions to set theory, the publications of which changed the course of mathematics, logic and computer science.

[...] Thanks to Marianne’s direct question about Gödel’s belief in an afterlife, we get his mature views on the matter. She asked him for this in 1961, a time when he was in top intellectual form and thinking extensively about philosophical topics at the Institute for Advanced Study (IAS) in Princeton, New Jersey, where he had been a full professor since 1953 and a permanent member since 1946. The nature of the exchange compelled Gödel to detail his views in a thorough and accessible manner... (MORE - details)
Torture & Ticking Bombs

Edward Hall is sceptical about this infamous ethical example’s usefulness.

EXCERPTS: Philosophers love thought experiments, and few have been as influential in contemporary moral and political philosophy as ‘the ticking bomb’.

The idea was famously employed by Michael Walzer in his seminal treatment of the problem of dirty hands (Political Action, 1973), and has been the topic of heated discussion ever since.

[...] The ticking bomb is commonly invoked to justify torturing terrorist suspects, and the thought experiment pervades media discussion of this issue. ... However, the suggestion that the ticking bomb scenario justifies the use of torture in emergency situations has been subjected to penetrating criticism... (MORE - details)