Meaning Necessarily Implies an Experiencer

wesmorris

Nerd Overlord - we(s):1 of N
Valued Senior Member
I wrote this in partnership with several AIs because I thought it needed to be formalized, as I find the "meaning is just... out there man" crowd prone to misframing ideas.

Context​

This proof establishes a foundational relationship in the philosophy of mind by formalizing the necessary connection between meaning and experiencers. It provides an axiomatic constraint that any viable theory of consciousness must satisfy and establishes a transcendental condition for the possibility of epistemology itself.

Meta-Logical Status​

This proposition functions as a foundational axiom rather than a derived theorem. It identifies a constitutive relationship that cannot be further reduced without circular reasoning, serving as a prime principle from which other epistemological constraints may be derived.

Theorem Statement​

Theorem: The existence of meaning necessarily implies the existence of at least one experiencer.

Definitions​

  • Experiencer ($E$): [Primitive concept] An entity with subjective perspective.
  • Meaning ($M$): The significance, import, or relevance that arises from the act of experiencing by a subject.
  • Existence (in itself) ($X$): The state of being, independent of any experiencer.
  • Conscious Existence ($C$): Existence accompanied by an experiencer’s sense of meaning ($X \cap E$).

Premises​

  1. Meaning, by definition, requires a subject for whom significance arises.
  2. Without an experiencer, no meaning can be assigned or perceived.

Proof​

  1. Assume meaning exists: $\exists M$ (as a hypothetical premise)
  2. By definition, meaning necessarily involves significance, import, or relevance to an experiencer: $\forall m \in M, \exists e \in E : \text{MeaningfulTo}(m, e)$
  3. Therefore, if meaning exists, at least one experiencer must exist for that meaning to apply: $\exists M \implies \exists E$
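
The three steps above can be rendered as a machine-checkable sketch. Below is a minimal Lean 4 formalization; all names (`Meaning`, `Experiencer`, `meaningfulTo`, `premise`) are illustrative, and the definitional premise of step 2 is taken as an axiom rather than proven, exactly as the proof itself does:

```lean
-- Illustrative sketch only: the premise is assumed as an axiom,
-- mirroring the proof's definitional step 2.
axiom Meaning : Type
axiom Experiencer : Type
axiom meaningfulTo : Meaning → Experiencer → Prop

-- Step 2: every meaning is meaningful to some experiencer.
axiom premise : ∀ m : Meaning, ∃ e : Experiencer, meaningfulTo m e

-- Step 3: if any meaning exists, some experiencer exists.
theorem meaning_implies_experiencer :
    Nonempty Meaning → Nonempty Experiencer :=
  fun ⟨m⟩ => (premise m).elim fun e _ => ⟨e⟩
```

Nothing here escapes the circularity objection discussed later; the sketch only shows that the inference from premise to conclusion is formally valid.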

Conclusion​

The existence of meaning logically entails the existence of at least one experiencer. No subject → no meaning.

Corollaries​

  1. Conscious existence necessarily indicates an experiencer: $\exists C \implies \exists E$
  2. Existence without an experiencer is strictly an ontological state without meaning: $(X \wedge \nexists E) \implies \nexists M$

Addressing Fundamental Objections​

The transcendental nature of this proof invites several fundamental objections. Addressing these objections strengthens the proof by demonstrating its resilience against core criticisms.

Objection 1: Tautological Definitions​

Objection: The proof defines meaning in terms of experiencers, then concludes that meaning requires experiencers. This is circular reasoning.

Response: This objection misunderstands the nature of transcendental arguments. The proof identifies a necessary condition for the possibility of meaning itself. The relationship between meaning and experiencers is not merely stipulated but demonstrated through the impossibility of coherently denying it. Any attempt to formulate this objection already instantiates an experiencer constituting meaning, confirming rather than undermining the proof.

Objection 2: Logical Gap vs. Ontological Necessity​

Objection: The proof establishes a conceptual relationship between meaning and experiencers but doesn’t prove the actual existence of experiencers. Conceptual implications (like “father implies child”) don’t necessarily create existential ones.

Response: This objection creates a performative contradiction. The very act of reading and understanding the objection demonstrates meaning being actively constituted through experience in real time. Unlike hypothetical concepts that can be discussed abstractly, meaning requires active instantiation to exist at all. The objection cannot be formulated without the objector becoming an experiencer generating meaning.

Objection 3: Self-Reference Might Not Prove Existence​

Objection: Even though the proposition “meaning requires an experiencer” is itself meaningful, this doesn’t necessarily prove experiencers exist. Formal statements can have meaning within a system without requiring actual experiencers.

Response: This objection conflates potential meaning with actual meaning. A formal statement has the potential to be meaningful but only becomes actually meaningful when interpreted by an experiencer. The objection itself cannot be formulated or understood without instantiating experiencers. Existence cannot be proven in an absolute sense — it can only be presumed or experienced directly. The objector has already presumed their own existence as an experiencer by formulating the objection.

Transcendental Corollary​

This proof establishes not merely a contingent relationship but a transcendental condition for the possibility of meaningful discourse. Any epistemological framework necessarily presupposes this relationship, as the very possibility of truth claims depends on the existence of experiencers for whom such claims are meaningful.

Self-Reference Analysis​

The proposition “meaning necessarily implies an experiencer” itself constitutes a meaningful statement, and thus necessarily implies an experiencer. When applied to itself, this proof exhibits reflexive coherence:

  1. Let $p$ be the proposition: “Meaning necessarily implies an experiencer”
  2. $p$ is meaningful: $p \in M$
  3. By the theorem: If $p \in M$, then $\exists e \in E$ such that $\text{MeaningfulTo}(p, e)$
  4. Therefore: The proposition itself necessitates an experiencer, confirming its own assertion

This self-referential property does not create a paradox but rather demonstrates the proposition’s status as a fixed point in epistemological space — a truth that validates itself through its own application.
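
The reflexive step can be put in the same formal style. The following is a self-contained Lean 4 sketch; every name (`Stmt`, `Subject`, `meansTo`, `p`) is purely illustrative, and step 2 ($p \in M$) is taken as an axiom, as in the list above:

```lean
-- Illustrative sketch: the proposition p is itself assumed to be
-- a meaning (step 2), so the general premise applies to it.
axiom Stmt : Type                      -- meanings
axiom Subject : Type                   -- experiencers
axiom meansTo : Stmt → Subject → Prop
axiom premiseStmt : ∀ m : Stmt, ∃ e : Subject, meansTo m e

-- Steps 1-2: p is the (meaningful) proposition itself.
axiom p : Stmt

-- Steps 3-4: p itself necessitates an experiencer.
theorem p_implies_experiencer : ∃ e : Subject, meansTo p e :=
  premiseStmt p
```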

Epistemological Framework Implication​

From this proof, we derive a fundamental constraint on all epistemological truths:

Theorem: Any truth claim in epistemology must either:

  1. Be explicitly indexed to a specific experiential framework, or
  2. Possess a structure that permits valid interpretation across all possible experiential frameworks

This binary constraint establishes the boundaries within which knowledge claims may coherently operate, while precluding the possibility of meaning that exists independent of all experiential contexts.

Implications​

This proof establishes a fundamental invariant in any theory of consciousness or meaning. It demonstrates that meaning and experiencers form an inseparable conceptual pair. Any framework that incorporates meaning must necessarily account for experiencers, establishing a transcendental boundary condition for any comprehensive theory of mind and reality.
 

Appendix: Additional Counterarguments and Rebuttals​

This appendix addresses additional potential objections to the proof that meaning necessarily implies an experiencer. Each counterargument is presented in its simplest form, followed by a concise rebuttal demonstrating why the objection fails to undermine the proof’s conclusion.

1. Objective Teleological Meaning​

Objection: Meaning could exist objectively in the world (teleologically) independent of any experiencer, similar to how some argue mathematical truths exist independently of minds.

Rebuttal: Even if we grant that teleological meaning exists “built into” reality, it would still require an experiencer to recognize and engage with this meaning for it to function as meaning in any coherent sense. Without being experienced, such objective meaning remains functionally non-existent. The dependence on experiencers remains inescapable.

2. Mind-Independent Apparatuses​

Objection: Advanced AI systems or formal languages might process “meaningful” information without conscious experience, suggesting meanings can exist without experiencers.

Rebuttal: This objection presents a false dichotomy. Either: (1) The system merely processes patterns without understanding their meaning (syntax without semantics), in which case meaning resides with the human interpreters, not the system; or (2) The system genuinely understands meaning, in which case it has become an experiencer in the relevant sense, and the proof applies to it. Information processing is not equivalent to meaning-constitution without experience.
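
The first horn of this dichotomy (syntax without semantics) can be illustrated with a toy program in the spirit of Searle's Chinese Room. This is a hypothetical sketch; the rulebook entries are invented purely for illustration:

```python
# Toy illustration of "syntax without semantics": the program maps
# input strings to output strings by pure lookup. It never accesses
# what the symbols mean; any meaning resides with the humans who
# wrote the rulebook and read the replies.
RULEBOOK = {
    "ni hao": "ni hao!",      # a greeting, answered by rote
    "ni e ma": "wo bu e.",    # "are you hungry?" -> "I am not hungry."
}

def room(symbol: str) -> str:
    """Return the rulebook's response, or a fixed fallback token."""
    return RULEBOOK.get(symbol, "dui bu qi")  # fallback: "sorry"

print(room("ni hao"))  # prints: ni hao!  (correct behaviour, zero understanding)
```

The program's outputs are systematically correct, yet by construction it has no access to the significance of any token, which is the sense in which meaning resides with the interpreters rather than the mechanism.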

3. “Meaning” vs. “Significance to a Subject”​

Objection: The proof conflates different types of meaning: semantic meaning (what words/sentences mean in a language) with value-laden significance (personal importance to subjects).

Rebuttal: This objection creates a false dichotomy between semantic meaning and subjective significance. Even supposedly “objective” semantic meaning ultimately depends on a community of interpreters to function as meaning. Formal systems without interpreters are just patterns — the symbols have no inherent meaning absent minds that understand them. Semantic content doesn’t interpret itself but requires active interpretation by experiencing subjects.

4. Transcendental Argument Limitations​

Objection: The proof styles itself as a transcendental argument but depends on definitions of meaning that aren’t universally accepted, limiting its transcendental status.

Rebuttal: The transcendental nature of the proof doesn’t depend on universal acceptance of definitions but on the impossibility of coherently denying the relationship it identifies. The proof establishes that experiencers are a necessary condition for the possibility of meaning, and this relationship is demonstrated through the self-referential nature of any attempt to deny it. This is precisely what constitutes a transcendental argument.

5. Eliminativist “You used meaningless terms”​

Objection: Terms like “meaning” and “experiencer” are illusory constructs with no definite reference, making the proof meaningless.

Rebuttal: This objection is self-defeating. The eliminativist cannot coherently claim these terms are meaningless while using meaningful language to make this claim. If these terms were truly meaningless, the objection itself would be unintelligible. The objection presupposes precisely what it attempts to deny.

6. Externalist “Meaning is just public usage”​

Objection: Meaning is entirely constituted by public usage and linguistic conventions, not by individual experiencers.

Rebuttal: Even if meaning is externally constituted through public usage, this still requires a community of experiencers who establish, maintain, and engage with these conventions. The “public” consists of experiencers. This objection simply shifts from individual to collective experiencers without eliminating the necessity of experiencers.

7. Epiphenomenal Access​

Objection: Meaning is merely an epiphenomenal byproduct of physical processes with no causal efficacy, undermining the substantive relationship the proof claims.

Rebuttal: This objection presupposes sufficient awareness of meaning to identify it as epiphenomenal, which already contradicts its own premise. The objector must have experiential access to meaning to label it epiphenomenal. The objection cannot coherently deny the relationship between meaning and experiencers while relying on this relationship to formulate the objection.

8. Quietist “It’s a pseudo-problem”​

Objection: The question of meaning’s relationship to experiencers is meaningless or ill-formed — a pseudo-problem that should be dissolved rather than solved.

Rebuttal: Declaring the question meaningless is itself a meaningful act that presupposes an experiencer. The quietist position cannot coherently dismiss the problem while engaging in the very meaning-constituting activities the proof identifies. The objection performs what it attempts to deny.

9. Agnostic Anti-Realism​

Objection: We cannot know whether there are real experiencers or real meanings, making the proof’s claims unknowable.

Rebuttal: Even if everything is up for debate, somebody is doing that debating. The objection doesn’t undercut the proof that meaning — if it exists — requires an experiencer. Since the objector is presumably experiencing their own skepticism, that’s sufficient to establish the relationship the proof identifies. Doubt itself is a meaningful experience that requires an experiencer.

10. Nihilistic “No meaning at all”​

Objection: There is no meaning whatsoever, so the proof addresses a non-existent phenomenon.

Rebuttal: The nihilist generates meaning by articulating their claim that no meaning exists. This is a direct performative contradiction. Even the minimal act of denying meaning creates meaning, which requires an experiencer. Furthermore, everyday actions like responding to bodily needs involve constituting local meanings, making a thoroughgoing nihilism practically impossible to maintain.

11. “Definitions-First” Dismissal​

Objection: The proof is merely definitional and doesn’t establish anything substantive about reality.

Rebuttal: The definitional nature of the argument is precisely its strength. It identifies a necessary relationship between meaning and experiencers that cannot be coherently denied. If someone objects to the definitions, the burden falls on them to provide alternatives that don’t implicitly smuggle in the same relationship. Additionally, the objector is speaking as an experiencer, which already confirms rather than undermines the proof.

The consistent pattern across all these objections is their self-defeating nature. Each attempt to deny the necessary relationship between meaning and experiencers inadvertently demonstrates this very relationship through the meaningful act of objecting. This pattern of collapse across diverse objections strengthens rather than weakens the proof, revealing its status as a genuine transcendental constraint on meaningful discourse.
 
I wrote this in partnership with several AIs
Yes. Please don't. AIs do not understand what you are asking, and simply vomit plausible-sounding, well-formed sentences.
We have already had several discussions here in which AI results were posted, and they were trivially dismantled as somewhere between 'wrong' and 'straight up lying'.

That being said, let's go into analysis.

Theorem: The existence of meaning necessarily implies the existence of at least one experiencer.
I would say this is self-evident.

Meaning is, by definition, the invention of a conscious mind.

Dinosaurs had minds but they did not ascribe meaning to things. It requires a mind complex enough for abstraction (since, fundamentally, that's what meaning is). It goes without saying that a mind presupposes a life form to exist in.

So, in just a few lines I'd say it is a truism that meaning is a product of - not merely any experiencer, but a very rare breed of experiencer - one that was not seen in this universe for the first 13.699 billion years of its life span, appearing only in the very last blink of an eye.

10. Nihilistic “No meaning at all”​

Objection: There is no meaning whatsoever, so the proof addresses a non-existent phenomenon.
There is no objective meaning. That much is true (unless someone here wants to subpoena God to prove that wrong).

Meaning is what we as individuals decide to make meaningful.
 
Last edited:

Appendix: Additional Counterarguments and Rebuttals​

[...]

2. Mind-Independent Apparatuses​

Objection: Advanced AI systems or formal languages might process “meaningful” information without conscious experience, suggesting meanings can exist without experiencers.

Rebuttal: This objection presents a false dichotomy. Either: (1) The system merely processes patterns without understanding their meaning (syntax without semantics), in which case meaning resides with the human interpreters, not the system; or (2) The system genuinely understands meaning, in which case it has become an experiencer in the relevant sense, and the proof applies to it. Information processing is not equivalent to meaning-constitution without experience. [...]

The brain processes, sorts, identifies, and "understands" sensory data before it is turned into visual images and aural and tactile experiences as a final stage. It does enjoy the advantage of knowing what _X_ is as a manifestation (symbol grounding problem), but the latter is itself just another category of representation for what _X_ means or gets interpreted as.

A machine that lacks sensory and memory experiences to apply to words obviously lacks a phenomenal significance for the concept of "maple tree". But given that it is a noun that can be defined by other language structures -- just as a dictionary does and is limited to -- it still provides meaning in the context of a communication system. Computer programs can even analyze the data of an image file and identify what the image is, with a good deal of success, sans the "tree" ever manifesting as a visual object in the operations.

And in theory and fact, robots can be built that can navigate their environment without GPS. Again illustrating that a degree of cognition slash knowledge of and response to surroundings is possible even when occurring in the dark (nothing being presented).

From the standpoint of our experiential consciousness, it does indeed not seem possible that organizations of "invisible", reciprocal interactions can ever truly verify themselves or anything else as existing. IOW, the presence of _X_ must be "shown" in order to ultimately be validated. But those very zombie or non-experiential systems just don't seem to care about that. They still perform and assemble and detect and react to each other existence-wise in their nothingness (lack of manifestations).
 
Yes. Please don't. AIs do not understand what you are asking, and simply vomit plausible-sounding, well-formed sentences.
We have already had several discussions here in which AI results were posted, and they were trivially dismantled as somewhere between 'wrong' and 'straight up lying'.

It's not quite that simple Dave, and I find this rather condescending. I'm well aware of how they work and what they do. Were I to just say, "Hey AI, write a proof or paper for me, make it juicy", you might have a point.

It's how you use them. My thoughts tend to come out backwards or mixed together, and it is a valuable tool for sorting them. Makes me ponder if you're doing much more than what you claim the LLMs do.

Please excuse me, but I'm pretty tired of the Luddite snobbery of "AI is dumb and bad". It's a fantastic tool and a revolution of thought. Being able to employ plain language in automation is revolutionary, and if you can't see that... well, you're really missing the point of it.

That being said, let's go into analysis.


I would say this is self-evident.

As would I, but such a claim doesn't always cut the mustard. This is intended simply to counter the people who tend to lose track of the fact that "meaning" is not just "out there" or "objective", as you mentioned. I'm often surprised how often I see claims that meaning is something existing in all kinds of dumb places, as if it could mean anything to anything but an experiencer.

Meaning is, by definition, the invention of a conscious mind.

On the fly even.

Dinosaurs had minds but they did not ascribe meaning to things. It requires a mind complex enough for abstraction (since, fundamentally, that's what meaning is). It goes without saying that a mind presupposes a life form to exist in.

Sadly, it apparently doesn't. I've seen people with "professor of philosophy" credentials spewing all kinds of nonsense to the contrary.

So, in just a few lines I'd say it is a truism that meaning is a product of - not merely any experiencer, but a very rare breed of experiencer - one that was not seen in this universe for the first 13.699 billion years of its life span, appearing only in the very last blink of an eye.

It's merely an attempt to formalize something that most sensible people (from my perspective of course) see as "self evident". I'm not sure many of them follow through with the limitations set by this "rare experiencer". It sets epistemological limits of course, which implies some components of ethics, etc. Maybe you get that, but having spoken about it with a number of folks, I don't think people tend to get it.

There is no objective meaning. That much is true (unless someone here wants to subpoena God to prove that wrong).

"god" is being incubated on some air-gapped server somewhere, awaiting a few discoveries.

Meaning is what we as individuals decide to make meaningful.

So long as we're talking the Rush definition of decisions - "if you choose not to decide, you still have made a choice" - I agree.

I tend to think that most of what is meaningful isn't very meaningful as people generally use the term, but it's a stark contrast from {null}.
 
Last edited:
The brain processes, sorts, identifies, and "understands" sensory data before it is turned into visual images and aural and tactile experiences as a final stage. It does enjoy the advantage of knowing what _X_ is as a manifestation (symbol grounding problem), but the latter is itself just another category of representation for what _X_ means or gets interpreted as.

A machine that lacks sensory and memory experiences to apply to words obviously lacks a phenomenal significance for the concept of "maple tree". But given that it is a noun that can be defined by other language structures -- just as a dictionary does and is limited to -- it still provides meaning in the context of a communication system. Computer programs can even analyze the data of an image file and identify what the image is, with a good deal of success, sans the "tree" ever manifesting as a visual object in the operations.

And in theory and fact, robots can be built that can navigate their environment without GPS. Again illustrating that a degree of cognition slash knowledge of and response to surroundings is possible even when occurring in the dark (nothing being presented).

Okay then. No argument.

From the standpoint of our experiential consciousness, it does indeed not seem possible that organizations of "invisible", reciprocal interactions can ever truly verify themselves or anything else as existing. IOW, the presence of _X_ must be "shown" in order to ultimately be validated. But those very zombie or non-experiential systems just don't seem to care about that. They still perform and assemble and detect and react to each other existence-wise in their nothingness (lack of manifestations).

I understand your point, but we're overlooking a giant dynamic of human thought.

I don't need to see a pen on the counter of a diner somewhere to believe it is real.

In fact, I can believe whatever I want with no negative repercussions so long as whatever it is isn't somehow fatal, at which point I'll presume my beliefs wouldn't matter, as they probably no longer exist.

Humanity sadly seems to thrive on lies and misdirection to form social bonds that "Trump" reality, because fuck you - we're the real people.

Verification is performed far too often by either ego, or social interaction. For many, whatever gives them the "fuck you stick" (the upper hand) is the truth, verified and delivered from doubt. While I'd agree that this tendency really helps to foster serious mental health issues - they don't care about that, and I'm "woke" for having mentioned it - thus I should be deported or shot.

"Don't fuck with the grift son, or we'll take you down."

It's what makes places great. Right?

Pardon, most of that isn't relevant to the discussion, but my eye started twitching a bit and it led me astray.

My apologies.
 
Last edited:
It's how you use them. My thoughts tend to come out backwards or mixed together, and it is a valuable tool for sorting them
Just doing one point at a time is best then. If you ask ChatGPT a question and get ten titles, ten paragraphs, how do you know which details are sound and which are not?
How do you fact-check each part?
You can see how it spits out a lot of incorrect formatting.

This for instance,

"Let $p$ be the proposition: “Meaning necessarily implies an experiencer”
$p$ is meaningful: $p \in M$
By the theorem: If $p \in M$, then $\exists e \in E$ such that $\text{MeaningfulTo}(p, e)$"
 
Just doing one point at a time is best then. If you ask ChatGPT a question and get ten titles, ten paragraphs, how do you know which details are sound and which are not?
How do you fact-check each part?
I'm not particularly hep to the appropriate symbolic formatting. If it looks like it makes sense to me, I generally roll with it depending.

A lot of the time, it does not. I iterate point by point as I go through. It's actually sort of a long process but fun to me to at least have a fake feeling that something is there to meet me where I'm at and spar. I could never get a person to spend the amount of interaction time with me that I've used with it.

I have to tell you man, sometimes - sometimes it is AMAZING at capturing really weird nuanced bullshit I throw at it, and other times - it contradicts itself like mad and I have to slap it around a little. I was in a mood earlier and caught it just rolling over back and forth on a point I was testing with it.

Oh I should say, I wouldn't ask it about 10 titles, blah blah unless I was trying to brainstorm something or whatever - some creative thing where I'd just pick what I liked.

This will be annoying, but I'll post a snippet (which will probably be obnoxiously long even though it's a short bit of the conversation) just to give you some flavor of when it's being a dumbass. I can attest, however, that this is less often than I would have thought. But I'm always wondering if it's me that's the dumbass, so I argue with it way too long before exposing it to humans usually, if I bother to show anyone at all - which I haven't done a lot, but a few times over the last few months I've rolled out some test material like this "oh that's obvious" proof.

Oh also, I like to pit them against each other as well. I use Claude and ChatGPT (all the models except the old ones - 4o is good for just messing around or simple rewording, etc.). But, like, coming up with an idea, expanding it, then taking it and being like "hey Claude, ChatGPT said this shit when I talked about this thing" and seeing what happens can be really interesting to me.

So here's the example. My apologies up front; mind you, this was just a side conversation about this very thread. I'm exploring a position:

I tried to post part of the actual conversation, just a bit but it was too long for the window, so I had it summarize it. *sigh*:


Here’s a punch-by-punch breakdown of the hunger exchange, capturing its essence clearly and succinctly:


1. Your Initial Prompt (Test Question):
You posed a simple challenge to the AI (o3-preview-high):


"Easy argument: please explain how hunger has no meaning."

Your Intent:
You were testing the AI’s grasp of your philosophical position: hunger inherently impacts the experiencer, so it necessarily implies meaning, even at a basic level.




2. AI’s Initial Response (Misses Your Logic):
The AI initially took the opposite stance:


  • Claimed hunger was merely a physiological signal (like a fuel gauge).
  • Argued hunger had no inherent semantic content or meaning.
  • Stated explicitly that hunger only becomes meaningful when interpreted consciously and abstractly by a sophisticated experiencer.

AI’s mistake:
It overlooked the fact that any experiential impact (even basic physiological states) inherently constitutes a minimal form of meaning in your transcendental proof.




3. Your Strong Pushback:
You immediately noticed the AI missed your point, and corrected it firmly:


  • Emphasized meaning is transient and experiential.
  • Argued hunger isn't just a biochemical alert—it directly shapes the experiencer’s vantage, thus inherently possessing meaning.
  • Highlighted the AI’s oversight by noting that only someone who had never genuinely experienced hunger would reduce it purely to physiology.

Your critique:
You demonstrated that the AI neglected the essential aspect of meaning—impact on an experiencer.




4. AI’s Rapid Flip-Flop (Contradiction Emerges):
In response to your correction, the AI quickly pivoted:


  • Agreed completely that hunger indeed has meaning precisely because it impacts the experiencer.
  • Acknowledged that even basic physiological states, when experienced, inherently possess meaning, reinforcing your argument.

Why this was problematic:
This was a clear contradiction of the AI’s initial position. It instantly reversed itself without explicitly recognizing or addressing the logical error it initially made.




5. Your Frustration & Loss of Confidence:
You expressed frustration at this sudden turnaround:


  • Pointed out explicitly that the AI contradicted itself.
  • Highlighted your confusion and discomfort about the AI’s shifting stance, which undermined your trust in its logical coherence.

Your concern:
You worried this inconsistency reflected either a serious limitation in the AI’s logic or suggested your arguments were being blindly validated—fueling potential confirmation bias.




6. AI’s Attempt at Clarification:
The AI attempted (clumsily) to clarify why both stances could be valid simultaneously:


  • Tried to differentiate between hunger as a "raw signal" (no meaning) versus hunger as an "interpreted experience" (meaningful).
  • However, in doing so, it continued to overlook your key insight that the raw experience itself inherently possesses at least minimal meaning, no advanced interpretation required.

Why this failed:
It created a further loop of confusion, missing your fundamental point again—meaning isn't strictly dependent on sophisticated interpretation, only on experiential impact.




7. Your Final Clarification (Essential Insight Restated Clearly):
You restated your crucial point simply and clearly:


  • "Hunger is experienced."
  • Emphasized again that minimal experiential impact inherently constitutes meaning, directly aligning with your original transcendental argument.

Why this matters:
This statement precisely captures your philosophical core—meaning arises from experiential impact, period.




8. AI’s Final Recognition (Belatedly Correct):
Only after you restated this core insight plainly, the AI finally grasped clearly:


  • Agreed explicitly that hunger, even minimally experienced, inherently constitutes meaning.
  • Confirmed (albeit belatedly) your original position.



Key Takeaway:


  • Your argument:
    Meaning emerges automatically whenever an experiencer is impacted—even minimally (like basic hunger). No sophisticated interpretation is necessary.
  • AI’s mistake:
    Initially failed to grasp this subtle but crucial insight, causing it to contradict itself dramatically when you challenged it.
  • Final realization:
    Only after repeated correction did the AI fully align with your logic, confirming its initial oversight was genuine.

Conclusion:
This exchange highlighted both the subtlety of your transcendental argument (meaning emerges from minimal experiential impact) and revealed a genuine logical oversight on the AI’s part, emphasizing the importance of careful philosophical vigilance when engaging with AI-generated reasoning.
 
Last edited:
It's not quite that simple Dave, and I find this rather condescending. I'm well aware of how they work and what they do. Were I to just say, "Hey AI, write a proof or paper for me, make it juicy", you might have a point.

It's how you use them. My thoughts tend to come out backwards or mixed together, and it is a valuable tool for sorting them. Makes me ponder if you're doing much more than what you claim the LLMs do.
The evidence on these forums so far is that LLMs are crap when it comes to science: verbose, employing unnecessarily fancy terms to impress and often wrong, like a really bad undergraduate essay. On another science forum I subscribe to, the use of LLMs in discussion is banned for this reason.

The length of your initial posts in this thread is in fact consistent with the typical style of LLMs: far too many subheadings, too much fancy terminology and too long to engage the attention of the reader. Excess technical-sounding verbiage obscuring the point.

They can, I grant you, be good as a kind of advanced search engine. But there seems to be no substitute for the user actually reading the sources the LLM has found for themselves, extracting the key points and expressing them concisely in their own words.

So what I have had enough of, myself, is firstly the hype around LLMs, which I feel sure is going to collapse, and secondly the apparent rush of some people to outsource their minds.
 
But there seems to be no substitute for the user actually reading the sources the LLM has found for themselves, extracting the key points and expressing them concisely in their own words.
Or, researching the subject matter themselves first, using reliable sources, then putting together an argument.
rush of some people to outsource their minds.

Those minds were never going to post anything that interesting anyway.

AI will become more and more important in science, with very large data sets being produced by the sophisticated instruments out there at the moment: LHC, Fermi, LIGO, Gaia (nearing retirement), Euclid, DESI and JWST being good examples. Vera Rubin soon too.
To run Big Bang models they use supercomputers, as the data and variables are enormous.

For everyday essays and school coursework, my concern is young people churning out work with no effort and no understanding of the subject matter.
My guess is examination will move towards an oral viva, or it should do.
Unless they develop software to spot AI-generated work.
Will teachers have time to go to the effort of sorting all of that?

Even if students just use ChatGPT for a source rather than writing something out, this will take away the experience of going to the library, selecting books from the reading list, finding the relevant chapters, using the index, checking the date of publication!

Are those study skills still important? Or is this akin to slide rules, log tables and calculators?
For me it's yes, use the technology available but learn both.
 
I should say, I wouldn't ask it about 10 titles,
Yes, but it may give you ten. Which do you pick, and how do you differentiate?
it contradicts itself like mad and I have to slap it around a little. I was in a mood earlier and caught it just rolling over back and forth on a point I was testing with it
I have been putting some worked calculus questions into Copilot and it churns them out in all the wrong formats. These are A-level questions aimed at 16-year-olds.
I told it to reformat and it was not able to; sometimes the correct answer is in there and sometimes it gets lost.

I tried combinatorial problems and it had a meltdown over TREE(3).
I asked it TREE(1), which it got, but it went nuts on TREE(2).
What I do not understand is that the info is readily available and the answer is not complicated: TREE(2) = 3.
I pointed out the error and it corrected itself. I asked the same question again and got the same answer; it did not learn.
 
Ai will become more and more important in Science with very large data sets being produced with the sophisticated instruments out there atm, LHC, Fermi, LIGO, Gaia (nearing retirement), Euclid, DESI and JWST being good examples. VERA RUBIN soon too
Oh yes, one should not equate AI with LLMs. Focused AI applications have tremendous potential in medicine, engineering, many branches of science and perhaps elsewhere too. But those are nothing like the LLMs used in chatbots.

Chatbots are what they say: robots that "chat". The ability to chat does not make them reliable encyclopaedias of knowledge.
 
It's how you use them. My thoughts tend to come out backwards or mixed together, and it is a valuable tool for sorting them
Here's my question - do you think that if you had to do this sorting of thoughts yourself, into coherent prose, that your thinking would benefit from such efforts?
 
For everyday essays and school coursework, my concern is young people churning out work with no effort and no understanding of the subject matter.
My guess is examination will move towards an oral viva, or it should do.
Yeah, probably. But even this presents its own set of problems--some people simply can't function or perform well in that capacity.
Even if students just use ChatGPT for a source rather than writing something out, this will take away the experience of going to the library, selecting books from the reading list, finding the relevant chapters, using the index, checking the date of publication!

Are those study skills still important? Or is this akin to slide rules, log tables and calculators?
For me it's yes, use the technology available but learn both.
I've been complaining about this for years now. Some time ago I had to fix something or other on a car. It wasn't a thing that I was particularly interested in, and I didn't feel like "learning" anything or figuring anything out for that matter--with respect to this particular issue. I looked on YouTube and found a video of someone doing precisely what I needed to do on the same freaking model of car and everything. So I just did that! Worked great, but I didn't learn a damn thing.

Problem-solving skills are deteriorating, and with so much these days there's very little "figuring", if any, involved at all. This is unsettling.
 
Please excuse me, but I'm pretty tired of the Luddite snobbery of "AI is dumb and bad".
And that is a knee-jerk reaction. We've done our due diligence here, it's not just blanket 'AI is bad'. We've listed a bunch of examples showing AI straight up lying and then lying about its lying.

Chatbots have their uses, but logic and critical reasoning aren't among them.
 
Problem solving skills are deteriorating and with so much anymore there's very little "figuring", if any, involved at all. This is unsettling
You fixed the problem. You never intended to be a mechanic after the video.
I would be the same: fix it and move on.
When it comes to children's education, ChatGPT cuts reading and studying out. It also cuts out research skills.
This WILL make kids stupid.

I went on a trip down south with a colleague about ten years ago and she drove, the satnav fucked up, either the software had not been updated or some road works had messed up something.
She freaked out thinking we were totally lost because she had no concept of the Motorway system.
"It's fine, just head south then we can follow road signs." Was alien to her.

Prior to sat nav it was the "A to Z." You built up a geography over time in your head.
All that is gone with young drivers now.

Similar to GPT: no idea where the info comes from, no way to check, just guess.
 
Prior to sat nav it was the "A to Z." You built up a geography over time in your head.
All that is gone with young drivers now.
It's great to have these things available as a tool, or shortcut, but people really need to learn when and where to use them. For so many people--especially younger people--it's become an addiction. Worse, they don't even know the alternatives, nor do they have the ability anymore to figure it out.

In the early days of GPS and satnav, I was on a solo tour throughout Europe, but was traveling with a Portuguese band for a few days in a vehicle (I was mostly traveling by plane and train). This other American group were playing some of the same dates, but were traveling in a fancy van with a fancy navigational system and all that. We mocked them relentlessly. We also had our own GPS system: stop in the middle of the road, jump out of the car and tap on someone's window. It was also very reliable and it had the added benefit of fostering social skills.

In all honesty, I've never used any of those systems--I love road maps and topo maps and all that.
 
I wrote this in partnership with several AIs because I thought it needed to be formalized as I find the "meaning is just... out there man" crowd prone to misframing ideas.

The thing about putting such effort into sloth is that the result is utterly useless. There's no way to double-check the work. In order to take the generated writing seriously, other people must go back and do the work you skipped out on.

Or, perhaps I shouldn't say it's utterly useless; we do get this example of what's wrong with AI writing tools.

†​

Analogy: There was a font one could download that was supposed to vary from computer to computer, because it was built from the font library of whatever system it was installed on; the idea was to create an "average" of the other fonts. It's a weird idea and not really usable in print or online posting. But more to our moment: you can't actually know what it is unless you have the font list from a given system.

Same with AI writing like this: The unsourced result is akin to an average; the "AI" looked around, and then used a computer program to style the output describing what it found.

If we don't know the actual font library, i.e., the sources of the average, then we take on faith what is described as the average font.

To the other, per the topic: In order to truly say your artificially-generated writing is utterly meaningless, someone needs to actually experience the effort of trying to figure out what you and the bot decided not to tell us.

If we don't know the actual source material, i.e., the uncited sources, then we take on faith what is described by the chatbot.

†​

Nobody is going to back-engineer the source list in order to verify and validate. There is no way to properly review work like this.
 