Plato's Beard

From all this, I would gather that the success of speech acts in referring rests also with the person(s) hearing the speech act. For the naive listener, "Louise" refers to no creature in particular. For that listener, "Louisa May AllCat, a 7-week-old kitten presently lodged at the Vat household at 51 Bedlam St, Spearfish SD, USA" would add information sufficient to generate reference. For a more informed listener trusting in the speaker's or writer's veracity, "Louise" is sufficient.

Searle points out that there are two very different strands in the philosophy of language. First, there is a more logical, even vaguely mathematical, approach taken by the likes of Frege and Russell - who were, after all, logicians and mathematicians themselves.

On this view, language is very idealized: words and names themselves refer, regardless of what speakers are doing. Speakers are treated as something like a pain in the ass lol, interfering with the good logical work being done.

Searle regards this whole approach as hopelessly misguided: You will not understand language unless you understand what speakers do with language. Words do not refer by themselves; people use words to refer.

"Speech acts", then, entered the picture around the 1950s, vaguely adumbrated by Wittgenstein, and later expanded upon by the likes of Austin and Searle himself.



Now, on the former approach, reference is achieved simply in virtue of the words used. The listener, therefore, is irrelevant.

On the speech acts approach, meanwhile, the listener is likewise irrelevant to successful reference. I refer to Frank Sinatra (or perhaps fail to do so) in virtue of the description I associate with the name.

I refer to your Louise when I say to friends in the pub "I'd love to meet Louise" as long as I'm linking it to the appropriate description (e.g. "the kitten that joined TheVat's household a few weeks ago"). Without making this description explicit, my drinking buddies wouldn't have a clue who I'm talking about, of course. That, as they say, is their problem.

Their reaction would probably be, "Who are you talking about? Which Louise? Who do you mean?"

. . . and I'd proceed to elaborate . . . if they buy me a beer first.
 
Sense seems to arrive from a set of transactions between speaker and listener. When a physicist speaks of gravity now, I stipulate that he means it in the sense of a pseudoforce arising from the curvature of spacetime around massive objects, whose effects are observable every time I pluck Louise off my leg and drop her to the floor as a reminder that my blood supply is limited. Though Ike Newton could emerge from a time machine and use the same word, I would stipulate a different sense of that word which, though it accounts satisfactorily for dropping Louise, does not account for spacetime curvature, the precession of Mercury, etc.

What Frege says about "sense" is really quite bizarre. It's kind of out there in a Platonic realm or something (rubbing elbows with the idea of the triangle perhaps). One thing he's quite adamant about: sense is not in the head. When using names with a sense, then, speakers have to somehow "grasp" the sense which is linked to a particular name.

Now in mathematics we don't need to worry about ambiguous names: all symbols (e.g. 2) are unambiguously defined.

But as you rightly note, in the far messier world of natural language, almost all names (e.g. Louise) are ambiguous -- massively so. What now, Mr Frege? Um, he mutters something about "speaking different languages". Better take it up with him lol.

Hence Prof Searle's contempt!
 
I have to say that taken together, what Axocanth has written in this thread constitutes a valuable introduction to the philosophy of language. Certainly to the problem-of-reference aspects. The rest of the board would be well advised to read these posts carefully and to try to think along with them.

It's not just navel-gazing either, these ideas have clear relevance to the philosophies both of science and religion, as Axocanth has ably pointed out.

Welcome to the board Axocanth. You've obviously studied this stuff at the university level. Are you a philosophy teacher?
 
[...] Right...but when we say "unicorns don't exist" we mean a real physical creature, not an abstract idea. Hence Plato's Beard: how can we refer to something that doesn't exist in order to say that it doesn't exist?

A genuine, physical unicorn can't be presented, so that is not what the noun in "unicorns don't exist" is representing. It's merely the idea of the creature. Another way to view the statement (and probably better) is that "unicorn" entails an imaginary creature by definition, so the statement "unicorns don't exist" is merely analytically expressing one of the properties of the unicorn concept: That the creature is imaginary.

unicorn: An imaginary creature represented as a white horse with a long horn growing from its forehead.

Analytic propositions:

"All bachelors are unmarried."
"All triangles have three sides."

But "unicorns don't exist" is arguably a vulnerable characteristic, similar to "all crows are black". One can preconditionally define _X_ in an absolute manner, but a contingent empirical fact somewhere in space and time might refute it. Or -- because a unicorn intrinsically means "imaginary", a discovered unicorn thereby has to be classified with a different term or appended with an adjective that makes it distinct from the other (designated as "real unicorn" or whatever).
 
I have to say that taken together, what Axocanth has written in this thread constitutes a valuable introduction to the philosophy of language. Certainly to the problem-of-reference aspects. The rest of the board would be well advised to read these posts carefully and to try to think along with them.

It's not just navel-gazing either, these ideas have clear relevance to the philosophies both of science and religion, as Axocanth has ably pointed out.

Welcome to the board Axocanth. You've obviously studied this stuff at the university level. Are you a philosophy teacher?

Thanks, Yazata. I enjoyed your contributions to Magical's "Dogmatic Skepticism" thread too, and added some ramblings of my own. Yes, the philosophy of language has important implications for other disciplines too, for example scientific realism and even the notorious "Ontological Argument" for the existence of God (see CC's post above).

To answer your question, I've never taken a philosophy course in my life, let alone teach one (lol). It's just something I've taken a close interest in for quite some years now, simply for its intrinsic fascination. Sounds like you guys feel the same way!
 
Right...but when we say "unicorns don't exist" we mean a real physical creature, not an abstract idea. Hence Plato's Beard: how can we refer to something that doesn't exist in order to say that it doesn't exist?


You're quite right. Well spotted! With a typical subject-predicate statement such as "Dogs eat meat", we normally take this to be asserting both "there are dogs" and "they have the property of eating meat". We first assert their existence and then predicate something of them.

A statement such as "Unicorns don't exist" has exactly the same superficial structure. It therefore does appear to be asserting both "there are unicorns" and "they have the property of not existing" - a contradiction.

Frege, Russell et al warn us not to be misled. Superficial structure can be misleading as to what a statement is actually asserting. And they have their respective ways of analyzing the apparent contradiction away.


Adding to CC's response above, I pointed out earlier in the thread (see above) that philosophers, starting with Kant I think, noticed that there was something very peculiar about the predicate [x exists] or [x doesn't exist]. It doesn't function like an ordinary predicate.

If you were writing a letter of recommendation, say, extolling the virtues of a potential employee, I don't suppose you'd write the following: "Smith is a good worker, always punctual, oh, and he exists too." lol.

In existential statements, then, such as "Unicorns don't exist" or "Cassowaries don't exist", Frege argues that the referent of the subject term is not a physical object -- contra yourself above and unlike regular statements -- but rather a concept. "Unicorns don't exist" is analyzed thus:

"The concept 'unicorn' is not instantiated"

That is to say, what is being asserted -- properly understood -- is "There are no instances of a horse-like creature with a single horn". It's being asserted that nothing in reality satisfies that concept or description. And the above sentence yields a value of true.

Existence, then, is taken to be a "second-level" predicate, functioning differently from regular predicates such as [Xs don't sleep].
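
For what it's worth, the contrast can be put in rough first-order notation (my own gloss, not Frege's actual symbolism):

    % Misleading surface reading: existence as a first-level predicate of objects
    \exists x\,\big(\mathrm{Unicorn}(x) \wedge \neg\mathrm{Exists}(x)\big)

    % Frege-style reading: existence as a second-level predicate of concepts
    \neg\exists x\,\mathrm{Unicorn}(x)

The first form predicates non-existence of an object and courts contradiction; the second merely denies that the concept has instances, and comes out true.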



Now compare with what CC says above:

"A genuine, physical unicorn can't be presented, so that is not what the noun in "unicorns don't exist" is representing. It's merely the idea of the creature."

Frege would say, assuming I've got him right, that the physical referent (if there is one) is not "presented" (as CC puts it) in any existential statement. "Cassowaries don't exist" gets exactly the same treatment as "Unicorns don't exist". Again, what is being asserted is

"The concept "cassowary" is not instantiated"

and in this case the statement yields a value of false.

In all existential statements, then, it's the "idea" (cf. concept) that is presented -- not just in statements containing terms whose ontological credentials are suspect (e.g. "unicorn").
 
See 1:04:00 - end for more:

1:09:06 - "Existence is not a property of objects"

1:11:00 - "The subject term [in an existential statement] can't refer to objects, because if it did then you'd have to presuppose that the statement was true in order to state it."

Notice also how I shamelessly ripped off Searle's letter of recommendation example.
 
P.S. It's not something I've looked into at all, but presumably philosophy of language has implications for artificial intelligence too. How else will your house-cleaning robot or talking sex toy (lol) be able to address questions like "Who are you talking about?"

Does anyone here know anything about this?
 
A genuine, physical unicorn can't be presented, so that is not what the noun in "unicorns don't exist" is representing. It's merely the idea of the creature. Another way to view the statement (and probably better) is that "unicorn" entails an imaginary creature by definition, so the statement "unicorns don't exist" is merely analytically expressing one of the properties of the unicorn concept: That the creature is imaginary.

The problem with this view, it seems to me, is that semantics is being held hostage by epistemology. In many cases -- perhaps all cases if you worry about Matrix scenarios and Cartesian demons -- we don't know whether something or someone is imaginary or not.

Magical was doing something similar earlier in the thread, asserting that only sentences whose subject terms refer are meaningful. Our judgements on the meaningfulness (or not) of a sentence, then, are dependent on whether or not a term contained therein refers.

I don't think this can be right. We can surely make meaningful statements containing terms like "dark matter" or "Robin Hood", say, without being sure that either is real.
 
The problem with this view, it seems to me, is that semantics is being held hostage by epistemology. In many cases -- perhaps all cases if you worry about Matrix scenarios and Cartesian demons -- we don't know whether something or someone is imaginary or not.

Magical was doing something similar earlier in the thread, asserting that only sentences whose subject terms refer are meaningful. Our judgements on the meaningfulness (or not) of a sentence, then, are dependent on whether or not a term contained therein refers.

I don't think this can be right. We can surely make meaningful statements containing terms like "dark matter" or "Robin Hood", say, without being sure that either is real.

Language treated as floating on its own, not being a descriptive tool for representing the manifested world, and the latter thereby playing no role in determining anything about language or the status of _X_ word?

This is along the lines of the symbol grounding problem, for which a dictionary serves as a kind of starting metaphor.

Figuratively, a picture-less dictionary can't leap outside its restricted realm of language into the phenomenal world that most of its words actually represent. One term simply references combinations of other terms for its meaning. The relationships are regulated and internally consistent, but there is no non-descriptive understanding in terms of what humans experience (images, sounds, tactile sensations, etc.).
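
Here's a toy sketch of that circularity in Python, with entirely made-up entries (hypothetical words and "definitions", purely to illustrate how lookup never escapes the dictionary):

    # A toy, picture-less "dictionary": every word is defined only by
    # other words in the same dictionary; nothing points outside the
    # dictionary to the phenomenal world. Entries are invented for
    # illustration.
    dictionary = {
        "unicorn": ["imaginary", "horned", "horse"],
        "imaginary": ["existing", "only", "in", "the", "mind"],
        "horse": ["large", "hoofed", "animal"],
        "animal": ["living", "creature"],
        "creature": ["animal"],  # ...and here the circle closes
    }

    def chase(word, depth=0, seen=None):
        """Follow definitions downward; we only ever reach more words."""
        seen = set() if seen is None else seen
        if word not in dictionary or word in seen or depth > 3:
            return
        seen.add(word)
        print("  " * depth + word + " -> " + " ".join(dictionary[word]))
        for w in dictionary[word]:
            chase(w, depth + 1, seen)

    chase("unicorn")

Chasing any entry yields only more entries; nowhere does the chase bottom out in an image, a sound, or a tactile sensation.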

Accordingly, when words are "floating on their own" like that -- when the connection with the reality they originally represented has been severed -- the status of "real" is left to be determined by the definition itself, or by one of the definitions of the word. If an item like "imaginary" is set as one of unicorn's properties, then that's it -- end of story. It's a rule of the game, and there's no "reality" beyond the dictionary to modify the pre-established circumstance of unicorn being classified as "not real".

Obviously what "real" itself signifies is unclear in such a context -- but whatever it is, unicorn doesn't receive it, by default. There are words or expressions in the dictionary that seem to allude to something more -- some other manner of existence or a higher and radically different system of representation that might base "real" or "not real" on other factors than the dogma of the definition. But there's no access to it (for the dictionary, anyway).

Now move up from the primitive, static dictionary to dynamic ChatGPT. Barring occasional mishaps and imperfections, ChatGPT outputs oodles of meaningful paragraphs based on which words and phrases are most likely to come next, according to their statistical occurrence in the data it was trained on and the sources it retains (and algorithmic rules derived from those probabilities).
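
To make that concrete, here's a deliberately crude Python caricature of "predict the next word from statistics", using a tiny made-up corpus (real LLMs use learned neural networks over subword tokens, not raw bigram counts, so treat this only as an illustration of the general idea):

    import random
    from collections import Counter, defaultdict

    # Count which word follows which in a toy corpus, then generate
    # text by sampling the next word in proportion to those counts.
    corpus = "the house is blue . the house is green . the sky is blue .".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        counter = follows[prev]
        words = list(counter)
        return random.choices(words, weights=[counter[w] for w in words])[0]

    word = "the"
    for _ in range(8):
        print(word, end=" ")
        word = next_word(word)
    print()

Nothing in that loop knows what a house or the sky is; it only tracks which shapes tend to follow which shapes.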

But ChatGPT likewise can't apprehend or understand what those words mean in terms of phenomenal affairs (the realm of sensory systems and brain experiences). It's a philosophical zombie manipulating tokens (coded electrical patterns) in the "dark". Its own logorrhea of responses and the governed interactions outputting them are invisible to it.

  • Leibniz: Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions [including computation]. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception [experience, manifestation, or feeling].

In essence, there's an entire "other version of reality" that would elude an AI-brain-equipped robot that is only representing the world as symbol-equivalent operations -- even though said robot can still successfully navigate the terrain of our manifested reality with its limited, lower-level method of representation.

Of course, even our sensory experiences are yet another mediating representational system (like language), rather than direct apprehension of a mind-independent manner of existence (indirect realism).
 
Language treated as floating on its own, not being a descriptive tool for representing the manifested world, and the latter thereby playing no role in determining anything about language or the status of _X_ word?

Before probing any further into a very interesting reply, can we just clear up the first paragraph? Is the above your understanding of what I've been saying about language? i.e. Do you think that is my position? Or is it your own understanding of what language does?

It seems to me that you are imputing the above view to myself, and if you are (perhaps I'm misreading) it's not at all my position. To repeat:

John Searle -- one of my fave philosophers -- is fond of remarking that what distinguishes the philosopher is that he/she puzzles over things that any normal (or sane!) person simply takes for granted.

Searle also says that the most fundamental question in the philosophy of language is: "How does language hook up to reality?"

Now, one can deny altogether that it does; we're all in the Matrix or something. Otherwise, the question has to be addressed, and yes, it is somewhat miraculous that we can use words and names to hook up with real people, places, objects, etc. that are possibly far removed from us in space and time.


Everything I've written in this thread so far presupposes (whether rightly or wrongly) a mind-independent external world inhabited by cassowaries and lemurs, TheVat and Louise (his kitten), Frank Sinatra and Mount Everest, but probably not unicorns. Language hooks us up with this external reality. We can refer to mind-independent objects, people, etc. in this external reality.

Note first that the process is fallible: sometimes we take ourselves to be referring to something real when in fact we are not. If it turns out that Frank Sinatra never existed (the mother of all conspiracy theories!) then all sentences ever uttered or written containing the name "Frank Sinatra" purportedly referring to that man actually failed to refer (cf. all statements made by 18th century phlogiston theorists).

Note second that whether or not terms do refer is a question of epistemology. The semantics of the sentence "Frank Sinatra was born in Hoboken" tells us that what is being asserted is (i) there is/was such a person, and (ii) he was born in Hoboken. It does not follow from the semantics, of course, that there actually was such a person, or that what is predicated of him is true. We leave that in the capable hands of the epistemologists and paparazzi.
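
In rough Russellian notation (again my gloss, not a quotation from anyone), the two commitments read as a single existential claim:

    \exists x\,\big(\mathrm{FrankSinatra}(x) \wedge \mathrm{BornInHoboken}(x)\big)

The semantics fixes what would have to be the case for the sentence to be true; whether anything actually satisfies it is the epistemologists' (and paparazzi's) department.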

Note third that sometimes we "suspend the rules", in cases of discussing fiction, for example.


Thus . . .

Language treated as floating on its own, not being a descriptive tool for representing the manifested world, and the latter thereby playing no role in determining anything about language or the status of _X_ word?

Absolutely not! - if this is the view you're imputing to me.

On a simple denotation-without-connotation view (e.g. J. S. Mill), and taking a sentence such as "Frank Sinatra was born in Hoboken" as our example, the meaning of the subject term -- or the semantic contribution of the subject term, if you prefer -- just is Frank Sinatra -- the mind-independent, flesh-and-blood singing sensation. The meaning is identified with the referent.

On the Fregean view, as we saw, it gets a bit trickier: the meaning of the name has two components (sense and reference).

In both cases, our language is firmly locked onto what you call the "manifested world", by which I assume you mean an external mind-independent reality, right? It's certainly not "free floating"; it's not a set of gears connected up to nothing else external to itself, to steal a Wittgensteinian metaphor.

Or have I misunderstood?


What an idealist -- one who holds there is no mind-independent reality -- would make of all this I have no idea, and it gives me a sore head just thinking about it lol.


As for the implications of all this for A.I. -- another intriguing issue you've raised -- my first Searle-inspired thought would be that ChatGPT (i) has no thoughts at all, (ii) has no understanding of anything, and (iii) no more refers to anything than does the sentence "Frank Sinatra rules!" inscribed in the sand at the oceanfront caused by freak meteorological conditions.

It takes more than just making noises or marks in the sand to refer.
 
Before probing any further into a very interesting reply, can we just clear up the first paragraph? Is the above your understanding of what I've been saying about language? i.e. Do you think that is my position? Or is it your own understanding of what language does?

It seems to me that you are imputing the above view to myself, and if you are (perhaps I'm misreading) it's not at all my position. To repeat:

Everything I've written in this thread so far presupposes (whether rightly or wrongly) a mind-independent external world inhabited by [...]

If that were the case, it would have been a sudden "eureka!" insight, or a key to what I was maybe missing beforehand -- a way of clarifying why some of these struggles and paradoxes revolving around language are taking place. IOW, it would mean there's a "game" the overall analytic community is/was playing, in which one isn't allowed to cheat by peeking at what language is representing, or to take into account what the point of converting the world to symbols or communication tokens was to begin with.

And hardly anything new. Daniel Dennett wrote a paper back in the late 1960s (or was it the early '70s?) contending that "there are no pictures" (i.e., phenomenal experiences) in the brain. That neural processes are merely dabbling in descriptions. And Wilfrid Sellars -- one of his influences -- had been accused at times in the past (rightly or wrongly) of asserting that all awareness is a linguistic affair. Really, many who have toyed with the idea of eliminative materialism have gone down the route of attributing mental states to being an illusion of language. (Never mind how that snowballs to skeptically devour everything else as a consequence, if sensory manifestations are the foundational starting point of knowledge.)

And pure mathematics, in its effort to be liberated from practical applications, can arguably be construed as quantitative language striving to float on its own.
  • "Hardy saw the distinction between pure and applied mathematics to be simply that applied mathematics sought to express physical truth in a mathematical framework, whereas pure mathematics expressed truths that were independent of the physical world."
At any rate, machines that manipulate information units do deal in no other brand of "world representation" than that of language or symbol manipulation. When excluding panexperientialism (as materialism normally does), it's certainly a situation that intelligence can contingently be limited to -- possibly even the biological or naturally evolved kind (somewhere in the universe)...

Blindsight (the Peter Watts novel) -- consciousness gets in the way.
 
P.S. It's not something I've looked into at all, but presumably philosophy of language has implications for artificial intelligence too. How else will your house-cleaning robot or talking sex toy (lol) be able to address questions like "Who are you talking about?"

Does anyone here know anything about this?


With AI, I think of Searle and Chomsky, who grasped that AI in its present builds can have syntax but no allied semantics, i.e. meaning. Meaning seems to be what sentient beings pass back and forth, invoking experiences and interpretations, conducting transactions in which they drill down to the nature of the referent. Conscious beings, as they perform speech acts, enter into a collegial task of world building.

Take the sentence: "The house is blue." Early in life, we start going outside and (unlike mice, say) come to understand that much house talk revolves around externalities of the house. A novice speaker might hear the sentence and inquire, "You mean the outside of the house?" If the inside of the house happens to be lime green, then the listener has to navigate an ambiguity to get to meaning. House-color statements usually mean the external siding. And the use of the definite article, "the", means one particular house and NOT "the house, as an architectural type."

If an AI like the current megawatt-sucking LLMs produces this sentence, it has no understanding of its meaning, only a sort of stochastic sampling that will produce a syntactical realm where blueness happens to correlate with exteriors of houses and statements about them. The syntax can be elegant, but the only meaning is what happens to find its way from the humans creating the training data to the humans reading the generated text. In between are flipping bits and unsentient algorithms.
 
Another example that came to me as I was chatting with a family member (not Louise) about The Third Man - suppose I ask ChatGPT "why is the main character named Harry Lime?" While an LLM could certainly mine some critical writings about the oeuvre of Graham Greene and perhaps parrot some reasonable response, the originating speech acts (and accompanying meanings) come from other humans. The LLM cannot make the intuitive leap you or I might make, as we reflect on our experiences of limes, author surnames, attributes that the character and author may share, and the possibility that authors sometimes like to inject aspects of themselves into narratives. So we humans can flash on: hey, lime is both a fruit and also a shade of green (Greene), and that cynical bastard up in the Ferris wheel has got some Graham Greene in him! Suddenly, the name Harry Lime takes on a new layer of meaning for us. Perhaps we learn more about Greene and his sometimes self-deprecating humor. Perhaps we shift our attention to Holly Martins (the other main character) - holly is green, too? Can that be a coincidence?
 
The Vat said: With AI, I think of Searle and Chomsky, who grasped that AI in its present builds can have syntax but no allied semantics, i.e. meaning. Meaning seems to be what sentient beings pass back and forth, invoking experiences and interpretations, conducting transactions in which they drill down to the nature of the referent. Conscious beings, as they perform speech acts, enter into a collegial task of world building.

This ties into Searle's Chinese Room thought experiment, in which a person who doesn't speak Chinese sits in a room and outputs answers to questions written on slips of paper by following assembly instructions for Chinese symbols (a program, IOW). This situation essentially demonstrates how a machine could plausibly appear to understand language at a semantic level when in fact there is no such understanding at all.

"The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally."--- https://plato.stanford.edu/entries/chinese-room/

My question is: what is the property or capacity inside minds that allows them to experience meaning in language, as distinct from merely constructing syntactically well-formed sentences? And if AI only appears to be performing meaningful speech acts, do these in fact really mean anything at all? We seem to assume as much when we read AI text. But that suggests meaning lies in the self-contained matrix of language itself and not in the consciously intended act of speaking it.
 
My question is: what is the property or capacity inside minds that allows them to experience meaning in language, as distinct from merely constructing syntactically well-formed sentences? And if AI only appears to be performing meaningful speech acts, do these in fact really mean anything at all? We seem to assume as much when we read AI text. But that suggests meaning lies in the self-contained matrix of language itself and not in the consciously intended act of speaking it.

Searle's one-word answer: intentionality.


Edit: He does draw a distinction, though, between so-called "original intentionality" or "intrinsic intentionality" (i.e. that inside your head) and "derived intentionality" (e.g. marks on paper).

How is this distinction justified (if I remember right)? Intrinsic intentionality could not mean anything other than what it does mean.
 
@TheVat

There might be a bright future in academia for Louise. Have you seen CC's latest post in "Compromised Science"? lol

We're totally off topic here (hope Magical doesn't mind), but I'm just curious about Sabine. I first heard of her only a year or two ago; you seem more familiar with her than I am. Does she take a lot of flak from the scientific community for the things she says? I wonder both how she gets away with it, and whether sometimes she doesn't go far enough. There's plenty to defend about science done well; that said, there's an awful lot of junk and misinformation too.

I actually commented on one of her vids in which she singled out dark matter for special opprobrium -- "It's unfalsifiable", Sabine told her listeners, "Defenders of the theory can always find some way to reconcile apparently inconsistent evidence."

"Why stop at dark matter?", I wrote, "They're all unfalsifiable, at least in any sense that distinguishes science from other pursuits. Defenders of any theory can always find some way to reconcile apparently inconsistent evidence."

No reply.

I always need to emphasize at these junctures, lest I'm misunderstood, that I'm not criticizing the scientists themselves for doing what they do with their theories. What I'm criticizing is the nonsense that the public are routinely told about science -- often from the scientists themselves.
 
No... the theory of gravity referred to an observed phenomenon that does indeed exist. What it actually was has since been scientifically redefined. The referent is the same, but the definition of it has changed.
Newton works fine for cannon balls and rockets; it is an approximation.
GR is completely different: gravity is not a force but a feature of spacetime geometry and the mass/energy within it. It is also an approximation, because it does not play nice with QT.
GR is good for things like black holes, the perihelion of Mercury, and universes; Newton will still get you to the moon.
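
For concreteness, the textbook formulas behind that contrast -- Newton's inverse-square law, and the extra per-orbit perihelion advance that GR predicts and Newton misses (standard results, so treat my transcription as a sketch):

    F = \frac{G M m}{r^{2}}, \qquad
    \Delta\varphi = \frac{6\pi G M_{\odot}}{c^{2}\, a\, (1 - e^{2})} \ \text{per orbit}

For Mercury the second expression works out to roughly 43 arcseconds per century -- the residual that Newtonian gravity famously couldn't account for.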
 