Speranza
Bill Croft writes:
"Convention is perhaps the most neglected fundamental property of language. [...] How can a linguist decide whether a metaphor is conventional? There is no easy way, and little or no research that I know of on
the topic (please direct me to any!).
2
Convention is perhaps the most neglected fundamental property of language."
Is it? I mean, is it a fundamental property?
It does not seem to be, in any case, a
fundamental property of meaning per se, at least for the self-avowed Gricean. Cf.:
One of the leading ideas in my treatment of meaning [is] that meaning
is not to be regarded exclusively, or even PRIMARILY, as a feature of
language or of linguistic utterances. There are many instances of nonlinguistic vehicles of communication, mostly unstructured but sometimes
exhibiting at least rudimentary structure; and my account of meaning [is]
designed to allow for the possibility that non-linguistic and indeed NONCONVENTIONAL ’utterances’, perhaps even manifesting some degree
of structure, might be within the powers of creatures who lack any linguistic OR OTHERWISE CONVENTIONAL apparatus for communication,
but who are not thereby deprived of the capacity to MEAN this or that by
things they do.
H. P. Grice, ’Reply to Richards’, in Richard Grandy & Richard Warner,
PGRICE, Philosophical Grounds of Rationality: Intentions, Categories,
Ends. Clarendon, p. 85.
Or:
It seems to me that there are ... different problems connected with meaning in which questions of value might arise. [A] minor problem has to do
with the relation between what ... I may call [expression] meaning and [utterer]
meaning. It seems plausible to suppose that to say that an [expression]
means something (to say that ’John is a bachelor’ means that John is an
unmarried male, or whatever it is) is to be somehow understood in terms
of what particular users of that expression mean on particular occasions.
The first possible construal of this is rather crude: namely, that usually
people DO use this expression in this way. A construal which seems to
me rather BETTER is that IT IS CONVENTIONAL to use this expression
in this way. Now, I do not think that even the most subtle or sophisticated
interpretation of this construal will do, because I do not think that meaning is ESSENTIALLY connected with convention. What it is essentially
connected with is some way of FIXING what [an expression means]: convention is indeed one of these ways, but it is not the only one. I can invent
a language, call it Deutero-Esperanto, which nobody ever speaks. That
makes me the authority, and I can lay down what is proper. ... The general suggestion would therefore be that to say what an [expression] means
... is to say what is in general OPTIMAL for speakers ... to do with that
[expression]. As regards what is OPTIMAL in any PARTICULAR kind
of case, there would have to be a cash value, an account of WHY this is
optimal. For example, it might be that IT IS CONVENTIONAL to use
this [expression] in this way. ... What we get in every case, as a unification of all these accounts, is the optimality or propriety of a certain form
of behaviour.
Grice, WOW (Studies in the Way of Words), p. 299. Originally in
“Meaning Revisited”, in N. V. Smith (ed.), Mutual Knowledge. London:
Academic Press.
It’s interesting that for Grice metaphor is always, notably, NON-CONVENTIONAL.
His only example in ’Logic and Conversation’ (now WOW, p. 34) being:
“You are the cream in my coffee” ⇒ [roughly] You are my pride and joy.
I.e. metaphor comes out as a non-conventional (indeed “conversational”) implicature,
never as a conventional one. He was never sure what a conventional implicature would look like, but (unlike Karttunen) he looked to other types of cases as illustrations of it (“therefore”, “but”, as e.g. in WOW, p. 120).

Metaphor would be a conversational (and thus nonconventional) implicature because it derives from the conversational maxims (notably “Quality”). Grice does allow that there may be maxims other than the ones he dubs “conversational”, and that these may generate non-conventional implicatures; metaphor (being so basic) is not one of these:
There are, of course, all sorts of other maxims (aesthetic, social, or moral
in character) . . . that are also normally observed by participants in talk exchanges, and these may also generate nonconventional implicatures. The
conversational maxims, however, and the conversational implicatures connected with them [incl. metaphor], are specially connected (I hope) with
the particular purposes that talk (and so, talk exchange) is adapted to serve
and is primarily employed to serve. I have stated my maxims as if this
purpose were a maximally efficient exchange of information.
Grice, WOW, p. 28.
Larry Gorbet (lgorbet@unm.edu)
Mon Nov 25 2002
I fail to see how Grice’s comments disagree with what Croft wrote. The latter comments on convention as basic to language, and Grice is explicitly talking about meaning in general (not just in language). I would suspect that Croft doesn’t take meaning
to be limited to language, unless it is by definition.
Certainly some cognitivists have treated conventionalization as an essential part of
the process by which conceptualizations in general become linguistic “meanings”.
To me, the very essence of that part of language that we conveniently label the lexicon is that it conventionally associates conventional forms with conventional meanings. If I have no idea what other members of my speech community take to be conventional meanings of a form I use, how can I figure out the problem of what my
use of that form in this linguistic context and in this particular situation will mean for
them? Even though convention doesn’t lead algorithmically to situational meaning, it
is a necessary component to reducing the task of comprehension to a (barely) doable
one.
From: J L Speranza (jls@netverk.com.ar)
Mon Nov 25 2002
Isn’t this a picture of what Relevance-Theorists have criticised as “the code-based
model” versus the “inference-based model” built on Gricean lines that they adopt?
In any case, and for the record, I note that I omitted what I think is yet another, perhaps interesting, use of “conventional” as used by Grice in his attempt at formalising what “meaning” is about.
When providing his very general definition of, granted, “mean” (as ascribed to
“utterer”, even) he does provide for a variable “c”, which is to stand for a “mode of
correlation”.
Amongst the “modes of correlation”, he does list “conventional”, but only along
with alternative ones such as “iconic” and “associative”.
Ranges of variables:
f: features of utterance
c: modes of correlation (such as iconic, associative, CONVENTIONAL)

WOW, p. 103.
“U means by uttering x that p” is true iff (E.phi)(E.f)(E.c) [i.e. there exists a feature phi, a feature f, and a mode of correlation c]:

I. U utters x intending x to be such that anyone who has phi would think that
1. x has f;
2. f IS CORRELATED IN WAY C with psi-ing that p [having psychological attitude psi with content p. JLS];
3. (E.phi’): U intends x to be such that anyone who has phi’ would think, via thinking (1) and (2), that U psi-s that p;
4. in view of (3), U psi-s that p;

and II. (operative only for certain substituends for “asterisk sub-psi” [qua mode-marker. JLS]) U utters x intending that, should there actually be anyone who has phi, he would via thinking (4) himself psi that p;

and III. it is not the case that, for some inference element E, U intends x to be such that anyone who has phi will both
1’. rely on E in coming to psi that p;
2’. think that (E.phi’): U intends x to be such that anyone who has phi’ will come to psi that p WITHOUT relying on E.

Grice, WOW, p. 114.
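Just to fix ideas, here is one way Grice’s schema might be instantiated for his own metaphor example; the particular substituends are illustrative guesses, not anything Grice commits himself to:

$$\begin{aligned}
x &: \text{the utterance ``You are the cream in my coffee''}\\
f &: \text{the feature of being a token of that English sentence}\\
c &: \text{the associative mode of correlation (cream as something prized)}\\
\psi &: \text{believing}\\
p &: \text{that the addressee is U's pride and joy}
\end{aligned}$$

On such a reading, clause I.2 would be satisfied associatively rather than conventionally.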
Given that for Grice, ultimately, metaphor belongs in utterer’s meaning (WOW, p. 34), how much of metaphor might be said to be associativity-based rather than convention-based, for example? (“You’re the cream in my coffee” ⇒ (roughly) “You are my pride and joy”.)
Gorbet:
If I have no idea what other members of my speech community take to be
conventional meanings of a form I use, how can I figure out the problem
of what my use of that form in this linguistic context and in this particular
situation will mean for them?
Well, a famous Gricean take “against convention” is again essayed by Davidson in his
contribution to PGRICE, ed. Grandy. He does think there is a way:
I conclude that there is no such thing as A LANGUAGE, not if A LANGUAGE is anything like what many philosophers and linguists have supposed. There is therefore no such thing to be learned, mastered, or born
with. We must give up the idea of a clearly defined shared structure which
language-users acquire and then apply to cases. And we should try again
to say how CONVENTION in any IMPORTANT sense is involved in language; or, as I think, we should give up the attempt to illuminate how we
communicate by appeal to conventions.
(Davidson in Philosophical Grounds of Rationality: intentions,
categories, ends, p. 174).
And, mind, he’s not just talking about metaphor, is he.
Sherman Wilcox (wilcox@unm.edu)
Tue Nov 26 2002
On 11/25/02, J L Speranza said:
Isn’t this a picture of what Relevance-Theorists have criticised as “the
code-based model” . . . ?
I don’t think it is. Could you say why you do?
Amongst the “modes of correlation”, he does list “conventional”, but only
along with alternative ones such as “iconic” and “associative”.
I’d be very interested in your take on how Grice thinks iconicity and convention interact. Or, if they are “alternative” modes of correlation, does this imply that they do NOT
interact?
Well, a famous Gricean take “against convention” is again essayed by
Davidson in his contribution to PGRICE, ed. Grandy. He does think there
is a way:
I conclude that there is no such thing as A LANGUAGE, not if
A LANGUAGE is anything like what many philosophers and
linguists have supposed. There is therefore no such thing to
be learned, mastered, or born with. We must give up the idea
of a clearly defined shared structure which language-users acquire and then apply to cases. And we should try again to say
how CONVENTION in any IMPORTANT sense is involved
in language; or, as I think, we should give up the attempt to
illuminate how we communicate by appeal to conventions.
(Davidson in Philosophical Grounds of Rationality: intentions,
categories, ends, p. 174).
This is pretty intriguing. I admit to knowing not a stitch of Davidson. And so I have no
idea if I understand what he means by convention, or whether it is what Larry Gorbet,
and I (and other cognitive linguists, I would assume) mean by convention.
Could you tell me: (1) what does Davidson mean by convention, and (2) if we give
up the attempt to illuminate how we communicate by appeal to conventions, how then
DO we communicate?
From: J L Speranza (jls@netverk.com.ar)
Tue Nov 26 2002
S. Wilcox refers to L. Gorbet’s description pertaining to the various “conventions”
associated with the workings of “the lexicon”, and writes:
I don’t think [Gorbet’s description] is [an instance of the often-criticised
’code-based’ model of communication.] Could you say why you do?
Well, in the code-model, the correlation between a “linguistic” form and what Gorbet
calls a “conventional” meaning is pre-patterned, pre-established, fixed, “ready-made”.
No role at all for inference. For the inference-model, even this “fixing” should allow
for a large amount of principled “reasoning”.
I’d be very interested in your take on how Grice thinks iconicity and convention interact. Or, if they are "alternative" modes of correlation, does this imply that they do NOT interact?
Precisely, the iconic and the conventional modes of correlation do NOT interact. It
is obvious that, if x means y via the ICONIC mode of correlation, then ~(x means y
via the CONVENTIONAL mode of correlation). I would think that all onomatopoeia in English (and, for that matter, Chinese) belongs in this area. I don’t subscribe to the Merriam-Webster, but found this on their site: http://www.m-w.com/service/etymology.htm
Imitation of sounds. Words can be created by onomatopoeia, the naming
of things by a more or less exact reproduction of the sound associated
with them. Words such as "buzz", "hiss", "guffaw", "whiz", and "pop" are of
imitative origin.
NOTE:
You can use the Etymology search in the dictionary to find a list of words
created via onomatopoeia. At the dictionary search page, select the Etymology search option, type "imitative" in the input box, and click the
"Search" button.
I would think that clicking the search button like that would give us a grand display
of how much of the “linguistic” meanings we live by are in no really interesting way
conventional.
Referring to Davidson, op. cit., p. 174, Wilcox writes:
This is pretty intriguing. I admit to knowing not a stitch of Davidson. And
so I have no idea if I understand what he means by convention, or whether
it is what Larry Gorbet, and I (and other cognitive linguists, I would assume) mean by convention. Could you tell me: (1) what does Davidson
mean by convention, and (2) if we give up the attempt to illuminate how we
communicate by appeal to conventions, how then DO we communicate?
I would guess (but mind, only guess) that what Prof. Davidson means by “convention” is more like what Bill Croft means by convention, which is more or less what D. K. Lewis meant by convention in his Harvard book, which was more or less what he had previously meant by convention in his Harvard PhD under W. V. O. Quine, "Conventions of Language".
I think Lewis’s attempt was brilliant when it comes to, for example, the feature of
ARBITRARINESS (i.e., in Gricean parlance: NON-ICONICITY):
if x means y via the CONVENTIONAL mode of correlation, then x means
y ARBITRARILY.
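The game-theoretic source of that arbitrariness is worth displaying. For Lewis a convention selects one of several equally good coordination equilibria; here is a toy payoff matrix for two drivers each choosing a side of the road (the driving example is of the kind Lewis himself uses, though these particular numbers are only illustrative):

$$\begin{array}{c|cc}
 & \text{Left} & \text{Right}\\
\hline
\text{Left} & (1,1) & (0,0)\\
\text{Right} & (0,0) & (1,1)
\end{array}$$

Both (Left, Left) and (Right, Right) are equilibria, and nothing in the payoffs favours one over the other; whichever regularity the population settles on is therefore arbitrary in exactly Lewis’s sense.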
On the other hand, I think Lewis went somewhat over the top by requiring “common-ground status” of the whole proceedings. Why, I can certainly establish a “convention” I myself will abide by. That should not mean that, to fulfil Lewis’s common-ground status (of “mutual” or “common” knowledge), I should know that I know that I know (and so on ad infinitum) that x means y via the “conventional” mode of correlation, does it.
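For definiteness, the regress at issue can be written out in the epistemic-logic shorthand that grew out of Lewis’s book (the notation is the later literature’s, not Lewis’s own). Letting $E\,p$ abbreviate “everyone in the population knows that p”, common knowledge of p is the infinite hierarchy

$$C\,p \;\equiv\; E\,p \,\wedge\, E\,E\,p \,\wedge\, E\,E\,E\,p \,\wedge\, \cdots$$

Lewis, it should be said, requires only that each level be derivable from a shared basis, not that anyone actually entertain the infinite conjunction; whether that softens the complaint above is a further question.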
In any case, Davidson’s pretty “deconstructionist” piece in the Gricean festschrift
is rather general (it’s a pity Grice did not discuss it in print). Davidson’s take on
convention is expressed in what he finds the “third” misguided “principle” in much
philosophy and linguistics:
MISGUIDED principle P-3: “First meanings” [by which he means the
truth-conditional side to content, as per a Tarski-schema] are governed
by learned conventions (or regularities). The systematic knowledge or
competence of the speaker or interpreter is learned IN ADVANCE OF
OCCASIONS OF INTERPRETATION and is *CONVENTIONAL* in
character.
Davidson, op. cit., p. 161.
It is the combination of this principle with two other principles that ’builds’ up the altogether wrong picture of the thing:

Misguided principle P1: First meaning is systematic.
Misguided principle P2: First meanings are shared.
Wilcox:
if we give up the attempt to illuminate how we communicate by appeal to
conventions, how then DO we communicate?
Well, Davidson holds this to be, precisely, an open question. Perhaps expecting that
there is a “theory” that will account for how we communicate is the wrong answer
altogether to an ill-formulated question.
In the essay, Davidson attempts to formulate, however, what he dubs a “passing
theory”, as opposed to a “prior” theory (of interpretation).
I have distinguished what I have been calling the prior theory [as when
semanticists talk of ’truth-theory’ for a Language L. JLS] and what I shall
henceforth call the PASSING theory. For the hearer, the prior theory expresses how he is prepared IN ADVANCE to interpret an utterance of the
speaker, while the passing theory is how he DOES interpret the utterance.
For the speaker, the prior theory is what he BELIEVES the interpreter’s
prior theory to be, while his passing theory is the theory he INTENDS the
interpreter to use.
op. cit., p. 165
Davidson concludes that while P1 and P2 can be reformulated in various ways, this is
not so with the really misguided P3:
The problem we have been grappling with depends on the assumption
that communication by speech requires that speaker and interpreter have
learned or somehow acquired a common method (or theory) of interpretation—
as being able to operate on the basis of SHARED CONVENTIONS, rules,
or regularities. The problem arose when we realised [in discussing malaprops,
metaphor, and such] that no method (or theory) fits this bill. The solution
to the problem is clear. In linguistic communication nothing corresponds
to a linguistic competence AS OFTEN DESCRIBED, i.e. as summarised
by principles (1)–(3). The solution is to GIVE UP the principles. Principles (1) and (2) SURVIVE when understood in rather unusual ways, but
principle (3) cannot stand, AND IT IS UNCLEAR what can take its place.
Davidson had previously referred to the work of Grice:
Grice has done more than anyone else to bring these problems [of interpretation of ’implicature’ and stuff. JLS] to our attention and to help sort
them out. . . . He has explored the general ’principles’ behind our ability
to figure out such implicatures, and these principles must, of course, be
’known’ to speakers who expect to be taken up on them. Whether ’knowledge’ of these principles [the cooperative principle, e.g. JLS] ought to be
_INCLUDED_ in the description of linguistic COMPETENCE may not
have to be SETTLED: [in any case] they are things A CLEVER PERSON
COULD OFTEN FIGURE OUT WITHOUT PREVIOUS TRAINING
OR EXPOSURE.
op. cit., p. 162.
Davidson’s point would be, I guess, that if these sorts of Gricean ’principles’ can be
’reasoned out’, there’s no need to endow them with the character of a ’convention’.
George Lakoff (lakoff@cogsci.berkeley.edu)
Tue Nov 26 2002
It is nice to see good ol’ topics from the 60’s—Paul Grice’s implicatures and David
Lewis’ conventionality—taken up again. The phenomena need to be reconsidered seriously within the cognitive linguistics context. But when Sherman Wilcox writes “I admit to knowing not a stitch of Davidson,” I fear that he isn’t the only one, and that most folks in the cognitive linguistics tradition may also not know the context of Grice’s and Lewis’ work either. Since I shared a history with them (they were friends of mine back when I was working on logic), I think a bit of that history might be useful—especially
since it is relevant to the current discussion. Their work cannot now be taken at face
value and has to be thought of in a historical perspective, for reasons that will become
clear below.
Paul Grice’s lectures on implicature (Logic and Conversation) were given as
the William James Lectures at Harvard in 1967. I was teaching there at the time and
I attended. David Lewis was a grad student there and, I believe, he was in the room
too. Grice’s intent was conservative. Strawson had given lots of examples showing
the inadequacy of Russell’s symbolic logic in general and his Theory of Descriptions
in particular. Grice was defending Russell. His argument was that you could keep
Russellian logic for semantics and truth conditions, while getting the real natural language examples right by adding a theory of conversation on top of the logic. Since
I was trying to incorporate logic and pragmatics into linguistics at the time (1967), I
became enamored of Paul’s work. He, however, refused to publish it. I managed to get
a copy and distributed over 1,000 copies through the linguistic underground by 1973,
and also managed to get chapter 2 published in the Cole-Morgan volume on Speech
Acts in 1975. (The story involves a bar in Austin, Texas.)
Paul was an objectivist who insisted that all meaning was literal. Nonetheless,
much of Paul’s work was insightful—although his one metaphor example was pitifully
analyzed. The only way Paul’s theory could deal with metaphor was to claim that
metaphors had a literal meaning conveyed via implicature. Searle later tried applying
this idea in his paper on metaphor in the Ortony volume, a disastrous attempt.
During the 70’s, Paul’s work came to be taken very seriously by those trying to keep
formal logic as a theory of thought—with the result that it got reinterpreted—for good
reason. Gazdar did a formalization within logic of the maxim of quantity in his dissertation. Grice’s student Deirdre Wilson (she had typed his manuscript) realized that
all the maxims could be seen as instances of relevance. Her theory of relevance also
tried to preserve formal logic as a theory of semantics. When Fillmore formulated
frame semantics, I realized that relevance—and with it Gricean implicature—could be
handled via frame-based inference within a cognitive linguistics framework. The formal
mechanism for doing this precisely did not exist then (the 70’s), though it does now—
Narayanan’s simulation semantics within NTL. It would be a great thesis topic for
someone to work out the technical details now that a technical mechanism is available.
David Lewis’ Harvard dissertation on Convention was a product of the same era—
1968, if I remember correctly. David was also an objectivist—of the most extreme
variety. It’s worth taking a look at his essay in the Davidson-Harman volume of the
Semantics of Natural Language, where he argues that meaning has nothing to do with
psychology—neither mind nor brain. For David, meaning could only be a correspondence between formal symbols and the objective world, where the objective world was taken as being modeled via set-theoretical models. The symbols were to be linked to the world-models via some mathematical function. For human languages, that function, he claimed, was determined by convention—which is why he wrote his thesis on the topic. But “convention” could not be a matter of human psychology for David; it had to be objective as well. David’s idea was to use the economic
theory of his time—utility theory—to provide what he took as an objectivist account
of convention, since utility was seen as something objective in the world. The irony
here, of course, is that Danny Kahneman, my former cognitive science colleague at
Berkeley—now at Princeton—just won the Nobel Prize in economics for proving that
such a view of economics cannot be maintained. The examples he used were cases that
revealed how people really reason: by prototype, frame, and metaphor—the staples of
cognitive linguistics.
David’s work, like Paul’s, was insightful, despite the objectivist intellectual tradition in which it was embedded. They were both super-smart people who transcended
the theories they were brought up with. Both theories were exemplary products of their
time, the late 60’s (a period I enjoyed and am particularly fond of). But the intellectual
tradition in which the theories were embedded cannot be taken seriously today, and so
the work cannot be taken at face value. The theories were formulated before the age of
cognitive science and neuroscience. We now know from those fields that objectivism is
false (see the survey in Women, Fire, and Dangerous Things and the update in Philosophy in the Flesh). We know that every aspect of thought and language works through
human brains, which are structured to run bodies and which create understandings that
are not objectively true of the world.
Metaphor is an important part of this story. The neural theory of metaphor (see
PITF) explains how the system of conceptual metaphor is learned, why certain conceptual metaphors are universal and others are not, why the system is structured around
primary metaphor, why metaphor acquisition works as it does, why conceptual metaphors
preserve image-schemas, why metaphorical inference works as it does, and why conceptual metaphors tend to take sensory-motor concepts as conceptual source domains
and non-sensory-motor concepts as targets.
Convention also makes sense only in neural terms. What each of us takes as conventional must be instantiated in our synapses. The question is, what is the mechanism? In some cases, the usage-based theories of gradual entrenchment may make
sense. For other cases, they don’t. Metaphor is a case where those theories make no
sense, as I pointed out in my previous note. The old entrenchment theories simply
cannot explain what the neural theory of metaphor explains.
Bill Croft asks, “How can a linguist decide whether a metaphor is conventional?”
and he claims, “There is no easy way, and little or no research that I know of on the
topic (please direct me to any!).” It is true that there is no easy way. The work is hard.
But there is a huge amount of research on the topic. I refer him to chapter 6 of Philosophy in the Flesh (pp. 81–87), where nine forms of convergent evidence are listed—and
to the references at the end of the book, where massive literature on the research is
cited. Croft himself, for all his many accomplishments, is, to my knowledge, not a
metaphor researcher. For those who are, there’s a lot to know.
In summary: Cognitive linguistics is committed to being consistent with what is
known about the brain and the mind. That changes over time, and cognitive linguistics
must change with it. Entrenched ideas about entrenchment may have to change as
well. The ideas of Paul Grice and David Lewis from the 60’s cannot just be taken over
into cognitive linguistics as they were formulated. They cannot be taken at face value.
They have to be rethought on the basis of what has been learned since. This is not just
true of Grice and Lewis. My old work on generative semantics from the 60’s had lots
of neat insights as well. But they too have to be rethought. Some can be translated into
cognitive linguistics—others cannot. None of this is easy or obvious. It is important
to know the history of all this work. Those who do not know history are doomed to
repeat it.
Larry Gorbet (lgorbet@unm.edu)
Tue Nov 26 2002
I wrote
I fail to see how Grice’s comments disagree with what Croft wrote. ...
Certainly some cognitivists have [however] treated conventionalization
as an essential part of the process by which conceptualizations in general
become linguistic "meanings". To me, the very essence of that part of language that we conveniently label THE LEXICON is that it conventionally
associates conventional forms with conventional meanings. If I have no
idea what other members of my speech community take to be conventional
meanings of a form I use, how can I figure out the problem of what my use
of that form in this linguistic context and in this particular situation will
mean for them? Even though convention doesn’t lead algorithmically to
situational meaning, it is a necessary component to reducing the task of
comprehension to a (barely) doable one.
and J L Speranza commented
Isn’t this a picture of what Relevance-Theorists have criticised as ’the
code-based model’ versus the ’inference-based model’ built on Gricean
lines that they adopt?
I don’t think it is at all. The inferences of Gricean analysis don’t exist in a vacuum (as
no inference can, of course). One component of those inferences—not by a long, long
shot, of course, the only one—is knowledge of linguistic convention. As I process
the quotation from Speranza immediately above, I make use of all sorts of knowledge
of linguistics and linguistic theorizing plus a variety of methods of inference and assumptions about the writer’s possible intent. But all that is utterly incapable of leading
me to any semblance of understanding unless I also utilize my belief that the term
“Relevance-Theorists” has a certain meaning (well, a certain set of related meanings)
that is shared by a community of users of English. Knowledge of linguistic conventions. And if my beliefs about how various bits of language are shared are erroneous,
my inferences will be faulty just as they would be if other relevant beliefs (e.g. about
language use or intentions) were erroneous.
As for onomatopoeic forms (e.g. English buzz), they are not arbitrary, but they
most certainly are conventional. I am quite capable of saying instead [zz], with no vowel, but I don’t, and the reason is that buzz is the conventional form for representing that sound and *zz is not. Even though the latter is a more accurate imitation.
Jordan Zlatev (Jordan.Zlatev@ling.lu.se)
Wed Nov 27 2002
George Lakoff writes:
Convention also makes sense only in neural terms. What each of us takes
as conventional must be instantiated in our synapses. The question is,
what is the mechanism? In some cases, the usage-based theories of gradual entrenchment may make sense. For other cases, they don’t. Metaphor
is a case where those theories make no sense, as I pointed out in my previous note. The old entrenchment theories simply cannot explain what the
neural theory of metaphor explains.
I wish to disagree. And furthermore suggest that not only metaphor theory, but cognitive linguistics in general lacks a proper understanding of the concept of “convention”
(or the closely related one, “norm”), which I consider THE most fundamental concept for explaining language, pace “Relevance-Theorists” and in agreement with Larry Gorbet.
The reason for the above claim is that, as Bill Croft also points out, convention
is a social, rather than an individual, and even less so a NEURAL, phenomenon. The three levels of social-normative, individual-mental and neural-physical are categorically (and ontologically) distinct, if interrelated, corresponding to Popper’s “three worlds” (cf. Objective Knowledge, 1972).
The one who has worked out the implications of this for modern linguistics best (to my knowledge) is Esa Itkonen, well-known among “functional”, but unfortunately not “cognitive”, linguists. Below I quote from a brief article (which I use in my Semantics class), where Esa defends Frege’s Sinn from cogling critiques, summarises the thesis of the “social ontology of meaning” (and of language in general), and turns the critique back on cognitive linguistic approaches that reduce meaning to “image schemas” or the like:
It is the basic tenet of Itkonen (1978) and (1983) that language is primarily a normative entity. The grammarian does not describe what is said or how it is understood, but what ought to be said or how it ought to be understood. And because the norms (or rules) of language that determine these “ought”-aspects cannot be individual (as shown by Wittgenstein’s private-language argument), they must be social. Social norms do not exist in a vacuum, but are rather “supported by” individual persons and, thus, by individual minds. Language as a social and normative entity is investigated by “autonomous linguistics” [P.S. Esa means Panini, not Chomsky! JZ] . . . Language as a non-social (= individual-psychological) and non-normative entity is investigated by “psycholinguistics”. Yet even if both socio-linguistics and psycholinguistics investigate what happens, rather than what ought to happen, they have to view their data through the spectacles provided by autonomous linguistics.
Itkonen, p. 54, in the 1997 Yearbook of the Linguistic Association of Finland.
George finishes:
Some can be translated into cognitive linguistics—others cannot. None of
this is easy or obvious. It is important to know the history of all this work.
Those who do not know history are doomed to repeat it.
With this I agree completely. But hopefully, “cognitive linguistics” will be rethought
as well in the process . . .
George Lakoff (lakoff@cogsci.berkeley.edu)
Wed Nov 27 2002
That there is a social component to many norms is unquestioned. But there is also
a neural component. You can’t have a conceptual metaphor without it being characterized in your brain. In the case of PRIMARY conceptual metaphors—the ones that
are learned by functioning in the world regardless of culture—the neurally embodied
component is the major one and the cultural component appears to play a minor role at
best. Yet these primary metaphors are part of one’s “conventional” language use and
are also part of the “norms.”
In short, you can’t have a PURELY socio-cultural notion of convention for conventional metaphors.
This is a very big deal in developing a cognitive sociology and anthropology.
Lewis’ attempt to use utility has failed—as Kahneman’s work shows. Do you know
anything to replace it? Is there any viable theory of socio-cultural convention at
present—one that takes the neural contribution into account?
From: Jordan Zlatev (Jordan.Zlatev@ling.lu.se)
Wed Nov 27 2002
Dear George, (and Cogling!)
In order to answer your question, we need to first clarify some terminology. For
me (and Popper, Itkonen, Clark etc) a norm or convention is an object of common
knowledge, not just “shared”, but “known to be shared” (minimally) in a community.
It requires reflexive knowledge, and this distinguishes it from a simple “regularity of
behavior”, which doesn’t. (There are differences and difficulties in characterizing common (mutual) knowledge, but these need not concern us here.) This kind of knowledge implies both self-consciousness and other-consciousness (“theory-of-mind”). Now there are good socio-cultural arguments, from Vygotsky to Tomasello, for believing that social experience is a necessary condition for the development of self/other-consciousness. If
so, there can be no such thing as a non-social norm or convention. So it is not just that
society is a “component” for “many” norms, it is (I believe) a condition for any norms,
linguistic norms in particular.
But notice that this does not mean that one needs to adopt a “purely socio-cultural approach” to metaphor, or to language in general, if by that is meant one which declares biology irrelevant. It does however mean that what you call “primary metaphors”, (hypothetical) mappings between domains of experience established by physical experience, if I understand correctly, are not conventional per se (as I am using the term), but serve as the ground/motivation on which convention does its thing. That is how we get both the near-universal general patterns and the substantial language-specific differences. That is the approach I tried to follow in my 1997 book “Situated Embodiment”, and which I am refining in my present work, but let me skip further self-advertisements.
Instead, I can off-hand refer to at least five ongoing attempts to integrate (neuro)biology and culture so as to explain the “emergence” of conventions in general, and language in particular: Chris Sinha’s Language and Representation (1988), Terry Deacon’s The Symbolic Species (1997), Merlin Donald’s Origins of the Modern Mind (1991), Katherine Nelson’s The Emergence of the Mediated Mind (1996), and Mike Tomasello’s The Cultural Origins of Human Cognition (1999). (This doesn’t mean that they are (all) right, but there is at least a lot of work going on!) My current favorite is the last one—a very good argument that the basic human BIOLOGICAL adaptation, compared to other primates, is one for living SOCIALLY. I consider it likely that this is also the crucial “neural component” for learning conventions . . .
Let me end here, and express my hope that others who, like me, have felt that the notion of convention has been underrepresented (to put it mildly) in cognitive linguistics will join the discussion. Thank you, George, for reacting, and Bill Croft for starting the discussion in the first place.